CC BY-NC-ND 4.0 · Semin Hear 2021; 42(03): 237-247
DOI: 10.1055/s-0041-1735132
Review Article

Motion Sensors in Automatic Steering of Hearing Aids

Eric Branda
1   WS Audiology, Piscataway, New Jersey
,
Tobias Wurzbacher
2   WS Audiology, Erlangen, Germany
 

Abstract

A requirement for modern hearing aids is to evaluate a listening environment for the user and automatically apply appropriate gain and feature settings for optimal hearing in that listening environment. This has been predominantly achieved by the hearing aids' acoustic sensors, which measure acoustic characteristics such as the amplitude and modulation of the incoming sound sources. However, acoustic information alone is not always sufficient for providing a clear indication of the soundscape and user's listening needs. User activity such as being stationary or being in motion can drastically change these listening needs. Recently, hearing aids have begun utilizing integrated motion sensors to provide further information to the hearing aid's decision-making process when determining the listening environment. Specifically, accelerometer technology has proven to be an appropriate solution for motion sensor integration in hearing aids. Recent investigations have shown benefits with integrated motion sensors for both laboratory and real-world ecological momentary assessment measurements. The combination of acoustic and motion sensors provides the hearing aids with data to better optimize the hearing aid features in anticipation of the hearing aid user's listening needs.



There is no question that speech intelligibility is a priority for hearing aid processing. Most often, a hearing aid fitting begins by optimizing the audibility of speech in a quiet listening environment. The hearing aid settings for this initial fitting establish a foundation for the hearing aid user's overall amplification. However, it is expected that the hearing aid user will have further demands on speech intelligibility in various other listening environments and situations.

Competing background noise is a leading cause of dissatisfaction with hearing aid use[1] and is consequently one of the greatest challenges for fitting hearing aids. The surrounding listening environment can mask the target speech signal, resulting in a decreased signal-to-noise ratio (SNR) that directly and negatively impacts speech intelligibility. This poorer SNR may be the result of directly competing signals or even temporal delays created by reverberation. In these cases, further adjustments to the initial hearing aid settings are often required. These adjustments could include increasing gain for specific speech frequencies or increasing the performance of certain features like directional microphones and noise reduction algorithms.

An additional consideration for these challenging listening environments is that it is not always reasonable to expect the hearing aid user to identify the variations in these noisy environments and then manually adjust the hearing aid settings to address them. The user may be aware that the listening environment has changed and become more difficult, but recognizing the underlying acoustic factors driving that change is often beyond what can reasonably be expected.

Considering this obstacle to finding the optimal adjustment of gain and features for a listening environment raises the need for the hearing aid to accurately analyze the acoustic soundscape. The hearing aid needs to first classify the listening environment and then use this information to apply the appropriate processing adjustments that will optimize the listening performance for the hearing aid user. This type of acoustic soundscape analysis has seen changes throughout the years.

Listening Environment Analysis

Traditionally, hearing aids have been somewhat limited in classifying different listening environments (see the article by Hayes in this issue for more information about environmental classifiers). Acoustic factors such as signal modulation and level have been key contributors to the scene analysis. For example, a difference in the modulation rate and depth between speech and other nonspeech sounds can help determine the listening environment. And, of course, increases in level, especially with nonspeech signals, further determine the aggressiveness of noise management in a hearing aid.
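To make this concrete, the sketch below shows one simple way such a modulation analysis could be implemented. It is a minimal Python illustration, not any manufacturer's actual classifier: the signal is rectified and smoothed with a one-pole lowpass to extract the temporal envelope, and the modulation depth and dominant modulation rate are read from the envelope. Speech-like signals show deep envelope modulation at a few hertz; steady noise does not.

```python
# Minimal sketch of envelope-based modulation analysis (illustrative only).
import numpy as np

def modulation_features(x, fs, env_cutoff_hz=20.0):
    """Return (dominant modulation rate in Hz, modulation depth 0..1)."""
    env = np.abs(x)                                   # rectification
    alpha = np.exp(-2.0 * np.pi * env_cutoff_hz / fs)
    smoothed = np.empty_like(env)
    acc = env[0]                                      # avoid startup transient
    for i, e in enumerate(env):
        acc = alpha * acc + (1.0 - alpha) * e         # one-pole lowpass
        smoothed[i] = acc
    depth = (smoothed.max() - smoothed.min()) / (smoothed.max() + 1e-12)
    spec = np.abs(np.fft.rfft(smoothed - smoothed.mean()))
    freqs = np.fft.rfftfreq(len(smoothed), d=1.0 / fs)
    rate = freqs[np.argmax(spec)]
    return rate, depth

fs = 16000
t = np.arange(fs) / fs
# 500-Hz tone with a 4-Hz envelope (speech-like) vs. steady noise.
speechy = np.sin(2 * np.pi * 500 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
noisy = 0.5 * np.random.randn(fs)
print(modulation_features(speechy, fs))   # ~4 Hz rate, depth near 1
print(modulation_features(noisy, fs))     # much lower depth, no clear rate
```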

More recently, other factors have become prevalent in identifying the listening environment. For example, the SNR can be used to further determine if the listening environment may be challenging, even if the overall sound levels are low. The ambient modulation of other signals may correspond to more stationary interferers like air conditioning or more dynamic interferers like a baby crying. The direction of arrival for a given sound may also indicate its importance as a target signal or an interferer. For example, one may generally assume that target signals arrive from the front. But if a server at a restaurant were to approach the hearing aid user from the side, this new speech signal may be considered target speech rather than interfering speech.

When the hearing aid identifies or classifies these changing listening conditions, different features are utilized. As noise levels increase, the hearing aid's noise reduction algorithms may become more aggressive to better reduce nonspeech signals, especially in the gaps between speech. Additionally, directional microphones may adapt to focus better on speech signals and reduce unwanted background noise. Some hearing aid algorithms that transition the directionality to full bilateral beamformers have been shown to provide advantages over unilateral beamformers in noise[2] (for more information about beamformers, see the articles by Derleth et al and by Andersen et al in this issue). Consequently, some hearing aid users with this configuration of beamforming may have better speech intelligibility in noise than normal-hearing listeners.[3] Other directional technologies that focus on speech arriving from specific directions, such as directly to the side or behind, have been shown to provide more benefit in these listening environments than traditional adaptive directional microphones.[4] In the vast majority of the aforementioned examples, the hearing aids have identified the challenging listening environment based on acoustic sensors and automatically applied specific parameters within their given feature set to optimize speech intelligibility.
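As a brief aside on how the directional microphones discussed above work in principle, the following sketch computes the polar response of a first-order delay-and-subtract beamformer, the classic building block of hearing aid directionality. It is a simplified free-field model with two omnidirectional microphones and no head effects, not the processing of any specific product; the spacing and frequency values are assumptions for illustration.

```python
# First-order delay-and-subtract beamformer: output = front mic minus a
# delayed rear mic. With internal delay tau = d/c the null lands at 180
# degrees, giving a cardioid pattern.
import numpy as np

def differential_response(theta_deg, f=1000.0, d=0.012, c=343.0):
    """Magnitude response; theta = 0 is the look direction."""
    theta = np.radians(theta_deg)
    tau = d / c                                  # internal electrical delay
    # Sound from angle theta reaches the rear mic d*cos(theta)/c later.
    phase = 2.0 * np.pi * f * (tau + d * np.cos(theta) / c)
    return np.abs(1.0 - np.exp(-1j * phase))

angles = np.arange(0, 361, 30)
resp = differential_response(angles)
for a, r in zip(angles, resp / resp.max()):      # normalize to the front
    print(f"{a:3d} deg: {r:.2f}")                # 1.00 at 0 deg, 0.00 at 180
```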

These automated features are ideal when the listening environment fits the user's listening needs. However, environmental acoustics do not always provide all of the information needed to determine the hearing aid user's actual listening needs.

If we consider an outdoor cafe scenario with a hearing aid user and a conversation partner, two separate listening needs can be quickly identified. While sitting and conversing, the hearing aid user is likely facing the conversation partner. In this case, the hearing aids would optimally employ noise reduction and directionality to suppress the environmental sounds and focus more specifically on the conversation partner. However, if both parties stand up to leave and are still conversing, they are likely both facing forward to watch where they are going while continuing to communicate. In this case, the conversation partner is not directly in the line of sight of the hearing aid user. The environmental acoustics have not changed, but the user's listening needs have changed. Directionality, in this case, might be somewhat detrimental to both communication and possibly even to safety. A less directional response is needed despite the acoustic indications, and the hearing aids need a method to identify that requirement.

To accommodate a broader range of communication scenarios, motion sensors have recently been implemented in hearing aids to help them appropriately classify the listening environment. Motion sensors can determine that a hearing aid user is moving. These data can then be used as part of the hearing aid's decision-making process to classify the listening environment and apply the most appropriate settings.



Motion Sensors

The use of motion sensors is a relatively recent addition to hearing aid technology. As recently as 2017, hearing aids relied on wireless communication with smartphones to receive motion data from the phone's sensors. Recognizing the benefits of this synergy, manufacturers became interested in incorporating motion sensors directly into hearing aids. An integrated motion sensor eliminates several restrictions of the phone-based approach. For one, the user no longer needs to have the phone with them at all times to make use of the motion sensor. Additionally, switching to a new phone no longer risks introducing different processing and timing characteristics that could change the performance to which the user was accustomed.

With improvements in miniaturization and hearing aid chip design, integration of motion sensors in the hearing aid itself was realized in 2019. This eliminated the need for the hearing aid user to depend on the smartphone to provide motion information.

Three different types of motion sensors are potential candidates for hearing aid use: (1) the magnetometer, (2) the gyroscope, and (3) the accelerometer. All three measure different aspects of motion and have varying advantages and disadvantages for hearing aid use. Ideally, any type of motion sensor must be robust and reliable. The motion sensor must also be sensitive to real-time head movements, as can be expected during a conversation. Additionally, the motion sensor should be sensitive to inertial movements to track the physical activity and motion of the hearing aid user.

The magnetometer measures the environmental magnetic field. The measurement unit is tesla or gauss. Without any disturbances, it measures the orientation according to the earth's magnetic field and can be interpreted as an electronic compass. The nature of its operation makes it susceptible to other magnetic interferences, which raises the question of the reliability of this type of motion sensor. Even minor soldering artifacts or variations during manufacturing may deform the magnetic field, thereby requiring proper individual compensation and calibration.
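To illustrate the calibration point, the snippet below sketches the common "hard-iron" compensation used for magnetometers in general, not a hearing aid specific method: a constant bias from nearby ferrous components (for example, solder artifacts) is estimated from readings collected while the device rotates through many orientations, then subtracted. The field strength and bias values are invented for the demonstration.

```python
# Hard-iron calibration sketch: the constant bias is the midpoint of the
# per-axis extremes observed across many orientations.
import numpy as np

def hard_iron_offset(samples):
    """samples: (N, 3) raw field readings in microtesla, collected while
    rotating the sensor through many orientations."""
    return (samples.max(axis=0) + samples.min(axis=0)) / 2.0

rng = np.random.default_rng(0)
dirs = rng.normal(size=(1000, 3))                    # random orientations
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_bias = np.array([12.0, -7.5, 3.0])              # e.g., solder artifacts
raw = 50.0 * dirs + true_bias                        # ~50 uT earth field
print(hard_iron_offset(raw))                         # recovers ~[12, -7.5, 3]
```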

The second type of motion sensor is the gyroscope. The gyroscope measures the turning rate, that is, the speed of a rotation around one axis in radians per second. Hence, it would be ideal for detecting the hearing aid user's head turns, for example, when turning to face another conversation partner. In terms of size, it is the largest of the three sensor types mentioned.

The magnetometer and the gyroscope are not optimal for hearing aid use, especially when considering power consumption; both are power-hungry. Specifically, the current needed for a magnetometer is on the order of at least 50 µA, and for a gyroscope at least 420 µA. If we consider that the typical hard-wired signal processing in a hearing aid draws anywhere from 350 to 550 µA, the gyroscope alone could draw more current than all of the hearing aid's adaptive features combined.
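A quick back-of-envelope comparison makes the budget argument explicit. The sensor and DSP currents below come from the text; referencing the lower bound of the DSP range is an illustrative choice.

```python
# Sensor current as a share of a hearing aid's signal-processing budget.
sensors_uA = {"magnetometer": 50, "gyroscope": 420, "accelerometer": 15}
dsp_low_uA = 350   # lower bound of the 350-550 uA range quoted above

for name, i_uA in sensors_uA.items():
    share = 100.0 * i_uA / dsp_low_uA
    print(f"{name}: {i_uA} uA = {share:.0f}% of a {dsp_low_uA} uA budget")
# gyroscope: 420 uA = 120% -> it can exceed the entire low-end DSP budget,
# while the accelerometer adds only ~4%.
```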

The third type of sensor considered is the accelerometer. As the name implies, this type of sensor measures acceleration in terms of standard gravity G. When compared with the other types of motion sensors, its physical size and power consumption show a clear advantage for hearing aid use. As shown in [Fig. 1], the sensor is quite small, supporting the miniaturization needs of a hearing aid: current motion sensors measure 2 mm × 2 mm × 0.95 mm. Additionally, the power consumption of such a sensor is less than 15 µA. Considering size, power consumption, and measurement robustness, an accelerometer is the best choice for integrating a motion sensor into a hearing aid.

Figure 1 A contemporary accelerometer compared in size to a standard 10A-size hearing aid battery.

State-of-the-art triaxial accelerometers measure the acceleration forces a_x, a_y, a_z along each orthogonal axis x, y, and z in three-dimensional space ([Fig. 2]). In our context, the sensor axes are defined with the hearing aid worn on the ear and the user upright, looking straight ahead at the horizon. The nose of the hearing aid user points in the negative x direction, the y-axis points to the right side parallel to the ground, and the z-axis points upward toward the sky.

Figure 2 Orthogonal axes x, y, and z for measuring acceleration forces.

Even if the hearing aid user is just standing and looking straight ahead, the accelerometer will still measure an acceleration: the earth's gravitational field G, which is 9.81 m/s² [32.17 ft/s²]. Often the raw accelerometer measures are normalized to G. Hence, the accelerometer of a standing hearing aid user would in this case measure (a_x, a_y, a_z) = (0, 0, 1) G. If the user were to lie down on a couch on the left or right side, the accelerometer would read a_y = ±1 G (with a_x = a_z = 0), and similarly, if the user were to lie on the front or back, a_x = ±1 G. In general, if no movement is present, the accelerometer values report the sensor's orientation relative to earth's gravity. Additionally, combining the individual axes, the total amount of acceleration is expressed as a = sqrt(a_x² + a_y² + a_z²). This yields a one-dimensional pattern of movement.
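As a worked example of the static case, the sketch below classifies a posture from a normalized accelerometer reading using the axis convention of [Fig. 2]. The threshold values are illustrative assumptions, not values used in any actual hearing aid.

```python
# Posture classification from a static, G-normalized accelerometer reading.
import numpy as np

def posture_from_accel(a):
    """a = (a_x, a_y, a_z) in units of G, axes as defined in Fig. 2."""
    a = np.asarray(a, dtype=float)
    magnitude = np.linalg.norm(a)          # a = sqrt(ax^2 + ay^2 + az^2)
    if abs(magnitude - 1.0) > 0.2:         # total force far from 1 G
        return "moving (not a static posture)"
    ax, ay, az = a / magnitude
    if az > 0.8:
        return "upright"
    if abs(ay) > 0.8:
        return "lying on the left/right side"
    if abs(ax) > 0.8:
        return "lying on the front/back"
    return "tilted"

print(posture_from_accel([0.0, 0.0, 1.0]))   # upright
print(posture_from_accel([0.0, 1.0, 0.05]))  # lying on a side
print(posture_from_accel([0.3, 0.1, 1.4]))   # moving
```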

More interesting than the static case is measuring the accelerations during the hearing aid user's motion or, more specifically, different patterns of motion. The most practical type of movement detection for hearing aid users is walking, where the changes in the individual axes create a very clear pattern of motion. Changing the type of movement from walking to bicycle riding produces a distinctly different pattern across the axes. [Fig. 3] shows the motion patterns associated with sitting, walking, and cycling.

Figure 3 Variations in motion patterns from gravitational acceleration over time for sitting, walking, and cycling.

In many ways, these movement signals can be viewed analogously to the oscillations of sound waves. Higher amplitudes in a sound wave indicate a more intense sound, and the number of complete cycles per second defines its frequency. Hearing aids are very efficient at identifying variations in the slow-varying amplitude modulation (the temporal envelope) to differentiate primarily between speech and nonspeech signals. This type of acoustic detection has been successfully extended to identify other listening environments such as music and car engines. In the same way acoustic sensors classify sounds as speech or nonspeech, the accelerometer can detect changes in motion to identify different patterns of movement. [Fig. 4] illustrates the difference in waveforms between a spoken vowel and a car engine as well as differences in motion waveforms between walking and sitting. Just as traditional hearing aid processing can accurately identify the differences between the acoustic signals (see the article by Hayes in this issue), it can now accurately identify the differences between the motion signals.
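Following the analogy, a minimal walking detector can look for a dominant rhythm in the acceleration magnitude, much as a modulation analyzer looks for a dominant rate in a temporal envelope. The sketch below is an illustrative simplification (a single FFT over a 10-second window, an assumed human-motion band of 0.5 to 5 Hz, and synthetic data), not a production algorithm.

```python
# Rhythm detection in the acceleration magnitude: walking shows a strong
# ~1.5-2.5 Hz step rhythm; sitting shows almost none.
import numpy as np

def dominant_motion_rate(acc_xyz, fs):
    """Dominant repetition rate (Hz) of the acceleration magnitude;
    returns 0.0 when no clear rhythm is present. acc_xyz: (N, 3) in G."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    mag = mag - mag.mean()                    # remove the static 1-G part
    spec = np.abs(np.fft.rfft(mag))
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    band = (freqs >= 0.5) & (freqs <= 5.0)
    if spec[band].max() < 5.0 * spec.mean():  # no peak clearly above noise
        return 0.0
    return freqs[band][np.argmax(spec[band])]

fs, secs = 50, 10
t = np.arange(fs * secs) / fs
walking = np.column_stack([0.10 * np.sin(2 * np.pi * 2.0 * t),        # sway
                           0.05 * np.random.randn(len(t)),            # jitter
                           1.0 + 0.30 * np.sin(2 * np.pi * 2.0 * t)]) # steps
sitting = np.column_stack([np.zeros_like(t), np.zeros_like(t),
                           1.0 + 0.01 * np.random.randn(len(t))])
print(dominant_motion_rate(walking, fs))   # ~2.0 (steps per second)
print(dominant_motion_rate(sitting, fs))   # 0.0 (no rhythm)
```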

Figure 4 Comparison of sound and accelerometer patterns over time. Top row: Sound pressure amplitude for sounds of a vowel /u/ and a car noise. Bottom row: Acceleration for a walking and a sitting scene.

As mentioned previously, the hearing aid will often change processing characteristics based on environmental acoustics. For example, in a speech-in-noise listening environment, it is very common to apply directionality to help suppress competing background noise and focus on target speech from the front. However, with the addition of the accelerometer, motion can be included in the classification of the listening environment. By identifying a movement pattern, the hearing aid can factor those data into the decision process, providing a more accurate estimation of the hearing aid user's needs. For example, in a situation where directionality might otherwise be applied, if the listening environment is classified as speech-in-noise with walking, the hearing aid can reduce the amount of directionality to allow for more environmental awareness (see the article by Jespersen et al in this issue for a discussion about the importance of environmental awareness).
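Conceptually, this fusion can be pictured as a small decision table: the acoustic class proposes a directionality strength, and a detected walking state caps it. The class names and numeric values below are invented for illustration and do not reflect any manufacturer's actual steering logic.

```python
# Toy fusion of acoustic classification and motion state (illustrative).
def directionality_strength(acoustic_class, walking):
    """Return a beamformer strength in [0, 1] (0 = omni, 1 = full beam)."""
    table = {
        "quiet":           0.0,
        "speech_in_quiet": 0.1,
        "noise":           0.5,
        "speech_in_noise": 0.9,
    }
    strength = table.get(acoustic_class, 0.3)
    if walking:
        # Trade some SNR benefit for environmental awareness and safety.
        strength = min(strength, 0.3)
    return strength

print(directionality_strength("speech_in_noise", walking=False))  # 0.9
print(directionality_strength("speech_in_noise", walking=True))   # 0.3
```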

By measuring the three axes (x, y, and z), the accelerometer is efficient at identifying several characteristics of motion ranging from multiple stationary positions to several types of movement. All of these measures can be useful for hearing aid function and improved classification of a listening environment.



Evidence

To date, there has been little published research on the use of motion sensors in hearing aids. This is primarily due to motion sensors being incorporated only recently into hearing aids. Froehlich et al[5] reported on the use of an accelerometer integrated into a hearing aid platform. Two separate investigations were conducted to examine both laboratory data and real-world data as measured via ecological momentary assessment (EMA).

In the laboratory trial, 13 participants (7 females, 6 males) with bilateral, symmetrical, downward-sloping mild-to-moderate hearing loss were seated in an acoustic scene designed to simulate a hearing aid user walking along a street with two conversation partners. The ambient street noise was presented at 65 dBA from azimuths of 45, 135, 225, and 315 degrees. With the participant facing the 0-degree azimuth, speech from a male talker was presented from 110 degrees and from a female talker from 250 degrees ([Fig. 5]). Speech from both conversation partners was presented at 68 dBA. The hearing aids were configured in motion sensor "on" and "off" conditions. In the "off" condition, the motion sensor was deactivated, meaning that the hearing aids could only make soundscape determinations based on acoustic information alone. As expected, this scenario represents a speech-in-noise listening environment for which noise reduction and directionality would both be applied. In the "on" condition, the accelerometer was forced into an active state. Because the participant was seated in the center of the loudspeaker array, actual motion was limited; therefore, the hearing aid was configured to react as if the motion sensor had detected that the participant was walking. The expected hearing aid behavior would be to reduce directionality in response to the incoming motion data.

Figure 5 Laboratory speaker setup for traffic scenario. Background traffic noise was presented from 45-, 135-, 225-, and 315-degree azimuths at 65 dBA SPL with male and female speech from 110- and 250-degree azimuths, respectively, at 68 dBA.

The participants were asked to rate both speech understanding and perceived listening effort for speech from the conversation partners for both the accelerometer "on" and "off" conditions. For both measures, ratings were significantly better in the "on" condition (p < 0.05; see [Fig. 6]). Recall that in a real-world experience, the user would be in motion to trigger the same "on" response from the accelerometer, and target speech would likely not be directly in front of the user while walking. These laboratory results support the efficacy of adding a motion sensor to the hearing aid's classification system so that directionality can be advantageously reduced to benefit the user in specific listening environments.

Besides the subjective evaluation, the effect of the motion sensors can be demonstrated with technical measures. [Fig. 7] displays a measured polar pattern of a monaural beamformer from a hearing aid placed on the left ear. The method is based on the Hagerman and Olofsson[6] approach and uses the extension by Aubreville and Petrausch.[7] The green curve (color online) depicts an omnidirectional characteristic at the KEMAR head for a frequency of 1 kHz. The 0-degree azimuth is the looking direction, and the levels are normalized to that direction. Since the hearing aid is placed on the left side of the head, the right side is attenuated by the head shadow. This pattern is appropriate when there is no background noise. In contrast, the red curve shows the directional pattern of a fully active monaural directional beamformer. Sound sources to the back and sides are attenuated, while there is only negligible attenuation in the main viewing direction. For maximum speech intelligibility, this beam pattern should be applied when there is a direct-facing conversation partner and the environment is very noisy.

Between these two extreme cases (omnidirectional and monaural directional beamformer characteristics), the hearing aid has to automatically select the best settings by combining traditional acoustic analyses with analyses of the user's motion behavior. The blue curve shows the directional pattern for a moderately noisy environment when motion data are not considered. The black curve, on the other hand, was measured in the identical acoustic scene but with motion data being considered. The acoustic effect is that the left frontal hemisphere is less attenuated and approaches the omnidirectional characteristic. The assumed benefits for the hearing aid user are more spatial awareness while walking and more directivity to support better speech understanding while not walking. The hearing aid handles this decision-making automatically. Furthermore, by incorporating the walking decision into the hearing aid settings, the traditional paradigm of "my conversation partner is always in front of me" is extended to include other communication situations. For example, during a walk, the conversation partners are generally to the side. In this situation, the method described here yields a 3-dB SNR improvement compared with a classical approach without a motion sensor ([Fig. 7]).
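The family of patterns in [Fig. 7] can be approximated with the classic first-order model |(1 - b) + b·cos(θ)|, where the parameter b steers between an omnidirectional response (b = 0) and a cardioid (b = 0.5). The sketch below blends toward omni when walking is detected; the specific b values are assumptions for illustration, and real head-worn patterns additionally include head-shadow effects.

```python
# First-order polar pattern family, blended by motion state (illustrative).
import numpy as np

def first_order_pattern(theta_deg, beam):
    """|(1 - beam) + beam*cos(theta)|: beam = 0 is omnidirectional,
    beam = 0.5 a cardioid with its null at 180 degrees."""
    theta = np.radians(theta_deg)
    return np.abs((1.0 - beam) + beam * np.cos(theta))

angles = np.array([0, 90, 180, 270])
for label, beam in [("stationary, noisy (directional)", 0.5),
                    ("walking, noisy (relaxed)", 0.2)]:
    print(label, np.round(first_order_pattern(angles, beam), 2))
# The relaxed pattern passes more sound from the sides and rear,
# supporting spatial awareness while walking.
```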

Figure 6 Mean ratings and 95% confidence intervals for both speech understanding and listening effort. The 7-point scale ranged from 1 = strongly disagree to 7 = strongly agree. The acoustic scene was background traffic noise with speech coming from the 110- and 250-degree azimuths at a +3 dB SNR. See [Fig. 5] for the speaker configuration. (From Froehlich et al,[5] used with permission.)
Figure 7 Polar patterns measured on the left ear of KEMAR for four conditions: automatic with no movement, full monaural directional, omnidirectional, and automatic walking.

Along with the laboratory data, Froehlich et al reported real-world outcomes using EMA data. EMA was conducted at multiple locations with background noise present, including home, buildings, public transportation, and outdoor settings. Other listening environments included outside in quiet as well as both standing and walking on a busy street. Participants evaluated the listening conditions for their experiences in terms of loudness, sound quality, speech understanding, listening effort, naturalness, direction of a sound, distance of a sound, and overall satisfaction. Participants rated their understanding in the environments on a 9-point scale ranging from 1 = nothing to 9 = everything, with the middle rating being 5 = sufficient.

Results for understanding while moving in the three background noise conditions in home, building, and outdoor settings were evaluated. Ratings of 6 ("rather much") or higher were considered indicative of understanding speech in those environments. Ratings met this benchmark 80% of the time for all three listening environments, with movement in the home receiving the highest ratings.

Another condition evaluated in the study was the listening environment of having a conversation while walking along a busy street. Considering communication and safety needs, this condition might be highly relevant to motion sensor use. As part of the study's EMA portion, participants in this busy street condition were asked the following questions: Is the listening environment natural? Is the acoustic scene perception appropriate? What is the overall satisfaction for speech understanding?

Results of this EMA portion showed over 90% "yes" or "mostly yes" responses for naturalness, 100% "yes" or "mostly yes" responses for appropriate perception of the acoustic scene (orientation), and 88% satisfaction with speech understanding in this condition.

Study results in both laboratory and real-world conditions demonstrate the efficacy and effectiveness of the motion sensor as part of a hearing aid's classification process.



Summary

The use of motion sensors, in particular accelerometers, in hearing aids is a relatively new technological feature. Power consumption and size are primary factors in considering any new feature for hearing aid use. These factors are especially relevant for motion sensors. With a discrete size and low power consumption, accelerometers are an optimal technology for integrating into hearing aids. With the addition of motion sensors, hearing aids have more data in the decision process for anticipating the hearing aid user's listening requirements.

Motion sensors integrated into hearing aids show promise for uses beyond standard hearing benefit. An application for fall detection has already seen some use in the marketplace (see the article by Fabry and Bhowmik in this issue for more details on this application of motion sensors). But the benefits can go beyond this. The stable placement on the ear and the ability to detect various patterns of movement offer applications in health care. One possible application of a motion sensor is tracking the general head position over the day. In doing so, one can determine how long or how often the user is in an upright position or quantify the resting times on each side. The simple knowledge of how long a family member or loved one has been stationary can help in providing care when not at home. Additionally, using a motion sensor to track movement, such as walking time, can help monitor one's health. These applications are secondary to the hearing aids' primary goal of helping the user hear better but can support overall health.

Hearing aids have traditionally relied on acoustic sensors to identify the user's listening needs in a given listening environment. However, as indicated, reliance solely on the acoustic soundscape is insufficient for addressing all of the user's listening needs. Given the example of a restaurant situation with background noise composed of conversations, dishes, people moving, heating and air conditioning systems, and other unexpected sounds, different listening needs can be identified. One patron wearing hearing aids and sitting with a guest at a table has a primary need to hear and understand the conversation partner. A waiter wearing hearing aids walking in the same restaurant has completely different needs. For both users, the acoustic soundscape is virtually identical, but the additional factor of the user's motion is a critical difference for the hearing aids' performance goals. Extending beyond this example, multiple listening environments can be better defined by considering motion. With the introduction of movement as detected by the accelerometer, the hearing aid can provide a more appropriate configuration of noise processing settings in anticipation of the hearing aid user's primary listening needs.



Conflict of Interest

None declared.

References

1. Picou EM. MarkeTrak 10 (MT10) survey results demonstrate high satisfaction with and benefits from hearing aids. Semin Hear 2020;41(01):21-36
2. Picou EM, Ricketts TA. An evaluation of hearing aid beamforming microphone arrays in a noisy laboratory setting. J Am Acad Audiol 2019;30(02):131-144
3. Powers T, Froehlich M. Clinical results with a new wireless binaural directional hearing system. Hearing Review 2014;21(11):32-34
4. Wu Y-H, Stangl E, Bentler RA, Stanziola RW. The effect of hearing aid technologies on listening in an automobile. J Am Acad Audiol 2013;24(06):474-485
5. Froehlich M, Branda E, Freels K. New dimensions in automatic steering for hearing aids: clinical and real-world findings. Hearing Review 2019;26(11):32-36
6. Hagerman B, Olofsson Å. A method to measure the effect of noise reduction algorithms using simultaneous speech and noise. Acta Acust United Acust 2004;90(02):356-361
7. Aubreville M, Petrausch S. Directionality assessment of adaptive binaural beamforming with noise suppression in hearing aids. In: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE; 2015:211-215

Address for correspondence

Eric Branda, Au.D., Ph.D.
10 Constitution Avenue, Piscataway, NJ 08854

Publication History

Article published online:
24 September 2021

© 2021. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA

