Keywords
mild bilateral hearing loss - unilateral hearing loss - children - speech perception
Prologue
Some would consider my introduction to Boys Town National Research Hospital (BTNRH)
and Pat Stelmachowicz in the 1980s serendipitous. As a graduate student in Audiology,
I attended an ASHA convention with a group of fellow students with the intention of
interviewing for Clinical Fellowship Year (CFY) positions at the upcoming end of our
program of study. As I stood waiting for an opportunity to interview with potential
sites, I saw an interviewer who was available and noted that the Boys Town Institute
for Communication Disorders in Children in Omaha, NE (as it was named in those days)
was interviewing for CFY positions in Audiology. I spoke to Dr. Michael Gorga, who
was conducting interviews for the position. After our interview, he said he would
like me to also speak with his colleague at the meeting. I later met with that colleague,
Dr. Pat Stelmachowicz. An experience that would shape my professional life began when
I was offered and accepted a CFY position. My intention was to stay for a few years
and move on. Today, as the end of my professional career is much closer than the beginning,
I am still at Boys Town National Research Hospital and I can say without any doubt
that Pat Stelmachowicz played one of the major roles in the path my professional life
has followed.
Pat Stelmachowicz was one of the leading researchers in the area of amplification
for children with hearing loss. Her extensive history of NIH-funded research and research
publications are evidence of her significant contributions to the field of pediatric
audiology. Those records tell only part of the story, though. Pat also was an amazing
leader, mentor, collaborator, and friend. She encouraged audiologists (of whom I am
thankful to include myself) to be involved in research and research assistants to
become actively involved in the work being conducted in her lab. I was given the opportunity
to work part-time in her lab and, as an audiologist, was a co-author on many papers
with her within a few years of starting at BTNRH. It is one of the great honors of
my career that I was able to work for and with her throughout much of our respective
careers.
The results of Pat's work, including her many collaborations, have been impactful
for researchers, clinicians and families, and manufacturers of hearing aids. Working
with clinicians in the Audiology Department at BTNRH gave her continued insight into
the questions in pediatric audiology that needed answers. Pat took those questions
to the lab, conducting well-thought-out and rigorous research, the outcomes of which
could be translated back to clinical services. Through her example as a researcher
and her guidance as a mentor, she taught us how to do translational work well so
that it would positively impact the outcomes for children with hearing loss and their
families. Work examining the impact of high-frequency audibility and the effects of
amplification on speech understanding for children led to changes in pediatric hearing-aid
fitting strategies and in hearing-aid signal processing to improve access to the speech
signal and optimize amplification for infants and young children. An important goal
of her work was always to improve access for children with hearing loss in the real
world. She inspired those goals in so many of us who worked with her over the course
of her career.
It has been important to me to take the many lessons I learned from Pat and use them
to inform clinical practice and research going forward. We continue to need research
that examines how children with hearing loss understand speech in complex listening
environments representative of those they encounter in daily life and to work to
improve their real-world communication access. This is especially true
for children who have typically been underserved, including those with mild bilateral
or unilateral hearing loss.
The environments in which children communicate during daily life are complex. In those
environments, children's speech understanding can be impacted by many factors, including
ones related to talker(s), the message(s) to be understood, the acoustic environment
in which speech is being presented, and the listeners themselves.[1][2][3][4][5] At the same time, children are still developing auditory skills that are important
for speech understanding.[6][7] These many factors are likely to interact in complex and dynamic ways during listening
and understanding.
Hearing loss can reduce auditory input for children, potentially impacting the development
of auditory and speech/language skills that support learning.[8][9][10][11][12][13][14][15] Nearly 15% of children between 6 and 16 years of age in the United States have hearing
loss >16 dB HL in one or both ears.[16] Assuming prevalence rates of at least 5%,[16][17] a minimum of 2.7 million children in public and private schools from kindergarten
to 12th grade may be impacted by mild bilateral or unilateral hearing loss (UHL).[18] Audiologically, mild bilateral hearing loss (MBHL) has been broadly defined as a
three- or four-frequency pure-tone average from >15 to <45 dB HL, or thresholds >25 dB
HL at one or more frequencies above 2 kHz, in both ears. UHL has been defined as a
three-frequency pure-tone average >20 dB HL or thresholds >25 dB HL at one or more
frequencies above 2 kHz in the poorer ear, with thresholds ≤15 dB HL in the better ear.[17][19]
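To make these criteria concrete, the following sketch applies them to a pair of audiograms.
It is an illustration only, not a clinical tool: the helper names are hypothetical, thresholds
are given in dB HL keyed by frequency in kHz, and the better-ear rule for UHL is simplified
to the pure-tone average.

    # Illustrative sketch of the MBHL/UHL definitions above (not a clinical tool).
    def pta(thresholds, freqs=(0.5, 1.0, 2.0)):
        """Three-frequency pure-tone average in dB HL."""
        return sum(thresholds[f] for f in freqs) / len(freqs)

    def high_freq_loss(thresholds):
        """True if any threshold above 2 kHz exceeds 25 dB HL."""
        return any(level > 25 for f, level in thresholds.items() if f > 2.0)

    def classify(left, right):
        """Apply the broad MBHL/UHL criteria cited in the text."""
        def meets_mild(ear):
            return 15 < pta(ear) < 45 or high_freq_loss(ear)
        if meets_mild(left) and meets_mild(right):
            return "MBHL"
        for poorer, better in ((left, right), (right, left)):
            # Better-ear criterion simplified here to PTA <= 15 dB HL.
            if (pta(poorer) > 20 or high_freq_loss(poorer)) and pta(better) <= 15:
                return "UHL"
        return "neither"

    # Example: mild loss in the left ear only, normal hearing on the right.
    left = {0.5: 25, 1.0: 30, 2.0: 30, 4.0: 35}
    right = {0.5: 10, 1.0: 10, 2.0: 15, 4.0: 15}
    print(classify(left, right))  # -> "UHL"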
Research has shown that children with MBHL or UHL can exhibit poorer speech recognition
than peers with normal hearing (NH) in noise and reverberation.[9][20][21][22][23][24][25] As with children who have greater degrees of hearing loss, the ability of children
with MBHL or UHL to understand speech across environments with varying acoustic characteristics
may impact other areas of development including academics, speech/language, and social/emotional
skills.[8][9][14][24][26][27][28][29][30][31][32][33]
Children with MBHL and UHL often are grouped together under the category of minimal/mild
hearing loss.[17][34] However, these two groups of children represent heterogeneous populations who may
demonstrate similar performance (e.g., poor speech understanding in complex listening
conditions), but for whom the underlying mechanisms for these deficits are likely
to be different. For example, in multi-source environments, binaural cues can help
listeners locate and separate auditory signals and improve their ability to hear and
understand target signals in background noise.[35][36] For children with UHL, benefits of binaural processing are reduced, with potential
negative effects on speech recognition.[37][38][39] In the same environments, children with MBHL have reduced access to signals from
both ears. Early work by Davis and colleagues[28] revealed that children with MBHL may not differ from children with greater degrees
of loss in some areas of language or academic achievement. Because children with MBHL
may not use hearing aids or use them inconsistently,[40][41] audibility in complex listening conditions could be more similar to that of children
with greater degrees of hearing loss who are using amplification than to children
with NH, with negative consequences on functional outcomes. For both groups, reduced
access to speech signals may impede the listener's ability to follow conversations
with multiple talkers or speech from a distance, or to access speech via overhearing.
Understanding how MBHL and UHL interact with speech understanding in complex
listening/learning environments is essential to maximizing communication access for
these children.
Clinical audiology and laboratory speech-recognition measures that present percent-correct
performance with a single talker in a background of steady-state noise or multi-talker
babble without visual cues can be useful diagnostic tools, but they do not represent
the complex listening tasks children with MBHL and UHL regularly encounter.[21][22][23][42] To address speech understanding in complex environments, tasks that more closely
assess the demands of these environments and the effects of hearing loss are needed.
In such environments, children with MBHL or UHL may need to expend more cognitive
effort than their peers with NH. Given a finite capacity for attending to and processing
auditory information,[43] increases in the cognitive effort allocated to processing acoustic aspects of the
speech signal will leave children with MBHL or UHL fewer resources for other cognitive
processes that are important for speech understanding and learning.
Listening Effort
Listening effort has been defined as the cognitive resources needed for listening
tasks, including understanding speech.[44][45] Measures of listening effort may be able to provide information about speech understanding
in adverse environments that is not evident from measures of speech recognition alone.
Listening effort has been measured using self-reports, as well as behavioral and physiological
measures.[45] Results from tasks examining listening effort in children with NH and children with
hearing loss have been mixed.[46][47][48][49][50][51][52][53][54][55][56][57][58]
One behavioral measure that has been used to address listening effort in children[45] is the dual-task paradigm. In this paradigm, a listener is asked to maintain performance on a primary task
(e.g., speech perception) while simultaneously performing a secondary task (e.g.,
reacting to a flashing light). As the primary task becomes more difficult and the
listener attempts to maintain their level of performance, decreases in secondary-task
performance are interpreted as an increase in listening effort due to limited cognitive resources.
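A minimal sketch may make the logic of the paradigm concrete. The fragment below uses
hypothetical data and function names (it is not drawn from any of the studies reviewed here)
and compares mean secondary-task reaction times between single- and dual-task conditions;
slower dual-task responses are read as increased listening effort.

    # Minimal sketch of dual-task scoring (hypothetical data and names).
    # The secondary task is a button press to a light; reaction times in ms.
    from statistics import mean

    def effort_index(single_task_rts, dual_task_rts):
        """Mean secondary-task reaction time, dual minus single (ms).

        Under the limited-resource account described in the text, a positive
        value suggests that the primary (listening) task is drawing resources
        away from the secondary task, i.e., greater listening effort.
        """
        return mean(dual_task_rts) - mean(single_task_rts)

    # Secondary-task RTs measured alone vs. while repeating words in noise.
    rt_alone = [310, 295, 330, 305, 320]
    rt_dual = [380, 410, 395, 372, 401]
    print(f"effort index: {effort_index(rt_alone, rt_dual):.1f} ms")  # 79.6 ms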
Only a few studies have used dual-task paradigms to examine listening effort in children
with MBHL or UHL. Hicks and Tharpe[49] conducted a study of children who were hard of hearing that included children with
mild bilateral and high-frequency hearing losses. The dual-task paradigm included
word recognition in quiet and noise (primary task) and a button push for a randomly
presented light (secondary task). Although results for the primary task decreased
somewhat as conditions became more difficult during the dual-task presentations, mean
scores were reported to be at or above 85%. Results showing longer reaction times
for the secondary task for children with hearing loss versus those with NH across
all conditions suggested that the children with hearing loss could be expending more
effort during the dual task. However, the absence of significant changes in reaction
time for either group with changes in signal-to-noise ratio (SNR) suggested that those
changes to the acoustic conditions may not have increased relative difficulty for
the primary task.
McFadden and Pittman[53] tested children with NH and children with MBHL or UHL in quiet and noise using word
categorization as the primary task and completion of a dot-to-dot game as the secondary
task. The children with hearing loss showed a small but statistically significant
decrease in primary task performance during the dual-task conditions in noise when
compared to the single task in quiet. However, neither these children nor those with
NH showed changes in the secondary task. The authors suggested that the children with
MBHL or UHL may not have realized that their performance on the primary task was decreasing.
Given their high levels of performance on the primary task (from 100% for baseline
to 94% in the highest levels of noise), the lack of a perceived or actual need for
increased effort is a possibility. Alternatively, the authors suggested that children
may have focused more on the secondary task, perceiving it as easier than the primary
task during dual-task performance.
As suggested by the studies discussed earlier, the population being studied as well
as the complexity of the primary and secondary tasks in dual-task paradigms may play
a role in the measured impact on cognitive effort. Three studies, using similar primary
and secondary tasks, illustrate this possibility in a dual-task paradigm. Stelmachowicz
and colleagues[57] used a dual-task paradigm to assess listening effort in a study examining the effects
of stimulus bandwidth on the auditory skills of children with NH and children with
mild-moderately severe hearing loss (primary task: word recognition; secondary task:
digit recall). Results revealed a decrement in performance on the secondary task in
the dual- versus single-task conditions for both groups. However, there was no effect
of the varying bandwidth for the primary-task stimuli on secondary-task performance
for either group, suggesting that listening effort did not change as access to auditory
information in the speech signal changed. Choi et al[47] used the same two tasks to examine the ability of children with NH to switch attention
between primary and secondary tasks. Thus, in the study of Choi et al, only the instructions
to participants varied. They found that allocation of attention to tasks during the
dual-task paradigm did not change regardless of which was supposed to be primary or
secondary. Choi et al suggested that the children in their study may have focused
on the easier task (speech recognition) during dual tasks, regardless of instructions.
Howard et al[50] also used word recognition as the primary task and digit recall as the secondary
task in a study examining listening effort in children with NH at varying noise levels
intended to represent classrooms. Children were able to maintain performance in the
primary task in the dual-task condition but showed a decrease in performance on the
secondary task as SNR decreased. Howard et al hypothesized that better acoustic conditions
in the previous studies may not have led to the increased listening effort they found
in their study.
Verbal response time is another behavioral measure that has been used to assess the cognitive effort expended
by children during speech perception tasks, with good success.[48][52][54] As the time to process incoming speech increases, fewer cognitive resources may
be available to process ongoing input. Verbal response time typically is reported
as the time between the presentation of the speech stimulus and the listener's spoken
response. As conditions become more difficult (e.g., decreases in stimulus bandwidth),
increases in verbal response times are taken as an increase in the cognitive effort
required to process and repeat the stimuli.
Few studies to date have evaluated listening effort using verbal response time in
children with MBHL or UHL, compared to peers with NH.[51][55] Lewis et al[51] had children repeat speech stimuli with three levels of linguistic complexity (vowel–consonant–vowels
for which only the consonant changed, words, and sentences) at three SNRs (5, 0, −5 dB).
Speech recognition was measured as percent correct. Verbal response time was assessed
in two ways. Onset time was reported as the delay from the end of the stimulus to
the initial vocalization. Total duration of correct responses was measured as the
onset time plus the length of the utterance (time from the initial vocalization to
the end of the response). The latter measure was included to assess the time required
to process the entire stimulus and complete a spoken response. Speech recognition
results revealed that performance for both groups improved with increasing SNR. Children
with NH performed better than children with MBHL or UHL for consonants and sentences
but not for words. The pattern of verbal response times was complex with multiple
interactions. However, both onset time and total duration increased as SNR decreased,
suggesting increased effort. There was no effect of hearing status on onset time or
total duration, suggesting that the children with MBHL or UHL were not expending greater
effort than their peers with NH to perform these tasks. It is possible that the stimuli
and/or the noise levels were not sufficiently difficult to result in differences in
effort across groups.
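As an illustration of these two timing measures, the sketch below derives them from event
times recorded during a trial. The timestamp fields are assumptions for illustration; this
is not the analysis code from the study.

    # Sketch of verbal response-time measures from trial timestamps (seconds).
    from dataclasses import dataclass

    @dataclass
    class Trial:
        stimulus_end: float    # time the speech stimulus finished playing
        response_onset: float  # time of the listener's initial vocalization
        response_end: float    # time the spoken response finished

    def onset_time(t: Trial) -> float:
        """Delay from stimulus offset to the start of the verbal response."""
        return t.response_onset - t.stimulus_end

    def total_duration(t: Trial) -> float:
        """Onset time plus the length of the utterance itself."""
        return onset_time(t) + (t.response_end - t.response_onset)

    trial = Trial(stimulus_end=2.40, response_onset=3.15, response_end=4.05)
    print(f"onset time: {onset_time(trial):.2f} s")          # 0.75 s
    print(f"total duration: {total_duration(trial):.2f} s")  # 1.65 s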
Oosthuizen et al[55] tested children with unaidable UHL and children with NH using a triple-digit recognition
task in quiet and in noise (−12 dB SNR) with speech presented from three loudspeaker
locations (midline, direct, indirect). For listeners with UHL, direct represented
a loudspeaker placed toward the ear with NH and indirect represented one toward the
ear with hearing loss. Differences in verbal response times across conditions were
taken as differences in listening effort. There were no differences in performance
across groups in quiet; in noise the children with UHL performed more poorly than
those with NH for all location conditions, with the greatest difference for indirect
presentation. Verbal response times also did not differ across groups in quiet. In
noise, verbal response times were longer for children with UHL for the indirect presentation.
These results support increased listening effort for children with unaidable UHL under
some listening conditions.
It is also possible that for some tasks, verbal response time may be a more sensitive
measure of listening effort than dual-task paradigms. McGarrigle et al[54] examined the effect of the measurement tool used to assess listening effort in children
with hearing loss. For the dual task, they used speech recognition (primary) and response
time to a visual stimulus (secondary). The time to onset of the verbal response for
the speech task in isolation represented verbal response time. Testing was completed
at three SNRs selected to represent easy, moderate, and hard acoustic conditions.
Results revealed no differences across acoustic conditions or groups for the secondary
task in the dual-task paradigm. Verbal response times, on the other hand, were slower
at poorer SNRs and for children with hearing loss relative to children with NH.
Comprehending Speech in Complex Environments
The effort required both to listen to and recognize speech signals in complex conditions
may leave children with MBHL or UHL with fewer resources for higher level cognitive
processing relative to children with NH. Griffin et al[58] reported that under challenging acoustic conditions, children with UHL performed
more poorly than peers with NH on a comprehension task when the SNR for testing each
participant had been personalized for the same level of performance (50%) for sentence
recognition. However, they did not find differences between groups on comprehension
in quiet or at an SNR chosen to represent typical levels in classrooms.
Classrooms are a common listening environment for children. During their early school
years, children are still developing skills needed to understand speech in noise and
reverberation[59][60][61] while, at the same time, they are attempting to listen and learn in environments
where acoustics may be poor[62][63][64][65][66] and communication needs are complex. For children with NH in classroom environments,
studies have demonstrated greater effects of acoustic conditions on comprehension
and short-term memory than on speech recognition.[67] Children have greater difficulty than adults attending to speech in the presence
of other sounds, particularly other speech.[68] In classrooms, multiple talkers often are in different locations, resulting in unpredictable
auditory and visual information; some of the talkers will distract from and/or mask
target speech, negatively impacting understanding.[69][70][71]
Auditory and visual information from talkers interact during speech understanding.
Adding visual information has been shown to help adults and children with and without
hearing loss locate talkers in space and improve speech intelligibility over auditory
input alone.[72][73][74][75][76][77] However, the addition of visual information also may negatively impact speech understanding
under some conditions, such as when cognitive demands are high.[78][79] Recent work by Al-Salim et al[80] examined speech recognition in children with NH, MBHL, UHL, and children who used
cochlear implants using an audiovisual non-word recognition task. They predicted that
access to visual cues would result in comparable performance for the children with
NH, MBHL, and UHL but that the children who used cochlear implants would perform more
poorly than the other groups. Children who used cochlear implants did perform more
poorly than those with NH. They also performed more poorly than the children with
MBHL, who did not differ from children with NH or children with UHL. The children
with UHL performed more poorly than the children with NH but, unexpectedly, did not
differ from the children who used cochlear implants. Further research is needed to
examine possible underlying mechanisms for the audiovisual deficits found in the children
with UHL.
Because reduced auditory access may influence both developing auditory skills and
the linguistic processes on which children with hearing loss need to rely, these children
may depend on visual cues to a greater extent than children with NH during real-world
listening tasks. As a result, their looking behaviors during such tasks may differ
from those of their peers with NH. During two-person communication tasks, Sandgren
et al[81] reported that children who were hard of hearing spent more time looking at the faces
of their partners with NH than the partners spent looking at their faces. Additionally,
in an examination of head orientation in classroom settings, Ricketts and Galster[70] found that children with hearing loss turned toward short utterances of non-target
talkers more often than did children with NH. Directing attention to the wrong talker
could have negative consequences for the child's ability to follow conversations or
to use visual cues to support speech understanding.
Knowledge of both auditory and visual factors that can affect children with MBHL or
UHL in complex listening conditions like classrooms can enhance our understanding
of their impact on auditory and multimodal skills necessary for listening in these
environments. To evaluate speech understanding in challenging multi-talker audiovisual
situations such as those children will encounter, we developed tasks to simulate plausible
complex listening conditions with both auditory and visual input. Using these tasks,
speech understanding was examined in children with NH and children with MBHL and UHL.
Classroom Comprehension Task
Initial studies evaluated the effect of acoustic environment on speech understanding
during simple (speech recognition) and complex (comprehension) activities in a simulated
classroom environment.[82][83] Loudspeakers and video monitors were arranged around a participant's location in
the center of the listening space. To assess comprehension, participants listened
to lines from an age-appropriate play read either by a teacher and four students reproduced
over the monitors and loudspeakers located around the listener or by the teacher located
at 0 degrees azimuth relative to the listener. Looking behavior was monitored during
the multi-talker comprehension task. To assess speech recognition, participants were
asked to repeat meaningful sentences presented auditory-only by a single talker either
from the loudspeaker at 0 degrees azimuth or randomly from the five loudspeakers.
Testing first was completed with children (8–12 years) and adults with NH in noise
and reverberation typical of classroom conditions.[82] Half completed the single-talker comprehension task and half the multi-talker comprehension
task. Children performed significantly more poorly than adults in both single- and
multi-talker comprehension conditions, and scores were significantly lower for the
multi-talker condition than for the single-talker condition. In contrast, for the
speech-recognition task, all participants scored above 95% correct, with no significant
differences across age or listening condition. Looking behaviors revealed that children
looked around more than adults during the multi-talker comprehension condition.
Although the initial acoustic conditions resulted in near-ceiling performance for
the sentence recognition task, they had been chosen to allow examination of the differential
effects of the two tasks under typical acoustic conditions found in regular classrooms.
Effects of acoustic condition were further examined with a second group of participants.
Children (8 and 11 years) and adults with NH performed the speech recognition and
either the single- or multi-talker comprehension task as in the first study but under
more adverse acoustic conditions (data from the first experiment represented the favorable
acoustical environment). Although scores on the speech-recognition task decreased
for all listeners in the more adverse acoustical environments, all listeners scored
above 82%. For comprehension, performance in the multi-talker condition decreased
for all age groups as acoustics became poorer. Younger children performed more poorly
than older children and adults in both comprehension conditions and in all acoustic
environments, suggesting that they may be expending greater effort during the task.
Younger children also looked around more than older children and adults. However,
looking increased for all listeners in the most adverse acoustic condition, suggesting
that adults may also use this strategy under difficult listening conditions. Overall,
there was wide variability in looking behaviors during the task with no significant
relationship between looking behavior and comprehension.
In the first two experiments, participants were allowed to look around as much or
as little as they felt would help them during the task. We also wanted to examine
if a requirement to look would impact performance for adults relative to children.
Children (8–11 years) and adults with NH participated in modified versions of the
above tasks.[83] During the comprehension task in noise and reverberation, participants were instructed
to attempt to look at each talker as they spoke (looking required). Results were compared
to those of children and adults from the previous study for whom looking was not required.
Results revealed that adults and 11-year-olds for whom looking had not been required
performed better than those for whom looking was required. There were no other age-group
differences across the two conditions. The previous study had not examined looking
behavior during the auditory-only speech recognition task, so looking effects on speech
recognition were examined with the SNR modified to avoid potential ceiling effects.
Participants were instructed to locate the loudspeaker presenting the sentences for
half of the presentations and to look straight ahead for the other half. Adults exhibited
better speech recognition than children, but there were no effects of looking behavior.
Together, the above studies suggest that both age and task may impact looking behavior
and speech understanding in adults and children with NH. Acoustic effects had fewer
consequences for sentence recognition than for tasks requiring comprehension and recall.
Findings related to looking behaviors and their impact on comprehension were mixed.
When given the option, children tended to look more than adults. Initial results suggested
that these behaviors did not impact comprehension. However, when older children and
adults were required to look, their performance on the comprehension tasks was poorer
than that of same-age participants for whom looking was not required.
Classroom discussions could pose greater difficulties for children with MBHL or UHL
than for those with NH. To address this issue, Lewis et al[42] used the simulated classroom tasks described earlier to evaluate a group of 8- to
12-year-olds with MBHL or UHL and a matched group of children with NH in the multi-talker
comprehension and speech recognition tasks. It was hypothesized that children with
MBHL or UHL would perform more poorly than those with NH on the comprehension task
but not on the speech recognition task. It also was hypothesized that they would attempt
to look at talkers more during the comprehension task.
Performance for the speech recognition task was high for both groups, with all except
two children with hearing loss having scores of ≥89% correct. Results for the comprehension
task revealed significant effects of both hearing status and age; children with MBHL
or UHL performed more poorly than children with NH and younger children (8–10 years)
performed more poorly than older children (11–12 years).[a]
Surprisingly, there were no significant hearing status differences for looking behaviors
during the comprehension task, suggesting that the children with MBHL or UHL were
not making more looking attempts than those with NH. In addition, looking behavior
did not predict comprehension. It is possible that some of the children may have been
attempting to maximize speech understanding by looking either more or less while others
may not have known what looking behaviors would be most beneficial in this fast-changing
discussion. Further research is needed to address these issues for children with NH
and those with MBHL or UHL.
Small Group Comprehension Task
Speech-understanding difficulties in classrooms may also occur when children are divided
into small groups to complete tasks. During these activities, noise levels may be
higher than the average levels reported for occupied classrooms.[63] During such tasks, there will be multiple talkers, and listeners must determine who,
when, and where to listen and look. They also must be able to ignore distracting speech
and noise both within and outside of the group. Doing so requires auditory and visual
as well as language and cognitive skills.
Lewis et al[84][85] examined looking behavior and comprehension using a small group multi-talker task.
Children with NH or with MBHL or UHL were asked to follow audiovisual instructions
for placing objects on a mat presented in noise under three contexts with increasing
perceptual complexity. In the first (single talker, ST), one talker provided instructions
from a video monitor in front of the child. In the second (multi-talker, MT), instructions
were presented individually by four talkers on monitors in front of the child. In
the third (multi-talker with comments, MTC), the four talkers presented instructions
as well as non-instruction comments, sometimes interrupting each other. This task
required processing of auditory and visual information that may or may not be pertinent
for the task at hand. Children also needed to be able to correctly follow the instructions
that were given. An eye tracker monitored looking behavior during the task.
In the study examining the impact of MBHL or UHL on performance,[85] it was hypothesized that those children would perform more poorly on the task than
children with NH. Depending on how audibility might differentially impact children
with MBHL or UHL, either similar or different performance for the two groups was considered possible.
It was also hypothesized that children with MBHL or UHL would look at talkers more
during the task, which could help or harm performance. Results revealed best performance
when there was a single talker, followed by multiple talkers taking turns and, finally,
multiple talkers in a more complex dialogue. Children with MBHL or UHL performed more
poorly than children with NH; there were no differences between the two hearing loss
groups. Looking behavior was analyzed in three ways. On average, children looked at
a talker's screen while that talker was presenting instructions less than 20% of the time, with no
significant differences across conditions or hearing status. Variations in looking
over time were assessed by examining transitions from one screen to another (gaze
switching) during the two multi-talker conditions. There was more gaze-switching during
the MTC condition than during the MT condition. In addition, the increase in the rate
of gaze-switching from MT to MTC was greater for children with MBHL than for those
with UHL. When overall looking was assessed (percent of time looking at any screen),
children with NH were found to look at screens significantly more often (mean = 84.9%)
than children with MBHL or UHL (mean = 63%). Interestingly, the pattern of looking
was also different for the children with MBHL or UHL, with about half showing patterns
similar to those of children with NH and half looking at screens much less. There
were no differences between children with MBHL and UHL. There also was no relationship
between looking behavior and performance.
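The three analyses of looking behavior described above can be sketched as simple
computations over eye-tracker samples. The fragment below assumes a simplified data
format (one gaze label per fixed-rate sample) and is illustrative only, not the study's
analysis code.

    # Illustrative computation of the three looking measures described above.
    # Assumed format: one label per eye-tracker sample at a fixed rate,
    # e.g., "screen1".."screen4" for the four monitors or "away".

    def pct_on_active_talker(gaze, active):
        """Percent of samples on the screen of the currently active talker."""
        hits = sum(1 for g, a in zip(gaze, active) if g == a)
        return 100 * hits / len(gaze)

    def gaze_switches(gaze):
        """Transitions from one screen to another (ignoring 'away' samples)."""
        screens = [g for g in gaze if g != "away"]
        return sum(1 for prev, cur in zip(screens, screens[1:]) if cur != prev)

    def pct_on_any_screen(gaze):
        """Percent of samples spent looking at any screen (overall looking)."""
        return 100 * sum(1 for g in gaze if g != "away") / len(gaze)

    gaze = ["screen1", "screen1", "away", "screen2", "screen2", "screen3"]
    active = ["screen1", "screen2", "screen2", "screen2", "screen3", "screen3"]
    print(pct_on_active_talker(gaze, active))  # 50.0
    print(gaze_switches(gaze))                 # 2
    print(round(pct_on_any_screen(gaze), 1))   # 83.3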
Summary
Pat Stelmachowicz's research was highly translational, impacting clinical services,
research, and the hearing healthcare industry. Her work examined speech understanding
across a variety of speech inputs and acoustic conditions. A goal of that work was
to optimize amplification for children with hearing loss, ultimately improving communication
access. Pat's work influenced our research with children with MBHL or UHL, as we sought
to address their speech understanding in complex environments.
Simple speech recognition testing is insufficient to address the difficulties children
with MBHL or UHL may encounter when attempting to understand speech in the real world.
There is a need for measures that better represent complex listening demands.
Results of the few studies using behavioral measures to assess listening effort in
children with MBHL or UHL suggest that tasks and conditions that are more sensitive
to the subtle difficulties these children will experience in complex environments
may be needed. Lewis et al,[51] for example, used speech recognition tasks with short-duration stimuli (i.e., words)
or high-context sentences. Such stimuli may not have created a greater cognitive load
for the children with MBHL or UHL, with the result that verbal response times were
not impacted. In addition, the SNRs in that study were more advantageous than those
used by Oosthuizen et al,[55] who showed effects of UHL on verbal response times in some conditions. Behavioral
tasks that are more cognitively demanding, without being so difficult that children
give up, may be necessary to differentiate listening effort for this population. Physiological
measures such as pupillometry are being used to examine listening effort in individuals
with hearing loss[86][87] and are a promising option for studies evaluating listening effort in children with
MBHL and UHL.
Comprehension measures have shown performance differences in complex environments
for children with MBHL or UHL when compared to peers with NH. Both the classroom comprehension
task[42] and the small group comprehension task[85] reviewed here revealed that MBHL or UHL can impact comprehension in complex multi-talker
audiovisual tasks. Poorer comprehension can occur even when performance on speech
recognition tasks remains high. However, these tasks did not differentiate between
the children with MBHL or UHL. The absence of differences during the comprehension
tasks could have resulted, in part, from the tasks themselves. The classroom comprehension
task required the children to follow multiple talkers for 10 minutes, process what
all of the talkers were saying, and hold that information in memory to answer questions
at the end. The small group comprehension task required children to process spoken
instructions while ignoring irrelevant speech and visual information from talkers,
and to properly follow the instructions that were being given throughout the task.
For either task, misunderstanding or missing parts of the information could result
in equally poor comprehension for both groups of children, even if the underlying
reasons for those difficulties differed.
A number of factors also may account for the looking behaviors exhibited by children
with NH and children with MBHL or UHL during the complex listening tasks. For the
classroom comprehension task, the absence of a relationship between looking behavior
and comprehension may be related to the nature of the listening task. Numerous rapid
changes among talkers who were located around the listener could mean that attempts
to look at talkers while they were speaking were not always successful. In this task,
the difficulty could occur for those with NH as well as those with MBHL or UHL. Analysis
of looking behaviors provides support for this possibility in that, even for children
who attempted to look more, the proportion of time they were actually looking at talkers
as the talkers spoke was less than 50%. Differences in performance may result from
some individuals (e.g., adults and older children in the earlier studies) making decisions
about looking in an attempt to benefit comprehension. Younger children and/or children
with MBHL or UHL, however, may still be learning how to best use listening and looking
during complex multi-talker tasks.
For the small group task, results for looking behavior depended on what aspect of
looking behavior was being analyzed. Both children with NH and children with MBHL
or UHL spent little time looking directly at the talkers' screens when they were providing
instructions. Although this suggests they were not focusing directly on the talkers,
children with NH were more likely to look at the screens in general, and access to
visual information through peripheral vision cannot be ruled out. The finding that general looking
behaviors of children with MBHL and UHL were divided between high and low looking
while those with NH primarily exhibited high looking suggests more varied strategies
for looking and listening for the children with hearing loss. Continued research is
needed to examine how strategies for looking and listening can interact with type
of hearing loss (MBHL vs. UHL) and the types of complex tasks to impact comprehension.
Ongoing work can help identify those children with MBHL or UHL who could be at greater
risk for difficulties understanding speech in real-world environments. Future research
may help identify areas of need for educational support and for the development of
intervention strategies that improve communication access in a variety of environments,
optimizing learning.