J Am Acad Audiol 2017; 28(09): 823-837
DOI: 10.3766/jaaa.16158

Listening Effort and Speech Recognition with Frequency Compression Amplification for Children and Adults with Hearing Loss

Marc A. Brennan*, Dawna Lewis*, Ryan McCreery*, Judy Kopun*, Joshua M. Alexander†

*Amplification and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE
†Ear Laboratory, Purdue University, West Lafayette, IN

Corresponding author

Marc Brennan
University of Nebraska-Lincoln
Lincoln, NE 68583

Publication History

Publication Date:
26 June 2020 (online)

Abstract

Background:

Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance reflects a combination of the benefits of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated benefits of NFC for speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL).

Purpose:

To examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL.

Research Design:

Within-subject, quasi-experimental study. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended bandwidth (EBW) amplification.

Study Sample:

Fourteen children (8–16 yr) and 14 adults (19–65 yr) with mild-to-severe SNHL.

Intervention:

Participants listened to speech processed by a hearing aid simulator that amplified input signals to match targets from a prescriptive fitting procedure.

Data Collection and Analysis:

Participants were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal-response time (VRT; listening effort). A multivariate analysis of variance and linear mixed model were used to determine the effect of hearing-aid signal processing on nonsense word recognition and VRT.

Results:

Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. VRT did not change significantly with the type of processing or age.

Conclusion:

Both adults and children demonstrated improved speech recognition with access to the high-frequency sounds in speech. Listening effort as measured by VRT was not affected by access to high-frequency sounds.



PURPOSE

The loss of high-frequency audibility contributes to poorer speech recognition and increased listening effort in listeners with sensorineural hearing loss (SNHL) compared with listeners with normal hearing (NH) ([Rakerd et al, 1996]; [Stelmachowicz et al, 2001]; [Hicks and Tharpe, 2002]). The limited high-frequency gain available in hearing aids (5–6 kHz: [Dillon, 2001]), herein referred to as restricted bandwidth (RBW), combined with the drop in speech level and the increase in hearing loss at higher frequencies, means that listeners with SNHL may still exhibit poor audibility in the higher frequencies despite using amplification ([Kimlinger et al, 2015]). Poor high-frequency audibility is more problematic for children than for adults because children require greater audibility of high-frequency sounds than adults to obtain equivalent speech recognition ([Stelmachowicz et al, 2001], [2007]). Nonlinear frequency compression (NFC) recodes high-frequency sounds at lower frequencies, where better audibility of speech can be achieved; however, this lowering results in spectral distortion ([McDermott, 2011]), which may limit the benefit of NFC. The goal of this research was to examine the influence of access to high-frequency speech sounds via extended bandwidth (EBW) and NFC, and of age (children versus adults), on speech recognition and listening effort. Understanding the influence of NFC and EBW on speech recognition and listening effort in children and adults could inform treatment approaches for both age groups.



SPEECH RECOGNITION

Findings are mixed on the effect of NFC relative to RBW on speech recognition, and the potential relationships are complex ([Simpson et al, 2005], [2006]; [Glista et al, 2009]; [Wolfe et al, 2010], [2011], [2015]; [Glista et al, 2012]; [Alexander, 2013]; [Arehart et al, 2013]; [Ching et al, 2013]; [McCreery et al, 2013], [2014]; [Souza et al, 2013]; [Alexander et al, 2014]; [Bentler et al, 2014]; [Hopkins et al, 2014]; [John et al, 2014]; [Ellis and Munro, 2015]; [Kokx-Ryan et al, 2015]; [Picou et al, 2015]). In general, benefit from NFC is greater when NFC increases access to high-frequency sounds ([McCreery et al, 2013], [2014]) while minimizing spectral distortion ([Souza et al, 2013]), and in listeners with greater high-frequency hearing loss ([Glista et al, 2009]; [Souza et al, 2013]; [Brennan et al, 2014]; but see [Kokx-Ryan et al, 2015]). Benefit is more likely to occur for stimuli where high-frequency audibility contributes to recognition ([Wolfe et al, 2010], [2011]; [Hopkins et al, 2014]; [Kokx-Ryan et al, 2015]), with decreases in recognition sometimes occurring for specific consonants or vowels ([Kokx-Ryan et al, 2015]; [Alexander, 2016]). Lastly, individual variability in the ability to use the frequency-compressed information ([Glista et al, 2009]; [Arehart et al, 2013]; [Souza et al, 2013]; [Ellis and Munro, 2015]) and acclimatization ([Wolfe et al, 2011]; [Glista et al, 2012]; [Dickinson et al, 2014]; [Hopkins et al, 2014]) may also have contributed to disparate findings across studies.

Similar to NFC, findings are mixed on the benefit of extending the bandwidth of amplification (EBW) beyond that traditionally available with hearing-aid amplification (i.e., RBW). Increasing the bandwidth of amplification has been found to improve speech recognition for both children and adults with SNHL ([Ching et al, 1998]; [Stelmachowicz et al, 2001], [2007]; [Hornsby et al, 2011]); however, the benefit of EBW can be reduced in listeners with greater high-frequency hearing loss ([Ching et al, 1998], [2001]; [Hogan and Turner, 1998]; [Turner and Cummings, 1999]).

Experience with amplification may also contribute to benefit from NFC and EBW for both children and adults. Children who are identified and treated with amplification at a younger age experience better outcomes than children identified at an older age ([McCreery et al, 2015]; [Tomblin et al, 2015]). Children with greater hearing aid use show better speech recognition than their peers with less hearing aid use, when controlling for degree of hearing loss ([McCreery et al, 2015]). Owing to their greater experience with amplification, children who are fit with amplification at a younger age, or those with greater hearing aid use, might be expected to benefit more from the provision of high-frequency amplification. Adults with greater hearing aid use might also be expected to benefit more from the provision of high-frequency amplification because of acclimatization (e.g., [Glista et al, 2012]).



LISTENING EFFORT

Listening effort refers to the cognitive energy required to understand speech ([Pichora-Fuller et al, 2016]). Consistent with Kahneman's limited-capacity model of cognitive effort ([Kahneman, 1973]), adults and children with SNHL may devote more listening effort to understanding speech than listeners with NH ([Rakerd et al, 1996]; [Hicks and Tharpe, 2002]; but see [Ohlenforst et al, 2017]). Consequently, less cognitive capacity may be available for other tasks such as word learning ([Pittman, 2008]), and over the course of a day, increased listening effort may lead to greater fatigue in children ([Hornsby et al, 2014]). Because hearing aids are the most common rehabilitative device for individuals with SNHL, understanding the effects of amplification on listening effort is critical for developing signal processing that improves speech understanding, increases word learning, and reduces fatigue by reducing listening effort.

Compared with a condition without amplification, hearing aids can reduce listening effort in adults ([Downs, 1982]; [Gatehouse and Gordon, 1990]; [Humes et al, 1999]; [Picou et al, 2013]; [Hornsby, 2013]; but see [Ohlenforst et al, 2017]). Reductions in listening effort have been measured across different types of hearing aid signal processing, including noise reduction ([Sarampalis et al, 2009]; [Desjardins and Doherty, 2014]; [Gustafson et al, 2014]; but see [Alcántara et al, 2003]; [Brons et al, 2013]) and spectral enhancement ([Baer et al, 1993]). However, despite the importance of high-frequency audibility for speech recognition ([Stelmachowicz et al, 2001], [2007]), [Stelmachowicz et al (2007)] found that, compared with RBW, EBW did not decrease listening effort in a dual-task paradigm for children with and without hearing loss. The authors argued that the change in bandwidth was sufficient to improve the perception of words but not large enough to reduce listening effort. A potential limitation of that study was that the younger children might not have been able to direct attention toward the primary task ([Choi et al, 2008]), which would have limited the impact of bandwidth manipulations on the allocation of cognitive resources.

Behavioral estimates of listening effort have included dual-task paradigms, verbal-response times (VRTs), and self-reported ratings (e.g., [Norman and Bobrow, 1975]; [Humes et al, 1999]; [Stelmachowicz et al, 2007]; [Lewis et al, 2016]). For VRT measures, listening effort is defined as the time between the speech onset or offset and the response onset. A shorter response time is assumed to reflect a high-quality speech signal that requires fewer resources—i.e., less listening effort. A longer response time is assumed to reflect a low-quality speech signal that requires more resources, resulting in greater listening effort ([Norman and Bobrow, 1975]; [Pisoni et al, 1987]; [Houben et al, 2013]; [McCreery and Stelmachowicz, 2013]; [Gustafson et al, 2014]; [Lewis et al, 2016]). There is, however, some disagreement on the nature of the relationship between verbal processing time and listening effort. For example, [McGarrigle et al (2014)] suggested that a low-quality signal might cause individuals to respond more quickly as a result of more focused attention (also see [Pichora-Fuller et al, 2016]).

When measured by VRT, listening effort increases as the signal-to-noise ratio and/or audible bandwidth decrease ([McCreery and Stelmachowicz, 2013]; [Lewis et al, 2016]). Although McCreery and Stelmachowicz showed that VRT increased as high-frequency audibility decreased, the effects of frequency lowering on listening effort have not been well documented in the literature. Both the extent to which NFC improves high-frequency audibility and the extent to which it introduces distortion likely impact the amount of listening effort exerted by a listener using NFC. One hypothesis is that NFC might decrease listening effort because of increased audibility ([McCreery et al, 2014]). An alternative hypothesis is that increased listening effort from the distortion created by NFC ([Arehart et al, 2013]) could counteract decreases in listening effort resulting from improvements in audibility. For speech recognition, benefit appears to be maximized when the maximum audible input frequency is mapped to each listener's maximum audible output frequency ([McCreery et al, 2013]). This procedure is currently used in clinical settings and has been documented to minimize the distortion introduced by NFC ([Alexander, 2013]). However, the extent to which this fitting procedure influences listening effort is unknown. [Kokx-Ryan et al (2015)] compared the effect of NFC versus no NFC on listening effort in adults with SNHL. NFC was set using three settings that varied in strength; audibility was not quantified. There was no difference in listening effort between NFC on and off for speech in either quiet or noise when measured using a dual-task paradigm.

The current study builds on previous work by examining the effects of NFC on speech recognition and VRT measured in a group of children and adults. NFC was compared with a condition that simulated the bandwidth in a typical hearing aid (5 kHz, RBW) and with a condition with an EBW (10 kHz). NFC was set using a procedure that maps the maximum audible input frequency with NFC to each listener’s maximum audible output frequency with a traditional hearing aid (RBW). Speech stimuli consisted of consonant-vowel-consonant nonsense syllables with high-frequency consonants. Previous work has demonstrated that forward masking is greater than backward masking (e.g., [Buss et al, 1999]); consequently, we hypothesized that the benefit of high-frequency audibility might be less for the final than initial consonants. By using a fitting procedure that potentially minimized the negative effects of distortion ([Alexander, 2013]), it was hypothesized that NFC might be beneficial compared with RBW because of increased audibility. Owing to increased high-frequency audibility, listening effort was hypothesized to be lower with EBW compared with RBW. For speech recognition, we hypothesized that nonsense-syllable recognition would be better for conditions with greater high-frequency audibility (EBW, NFC) than a condition with lower high-frequency audibility (RBW)—with benefit being greater for EBW than NFC, because of less distortion. For equivalent speech recognition, children require greater audibility of high-frequency sounds than adults (e.g., [Stelmachowicz et al, 2001]); consequently, we hypothesized that children would benefit more from the provision of high-frequency speech sounds (EBW and NFC) than adults. Because other studies have documented changes in the recognition of specific consonants ([Kokx-Ryan et al, 2015]) and vowels ([Alexander, 2016]) with NFC, we also examined the recognition of the individual consonants and vowels across the three bandwidth conditions. Lastly, we examined the potential contribution of degree of high-frequency hearing loss, age at which hearing loss was identified, age of amplification, and hearing aid use to benefit with NFC and EBW.



METHOD

Participants

This study was approved by the Institutional Review Board of Boys Town National Research Hospital, and assent or consent was obtained from all participants. Children and adults were paid $15 per hour for their participation. The children also received a book at the completion of the study. Using G*Power (v3.1), we estimated that an effect size (ηp²) of 0.05 would be detectable with 29 participants (power = 80%, α = 0.05, number of groups = 2, number of measurements = 3, correlation among repeated measures = 0.5, and nonsphericity correction = 1). The number of measurements corresponded to the number of processing conditions tested. An effect size was not estimated for the linear mixed models (for limitations of power analyses see, for example, [Lenth, 2001]). Fourteen children (mean = 11 yr, median = 11 yr, range = 7–16 yr) with SNHL and 16 adults (mean = 54 yr, median = 59 yr, range = 19–65 yr) with SNHL were recruited. One adult was subsequently excluded because of abnormally poor nonsense-word identification (scores >3.4 standard deviations [SDs] below the mean for the adult participants). All testing took place inside a double-walled sound booth. Additional equipment and standardized tests used in the completion of this project are listed in [Table 1]. Children's speech articulation accuracy and expressive vocabulary were screened using the Goldman-Fristoe Test of Articulation-2 and the Expressive Vocabulary Test-A, respectively. Children were required to have scores within 2 SDs of the normative mean for their age to be included; using this criterion, none of the children were excluded. All of the children used spoken English as their primary communication mode. Except for one child who did not wear amplification, all of the children wore bilateral hearing aids; those who used amplification wore their hearing aids an average of 11 hours per day. Parents were asked if they had additional learning or language concerns for their children—none did. Additional demographic information for the children is shown in [Table 2]. Five of the adults wore hearing aids (4 bilateral and 1 monaural) for an average of 12 hours per day; two had hearing aids with NFC activated. Participants' hearing thresholds were tested ([ASHA, 2005]) and are plotted in [Figure 1].
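For readers who want to sanity-check the detectable effect size without G*Power, the following R simulation is an illustrative sketch we added (not the authors' procedure): it generates data for a one-way repeated-measures design with the parameters above and counts how often the processing effect reaches significance.

```r
# Monte Carlo sketch of the power analysis (illustrative; the authors used
# G*Power v3.1). 29 participants x 3 processing conditions, correlation 0.5
# among repeated measures, alpha = .05, effect size eta_p^2 = .05.
set.seed(1)
n <- 29; m <- 3; rho <- 0.5; alpha <- 0.05
f <- sqrt(0.05 / (1 - 0.05))          # Cohen's f from partial eta-squared
sd_subj <- sqrt(rho)                   # random-intercept SD -> r = .5
sd_err  <- sqrt(1 - rho)               # within-subject error SD
mu <- c(-1, 0, 1)                      # condition means, rescaled so that
mu <- mu / sqrt(mean(mu^2)) * f        # SD(means) = f (total SD = 1)

hits <- replicate(2000, {
  subj <- rnorm(n, 0, sd_subj)
  y <- as.vector(outer(subj, mu, "+")) + rnorm(n * m, 0, sd_err)
  d <- data.frame(y = y,
                  id   = factor(rep(seq_len(n), times = m)),
                  cond = factor(rep(seq_len(m), each = n)))
  s <- summary(aov(y ~ cond + Error(id / cond), data = d))
  s[["Error: id:cond"]][[1]][["Pr(>F)"]][1] < alpha  # p for the cond effect
})
mean(hits)                             # estimated power, near .80
```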

Table 1

Equipment and Software Used in This Study

Equipment | Model | Company | Location
Articulation Test | Goldman-Fristoe Test of Articulation-2 | Pearson Education Inc. | San Antonio, TX
Vocabulary Test | Expressive Vocabulary Test-A | Pearson Education Inc. | San Antonio, TX
Computer | Optiplex 755 | Dell | Round Rock, TX
Headphones | HD-25 | Sennheiser | Wedemark, Germany
2-cc Coupler | IEC 711 | Larson Davis | Provo, UT
Manikin | Knowles Electronics Manikin for Acoustic Research | Knowles Electronics | Itasca, IL
Soundcard | Lynx Two B | Lynx Studio Technology | Costa Mesa, CA
Sound Mixer | MiniMon MON800 | Behringer | Kirchardt, Germany
Headphone Amplifier | HP4 | PreSonus | Baton Rouge, LA
6-cc Coupler | System AEC101 | Larson Davis | Provo, UT
Boom Microphone | Beta 53 | Shure | Chicago, IL
Video Recorder | Vixia R21 HD CMOS | Canon | Melville, NY
Video Software | Debut Video Capture | NCH Software | Greenwood Village, CO
Sound Level Meter | System 824 | Larson Davis | Provo, UT

Table 2

Demographic Information for the Child Participants

Participant | Age ID | Age Amp | R/L/B | NFC | Mean Hours Per Day | Support Services
1 | 1 | NA | NA | NA | 0 | N
2 | 0 | 3 | B | Y | 5.7 | N
3 | 4 | 4 | B | Y | 10.3 | N
4 | 2 | 2 | B | Y | 12.0 | FM
5 | 0 | 2.5 | B | N | 5.0 | FM
6 | NA | NA | NA | N | 5.0 | FM/SLP
7 | 3 | 3 | B | N | 14.0 | FM
8 | 2.5 | 2.5 | B | Y | 12.0 | FM/SLP
9 | 0 | 7 | B | Y | 9.3 | FM/SLP
10 | 4 | 4 | B | N | 14.0 | FM
11 | 4 | 4 | B | N | 12.7 | FM
12 | 0 | 0.25 | B | N | 13.6 | N
13 | 0 | 3 | B | N | 24.0 | N
14 | 4 | 5 | B | N | 5.7 | FM
Mean | 2.42 | 3.94 | | | 10.5 |

Notes: Participant 1 did not wear hearing aids, and information in columns 2–4 was missing for participant 6. Age ID = age in years at which each child was identified with hearing loss. Age Amp = age in years at which each child started wearing amplification. R/L/B = right, left, or binaural amplification. Mean Hours Per Day = average hearing aid use per day. For the NFC column, Y indicates that the child used amplification with NFC activated and N that the child did not. FM = frequency-modulated remote microphone hearing assistance technology use in school; NA = not available; SLP = speech-language therapy.


Figure 1 Hearing thresholds (dB HL) for children and adults. Left and right ears are shown in the left and right panels, respectively. Box boundaries represent the 25th and 75th percentiles, error bars represent the 10th and 90th percentiles, horizontal lines represent the medians, and filled circles represent the means.

Stimuli

Stimuli were 310 consonant-vowel-consonant nonsense words with a phonotactic probability within 1 SD of the mean phonotactic probability of the nonsense words in [McCreery and Stelmachowicz (2011)]. Phonotactic probability refers to the frequency with which phonological sequences occur in a given position in the words of a language and was computed as the biphone sum (consonant-vowel, vowel-consonant) using a phonotactic probability calculator ([Storkel and Hoover, 2010]). The stimuli were spoken by a 22-yr-old female from the Midwest. Consonants were the fricatives, stops, affricates, and nasals /b/, /ʧ/, /d/, /ð/, /f/, /g/, /ŋ/, /ʤ/, /k/, /m/, /n/, /p/, /s/, /ʃ/, /t/, /θ/, /v/, /z/. Vowels were /a/, /e/, /u/, /o/, and /æ/. The nonsense words were split into three lists of 100 nonsense words and a practice list of 10 nonsense words. [Table 3] lists the number of consonants per list.

Table 3

Number of Consonants per List

Initial Consonants
List | b | ʧ | d | ð | f | g | ŋ | ʤ | k | m | n | p | s | ʃ | t | θ | v | z
1 | 7 | 8 | 11 | 5 | 5 | 7 | 0 | 5 | 6 | 2 | 5 | 6 | 4 | 6 | 10 | 5 | 4 | 4
2 | 4 | 8 | 2 | 5 | 5 | 7 | 0 | 5 | 5 | 17 | 9 | 3 | 5 | 6 | 6 | 5 | 4 | 4
3 | 7 | 7 | 13 | 5 | 5 | 3 | 0 | 5 | 0 | 8 | 7 | 7 | 5 | 6 | 10 | 4 | 4 | 4
Total | 18 | 23 | 26 | 15 | 15 | 17 | 0 | 15 | 11 | 27 | 21 | 16 | 14 | 18 | 26 | 14 | 12 | 12

Final Consonants
List | b | ʧ | d | ð | f | g | ŋ | ʤ | k | m | n | p | s | ʃ | t | θ | v | z
1 | 6 | 5 | 5 | 5 | 8 | 4 | 1 | 4 | 4 | 8 | 4 | 8 | 6 | 7 | 6 | 6 | 8 | 5
2 | 9 | 4 | 6 | 4 | 8 | 3 | 2 | 5 | 6 | 5 | 5 | 7 | 6 | 8 | 4 | 6 | 7 | 5
3 | 6 | 5 | 6 | 5 | 9 | 3 | 2 | 4 | 4 | 5 | 3 | 9 | 6 | 7 | 7 | 7 | 7 | 5
Total | 21 | 14 | 17 | 14 | 25 | 10 | 5 | 13 | 14 | 18 | 12 | 24 | 18 | 22 | 17 | 19 | 22 | 15



Amplification

The hearing aid simulator consisted of NFC, filter bank analysis, wide dynamic range compression (WDRC), channel-specific output-limiting compression, and broadband output-limiting compression. The simulator was implemented in MATLAB (R2009b; The MathWorks, Natick, MA) as described in detail by [Alexander and Masterson (2015)] and by [McCreery et al (2013)]. The output-limiting compression circuits used a 1-msec attack time, 50-msec release time, 10:1 compression ratio, and frequency-specific compression thresholds that were prescribed by Desired Sensation Level (DSL) ([Scollie et al, 2005]) or 105 dB SPL, whichever was lower. The WDRC circuit used a 5-msec attack time and 50-msec release time. The WDRC ratios and compression thresholds were those prescribed by DSL for each participant, with linear amplification below the compression thresholds. The filter bank consisted of eight one-third octave-band filters with center frequencies spaced between 0.25 and 6.3 kHz. The NFC circuit used an algorithm described by [Simpson et al (2005)] and others ([Alexander, 2016]; [Brennan et al, 2014]; [McCreery et al, 2013], [2014]).
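The simulator itself is the MATLAB implementation described in the cited papers. As a conceptual illustration only, the core of a single WDRC channel (an attack/release-smoothed level estimate driving a level-dependent gain) can be sketched in R as follows; the compression threshold, ratio, and calibration constant are placeholders rather than the DSL-prescribed values, while the 5-msec attack and 50-msec release times match those reported for the WDRC stage.

```r
# Simplified single-channel WDRC sketch (illustrative; not the authors'
# MATLAB simulator). Gain is linear below the compression threshold (ct_db)
# and compressive above it.
wdrc <- function(x, fs, ct_db = 50, ratio = 2,
                 attack_ms = 5, release_ms = 50, cal_db = 100) {
  a_att <- exp(-1 / (fs * attack_ms / 1000))   # one-pole smoothing coefs
  a_rel <- exp(-1 / (fs * release_ms / 1000))
  env <- 0
  y <- numeric(length(x))
  for (i in seq_along(x)) {
    mag <- abs(x[i])
    a <- if (mag > env) a_att else a_rel       # fast attack, slow release
    env <- a * env + (1 - a) * mag
    lev_db <- cal_db + 20 * log10(max(env, 1e-8))  # level re: calibration
    gain_db <- if (lev_db > ct_db)
      (ct_db + (lev_db - ct_db) / ratio) - lev_db else 0
    y[i] <- x[i] * 10^(gain_db / 20)
  }
  y
}
fs <- 22050                                    # example sampling rate
x  <- 0.25 * sin(2 * pi * 1000 * seq(0, 1, by = 1 / fs))
y  <- wdrc(x, fs)
```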

The hearing aid simulator was programmed individually for each ear and participant to simulate three amplification conditions: RBW (5-kHz filter cutoff frequency), EBW (10-kHz filter cutoff frequency), and NFC (10-kHz maximum input frequency). Starting with EBW, the simulator gain was set to meet DSL adult and DSL child prescriptive targets ([Scollie et al, 2005]) for the adult and child participants, respectively. Using the headphones for this study, a Knowles Electronics Manikin for Acoustic Research (KEMAR) transfer function was derived by comparing the output levels for pure tones at the octave and interoctave frequencies from 250 to 8000 Hz in a 2-cc coupler with those in an IEC 711 Zwislocki coupler on KEMAR. Output levels were then verified by measuring the root-mean-square output for speech using the carrot passage from Audioscan (Dorchester, ON, Canada) presented at 60 dB SPL, filtering with one-third octave-wide filters ([ANSI, 2004]), and adjusting to within 5 dB of the prescribed target. Minimum gain was limited to 0 dB after accounting for the KEMAR transfer function. To prevent overdriving the headphones, maximum gain was limited to 65 dB. To create the RBW amplification condition, a 1024-tap low-pass filter with a 5000-Hz cutoff was applied, which reduced the output by 80 dB at 5500 Hz.

NFC settings were selected to map the maximum audible input frequency (10 kHz) to the maximum audible output frequency of the RBW condition (5 kHz) using the SoundRecover Fitting Assistant v1.10 (Joshua Alexander, Purdue University, West Lafayette, IN). Using this method to fit NFC has been documented to improve speech recognition ([McCreery et al, 2013], [2014]) compared with an RBW condition. The available start frequencies and compression ratios in the hearing aid simulator were limited to those available in the Phonak fitting software at the time this study was completed, plus one intermediate setting (start frequency = 2700 Hz and compression ratio = 2.3). The maximum audible input frequency with NFC was 8240 Hz for all participants (maximum audible output frequency = 5 kHz, start frequency = 3.8 kHz, and compression ratio = 2.6), except for two participants who had a maximum audible input frequency of 6960 Hz (maximum audible output frequency = 4 kHz, start frequency = 2.7 kHz, and compression ratio = 2.3). More details about the NFC processing in the hearing aid simulator and the fitting method are given by [Alexander (2016)], [Brennan et al (2014)], and [McCreery et al (2013], [2014)].
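The input-output frequency mapping that yields these numbers can be written compactly. The R sketch below uses the log-domain compression rule commonly used to describe NFC (frequencies above the start frequency are compressed on a log scale by the compression ratio); it reproduces the fittings reported above, though the simulator's exact implementation is described in the cited papers.

```r
# NFC input-output frequency map: below the start frequency (sf),
# frequencies are unchanged; above it, the log-frequency distance from sf
# is divided by the compression ratio (cr).
nfc_map <- function(f_in, sf, cr) {
  ifelse(f_in <= sf, f_in, sf * (f_in / sf)^(1 / cr))
}
# Main fitting: sf = 3.8 kHz, cr = 2.6 places an 8240-Hz input near 5 kHz
nfc_map(8240, sf = 3800, cr = 2.6)   # ~5118 Hz
# Alternate fitting: sf = 2.7 kHz, cr = 2.3 places 6960 Hz near 4 kHz
nfc_map(6960, sf = 2700, cr = 2.3)   # ~4076 Hz
```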

Audibility was assessed by computing the Speech Intelligibility Index (SII) for RBW and EBW and a modified version of the SII (SII-NFC) for the NFC condition ([McCreery et al, 2014]). The sound pressure levels in one-third octave-wide frequency bands ([ANSI, 2004]) were computed for each fricative and vowel. Participant thresholds were interpolated to the center frequencies of the one-third octave filters ([Pittman and Stelmachowicz, 2000]), converted to dB SPL ([Bentler and Pavlovic, 1989]), adjusted to account for the internal noise spectrum ([ANSI, 1997]), and transformed to one-third octave band levels ([Pavlovic, 1987]). SII was then computed using the ANSI one-third octave band procedure with the importance weights for nonsense words ([ANSI, 1997]).
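At its core, the band-audibility step of the SII clips the audible portion of the 30-dB speech dynamic range in each band and weights it by band importance. The R sketch below is a schematic of that step only (the full ANSI S3.5-1997 procedure also handles masking, level distortion, and internal noise); all values are illustrative placeholders, not the study's data.

```r
# Schematic of the core SII band-audibility computation (ANSI S3.5-1997).
# speech_db: speech spectrum levels; thresh_db: equivalent hearing
# thresholds; importance: band importance weights (sum to 1).
sii_core <- function(speech_db, thresh_db, importance) {
  # Speech peaks are assumed 15 dB above the rms level; each band is
  # credited for the audible fraction of the 30-dB dynamic range.
  aud <- pmin(pmax((speech_db + 15 - thresh_db) / 30, 0), 1)
  sum(importance * aud)
}
# Three illustrative high-frequency bands (e.g., 2, 4, and 6.3 kHz)
sii_core(speech_db = c(55, 48, 40), thresh_db = c(45, 60, 70),
         importance = c(0.40, 0.35, 0.25))
```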



Procedure

Stimuli were presented at 60 dB SPL to the input of the hearing aid simulator, converted from a digital to analog signal using a sound card, routed to a sound mixer, amplified with a headphone amplifier, and presented binaurally via headphones. The presentation level was calibrated to a 1-kHz pure tone using a sound level meter and headphones attached to a 6-cc coupler. Participants wore a head-worn boom microphone and were seated in a sound booth in front of a table with a video recorder. Participants were instructed to repeat back the “made-up words.” To maintain attention, pictures of animals were displayed on a monitor after each trial. If the examiner judged the response to be unclear, the participant was instructed to repeat back what they said. Participant responses were video and audio recorded for off-line analysis. The video signal was converted to a digital signal using Debut Video Capture and saved as MPEG-4 files at 640 × 480 resolution and 30 frames per second. The audio signal from the microphone was converted from an analog to digital signal at a 44.1 kHz sampling rate and with 32-bit depth. The word lists and processing conditions were counterbalanced using a Graeco-Latin square design with random presentation of stimuli within the word lists. Data presentation and collection were conducted using custom software written at Boys Town National Research Hospital.



Scoring

Three raters (one undergraduate student and two audiologists) transcribed and scored (correct/incorrect) the nonsense words, initial consonants, medial vowels, and final consonants, and time-marked the onset (msec) of the participant response for each nonsense word. Responses containing blends (e.g., /ts/ instead of /s/) were scored as incorrect. If the participant uttered two responses, the rater used the second response. VRT was measured as the time between the onset of the stimulus and the onset of the response for each token and was initially calculated using a Praat script (Ver. 5.3.51; [Boersma and Weenink, 2013]). The onset of each response selected by the software was reviewed and, if necessary, re-marked by the raters, who judged the onset of the response from the waveform, the spectrogram, and audio playback. Speech fillers such as "umm" and "uh," false starts, stutters, and nonspeech sounds (breathing, yawns, etc.) that occurred before the nonsense word was spoken were not included when marking the response. VRT was not measured for responses in which the listener began to speak before the end of the stimulus.

All three raters scored responses and VRT for two participants. For the nonsense words, Cohen's kappa among the three raters was between 0.89 and 0.92; for VRT, Pearson's r was between 0.88 and 0.98. Given this excellent interrater reliability ([Landis and Koch, 1977]), a single rater scored responses and measured VRT for the remaining participants. As a reliability check, a second rater scored 10% of those participants; when the first two raters disagreed, a third rater served as a tie-breaker (a tie-breaker was required for 12% of adult and 16% of child responses). For trials in which the first rater was uncertain (1.3% of responses), a second rater scored the trial. If the two raters disagreed about any position (initial consonant, vowel, final consonant) in the scoring of the nonword, a third rater scored the response, and the response on which two raters agreed was accepted as the score. If two raters' VRTs differed by more than 50 msec, a third rater also judged the response time for that trial; the final response time was taken as the average of the two response times that were within 50 msec of each other. VRTs >2 sec were excluded as outlier responses ([Ratcliff, 1993]). The shortest reaction time was 531 msec; consequently, no reaction times had to be removed as fast guesses (<200 msec: [Whelan, 2008]).
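As an illustration, these exclusion rules amount to a simple filter over the trial-level response times; the data frame below is a hypothetical example, not the study's data.

```r
# Sketch of the VRT exclusion rules: drop VRTs > 2 sec as outliers
# (Ratcliff, 1993) and flag fast guesses < 200 msec (Whelan, 2008); none of
# the latter occurred in this study (shortest VRT = 531 msec).
trials <- data.frame(participant = rep(1:2, each = 3),
                     vrt_ms = c(850, 2150, 960, 531, 1200, 150))
trials$fast_guess <- trials$vrt_ms < 200
trials_clean <- subset(trials, vrt_ms <= 2000 & !fast_guess)
```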



RESULTS

Speech Recognition

[Figure 2] depicts the proportion of correctly identified nonsense words (i.e., whole-word scoring) and of the initial consonants, medial vowels, and final consonants within each nonsense word. Because of the lack of variance, medial vowels were not included in the following statistical analysis. A multivariate analysis of variance (ANOVA) was completed using within-subject factors of measure (whole words, initial consonants, and final consonants) and processing (RBW, EBW, and NFC) and a between-subject factor of age (child and adult). This analysis is shown in [Table 4]. Multivariate effects of processing on proportion correct were significant. Neither age nor the interaction of processing and age was statistically significant. As shown in [Table 4], univariate effects of processing on the identification of the nonsense words and initial consonants were significant, but the effect of processing on final consonants was not. Post hoc testing using Tukey's test of honestly significant difference (HSD = 0.028) showed that the identification of nonsense words was significantly better with EBW (mean [M] = 0.695, standard error [SE] = 0.014) and NFC (M = 0.706, SE = 0.011) than with RBW (M = 0.664, SE = 0.014) but did not differ between EBW and NFC. Similarly, the identification of initial consonants (HSD = 0.016) was significantly better with EBW (M = 0.864, SE = 0.008) and NFC (M = 0.877, SE = 0.006) than with RBW (M = 0.847, SE = 0.008) but did not differ between EBW and NFC. These results demonstrate that the adults and children identified the nonsense words and initial consonants better when high-frequency audibility was increased with EBW or NFC than in a condition with less high-frequency audibility (RBW). The type of processing did not affect the identification of vowels or final consonants.

Figure 2 Proportion correct identification for each processing condition in children and adults. The measure depicted is indicated in each panel. Box boundaries represent the 25th and 75th percentiles, error bars represent the 10th and 90th percentiles, horizontal lines represent the medians, and filled circles represent the means.
Table 4

Multivariate ANOVA for Repeated Measures for Proportion Correct Identification

Effect | λ | df | F | p | ηp²
Multivariate analysis
 Processing | 0.539 | 6, 22 | 3.1 | 0.022 | 0.461
 Age | 0.797 | 3, 25 | 2.1 | 0.124 | 0.203
 Processing × Age | 0.671 | 6, 22 | 1.8 | 0.146 | 0.329
Univariate analysis
 Processing – whole-word scoring | | 2, 54 | 7.0 | 0.002 | 0.206
 Processing – initial consonants | | 2, 54 | 10.1 | <0.001 | 0.272
 Processing – final consonants | | 2, 54 | 1.3 | 0.276 | 0.047

Note: λ = Wilks's Lambda.


[Figure 3] displays the proportion correct for each consonant, collapsed across the two age groups because of the lack of a significant difference in the previous analysis. Performance was higher for the nasals, affricates, and stops than for the fricatives, which is similar to the pattern identified by [Alexander (2016)] for consonants in the medial position. Two repeated-measures ANOVAs, one each for initial and final consonants, were completed using within-subject factors of consonant and processing. This analysis is shown in [Table 5]. For the initial consonants, /k/ was excluded because this token was absent from one of the three lists (i.e., it was not presented to several participants in the initial position). For the initial consonants, there were significant effects of processing and consonant, and a significant interaction of the consonant with processing. The minimum significant difference using Tukey’s HSD was 0.131. The recognition of /s/ and /z/ was significantly better with EBW than with RBW and with NFC than with RBW. No other consonants were significantly different between the three processing conditions. For the final consonants, the effect of processing was not significant. The effect of the consonant and the interaction of processing and consonant were statistically significant. The minimum significant difference using Tukey’s HSD was 0.249. Despite the significant interaction, none of the consonants were significantly different with post hoc testing between the three processing conditions. The largest improvements in phoneme recognition between EBW and RBW were with /s/ (0.21) and /z/ (0.23). Similarly, the largest improvements in phoneme recognition occurred with /s/ (0.18) and /z/ (0.25) from RBW to NFC processing.
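The fractional degrees of freedom in [Table 5] indicate that a sphericity correction was applied. A minimal R sketch of such an analysis is shown below, using the afex package as one option (an assumption on our part; the authors do not report which software produced this ANOVA). The data frame is a simulated placeholder with one proportion-correct score per participant, processing condition, and consonant.

```r
# Two-way repeated-measures ANOVA with a Greenhouse-Geisser correction,
# which yields fractional df like those in Table 5. afex is our choice for
# illustration; data are simulated placeholders.
library(afex)
set.seed(2)
d <- expand.grid(participant = factor(1:29),
                 processing  = c("RBW", "EBW", "NFC"),
                 consonant   = c("s", "z", "sh", "f", "th"))
d$prop_correct <- pmin(pmax(rnorm(nrow(d), mean = 0.8, sd = 0.1), 0), 1)
fit <- aov_ez(id = "participant", dv = "prop_correct", data = d,
              within = c("processing", "consonant"))
anova(fit, correction = "GG")   # GG-corrected df and p-values
```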

Figure 3 Proportion correct for each consonant in each processing condition. Initial consonants shown in the top panel and final consonants shown in the bottom panel. Arranged in order from fricatives, nasals, affricates, and stops. Within each category, voiceless is followed by voiced. Proportion correct is collapsed across the two age groups. Error bars represent 1 SD.
Table 5

Repeated-Measures ANOVA for Proportion Correct Identification of the Initial and Final Consonants

Effect | df | F | p | ηp²
Initial Consonants
 Processing | 2, 56 | 16.1 | <0.001 | 0.364
 Consonant | 3.2, 90.6 | 54.0 | <0.001 | 0.659
 Processing × Consonant | 9.7, 272.8 | 5.3 | <0.001 | 0.159
Final Consonants
 Processing | 2, 56 | 1.6 | 0.205 | 0.055
 Consonant | 4.7, 132.3 | 63.1 | <0.001 | 0.693
 Processing × Consonant | 10.4, 293 | 2.9 | <0.001 | 0.093



SII

To determine if the increase in SII varied by consonant, two repeated-measures ANOVAs, one for initial consonants and another for final consonants, were completed using within-subject factors of consonant and processing. Results of this analysis are shown in [Table 6]. The patterns of the statistical findings were the same for initial and final consonants. There were significant main effects of processing and consonant. The interaction of processing and consonant was not statistically significant, consistent with the SII increasing by an equivalent amount for every consonant with both NFC and EBW compared with RBW.

Table 6

Repeated-Measures ANOVA for SII with Initial and Final Consonants

Effect | df | F | p | ηp²
Initial Consonants
 Processing | 1.5, 40.8 | 252.1 | <0.001 | 0.900
 Consonant | 2.0, 56.9 | 108.7 | <0.001 | 0.795
 Processing × Consonant | 3.4, 94.9 | 1.3 | 0.252 | 0.047
Final Consonants
 Processing | 1.6, 44.2 | 232.6 | <0.001 | 0.893
 Consonant | 2.0, 55.6 | 93.4 | <0.001 | 0.769
 Processing × Consonant | 3.3, 92.1 | 1.8 | 0.142 | 0.061

Note: For the initial consonants, /k/ was excluded.




VRT

[Figure 4] depicts the VRTs for correct whole words. A linear mixed model with a random intercept term for each participant, as described by [Baayen and Milin (2015)] and [Houben et al (2013)], and with processing and age as fixed effects, was fit to the VRTs for the correct responses. The model was fit using the R statistical package ([R Core Team, 2016]) with the lme4 and lmerTest packages ([Bates et al, 2015]). The inclusion of random intercepts for each participant allowed us to account for variability in average response times across participants and for correlations between measures from the same participants across conditions. The results of the linear mixed model are shown in [Table 7]. The two-way interaction of age with processing was not significant, and VRT did not change significantly with the type of processing or age.
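A minimal sketch of this model in lme4/lmerTest syntax follows; the data frame and variable names are our illustrative assumptions, since the analysis scripts are not published with the article.

```r
# Sketch of the VRT model: fixed effects of processing and age group plus
# their interaction, with a random intercept per participant. Data are
# simulated placeholders; variable names are assumptions.
library(lme4)
library(lmerTest)   # adds Satterthwaite df and p-values to lmer output
set.seed(3)
vrt_correct <- data.frame(
  participant = factor(rep(1:28, each = 3)),
  processing  = factor(rep(c("RBW", "EBW", "NFC"), times = 28),
                       levels = c("RBW", "EBW", "NFC")),
  age_group   = factor(rep(c("child", "adult"), each = 42)),
  vrt_ms      = rnorm(28 * 3, mean = 1100, sd = 150))
m_vrt <- lmer(vrt_ms ~ processing * age_group + (1 | participant),
              data = vrt_correct)
summary(m_vrt)   # fixed-effect contrasts analogous to Table 7
```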

Figure 4 VRT for all responses. Response times are provided for each processing condition, as depicted by the legend. Box boundaries represent the 25th and 75th percentiles, error bars represent the 10th and 90th percentiles, horizontal lines represent the medians, and filled circles represent the means.
Table 7

Linear Mixed Model for VRT with Only the Correct Responses

Effect | Difference (msec) | df | t | p
Whole Words
 EBW – RBW | −22.3 | 58 | −1.18 | 0.242
 NFC – EBW | −2.0 | 58 | −0.12 | 0.916
 Adult – Child | 3.6 | 37 | 0.08 | 0.937
 EBW × Age | 9.8 | 58 | 0.36 | 0.719
 NFC × Age | 5.3 | 58 | 0.19 | 0.847
/s/ and /z/
 EBW – RBW | −19.8 | 142 | −1.00 | 0.320
 NFC – RBW | 13.3 | 142 | 0.67 | 0.503
 Consonant (/z/ – /s/) | −20.6 | 142 | −1.00 | 0.317
 EBW × Consonant | −9.7 | 142 | −0.34 | 0.735
 NFC × Consonant | −33.3 | 142 | −1.16 | 0.246

Note: The difference column displays the beta values for the model.


The largest improvements in consonant recognition occurred for /s/ and /z/. VRTs for these consonants were compared across processing conditions using a linear mixed model with a random intercept term for each participant, shown in [Table 7]. None of the main effects or interactions were statistically significant, suggesting that VRT did not vary across the three processing conditions for /s/ and /z/.



Prediction of Benefit from High-Frequency Amplification

To assess the benefit of high-frequency amplification by degree of hearing loss, benefit was calculated by subtracting the proportion of correct nonsense-word recognition for RBW from that for EBW and for NFC. Likewise, differences in VRT were calculated by subtracting VRT for EBW and for NFC from VRT for RBW. High-frequency pure-tone average thresholds (PTA: 2, 4, and 6 kHz) were computed because previous research ([Brennan et al, 2014]) found that preference for NFC could be partially explained by this measure of PTA. The benefit of high-frequency amplification is plotted against PTA in [Figure 5]; values above 0 indicate that performance was better with EBW (left column) or NFC (right column) than with RBW. In general, participants with a better (lower) PTA demonstrated a larger benefit for nonsense-word recognition from the provision of EBW or NFC compared with RBW. However, one participant with a low PTA (36 dB HL) did not benefit from the provision of NFC and instead showed a 0.10 decrement in proportion correct. Because this participant showed a different pattern than all other participants, we conducted the analysis with and without this data point. Specifically, the effect of PTA on speech recognition and VRT with high-frequency amplification was evaluated using a linear mixed model ([Table 8]), with the same software and packages described earlier. With the outlier included, adults and children with less severe high-frequency hearing loss (lower PTA) showed significantly more benefit for nonsense-word recognition with EBW than those with greater hearing loss, but not with NFC. Without the outlier, participants with lower PTA showed significantly more benefit for nonsense-word recognition with both EBW and NFC. PTA did not significantly predict VRT.
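The benefit analysis can be sketched the same way: benefit scores (EBW minus RBW and NFC minus RBW) modeled as a function of condition and high-frequency PTA, with a random intercept per participant. As above, the data and variable names below are illustrative assumptions, not the study's data or exact model specification.

```r
# Sketch of the benefit-by-PTA model (cf. Table 8). Simulated placeholder
# data: one EBW-RBW and one NFC-RBW benefit score per participant.
library(lmerTest)
set.seed(4)
pta_hf <- runif(28, 25, 80)            # high-frequency PTA (2, 4, 6 kHz)
benefit <- data.frame(
  participant = factor(rep(1:28, times = 2)),
  condition   = factor(rep(c("EBW-RBW", "NFC-RBW"), each = 28)),
  pta_hf      = rep(pta_hf, times = 2))
benefit$gain <- 0.25 - 0.003 * benefit$pta_hf + rnorm(56, 0, 0.05)
m_benefit <- lmer(gain ~ condition * pta_hf + (1 | participant),
                  data = benefit)
summary(m_benefit)   # lower PTA -> larger benefit, as in Figure 5
```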

Figure 5 Benefit of high-frequency amplification by degree of hearing loss. EBW minus RBW shown in the left column, and NFC minus RBW shown in the right column. Benefit for nonsense-word recognition and VRT is shown in the top and bottom rows, respectively.
Table 8

Linear Mixed Models for Benefit of High-Frequency Amplification for Nonsense-Word Recognition and VRT

Effect | Difference (Proportion Correct) | df | t | p
Nonsense-word recognition with outlier
 EBW – RBW | 0.22 | 54 | 2.36 | 0.022
 NFC – RBW | 0.15 | 54 | 1.67 | 0.099
 PTA | −0.002 | 45 | −0.92 | 0.362
 EBW × PTA | −0.0035 | 54 | −2.02 | 0.048
 NFC × PTA | −0.0021 | 54 | −1.23 | 0.225
Nonsense-word recognition without outlier
 EBW – RBW | 0.268 | 52 | 2.84 | 0.006
 NFC – RBW | 0.313 | 52 | 3.31 | 0.002
 PTA | <−0.001 | 39 | −0.28 | 0.781
 EBW × PTA | −0.005 | 52 | −2.52 | 0.015
 NFC × PTA | −0.005 | 52 | −2.83 | 0.007
VRT (difference in msec)
 EBW – RBW | −107.0 | 54 | −1.00 | 0.321
 NFC – RBW | 133.6 | 54 | 1.25 | 0.216
 PTA | −2.3 | 34 | −0.66 | 0.515
 EBW × PTA | 1.7 | 54 | 0.84 | 0.402
 NFC × PTA | −2.6 | 54 | −1.25 | 0.214

Note: The difference column displays the beta values for the model.


As an exploratory analysis, additional potential predictor variables (age of hearing loss identification and age of amplification) for the children with SNHL were assessed using bivariate correlations. None of the variables significantly predicted benefit from high-frequency amplification (EBW-RBW and NFC-RBW) for word recognition or VRT (0.25 < p < 0.89). Lastly, hearing aid use for children and adults as a single group was not significantly correlated with benefit from high-frequency amplification (0.59 < p < 0.63).



DISCUSSION

Speech Recognition

For speech recognition, we hypothesized that nonsense-syllable recognition would be better for conditions with greater high-frequency audibility (EBW, NFC) than for a condition with lower high-frequency audibility (RBW). As shown in previous work ([Glista et al, 2009]; [Wolfe et al, 2010], [2011]; [McCreery et al, 2013]; [Alexander et al, 2014]; [McCreery et al, 2014]), both EBW and NFC improved nonsense-word recognition compared with RBW. This improvement was driven by better identification of the initial consonants; recognition of vowels and final consonants did not differ across processing conditions.

The improved recognition for initial but not final consonants with EBW and NFC was possibly related to the vowel masking the final, but not the initial, consonants (e.g., [Buss et al, 1999]). The improvements in audibility (SII and SII-NFC) with EBW and NFC compared with RBW were similar for each stimulus type. The largest increase in correct fricative recognition with EBW and NFC occurred for /s/ and /z/ in our study—consistent with [Alexander et al (2014)], [Alexander (2016)], [Kokx-Ryan et al (2015)], and [Stelmachowicz et al (2007)], and with other studies that observed improved perception of plurals ([Glista et al, 2009], [2012]; [Wolfe et al, 2010]). The high correct recognition with RBW processing for /ʃ/ was consistent with [Alexander et al (2014)] (see the top of Table 2 in Alexander et al) and [Hillock-Dunn et al (2014)]. We also observed high correct recognition for nasals, affricates, and stops in all three processing conditions. The range of degradation in perception was smaller than the range of improvement, with the greatest decrements occurring for /θ/ and /ð/. Together, these findings suggest that the benefit of increased audibility for nonsense words with EBW and NFC is primarily limited to the fricatives /s/ and /z/.

[Alexander (2016)] found that NFC decreased vowel recognition with low start frequencies (<2.2 kHz) but left it unaltered with higher start frequencies (2.8 and 4.0 kHz). Our results support this finding because vowel recognition was uniformly high (see [Figure 2]) for our start frequencies, which were 2.7 and 3.8 kHz. Alexander noted that a lower start frequency of 1.6 kHz with a lower compression ratio was a greater detriment to vowel perception than a higher start frequency with a higher compression ratio, likely because of shifts in the second formant of vowels with the lower start frequency. By using higher start frequencies in this study, we likely avoided degrading vowel recognition through shifts in the formant frequencies.

We observed that the benefit for nonsense-word recognition with EBW compared with RBW was significantly greater for those with less severe high-frequency SNHL. This finding is consistent with prior speech-recognition data ([Ching et al, 1998], [2001]; [Hogan and Turner, 1998]; [Turner and Cummings, 1999]; [Hornsby et al, 2011]). Although the same relationship was observed when NFC was compared with RBW, it was statistically significant only after the removal of one outlier. Our results support the notion that listeners with greater hearing loss in the high frequencies are less able to take advantage of greater bandwidth, potentially because of factors such as dead regions ([Mackersie et al, 2004]; but see [Cox et al, 2011]; [Preminger et al, 2005]) or a smaller contribution of audibility to speech recognition for listeners with greater hearing loss ([Ching et al, 1998], [2001]; [Hogan and Turner, 1998]; [Hornsby et al, 2011]). However, our data differ from those of [Souza et al (2013)], who found that listeners with greater hearing loss were more likely to show improved speech recognition with NFC. One possible reason for the discrepant findings is that Souza et al had more listeners with severe SNHL than our study, and it was the listeners with severe SNHL who showed a benefit in their study. Another possible reason is that our study fit NFC to maximize audibility while limiting the amount of distortion, whereas Souza et al systematically varied the NFC parameters (start frequency and compression ratio) to examine the influence of the amount of spectral distortion with NFC on speech recognition.



Listening Effort

We hypothesized that NFC might reduce listening effort compared with RBW because of increased access to high-frequency speech information, and that listening effort would likewise be lower with EBW than with RBW owing to increased high-frequency audibility. Contrary to our hypotheses, we did not measure a significant difference in VRT across processing conditions or age groups. The lack of a difference in VRT with changes in high-frequency access is consistent with [Stelmachowicz et al (2007)], who found that extending the high-frequency bandwidth did not decrease listening effort for their stimuli or task, and with [Kokx-Ryan et al (2015)], who found that listening effort, measured in adults with SNHL using a dual-task paradigm, did not differ with NFC on or off. We have extended the findings of Kokx-Ryan and colleagues to children with SNHL. Together, these findings are consistent with a systematic review by [Ohlenforst et al (2017)], which did not find sufficient evidence to support the claim that SNHL or amplification impacts listening effort. However, [McCreery and Stelmachowicz (2013)] found that listening effort decreased as bandwidth increased for children with NH who were listening to speech in varying levels of noise. In addition, results from [Lewis et al (2016)] suggested decreased listening effort in children with NH or mild SNHL for speech-recognition tasks as the signal-to-noise ratio increased. Taken together, these studies suggest that listening effort, as measured by response time, may differ depending on the experimental task as well as the population being examined.

There are several possible reasons for the lack of change in VRT with NFC in the current study. The benefit of increased audibility on listening effort may have been offset by the increased distortion with NFC, leading to the null effect that was observed. This explanation, however, does not seem likely because EBW also did not result in a difference in VRT. It is important to note that the NFC start frequencies and compression ratios used in this study did not result in the same amount of degradation of the signal as those used in other studies (e.g., [Souza et al, 2013]). Use of higher compression ratios or lower start frequencies might have resulted in increased listening effort. Future studies may benefit from additional measures of listening effort, such as physiologically based methods (e.g., [Bess et al, 2016]).



CONCLUSIONS

These data suggest an improvement in initial consonant recognition for nonsense syllables when access to high-frequency speech sounds is increased with either EBW or NFC compared with an RBW condition. A concurrent improvement in listening effort, as measured by VRT, did not occur. Adults and children benefited equally from high-frequency amplification, with improved nonsense-syllable recognition, and an equivalent amount of listening effort was measured in the two groups. These findings suggest that the clinical procedure used in this study, mapping the maximum audible input frequency to the maximum audible output frequency, is beneficial for speech recognition, without improvements or decrements in this estimate of listening effort (VRT).



Abbreviations

ANOVA: analysis of variance
DSL: Desired Sensation Level
EBW: extended bandwidth
HL: hearing level
HSD: honestly significant difference
KEMAR: Knowles Electronics Manikin for Acoustic Research
M: mean
NFC: nonlinear frequency compression
NH: normal hearing
PTA: pure-tone average
RBW: restricted bandwidth
SD: standard deviation
SE: standard error
SII: Speech Intelligibility Index
SNHL: sensorineural hearing loss
VRT: verbal-response time
WDRC: wide dynamic range compression



No conflict of interest has been declared by the author(s).

Acknowledgments

We thank Brenda Hoover, Brianna Byllesby, Alex Baker, and Evan Cordrey for their assistance in data collection and analysis, Kanae Nishi for providing the programming to analyze VRT and suggestions regarding stimulus development, and Emily Buss for helpful suggestions throughout the project. We especially thank Pat Stelmachowicz as the original principal investigator on this project.

This work was supported by the following NIH grants: R01 DC04300, R01 DC013591, P20 GM109023, and P30 DC-4662.


This work was presented at the 2015 American Speech Language Hearing Association Convention, Denver, CO.


  • REFERENCES

  • Alcántara JL, Moore BC, Kühnel V, Launer S. 2003; Evaluation of the noise reduction system in a commercial digital hearing aid. Int J Audiol 42 (01) 34-42
  • Alexander JM. 2013; Individual variability in recognition of frequency-lowered speech. Semin Hear 34: 86-109
  • Alexander JM. 2016; Nonlinear frequency compression: influence of start frequency and input bandwidth on consonant and vowel recognition. J Acoust Soc Am 139 (02) 938-957
  • Alexander JM, Kopun JG, Stelmachowicz PG. 2014; Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss. Ear Hear 35 (05) 519-532
  • Alexander JM, Masterson K. 2015; Effects of WDRC release time and number of channels on output SNR and speech recognition. Ear Hear 36 (02) e35-e49
  • American National Standards Institute (ANSI) 1997. American National Standard Methods for Calculation of the Speech Intelligibility Index. ANSI S3.5-1997. New York, NY: ANSI;
  • American National Standards Institute (ANSI) 2004. Specification for Octave-Band and Fractional-Octave-Band Analog and Digital Filters. ANSI S1.11-2004. New York, NY: ANSI;
  • American Speech-Language-Hearing Association (ASHA) 2005. Guidelines for manual pure-tone threshold audiometry. Rockville, MD: ASHA;
  • Arehart KH, Souza P, Baca R, Kates JM. 2013; Working memory, age, and hearing loss: susceptibility to hearing aid distortion. Ear Hear 34 (03) 251-260
  • Baayen RH, Milin P. 2015; Analyzing reaction times. Int J Psychol Res (Medellin) 3: 12-28
  • Baer T, Moore BCJ, Gatehouse S. 1993; Spectral contrast enhancement of speech in noise for listeners with sensorineural hearing impairment: effects on intelligibility, quality, and response times. J Rehabil Res Dev 30 (01) 49-72
  • Bates D, Maechler M, Bolker B, Walker S. 2015; Fitting linear mixed-effects models using lme4. J Stat Softw 67: 1-48
  • Bentler RA, Pavlovic CV. 1989; Transfer functions and correction factors used in hearing aid evaluation and research. Ear Hear 10 (01) 58-63
  • Bentler R, Walker E, McCreery R, Arenas RM, Roush P. 2014; Nonlinear frequency compression in hearing aids: impact on speech and language development. Ear Hear 35 (04) e143-e152
  • Bess FH, Gustafson SJ, Corbett BA, Lambert EW, Camarata SM, Hornsby BWY. 2016; Salivary cortisol profiles of children with hearing loss. Ear Hear 37 (03) 334-344
  • Boersma P, Weenink D. 2013; Praat: Doing Phonetics by Computer [Computer program]. Version 5.3.51. http://www.praat.org/. Accessed November 8, 2013
  • Brennan MA, McCreery R, Kopun J, Hoover B, Alexander J, Lewis D, Stelmachowicz PG. 2014; Paired comparisons of nonlinear frequency compression, extended bandwidth, and restricted bandwidth hearing aid processing for children and adults with hearing loss. J Am Acad Audiol 25 (10) 983-998
  • Brons I, Houben R, Dreschler WA. 2013; Perceptual effects of noise reduction with respect to personal preference, speech intelligibility, and listening effort. Ear Hear 34 (01) 29-41
  • Buss E, Hall 3rd JW, Grose JH, Dev MB. 1999; Development of adult-like performance in backward, simultaneous, and forward masking. J Speech Lang Hear Res 42 (04) 844-849
  • Ching TY, Day J, Zhang V, Dillon H, Van Buynder P, Seeto M, Hou S, Marnane V, Thomson J, Street L, Wong A, Burns L, Flynn C. 2013; A randomized controlled trial of nonlinear frequency compression versus conventional processing in hearing aids: speech and language of children at three years of age. Int J Audiol 52 (Suppl 2) S46-S54
  • Ching TYC, Dillon H, Byrne D. 1998; Speech recognition of hearing-impaired listeners: predictions from audibility and the limited role of high-frequency amplification. J Acoust Soc Am 103 (02) 1128-1140
  • Ching TY, Dillon H, Katsch R, Byrne D. 2001; Maximizing effective audibility in hearing aid fitting. Ear Hear 22 (03) 212-224
  • Choi S, Lotto A, Lewis D, Hoover B, Stelmachowicz P. 2008; Attentional modulation of word recognition by children in a dual-task paradigm. J Speech Lang Hear Res 51 (04) 1042-1054
  • Cox RM, Alexander GC, Johnson J, Rivera I. 2011; Cochlear dead regions in typical hearing aid candidates: prevalence and implications for use of high-frequency speech cues. Ear Hear 32 (03) 339-348
  • Desjardins JL, Doherty KA. 2014; The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear Hear 35 (06) 600-610
  • Dickinson AM, Baker R, Siciliano C, Munro KJ. 2014; Adaptation to nonlinear frequency compression in normal-hearing adults: a comparison of training approaches. Int J Audiol 53 (10) 719-729
  • Dillon H. 2001. Hearing Aids. New York, NY: Thieme;
  • Downs DW. 1982; Effects of hearing and use on speech discrimination and listening effort. J Speech Hear Disord 47 (02) 189-193
  • Ellis RJ, Munro KJ. 2015; Predictors of aided speech recognition, with and without frequency compression, in older adults. Int J Audiol 54 (07) 467-475
  • Gatehouse S, Gordon J. 1990; Response times to speech stimuli as measures of benefit from amplification. Br J Audiol 24 (01) 63-68
  • Glista D, Scollie S, Bagatto M, Seewald R, Parsa V, Johnson A. 2009; Evaluation of nonlinear frequency compression: clinical outcomes. Int J Audiol 48 (09) 632-644
  • Glista D, Scollie S, Sulkers J. 2012; Perceptual acclimatization post nonlinear frequency compression hearing aid fitting in older children. J Speech Lang Hear Res 55 (06) 1765-1787
  • Gustafson S, McCreery R, Hoover B, Kopun JG, Stelmachowicz P. 2014; Listening effort and perceived clarity for normal-hearing children with the use of digital noise reduction. Ear Hear 35 (02) 183-194
  • Hicks CB, Tharpe AM. 2002; Listening effort and fatigue in school-age children with and without hearing loss. J Speech Lang Hear Res 45 (03) 573-584
  • Hillock-Dunn A, Buss E, Duncan N, Roush PA, Leibold LJ. 2014; Effects of nonlinear frequency compression on speech identification in children with hearing loss. Ear Hear 35 (03) 353-365
  • Hogan CA, Turner CW. 1998; High-frequency audibility: benefits for hearing-impaired listeners. J Acoust Soc Am 104 (01) 432-441
  • Hopkins K, Khanom M, Dickinson AM, Munro KJ. 2014; Benefit from non-linear frequency compression hearing aids in a clinical setting: the effects of duration of experience and severity of high-frequency hearing loss. Int J Audiol 53 (04) 219-228
  • Hornsby BW. 2013; The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear Hear 34 (05) 523-534
  • Hornsby BWY, Johnson EE, Picou E. 2011; Effects of degree and configuration of hearing loss on the contribution of high- and low-frequency speech information to bilateral speech understanding. Ear Hear 32 (05) 543-555
  • Hornsby BWY, Werfel K, Camarata S, Bess FH. 2014; Subjective fatigue in children with hearing loss: some preliminary findings. Am J Audiol 23 (01) 129-134
  • Houben R, van Doorn-Bierman M, Dreschler WA. 2013; Using response time to speech as a measure for listening effort. Int J Audiol 52 (11) 753-761
  • Humes LE, Christensen L, Thomas T, Bess FH, Hedley-Williams A, Bentler R. 1999; A comparison of the aided performance and benefit provided by a linear and a two-channel wide dynamic range compression hearing aid. J Speech Lang Hear Res 42 (01) 65-79
  • John A, Wolfe J, Scollie S, Schafer E, Hudson M, Woods W, Wheeler J, Hudgens K, Neumann S. 2014; Evaluation of wideband frequency responses and nonlinear frequency compression for children with cookie-bite audiometric configurations. J Am Acad Audiol 25 (10) 1022-1033
  • Kahneman D. 1973. Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall, Inc.;
  • Kimlinger C, McCreery R, Lewis D. 2015; High-frequency audibility: the effects of audiometric configuration, stimulus type, and device. J Am Acad Audiol 26 (02) 128-137
  • Kokx-Ryan M, Cohen J, Cord MT, Walden TC, Makashay MJ, Sheffield BM, Brungart DS. 2015; Benefits of nonlinear frequency compression in adult hearing aid users. J Am Acad Audiol 26 (10) 838-855
  • Landis JR, Koch GG. 1977; The measurement of observer agreement for categorical data. Biometrics 33 (01) 159-174
  • Lenth RV. 2001; Some practical guidelines for effective sample size determination. Am Stat 55: 187-193
  • Lewis D, Schmid K, O’Leary S, Spalding J, Heinrichs-Graham E, High R. 2016; Effects of noise on speech recognition and listening effort in children with normal-hearing and children with mild bilateral or unilateral hearing loss. J Speech Lang Hear Res 59 (05) 1218-1232
  • Mackersie CL, Crocker TL, Davis RA. 2004; Limiting high-frequency hearing aid gain in listeners with and without suspected cochlear dead regions. J Am Acad Audiol 15 (07) 498-507
  • McCreery RW, Alexander J, Brennan MA, Hoover B, Kopun J, Stelmachowicz PG. 2014; The influence of audibility on speech recognition with nonlinear frequency compression for children and adults with hearing loss. Ear Hear 35 (04) 440-447
  • McCreery RW, Brennan MA, Hoover B, Kopun J, Stelmachowicz PG. 2013; Maximizing audibility and speech recognition with nonlinear frequency compression by estimating audible bandwidth. Ear Hear 34 (02) e24-e27
  • McCreery RW, Stelmachowicz PG. 2011; Audibility-based predictions of speech recognition for children and adults with normal hearing. J Acoust Soc Am 130 (06) 4070-4081
  • McCreery RW, Stelmachowicz PG. 2013; The effects of limited bandwidth and noise on verbal processing time and word recall in normal-hearing children. Ear Hear 34 (05) 585-591
  • McCreery RW, Walker EA, Spratford M, Oleson J, Bentler R, Holte L, Roush P. 2015; Speech recognition and parent ratings from auditory development questionnaires in children who are hard of hearing. Ear Hear 36 (Suppl 1) 60S-75S
  • McDermott HJ. 2011; A technical comparison of digital frequency-lowering algorithms available in two current hearing aids. PLoS One 6 (07) e22358
  • McGarrigle R, Munro KJ, Dawes P, Stewart AJ, Moore DR, Barry JG, Amitay S. 2014; Listening effort and fatigue: what exactly are we measuring? A British Society of Audiology Cognition in Hearing Special Interest Group ‘white paper’. Int J Audiol 53 (07) 433-440
  • Norman DA, Bobrow DG. 1975; On data-limited and resource-limited processes. Cognit Psychol 7: 44-64
  • Ohlenforst B, Zekveld AA, Jansma EP, Wang Y, Naylor G, Lorens A, Lunner T, Kramer SE. 2017; Effects of hearing impairment and hearing aid amplification on listening effort: a systematic review. Ear Hear 38 (03) 267-281
  • Pavlovic CV. 1987; Derivation of primary parameters and procedures for use in speech intelligibility predictions. J Acoust Soc Am 82 (02) 413-422
  • Pichora-Fuller MK, Kramer SE, Eckert MA, Edwards B, Hornsby BW, Humes LE, Lemke U, Lunner T, Matthen M, Mackersie CL, Naylor G, Phillips NA, Richter M, Rudner M, Sommers MS, Tremblay KL, Wingfield A. 2016; Hearing impairment and cognitive energy: the framework for understanding effortful listening (FUEL). Ear Hear 37 (Suppl 1) 5S-27S
  • Picou EM, Marcrum SC, Ricketts TA. 2015; Evaluation of the effects of nonlinear frequency compression on speech recognition and sound quality for adults with mild to moderate hearing loss. Int J Audiol 54 (03) 162-169
  • Picou EM, Ricketts TA, Hornsby BW. 2013; How hearing aids, background noise, and visual cues influence objective listening effort. Ear Hear 34 (05) e52-e64
  • Pisoni DB, Manous LM, Dedina MJ. 1987; Comprehension of natural and synthetic speech: effects of predictability on the verification of sentences controlled for intelligibility. Comput Speech Lang 2 (3-4) 303-320
  • Pittman AL. 2008; Short-term word-learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidths. J Speech Lang Hear Res 51 (03) 785-797
  • Pittman AL, Stelmachowicz PG. 2000; Perception of voiceless fricatives by normal-hearing and hearing-impaired children and adults. J Speech Lang Hear Res 43 (06) 1389-1401
  • Preminger JE, Carpenter R, Ziegler CH. 2005; A clinical perspective on cochlear dead regions: intelligibility of speech and subjective hearing aid benefit. J Am Acad Audiol 16 (08) 600-613, quiz 631–632
  • R Core Team 2016. R Foundation for Statistical Computing. Vienna, Austria:
  • Rakerd B, Seitz PF, Whearty M. 1996; Assessing the cognitive demands of speech listening for people with hearing losses. Ear Hear 17 (02) 97-106
  • Ratcliff R.. 1993; Methods for dealing with reaction time outliers. Psychol Bull 114 (03) 510-532
  • Sarampalis A, Kalluri S, Edwards B, Hafter E. 2009; Objective measures of listening effort: effects of background noise and noise reduction. J Speech Lang Hear Res 52 (05) 1230-1240
  • Scollie S, Seewald R, Cornelisse L, Moodie S, Bagatto M, Laurnagaray D, Beaulac S, Pumford J. 2005; The Desired Sensation Level multistage input/output algorithm. Trends Amplif 9 (04) 159-197
  • Simpson A, Hersbach AA, McDermott HJ. 2005; Improvements in speech perception with an experimental nonlinear frequency compression hearing device. Int J Audiol 44 (05) 281-292
  • Simpson A, Hersbach AA, McDermott HJ. 2006; Frequency-compression outcomes in listeners with steeply sloping audiograms. Int J Audiol 45 (11) 619-629
  • Souza PE, Arehart KH, Kates JM, Croghan NB, Gehani N. 2013; Exploring the limits of frequency lowering. J Speech Lang Hear Res 56 (05) 1349-1363
  • Stelmachowicz PG, Lewis DE, Choi S, Hoover B. 2007; Effect of stimulus bandwidth on auditory skills in normal-hearing and hearing-impaired children. Ear Hear 28 (04) 483-494
  • Stelmachowicz PG, Pittman AL, Hoover BM, Lewis DE. 2001; Effect of stimulus bandwidth on the perception of /s/ in normal- and hearing-impaired children and adults. J Acoust Soc Am 110 (04) 2183-2190
  • Storkel HL, Hoover JR. 2010; An online calculator to compute phonotactic probability and neighborhood density on the basis of child corpora of spoken American English. Behav Res Methods 42 (02) 497-506
  • Tomblin JB, Harrison M, Ambrose SE, Walker EA, Oleson JJ, Moeller MP. 2015; Language outcomes in young children with mild to severe hearing loss. Ear Hear 36 (Suppl 1) 76S-91S
  • Turner CW, Cummings KJ. 1999; Speech audibility for listeners with high-frequency hearing loss. Am J Audiol 8 (01) 47-56
  • Whelan R. 2008; Effective analysis of reaction time data. Psychol Rec 58: 475-482
  • Wolfe J, John A, Schafer E, Hudson M, Boretzki M, Scollie S, Woods W, Wheeler J, Hudgens K, Neumann S. 2015; Evaluation of wideband frequency responses and non-linear frequency compression for children with mild to moderate high-frequency hearing loss. Int J Audiol 54 (03) 170-181
  • Wolfe J, John A, Schafer E, Nyffeler M, Boretzki M, Caraway T. 2010; Evaluation of nonlinear frequency compression for school-age children with moderate to moderately severe hearing loss. J Am Acad Audiol 21 (10) 618-628
  • Wolfe J, John A, Schafer E, Nyffeler M, Boretzki M, Caraway T, Hudson M. 2011; Long-term effects of non-linear frequency compression for children with moderate hearing loss. Int J Audiol 50 (06) 396-404


REFERENCES

  • Alcántara JL, Moore BC, Kühnel V, Launer S. 2003; Evaluation of the noise reduction system in a commercial digital hearing aid. Int J Audiol 42 (01) 34-42
  • Alexander JM. 2013; Individual variability in recognition of frequency-lowered speech. Semin Hear 34: 86-109
  • Alexander JM. 2016; Nonlinear frequency compression: influence of start frequency and input bandwidth on consonant and vowel recognition. J Acoust Soc Am 139 (02) 938-957
  • Alexander JM, Kopun JG, Stelmachowicz PG. 2014; Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss. Ear Hear 35 (05) 519-532
  • Alexander JM, Masterson K. 2015; Effects of WDRC release time and number of channels on output SNR and speech recognition. Ear Hear 36 (02) e35-e49
  • American National Standards Institute (ANSI) 1997. American National Standard Methods for Calculation of the Speech Intelligibility Index. ANSI S3.5-1997. New York, NY: ANSI;
  • American National Standards Institute (ANSI) 2004. Specification for Octave-Band and Fractional-Octave-Band Analog and Digital Filters. ANSI S1.11-2004. New York, NY: ANSI;
  • American Speech-Language-Hearing Association (ASHA) 2005. Guidelines for manual pure-tone threshold audiometry. Rockville, MD: ASHA;
  • Arehart KH, Souza P, Baca R, Kates JM. 2013; Working memory, age, and hearing loss: susceptibility to hearing aid distortion. Ear Hear 34 (03) 251-260
  • Baayen RH, Milin P. 2015; Analyzing reaction times. Int J Psychol Res (Medellin) 3: 12-28
  • Baer T, Moore BCJ, Gatehouse S. 1993; Spectral contrast enhancement of speech in noise for listeners with sensorineural hearing impairment: effects on intelligibility, quality, and response times. J Rehabil Res Dev 30 (01) 49-72
  • Bates D, Maechler M, Bolker B, Walker S. 2015; Fitting linear mixed-effects models using lme4. J Stat Softw 67: 1-48
  • Bentler RA, Pavlovic CV. 1989; Transfer functions and correction factors used in hearing aid evaluation and research. Ear Hear 10 (01) 58-63
  • Bentler R, Walker E, McCreery R, Arenas RM, Roush P. 2014; Nonlinear frequency compression in hearing aids: impact on speech and language development. Ear Hear 35 (04) e143-e152
  • Bess FH, Gustafson SJ, Corbett BA, Lambert EW, Camarata SM, Hornsby BWY. 2016; Salivary cortisol profiles of children with hearing loss. Ear Hear 37 (03) 334-344
  • Boersma P, Weenink D. 2013; Praat: Doing Phonetics by Computer [Computer program]. Version 5.3.51. http://www.praat.org/. Accessed November 8, 2013
  • Brennan MA, McCreery R, Kopun J, Hoover B, Alexander J, Lewis D, Stelmachowicz PG. 2014; Paired comparisons of nonlinear frequency compression, extended bandwidth, and restricted bandwidth hearing aid processing for children and adults with hearing loss. J Am Acad Audiol 25 (10) 983-998
  • Brons I, Houben R, Dreschler WA. 2013; Perceptual effects of noise reduction with respect to personal preference, speech intelligibility, and listening effort. Ear Hear 34 (01) 29-41
  • Buss E, Hall 3rd JW, Grose JH, Dev MB. 1999; Development of adult-like performance in backward, simultaneous, and forward masking. J Speech Lang Hear Res 42 (04) 844-849
  • Ching TY, Day J, Zhang V, Dillon H, Van Buynder P, Seeto M, Hou S, Marnane V, Thomson J, Street L, Wong A, Burns L, Flynn C. 2013; A randomized controlled trial of nonlinear frequency compression versus conventional processing in hearing aids: speech and language of children at three years of age. Int J Audiol 52 (Suppl 2) S46-S54
  • Ching TYC, Dillon H, Byrne D. 1998; Speech recognition of hearing-impaired listeners: predictions from audibility and the limited role of high-frequency amplification. J Acoust Soc Am 103 (02) 1128-1140
  • Ching TY, Dillon H, Katsch R, Byrne D. 2001; Maximizing effective audibility in hearing aid fitting. Ear Hear 22 (03) 212-224
  • Choi S, Lotto A, Lewis D, Hoover B, Stelmachowicz P. 2008; Attentional modulation of word recognition by children in a dual-task paradigm. J Speech Lang Hear Res 51 (04) 1042-1054
  • Cox RM, Alexander GC, Johnson J, Rivera I. 2011; Cochlear dead regions in typical hearing aid candidates: prevalence and implications for use of high-frequency speech cues. Ear Hear 32 (03) 339-348
  • Desjardins JL, Doherty KA. 2014; The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear Hear 35 (06) 600-610
  • Dickinson AM, Baker R, Siciliano C, Munro KJ. 2014; Adaptation to nonlinear frequency compression in normal-hearing adults: a comparison of training approaches. Int J Audiol 53 (10) 719-729
  • Dillon H. 2001. Hearing Aids. New York, NY: Thieme;
  • Downs DW. 1982; Effects of hearing and use on speech discrimination and listening effort. J Speech Hear Disord 47 (02) 189-193
  • Ellis RJ, Munro KJ. 2015; Predictors of aided speech recognition, with and without frequency compression, in older adults. Int J Audiol 54 (07) 467-475
  • Gatehouse S, Gordon J. 1990; Response times to speech stimuli as measures of benefit from amplification. Br J Audiol 24 (01) 63-68
  • Glista D, Scollie S, Bagatto M, Seewald R, Parsa V, Johnson A. 2009; Evaluation of nonlinear frequency compression: clinical outcomes. Int J Audiol 48 (09) 632-644
  • Glista D, Scollie S, Sulkers J. 2012; Perceptual acclimatization post nonlinear frequency compression hearing aid fitting in older children. J Speech Lang Hear Res 55 (06) 1765-1787
  • Gustafson S, McCreery R, Hoover B, Kopun JG, Stelmachowicz P. 2014; Listening effort and perceived clarity for normal-hearing children with the use of digital noise reduction. Ear Hear 35 (02) 183-194
  • Hicks CB, Tharpe AM. 2002; Listening effort and fatigue in school-age children with and without hearing loss. J Speech Lang Hear Res 45 (03) 573-584
  • Hillock-Dunn A, Buss E, Duncan N, Roush PA, Leibold LJ. 2014; Effects of nonlinear frequency compression on speech identification in children with hearing loss. Ear Hear 35 (03) 353-365
  • Hogan CA, Turner CW. 1998; High-frequency audibility: benefits for hearing-impaired listeners. J Acoust Soc Am 104 (01) 432-441
  • Hopkins K, Khanom M, Dickinson AM, Munro KJ. 2014; Benefit from non-linear frequency compression hearing aids in a clinical setting: the effects of duration of experience and severity of high-frequency hearing loss. Int J Audiol 53 (04) 219-228
  • Hornsby BW. 2013; The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear Hear 34 (05) 523-534
  • Hornsby BWY, Johnson EE, Picou E. 2011; Effects of degree and configuration of hearing loss on the contribution of high- and low-frequency speech information to bilateral speech understanding. Ear Hear 32 (05) 543-555
  • Hornsby BWY, Werfel K, Camarata S, Bess FH. 2014; Subjective fatigue in children with hearing loss: some preliminary findings. Am J Audiol 23 (01) 129-134
  • Houben R, van Doorn-Bierman M, Dreschler WA. 2013; Using response time to speech as a measure for listening effort. Int J Audiol 52 (11) 753-761
  • Humes LE, Christensen L, Thomas T, Bess FH, Hedley-Williams A, Bentler R. 1999; A comparison of the aided performance and benefit provided by a linear and a two-channel wide dynamic range compression hearing aid. J Speech Lang Hear Res 42 (01) 65-79
  • John A, Wolfe J, Scollie S, Schafer E, Hudson M, Woods W, Wheeler J, Hudgens K, Neumann S. 2014; Evaluation of wideband frequency responses and nonlinear frequency compression for children with cookie-bite audiometric configurations. J Am Acad Audiol 25 (10) 1022-1033
  • Kahneman D. 1973. Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall, Inc.;
  • Kimlinger C, McCreery R, Lewis D. 2015; High-frequency audibility: the effects of audiometric configuration, stimulus type, and device. J Am Acad Audiol 26 (02) 128-137
  • Kokx-Ryan M, Cohen J, Cord MT, Walden TC, Makashay MJ, Sheffield BM, Brungart DS. 2015; Benefits of nonlinear frequency compression in adult hearing aid users. J Am Acad Audiol 26 (10) 838-855
  • Landis JR, Koch GG. 1977; The measurement of observer agreement for categorical data. Biometrics 33 (01) 159-174
  • Lenth RV. 2001; Some practical guidelines for effective sample size determination. Am Stat 55: 187-193
  • Lewis D, Schmid K, O’Leary S, Spalding J, Heinrichs-Graham E, High R. 2016; Effects of noise on speech recognition and listening effort in children with normal hearing and children with mild bilateral or unilateral hearing loss. J Speech Lang Hear Res 59 (05) 1218-1232
  • Mackersie CL, Crocker TL, Davis RA. 2004; Limiting high-frequency hearing aid gain in listeners with and without suspected cochlear dead regions. J Am Acad Audiol 15 (07) 498-507
  • McCreery RW, Alexander J, Brennan MA, Hoover B, Kopun J, Stelmachowicz PG. 2014; The influence of audibility on speech recognition with nonlinear frequency compression for children and adults with hearing loss. Ear Hear 35 (04) 440-447
  • McCreery RW, Brennan MA, Hoover B, Kopun J, Stelmachowicz PG. 2013; Maximizing audibility and speech recognition with nonlinear frequency compression by estimating audible bandwidth. Ear Hear 34 (02) e24-e27
  • McCreery RW, Stelmachowicz PG. 2011; Audibility-based predictions of speech recognition for children and adults with normal hearing. J Acoust Soc Am 130 (06) 4070-4081
  • McCreery RW, Stelmachowicz PG. 2013; The effects of limited bandwidth and noise on verbal processing time and word recall in normal-hearing children. Ear Hear 34 (05) 585-591
  • McCreery RW, Walker EA, Spratford M, Oleson J, Bentler R, Holte L, Roush P. 2015; Speech recognition and parent ratings from auditory development questionnaires in children who are hard of hearing. Ear Hear 36 (Suppl 1) 60S-75S
  • McDermott HJ. 2011; A technical comparison of digital frequency-lowering algorithms available in two current hearing aids. PLoS One 6 (07) e22358
  • McGarrigle R, Munro KJ, Dawes P, Stewart AJ, Moore DR, Barry JG, Amitay S. 2014; Listening effort and fatigue: what exactly are we measuring? A British Society of Audiology Cognition in Hearing Special Interest Group ‘white paper’. Int J Audiol 53 (07) 433-440
  • Norman DA, Bobrow DG. 1975; On data-limited and resource-limited processes. Cognit Psychol 7: 44-64
  • Ohlenforst B, Zekveld AA, Jansma EP, Wang Y, Naylor G, Lorens A, Lunner T, Kramer SE. 2017; Effects of hearing impairment and hearing aid amplification on listening effort: a systematic review. Ear Hear 38 (03) 267-281
  • Pavlovic CV. 1987; Derivation of primary parameters and procedures for use in speech intelligibility predictions. J Acoust Soc Am 82 (02) 413-422
  • Pichora-Fuller MK, Kramer SE, Eckert MA, Edwards B, Hornsby BW, Humes LE, Lemke U, Lunner T, Matthen M, Mackersie CL, Naylor G, Phillips NA, Richter M, Rudner M, Sommers MS, Tremblay KL, Wingfield A. 2016; Hearing impairment and cognitive energy: the framework for understanding effortful listening (FUEL). Ear Hear 37 (Suppl 1) 5S-27S
  • Picou EM, Marcrum SC, Ricketts TA. 2015; Evaluation of the effects of nonlinear frequency compression on speech recognition and sound quality for adults with mild to moderate hearing loss. Int J Audiol 54 (03) 162-169
  • Picou EM, Ricketts TA, Hornsby BW. 2013; How hearing aids, background noise, and visual cues influence objective listening effort. Ear Hear 34 (05) e52-e64
  • Pisoni DB, Manous LM, Dedina MJ. 1987; Comprehension of natural and synthetic speech: effects of predictability on the verification of sentences controlled for intelligibility. Comput Speech Lang 2 (3-4) 303-320
  • Pittman AL. 2008; Short-term word-learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidths. J Speech Lang Hear Res 51 (03) 785-797
  • Pittman AL, Stelmachowicz PG. 2000; Perception of voiceless fricatives by normal-hearing and hearing-impaired children and adults. J Speech Lang Hear Res 43 (06) 1389-1401
  • Preminger JE, Carpenter R, Ziegler CH. 2005; A clinical perspective on cochlear dead regions: intelligibility of speech and subjective hearing aid benefit. J Am Acad Audiol 16 (08) 600-613, quiz 631-632
  • R Core Team. 2016. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing;
  • Rakerd B, Seitz PF, Whearty M. 1996; Assessing the cognitive demands of speech listening for people with hearing losses. Ear Hear 17 (02) 97-106
  • Ratcliff R. 1993; Methods for dealing with reaction time outliers. Psychol Bull 114 (03) 510-532
  • Sarampalis A, Kalluri S, Edwards B, Hafter E. 2009; Objective measures of listening effort: effects of background noise and noise reduction. J Speech Lang Hear Res 52 (05) 1230-1240
  • Scollie S, Seewald R, Cornelisse L, Moodie S, Bagatto M, Laurnagaray D, Beaulac S, Pumford J. 2005; The Desired Sensation Level multistage input/output algorithm. Trends Amplif 9 (04) 159-197
  • Simpson A, Hersbach AA, McDermott HJ. 2005; Improvements in speech perception with an experimental nonlinear frequency compression hearing device. Int J Audiol 44 (05) 281-292
  • Simpson A, Hersbach AA, McDermott HJ. 2006; Frequency-compression outcomes in listeners with steeply sloping audiograms. Int J Audiol 45 (11) 619-629
  • Souza PE, Arehart KH, Kates JM, Croghan NB, Gehani N. 2013; Exploring the limits of frequency lowering. J Speech Lang Hear Res 56 (05) 1349-1363
  • Stelmachowicz PG, Lewis DE, Choi S, Hoover B. 2007; Effect of stimulus bandwidth on auditory skills in normal-hearing and hearing-impaired children. Ear Hear 28 (04) 483-494
  • Stelmachowicz PG, Pittman AL, Hoover BM, Lewis DE. 2001; Effect of stimulus bandwidth on the perception of /s/ in normal- and hearing-impaired children and adults. J Acoust Soc Am 110 (04) 2183-2190
  • Storkel HL, Hoover JR. 2010; An online calculator to compute phonotactic probability and neighborhood density on the basis of child corpora of spoken American English. Behav Res Methods 42 (02) 497-506
  • Tomblin JB, Harrison M, Ambrose SE, Walker EA, Oleson JJ, Moeller MP. 2015; Language outcomes in young children with mild to severe hearing loss. Ear Hear 36 (Suppl 1) 76S-91S
  • Turner CW, Cummings KJ. 1999; Speech audibility for listeners with high-frequency hearing loss. Am J Audiol 8 (01) 47-56
  • Whelan R. 2008; Effective analysis of reaction time data. Psychol Rec 58: 475-482
  • Wolfe J, John A, Schafer E, Hudson M, Boretzki M, Scollie S, Woods W, Wheeler J, Hudgens K, Neumann S. 2015; Evaluation of wideband frequency responses and non-linear frequency compression for children with mild to moderate high-frequency hearing loss. Int J Audiol 54 (03) 170-181
  • Wolfe J, John A, Schafer E, Nyffeler M, Boretzki M, Caraway T. 2010; Evaluation of nonlinear frequency compression for school-age children with moderate to moderately severe hearing loss. J Am Acad Audiol 21 (10) 618-628
  • Wolfe J, John A, Schafer E, Nyffeler M, Boretzki M, Caraway T, Hudson M. 2011; Long-term effects of non-linear frequency compression for children with moderate hearing loss. Int J Audiol 50 (06) 396-404

Figure 1 Hearing thresholds (dB HL) for children and adults. Left and right ears are shown in the left and right panels, respectively. Box boundaries represent the 25th and 75th percentiles, error bars represent the 10th and 90th percentiles, horizontal lines represent the medians, and filled circles represent the means.

Figure 2 Proportion correct identification for each processing condition in children and adults. The measure depicted is indicated in each panel. Box boundaries represent the 25th and 75th percentiles, error bars represent the 10th and 90th percentiles, horizontal lines represent the medians, and filled circles represent the means.

Figure 3 Proportion correct for each consonant in each processing condition. Initial consonants are shown in the top panel and final consonants in the bottom panel. Consonants are ordered by manner: fricatives, nasals, affricates, and stops; within each category, voiceless precedes voiced. Proportion correct is collapsed across the two age groups. Error bars represent 1 SD.

Figure 4 VRT for all responses. Response times are shown for each processing condition, as indicated by the legend. Box boundaries represent the 25th and 75th percentiles, error bars represent the 10th and 90th percentiles, horizontal lines represent the medians, and filled circles represent the means.

Figure 5 Benefit of high-frequency amplification by degree of hearing loss. EBW minus RBW is shown in the left column, and NFC minus RBW in the right column. Benefit for nonsense-word recognition and VRT is shown in the top and bottom rows, respectively.
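Figures 1, 2, and 4 share a box-plot convention (boxes at the 25th/75th percentiles, whiskers at the 10th/90th percentiles, a median line, and a filled circle for the mean) that differs from common plotting defaults, where whiskers extend 1.5 times the interquartile range. The sketch below shows how a plot in this convention could be reproduced in Python with matplotlib; it is illustrative only, not the authors' analysis code, and the simulated scores and condition labels are placeholder assumptions.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical proportion-correct scores for 14 listeners in each of the
# three processing conditions (placeholder values, not study data).
rng = np.random.default_rng(0)
conditions = ["RBW", "EBW", "NFC"]
scores = [np.clip(rng.normal(loc=m, scale=0.08, size=14), 0, 1)
          for m in (0.55, 0.66, 0.64)]

fig, ax = plt.subplots()
ax.boxplot(
    scores,
    labels=conditions,
    whis=(10, 90),          # whiskers at the 10th and 90th percentiles
    showmeans=True,         # mean drawn as a marker
    meanprops=dict(marker="o", markerfacecolor="black",
                   markeredgecolor="black"),
    medianprops=dict(color="black"),  # horizontal median line
    showfliers=False,       # no points plotted beyond the whiskers
)
ax.set_xlabel("Processing condition")
ax.set_ylabel("Proportion correct")
ax.set_title("Nonsense-word recognition (simulated data)")
plt.show()

Passing whis=(10, 90) places the whiskers at fixed percentiles rather than the 1.5 × IQR default, matching the captions' description. The benefit scores in Figure 5 would be per-participant difference scores (e.g., EBW minus RBW) plotted against degree of hearing loss in the same manner.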