DOI: 10.3766/jaaa.16157
Listener Performance with a Novel Hearing Aid Frequency Lowering Technique
Publication Date: 26 June 2020 (online)
Abstract
Background:
Sloping hearing loss imposes limits on audibility for high-frequency sounds in many hearing aid users. Signal processing algorithms that shift high-frequency sounds to lower frequencies have been introduced in hearing aids to address this challenge by improving audibility of high-frequency sounds.
Purpose:
This study examined speech perception performance, listening effort, and subjective sound quality ratings with conventional hearing aid processing and a new frequency-lowering signal processing strategy called frequency composition (FC) in adults and children.
Research Design:
Participants wore the study hearing aids in two signal processing conditions (conventional processing versus FC) at an initial laboratory visit and subsequently at home during two approximately six-week long trials, with the order of conditions counterbalanced across individuals in a double-blind paradigm.
Study Sample:
Children (N = 12, 7 females, mean age in years = 12.0, SD = 3.0) and adults (N = 12, 6 females, mean age in years = 56.2, SD = 17.6) with bilateral sensorineural hearing loss who were full-time hearing aid users.
Data Collection and Analyses:
Individual performance with each type of processing was assessed using speech perception tasks, a measure of listening effort, and subjective sound quality surveys at an initial visit. At the conclusion of each subsequent at-home trial, participants were retested in the laboratory. Linear mixed effects analyses were completed for each outcome measure with signal processing condition, age group, visit (prehome versus posthome trial), and measures of aided audibility as predictors.
Results:
Overall, there were few significant differences in speech perception, listening effort, or subjective sound quality between FC and conventional processing, effects of listener age, or longitudinal changes in performance. Listeners preferred FC to conventional processing on one of six subjective sound quality metrics. Better speech perception performance was consistently related to higher aided audibility.
Conclusions:
These results indicate that when high-frequency speech sounds are made audible with conventional processing, speech recognition ability and listening effort are similar between conventional processing and FC. Despite the lack of benefit to speech perception, some listeners still preferred FC, suggesting that qualitative measures should be considered when evaluating candidacy for this signal processing strategy.
INTRODUCTION
Sloping hearing loss, with poorer thresholds in the high frequencies, is a common configuration of hearing loss in adults and children ([Pittman and Stelmachowicz, 2003]; [Margolis and Saly, 2008]). Limited high-frequency gain, limited receiver bandwidth, and acoustic feedback present difficulties to audiologists fitting hearing aids for this prevalent configuration of hearing loss. In addition, severe or profound high-frequency hearing loss may result in limited benefit to speech perception even when audibility can be achieved at those frequencies ([Ching et al, 1998]; [Hogan and Turner, 1998]).
One way to address these challenges is with hearing aid signal processing techniques that shift high-frequency sounds to lower frequencies where hearing thresholds are better in listeners with sloping losses (see [Alexander, 2013] for review). One signal processing technique, frequency compression, remaps high-frequency input above a specified start frequency to lower frequencies in the output of the hearing aid, thereby leaving the frequency band below the start frequency unchanged. Another approach, frequency transposition, copies spectral content from a high-frequency source band and reproduces it in a lower frequency destination band.
Frequency composition (FC) is a frequency-lowering strategy recently introduced by Oticon, also known as Speech Rescue ([Angelo et al, 2015]). A variation that combines aspects of frequency transposition and frequency compression, this approach copies multiple adjacent segments from a high-frequency source band, compresses those segments, and then overlaps and superimposes them in a low-frequency destination band. This method has theoretical advantages in comparison with other frequency-lowering approaches. It does not introduce distortion of vowel formant ratios by shifting the second formant, as can occur with frequency compression settings with a start frequency below 2000 Hz ([Perreau et al, 2013]; [Kirby and Brown, 2015]). It also has a narrower destination band relative to transposition, reducing the likelihood of the shifted speech sounds masking other speech information already present in that band. The fitting software includes ten settings, each corresponding to a different combination of source and destination start frequency and bandwidth. The recommended FC setting is the one that shifts the high-frequency source band to a destination band with an upper boundary just below the listener's maximum audible frequency (MAF) with conventional hearing aid amplification. The MAF is defined as the point at which the average aided speech spectrum intersects the listener's hearing thresholds and becomes inaudible. The fitting audiologist has the option to leave the source band unaltered, preserving conventional bandwidth, or to remove this band from the hearing aid output. The fitting software also allows adjustment of the level (or "strength") of the lowered high-frequency bands relative to the level of the speech signal in the destination band, with the default set to equal energy between the destination band level and the copied high-frequency bands.
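To make the copy-compress-superimpose idea concrete, the toy R sketch below applies it to a magnitude spectrum. It is only an illustration of the general principle described above, not Oticon's implementation: the band edges, the number of source segments, and the mixing rule are hypothetical placeholders.

```r
## Toy illustration of a copy-compress-superimpose frequency-lowering scheme.
## NOT the FC algorithm itself; band edges and segment count are hypothetical.
frequency_compose <- function(mag, freqs,
                              source_band = c(4000, 8000),  # Hz, placeholder
                              dest_band   = c(2000, 4000),  # Hz, placeholder
                              n_segments  = 2) {
  out <- mag
  src_idx  <- which(freqs >= source_band[1] & freqs < source_band[2])
  dest_idx <- which(freqs >= dest_band[1]  & freqs < dest_band[2])
  # Split the source band into adjacent segments
  segs <- split(src_idx, cut(seq_along(src_idx), n_segments, labels = FALSE))
  for (seg in segs) {
    # Compress each segment onto the destination band and superimpose it
    shifted <- approx(seq_along(seg), mag[seg],
                      xout = seq(1, length(seg), length.out = length(dest_idx)))$y
    out[dest_idx] <- out[dest_idx] + shifted
  }
  out  # the source band is left intact, mirroring the fitting choice in this study
}

# Toy example: a downward-sloping magnitude spectrum on a 0-8000 Hz grid
freqs   <- seq(0, 8000, by = 50)
mag     <- exp(-freqs / 3000)
lowered <- frequency_compose(mag, freqs)
```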
We know of no peer-reviewed research on FC. A white paper ([Angelo et al, 2015]) showed significant improvements in consonant and word recognition in noise with FC compared with conventional amplification in adults with severe to profound hearing loss (N = 12, mean age = 54 years), but did not measure performance over time or in children. Whether listening effort improves with FC has not been established.
While no previous work has examined the effect of FC on speech recognition or listening effort, numerous studies have examined the effects of frequency transposition and compression. There is mixed evidence of speech perception benefit from frequency lowering in hearing aids. Most studies report small, but significant, improvements in recognition of high-frequency consonants, including plural /s/, with frequency lowering enabled compared with conventional amplification ([Auriemmo et al, 2009]; [Glista et al, 2009]; [Kuk et al, 2009]; [Smith et al, 2009]; [Wolfe et al, 2010]; [Souza et al, 2013]; [McCreery et al, 2014]; [Picou et al, 2015]). However, a subset of listeners may not benefit from frequency lowering or have poorer speech perception with this feature enabled ([Simpson et al, 2005]; [2006]; [Ellis and Munro, 2013]; [Alexander et al, 2014]). Several factors could contribute to potential benefit from frequency lowering. [Alexander et al (2014)] found that relative to conventional processing, frequency transposition resulted in poorer fricative and affricate perception in a group of listeners with mild to moderate loss, whereas frequency compression contributed to improved performance. Moderate frequency compression settings have been associated with increases in speech perception scores, and more aggressive settings with decreases, in listeners with limited high-frequency audibility ([Souza et al, 2013]). Children and adults may differ in their ability to benefit from frequency lowering ([Glista et al, 2009]); however, comparisons in frequency-lowering benefit between children and adults can be confounded by differences in hearing aid prescription and differences in speech recognition in conventional amplification. When speech recognition performance is similar between children and adults for conventional processing, the benefit obtained for frequency lowering by both age groups can be comparable ([McCreery et al, 2014]). There have also been multiple reports of speech perception benefit increasing after an acclimatization period with frequency lowering ranging in length from weeks to years ([Auriemmo et al, 2009]; [Kuk et al, 2009]; [Smith et al, 2009]; [Wolfe et al, 2011]; [Glista et al, 2012]). However, others have found no evidence of acclimatization after 24 weeks ([Smith et al, 2009]) or over 2 years of wearing a frequency-lowering hearing aid ([Hopkins et al, 2014]).
It could be the case that variability in benefit reported for frequency lowering in the previously mentioned studies resulted from differences in signal processing strategy, fitting approach, acclimatization, and degree and/or configuration of hearing loss across studies. Taken together, these findings suggest that ideal candidates for frequency compression are listeners with the most restricted high-frequency audibility, who would presumably benefit the most from access to high-frequency audibility, and that the optimal frequency-lowering setting is one that maximally increases audibility without introducing perceptually unacceptable distortion ([McCreery et al, 2013]; [2014]; [Souza et al, 2013]; [Alexander, 2016]). It remains unclear to what degree acclimatization occurs on average, or which factors relate to acclimatization benefit with frequency-lowering strategies in complex listening environments.
In addition to speech perception, subjective sound quality can be affected by frequency lowering. Some reports have found that perceived quality of speech may improve with frequency compression relative to conventional processing with restricted high-frequency bandwidth ([Bohnert et al, 2010]; [Brennan et al, 2014]). Others have found that frequency compression resulted in overall poorer sound quality ratings and that the lowest ratings came from the listeners with greatest high-frequency audibility ([Souza et al, 2013]). One possible cause for this difference in findings is that Souza and colleagues used fewer sine components to represent the shifted high-frequency segments, which may have negatively impacted the perceived sound quality. Others have reported that listeners with hearing loss do not rate the quality of speech differently from conventional processing across a range of frequency compression settings, with the exception of poorer ratings for conditions with a start frequency of compression below 2 kHz ([Parsa et al, 2013]). Music perception has been reported to be either unaffected ([Parsa et al, 2013]; [Brennan et al, 2014]) or negatively affected ([Mussoi and Bentler, 2015]) by nonlinear frequency compression, with poorer ratings associated with more aggressive frequency compression settings ([Mussoi and Bentler, 2015]). The overall pattern of results suggests that conservative frequency-lowering settings may not negatively affect perceived sound quality but that lower start frequencies and more aggressive compression ratios do have the potential to negatively impact sound quality.
It is possible that in addition to any changes in speech recognition and sound quality associated with frequency-lowering signal processing, the mental effort required to listen to speech that has been processed in this fashion may also change. One approach to measuring listening effort is the dual-task paradigm, in which the listener's reaction time on a competing nonauditory task is measured while the listener simultaneously completes a speech-recognition task. Results from dual-task measures are consistent with some models of working memory ([Barrouillet and Camos, 2007]), which propose that visual and auditory processing both depend on a common, finite store of attentional resources that must be split between competing tasks. Increasing the attentional demands of a task in one modality (auditory) would necessarily decrease the resources that could be applied to a task in another modality (visual), resulting in increases in reaction time and possible decreases in accuracy. For example, [Sarampalis et al (2009)] observed that increasing the difficulty of listening tasks, such as decreasing the signal-to-noise ratio (SNR) of stimuli in a speech recognition task, results in increases in reaction time on a secondary visual task in addition to possible decreased accuracy in the auditory task.
Use of amplification can reduce dual task reaction time in listeners with sensorineural hearing loss ([Hornsby, 2013]). Signal processing strategies such as digital noise reduction have been shown to significantly reduce reaction time compared with amplification alone ([Sarampalis et al, 2009]), although no significant differences in speech recognition ability were observed in the auditory task. Dual task paradigms might be sensitive to a kind of benefit from frequency-lowering signal processing, reduced listening effort, which may not be captured by speech perception testing alone. Conversely, frequency-lowering algorithms that improve audibility of high-frequency speech sounds while introducing excessive distortion may increase listening effort.
In this study, speech perception, listening effort, and sound quality measurements were completed using conventional hearing aid processing and with FC. Adult and child hearing aid users with bilateral sensorineural hearing loss were fit with experimental hearing aids. At an initial visit, individual performance with each type of processing was assessed using speech perception tasks for single words and words embedded in low-predictability sentences, a dual task experiment to estimate listening effort, and subjective sound quality surveys. To establish whether benefit improved over time, participants then wore the hearing aids in each signal processing condition (conventional processing and FC) at home during two approximately six-week-long trials. The order of conditions at the initial visit and at-home trials was counterbalanced across individuals in a double-blind paradigm. At the conclusion of each at-home trial, participants were retested wearing the study aids in that condition. The following outcomes were hypothesized: (a) perception of high-frequency consonant sounds (particularly plural endings) and overall speech perception would be improved with FC relative to conventional amplification, (b) speech perception performance with FC would increase over time, (c) listeners with poorer high-frequency audibility would derive greater benefit from FC, (d) sound quality would be improved with FC enabled, (e) listening effort would be reduced in the FC condition, reflecting improved audibility with minimal distortion, and (f) there would be no differences between adults and children in benefit from FC on any of the outcomes.
METHODS
Participants
Children (N = 12, 7 females, mean age in years = 12.0, SD = 3.0) and adults (N = 12, 6 females, mean age in years = 56.2, SD = 17.6) with bilateral sensorineural hearing loss, defined as having no air-bone gaps >10 dB, participated in this study. All had at least 1 year of experience wearing hearing aids. All participants were native English speakers with no disabilities in addition to hearing loss. The range of hearing loss of the participants was limited by the recommended fitting range of the power behind-the-ear hearing aid used for the study, which was mild sloping to moderate loss at the lower limit and profound at the upper limit. Pure tone audiometry at octave frequencies from 250 to 8000 Hz in each ear was completed at the time of the first study visit ([Figure 1]; [Tables 1] and [2]). Participants received compensation for participation. All procedures were approved by the Institutional Review Board of Boys Town National Research Hospital. The number of participants needed for the study was estimated using a power analysis based on the reported effect sizes for frequency lowering (ηp² = 0.45) in a previous study from this laboratory with similar methods and participants ([McCreery et al, 2014]). The power analysis indicated that 24 participants were sufficient to detect this effect size with 80% power based on observed mean differences and variance from the previous study.
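As a rough illustration of how a reported effect size feeds a sample-size estimate, the R sketch below converts ηp² = 0.45 to Cohen's f² and solves for N with the pwr package. This is a simplified, fixed-effects approximation, not the authors' calculation, which used the observed mean differences and variances from the earlier repeated-measures study.

```r
## Simplified power sketch, assuming a fixed-effects approximation with pwr;
## the study's own estimate (N = 24) was based on observed means and variances.
library(pwr)

eta_p2 <- 0.45
f2 <- eta_p2 / (1 - eta_p2)          # convert partial eta-squared to Cohen's f2

# Numerator df = 1 (FC vs. conventional processing); solve for the error df
res <- pwr.f2.test(u = 1, f2 = f2, sig.level = 0.05, power = 0.80)
n_approx <- ceiling(res$v) + 1 + 1   # total N = error df + numerator df + 1
```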


Note: Subject 3 had anotia of the left ear and could not be tested on that side by air conduction.
Hearing Aids/Fitting Method
Earmold impressions were taken, and earmolds were ordered for listeners who did not have conventional earmolds or whose earmolds had problems with retention or limited gain without feedback. The largest possible vent that did not result in feedback was selected for each ear using an adjustable select-a-vent (Westone, Colorado Springs, CO). Feedback management, which used phase inversion, frequency shift, and gain control, was completed for each listener after fitting the earmold to the behind-the-ear hearing aids used in the study. Real ear measures of aided audibility were then completed using a Verifit hearing aid analyzer (Audioscan, Dorchester, ON) for each participant with the experimental hearing aids in the conventional processing (CP) condition and using a calibrated speech passage presented at 60 dB SPL. Gain was adjusted to match Desired Sensation Level (DSL) prescriptive targets ([Scollie et al, 2005]); DSL v 5.0 Child targets were used with the child listeners and DSL v 5.0 Adult targets for the adult listeners. For some participants, available gain in the hearing aid was limited after running the feedback manager, in which case the gain was set as close as possible to the prescribed target. Aided speech intelligibility index (SII) was computed by the Verifit software. The average aided SII was 53 (SD = 13) in the right ear and 53 (SD = 15) in the left ear for the adult listeners and 67 (SD = 12) in the right ear and 65 (SD = 17) in the left ear for the child listeners. MAF, defined as the frequency where the aided average speech spectrum level for the 60 dB stimulus intersected the audiometric threshold contour, was measured and recorded for each participant. Average MAF in the adult listeners was 4132 Hz (SD = 1106) in the right ear and 4085 Hz (SD = 1262) in the left ear; average MAF in the child listeners was 5759 Hz (SD = 1206) in the right ear and 5172 Hz (SD = 1691) in the left ear.
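The Verifit reports the aided SII and the speech-spectrum/threshold crossover directly; the short R sketch below only illustrates the MAF definition given in the text, using hypothetical aided speech levels and thresholds expressed in the same dB SPL units.

```r
## Illustrative MAF lookup: highest frequency at which the aided speech spectrum
## still exceeds the hearing threshold. All values below are hypothetical.
maf <- function(freqs, aided_speech_dB, threshold_dB) {
  audible <- aided_speech_dB > threshold_dB
  if (all(audible)) return(max(freqs))         # speech audible across the whole band
  if (!audible[1])  return(NA)                 # inaudible even at the lowest frequency
  max(freqs[seq_len(which(!audible)[1] - 1)])  # last frequency before speech drops below threshold
}

freqs  <- c(250, 500, 1000, 2000, 3000, 4000, 6000, 8000)
speech <- c(62, 60, 57, 53, 50, 47, 44, 40)    # aided speech levels, dB SPL (made up)
thresh <- c(35, 40, 45, 50, 48, 55, 70, 80)    # thresholds, dB SPL (made up)
maf(freqs, speech, thresh)                     # returns 3000
```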
To blind the experimenter, an audiologist who did not participate in the initial fitting or subsequent outcome measures copied the initial fitting to a second set of hearing aids of the same model and enabled FC in one of the two sets of hearing aids. The order of processing condition was counterbalanced across participants. The FC setting selected for each participant and each ear was the destination band setting with an upper frequency boundary closest to but still below the MAF. “Strength” of FC was set at Medium, which is the default intensity level for the copied segment; the high-frequency source band was left intact for all listeners, preserving conventional bandwidth.
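The selection rule described above reduces to a simple lookup, sketched below. The ten destination-band upper edges here are hypothetical placeholders, not the values used in the Oticon fitting software.

```r
## Pick the FC setting whose destination-band upper edge is closest to, but
## still below, the listener's MAF. Edge values are hypothetical.
choose_fc_setting <- function(maf_hz, dest_upper_edges_hz) {
  eligible <- dest_upper_edges_hz[dest_upper_edges_hz < maf_hz]
  if (length(eligible) == 0) return(NA)         # no setting fits below the MAF
  which(dest_upper_edges_hz == max(eligible))   # index of the closest eligible setting
}

dest_edges <- seq(1500, 6000, by = 500)         # 10 hypothetical settings
choose_fc_setting(maf_hz = 4132, dest_upper_edges_hz = dest_edges)  # -> setting 6
```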
Other advanced features of the hearing aids such as directional microphones and noise reduction were turned off. Children who used FM systems were given FM systems to use in school and/or at home that were compatible with the study hearing aids.
Outcome Measures
Participants completed speech recognition tasks, a dual task of listening effort, and sound quality surveys once in each hearing aid condition at the first visit and once at each subsequent visit in their current experimental hearing aid condition.
Speech Recognition
All speech stimuli were presented at 65 dB SPL at a distance of 1 m in the sound field at 0° azimuth using custom MATLAB software ([MathWorks, 2014], Natick, MA) through a PC, RME Babyface USB Audio Interface (RME Audio; Haimhausen, Germany), and an M-Audio BX8a loudspeaker (M-Audio; Cumberland, RI). Recordings of the speech stimuli used in the monosyllabic words and words in sentences tasks were based on tokens from a young adult female talker and recorded using custom software and a Shure BETA 53 head-worn boom microphone (Shure Incorporated, Niles, IL) at a sampling rate of 22050 Hz. The best exemplar of each token was selected from three recordings of each stimulus by an experienced listener on the research team based on the subjective intelligibility and recording quality of the samples. The stimuli were cropped to have 100 msec of quiet before and after each token. The tokens were then equated in root-mean-square level using Praat ([Boersma and Weenink, 2011]). Word and low-predictability sentence lists were balanced with respect to fricative and affricate content in word-initial and word-final position, and all words were selected to be within the child lexicon ([Storkel and Hoover, 2010]). The stimuli had high intelligibility in quiet for children with normal hearing and children who wore hearing aids as described in [Spratford et al (this issue)]. All participant responses on the speech recognition measures were scored in real time by an examiner with normal hearing present in the sound booth during the experiment.
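Token preparation in the study was done with Praat; the R sketch below (using the tuneR package) is only an illustration of the same two steps, RMS equalization and surrounding each token with 100 msec of quiet. The file names, the target RMS value, and the assumption of mono recordings are hypothetical.

```r
## Illustrative token preparation: equate RMS level and surround each token with
## 100 ms of silence. The study used Praat; this sketch assumes mono WAV files.
library(tuneR)

prepare_token <- function(path, target_rms = 0.05, pad_ms = 100) {
  w   <- readWave(path)
  x   <- w@left / 2^(w@bit - 1)                      # scale samples to [-1, 1]
  x   <- x * (target_rms / sqrt(mean(x^2)))          # equate root-mean-square level
  pad <- rep(0, round(w@samp.rate * pad_ms / 1000))  # 100 ms of silence
  y   <- c(pad, x, pad)
  Wave(round(y * (2^(w@bit - 1) - 1)), samp.rate = w@samp.rate, bit = w@bit)
}
# writeWave(prepare_token("token_cats.wav"), "token_cats_eq.wav")  # hypothetical files
```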
Monosyllabic Words
Speech perception for monosyllabic words was evaluated using four lists of 50 words. A different list was used for each repetition of the task over the course of the study, and the order of lists was randomized across participants. Half of the words contained a plural morpheme. Participant responses were scored in real time at three levels: base word correct, singular/plural ending correct, and total (base word + morpheme) correct.
Words Embedded in Sentences
Speech perception for words in a sentence context was evaluated using low-predictability sentences with a subject-verb-direct object-prepositional phrase frame (e.g., He puts the cats through the dream.). These syntactically correct but semantically anomalous sentences restricted linguistic cues as to whether the key word was plural. The participants were instructed to repeat each sentence in its entirety, which was intended to increase the cognitive load of the task. Each of the 50 sentences in the four sentence lists contained a target word. As with the monosyllabic word task, a different sentence list was used with each repetition of the task, and the order of lists was randomized across participants. Half of the target words contained a plural marker. Again, participant responses were scored at three levels: base word correct, singular/plural ending correct, and total (base word + morpheme) correct.
Vowel-Consonant-Vowel
A subset of listeners (N = 11) completed an additional consonant identification task ([Turner et al, 1995]) in noise for each hearing aid condition to evaluate effects of FC on speech perception with competing noise backgrounds and with fewer linguistic context cues. This test consists of 16 consonant sounds, in an /aCa/ context, produced by two adult males and two adult females for a total of 64 trials per condition, presented in a background of speech-spectrum noise at 0 and +10 dB SNR. The onset of the background noise was 500 msec before the onset of each speech stimulus, and its offset was 500 msec after the offset of the speech stimulus.
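For readers unfamiliar with how a fixed SNR and the 500-msec noise lead and lag are set up, the R sketch below shows one way to do it. It is an illustration under stated assumptions (signals as numeric vectors at a common sampling rate), not the study's MATLAB presentation software.

```r
## Mix a speech token with speech-spectrum noise at a target SNR, with the noise
## starting 500 ms before and ending 500 ms after the token. Illustrative only.
mix_at_snr <- function(speech, noise, snr_db, fs, pad_ms = 500) {
  rms <- function(x) sqrt(mean(x^2))
  speech_rms <- rms(speech)                          # level of the token itself
  pad_n  <- round(fs * pad_ms / 1000)
  padded <- c(rep(0, pad_n), speech, rep(0, pad_n))  # noise leads and lags by 500 ms
  noise  <- noise[seq_along(padded)]                 # assumes the noise is long enough
  noise  <- noise * (speech_rms / rms(noise)) / 10^(snr_db / 20)  # set target SNR
  padded + noise
}
```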
Listening Effort
Participants also completed a novel dual task experiment that consisted of (a) recognition of recorded monosyllabic words in quiet and (b) categorization of a simultaneously presented visual stimulus. The speech stimuli were developed for a previous study ([McCreery et al, 2014]). This test was created to determine the effect of signal processing condition on listening effort. Stimulus presentation and data acquisition were completed using custom MATLAB software and the Psychophysics Toolbox extensions ([Brainard, 1997]; [Pelli, 1997]; [Kleiner et al, 2007]). For the speech recognition task, participants were instructed to repeat a monosyllabic word heard in quiet, and an examiner scored whether the response was correct or incorrect. The program advanced to the next trial when the examiner entered a score for the current trial. There were 57 monosyllabic words per list. Each word was verified to be within the lexicon of the average first-grader ([Storkel and Hoover, 2010]). Each word contained one of nine fricatives or affricates (/s/, /z/, /f/, /v/, /ʧ/, /ʤ/, /ʃ/, /ʒ/, /θ/) in the initial or final position and one of six vowel contexts (/a/, /i/, /ɪ/, /ɛ/, /u/, /ʌ/). Fricative and affricate content were balanced across lists in both initial and final positions. Overall percentage of words correct was calculated. For the visual task, participants were instructed not to press the "space" bar on a keyboard if the image presented was a particular Pokémon character ("Meowth"), and to press the "space" bar for all other characters ([Durston et al, 2002]). Half of the trials were "go" trials, where the correct response was to press the "space" bar, and half were "no go" trials. Reaction times from keyboard responses were logged for each trial. Overall reaction time for this task was calculated as the mean recorded reaction time on "go" trials where the participant correctly responded. Longer reaction times for the visual task are indicative of greater listening effort in the speech recognition task.
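The reaction-time summary described above reduces to averaging correct "go" trials. The R sketch below shows that computation on a hypothetical trial log; the column names (trial_type, correct, rt_ms) are assumptions, not the study's actual data format.

```r
## Mean reaction time on correct "go" trials, the listening-effort index used here.
## The trial log and its column names are hypothetical.
library(dplyr)

summarise_effort <- function(trials) {
  trials %>%
    filter(trial_type == "go", correct) %>%   # keep only correct "go" trials
    summarise(mean_rt_ms = mean(rt_ms))
}

trials <- data.frame(
  trial_type = c("go", "go", "nogo", "go"),
  correct    = c(TRUE, FALSE, TRUE, TRUE),
  rt_ms      = c(412, 530, NA, 465)
)
summarise_effort(trials)   # (412 + 465) / 2 = 438.5 ms
```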
Sound Quality
A sound quality survey ([Appendix A]) was completed for the current hearing aid signal processing condition following speech perception and listening effort measures in that condition. This survey probed listeners' perceived ease of understanding, sound quality, and overall acceptance of the current processing condition.


Statistical Approach
All statistical analyses were completed using the R statistical package ([R Core Team, 2016]), with the addition of the linear and nonlinear mixed effects package nlme ([Pinheiro et al, 2016]) and the nnet package's multinomial function multinom ([Venables and Ripley, 2002]). Linear mixed effects models were used to analyze the repeated speech perception, dual task, and sound quality survey measure results. Predictors included the following: (a) hearing aid processing condition, (b) listener age group, (c) better-ear aided SII, (d) better-ear aided MAF, and (e) visit (prehome versus posthome listening trial). Model selection proceeded as follows: a model with interaction terms between age group, visit (prehome versus posthome), and hearing aid condition was fit first; if an interaction term was not significant, it was removed from the model before further analysis. Statistical significance was defined as p values less than 0.05.
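A minimal sketch of this modeling strategy with nlme is shown below. The variable names (score, condition, visit, age_group, sii, maf, id) and the data frame dat are hypothetical; this illustrates the general form of the analysis rather than the authors' exact code.

```r
## Sketch of the mixed-model strategy: fit interaction terms first, drop
## nonsignificant ones, then interpret. Variable names and data are hypothetical.
library(nlme)

full <- lme(score ~ condition * visit * age_group + sii + maf,
            random = ~ 1 | id,        # random intercept per participant
            data = dat, method = "ML")

anova(full)                           # F-tests for fixed effects and interactions

# If, for example, the three-way interaction is not significant, remove it and refit
reduced <- update(full, . ~ . - condition:visit:age_group)
```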
RESULTS
[Figure 2] shows the results of the word recognition task for each level of scoring. It was hypothesized that the speech perception performance would be greater for monosyllabic words, particularly the plural endings, with FC activated. None of the predictors were significant in the model for total (base word + morpheme) correct. However, in the model for base words correct, greater better-ear SII was associated with better performance (β = 0.26, SE = 0.075, p = 0.0009). There were no significant effects of age group, visit, or hearing aid condition. For plurals correct in monosyllabic words, greater better-ear MAF was associated with better performance (β = 0.0014, SE = 0.0007, p = 0.043). Again, none of the other predictors were significant.


It was hypothesized that speech perception performance would be greater for words in sentence contexts, with and without plural morphemes, with FC activated. In the model for total words correct in sentences, none of the predictors were significant. However, for base words correct, greater better-ear SII was associated with better performance (β = 0.29, SE = 0.063, p < 0.0001); there was also a significant interaction of hearing aid condition and visit, with performance in the FC condition approximately six percentage points poorer than with conventional amplification at the first visit and one and a half percentage points higher than conventional processing at the posthome trial visit (β = −3.35, SE = 1.53, p = 0.034). The model for plural morphemes correct showed that greater better-ear SII was associated with better performance (β = 0.27, SE = 0.05, p < 0.0001).
[Figure 3] shows the results of the consonant recognition in noise task for conventional processing and for FC activated. The purpose of the additional consonant recognition in noise task was to evaluate the effect of FC on recognition of high-frequency consonants under challenging listening conditions and with minimal linguistic context cues. However, none of the predictor variables were significant, though performance was poorer in the 0 dB SNR condition than in the +10 dB SNR condition.


[Figure 4] shows the results of the dual task for word recognition and reaction time. It was hypothesized that listening effort, as indicated by reaction time in the dual task, would be reduced with FC activated. The model for the number of words correct in the dual task experiment was related to better-ear aided SII (β = 0.32, SE = 0.10, p = 0.0038), with higher recognition scores associated with greater aided audibility. Reaction time was significantly slower in the children than in the adults (β = 0.21, SE = 0.09, p = 0.03). There was no significant effect of hearing aid signal processing condition on reaction time (β = −0.03, SE = 0.03, p = 0.39).


[Figure 5] shows the results of the sound quality survey for questions 1 through 5. The main hypothesis regarding judgments was that FC would be rated higher than conventional processing. For question 1 (“How many of the words did you understand?”) there was a significant main effect of visit, with better speech understanding ratings in the initial visit (β = 0.89, SE = 0.27, p = 0.0021); a significant main effect of condition with overall greater/better ratings with FC (β = 0.81, SE = 0.28, p = 0.0065); a significant main effect of age group, with higher ratings in children (β = 0.91, SE = 0.38, p = 0.027); and a significant main effect of MAF, with lower ratings in those with higher better-ear MAFs (β = −0.00023, SE = 0.00009, p = 0.017). There was a significant interaction of visit and hearing aid condition, with the FC rated 0.57 points lower than conventional processing at the first visit but rated 0.3 points higher at the postvisit on average (β = −1.36, SE = 0.37, p = 0.0007), which was in contrast to the overall pattern of slightly lower scores in the postvisit. There was also a significant interaction of hearing aid condition and age group, with the children rating FC, on average, 0.43 points lower than conventional processing and the adults rating FC 0.13 points higher than conventional processing (β = −1.10, SE = 0.42, p = 0.01).


None of the individual predictors were significant for questions 2 through 5. A multinomial logistic regression analysis was completed of participant responses to question 6, which concerned whether they would like to wear the hearing aid in the current signal processing condition. The likelihood of a “Yes” response was not significantly related to hearing aid condition, age group, better-ear aided SII, or better-ear aided MAF.
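A minimal sketch of the question-6 analysis with nnet::multinom follows, using hypothetical variable names (q6_response, condition, age_group, sii, maf) and a hypothetical data frame survey_dat; coefficients are screened with Wald z-tests.

```r
## Multinomial logistic regression of the question-6 responses on the study
## predictors. Variable names and the data frame are hypothetical.
library(nnet)

fit <- multinom(q6_response ~ condition + age_group + sii + maf, data = survey_dat)

# Wald z-tests and two-sided p-values for each coefficient
z <- summary(fit)$coefficients / summary(fit)$standard.errors
p <- 2 * (1 - pnorm(abs(z)))
```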
DISCUSSION
The pattern of results in this study showed no consistent benefit or deficit associated with FC across a variety of speech perception measures. This is in contrast with the results of [Angelo et al (2015)] which showed modest, but significant, benefit for consonant discrimination and word recognition with this same frequency-lowering signal processing strategy in a group of adult listeners. One possible explanation for this discrepancy is the difference in audiometric thresholds of the participants in these two studies: the average audiogram was severe-profound in Angelo et al. and moderate-severe in this study. [Alexander et al (2014)] and [Miller et al (2016)] showed poorer performance from linear frequency transposition compared with conventional amplification in a group of listeners with mild-moderate sensorineural hearing loss, suggesting that listeners with thresholds in this range may not all be candidates for transposition strategies. Aided audibility for the present group of listeners may also have been sufficient to allow ceiling or near-ceiling performance on the monosyllabic word recognition task with both types of processing conditions for some listeners, evident in [Figure 2], thereby minimizing the potential for improvements in speech perception scores in the FC condition. However, there were also no significant differences between signal processing conditions on the consonant discrimination task in the +10 or 0 dB SNR conditions and in the words in sentences task which were more challenging for these listeners than the monosyllabic word recognition task in quiet.
The finding that hearing aid condition was not a significant predictor of reaction time on the dual task does not support the hypothesis that FC affects listening effort. Our results did show significantly slower reaction times in the children than in the adults, which contrasts with reports of no significant differences in reaction time between normal hearing, typically developing children at least 9 years of age and adults ([Casey et al, 1997]). The words used in the speech perception side of the dual task were selected to be within the vocabulary of a typical child, and there were no significant differences between adults and children in word recognition accuracy on the primary task. Therefore, it does not appear that differences in reaction time relate to the children not having adequate vocabulary skills to complete the primary task. It could be the case that hearing loss early in development contributed to lower executive function skills in our child participants than would typically be observed in normal hearing children of the same age. This may have caused the children with hearing loss to be less able to divide attention between the two tasks or quickly switch between them, contributing to the age effect in reaction time relative to the adult listeners with postlingual onset of hearing loss.
The sound quality survey only showed significant effects for perceived accuracy of word understanding, with FC being rated more highly than conventional processing after the home trial. Why this particular question showed significant differences, in the absence of effects of processing condition on the speech perception tasks, is unclear. The lack of an overall effect on the remainder of survey items is consistent with the equivocal sound quality results with nonlinear frequency compression using conservative settings ([Bohnert et al, 2010]; [Parsa et al, 2013]; [Brennan et al, 2014]; [Mussoi and Bentler, 2015]).
On the survey question that did show an overall preference ("How many of the words did you understand?"), FC was rated negatively relative to conventional processing by the children and positively by the adults. Again, this age effect on overall preference did not coincide with any age-related differences in speech perception benefit from FC. The difference in ratings with respect to age group could reflect differences in aided audibility and MAF across the groups. Children are prescribed more gain than adults, which leads to differences in audibility between groups. Furthermore, the younger listeners had greater aided audibility and higher MAF, on average, for conventional processing compared with adults. Because many of the child listeners likely had adequate high-frequency audibility with conventional amplification, FC may have been perceived as a form of distortion that contributed negatively to the intelligibility of speech.
Most speech recognition measures showed no changes in performance across visits for either signal processing condition. This finding was inconsistent with past studies that showed an acclimatization benefit with frequency-lowering signal processing over time ([Auriemmo et al, 2009]; [Kuk et al, 2009]; [Smith et al, 2009]; [Wolfe et al, 2011]; [Glista et al, 2012]). However, there was a single significant interaction of signal processing condition and visit number for base words correct in sentences. It could be the case, then, that optimal performance with frequency lowering is achieved after a period of acclimatization, whether or not there is benefit compared with performance with conventional processing. There were no significant interactions of signal processing condition and listener age group on any of the speech perception tasks. This result suggests that FC is not contraindicated for use with children in this age range.
One potential limitation of this study was the possible influence of the participants' previous experience with frequency-lowering signal processing on their performance with FC. It is unclear what impact past exposure to different frequency-lowering processing strategies may have on speech perception, listening effort, and sound quality. Unfortunately, whether any other type of frequency-lowering processing had been enabled in the participants' own hearing aids was not determined at the time of the first visit. Future research may also help determine whether acclimatization to frequency lowering only occurs at the time of first exposure to some iteration of this type of processing, or if additional periods of acclimatization may be expected any time a hearing aid user is introduced to an unfamiliar strategy.
CONCLUSIONS
Overall, these results indicate that when high-frequency speech sounds are made audible with conventional processing, as was the case for many of the participants in the present study, speech perception ability and listening effort are likely to be comparable between conventional processing and FC. Although there was no consistent benefit to speech perception with FC, some listeners preferred it, suggesting that qualitative measures should be considered when evaluating candidacy for this signal processing strategy.
No conflict of interest has been declared by the author(s).
This work was supported in part by the Oticon Corporation, Denmark. Additional funding was provided by NIH/NIDCD grants R01DC013591 and P30DC004662.
REFERENCES
- Alexander JM. (2013) Individual variability in recognition of frequency-lowered speech. In: Seminars in Hearing. vol. 34, no. 2, pp. 86–109. Stuttgart, Germany: Thieme Medical Publishers.
- Alexander JM. 2016; Nonlinear frequency compression: Influence of start frequency and input bandwidth on consonant and vowel recognition. J Acoust Soc Am 139 (02) 938-957
- Alexander JM, Kopun JG, Stelmachowicz PG. 2014; Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss. Ear Hear 35 (05) 519-532
- Angelo K, Alexander JM, Christiansen TU, Simonsen CS, Jespersgaard CF. 2015 Oticon Frequency Lowering. [White Paper]. http://web.ics.purdue.edu/∼alexan14/Publications_files/SpeechRescue.pdf . Accessed September 13, 2016
- Auriemmo J, Kuk F, Lau C, Marshall S, Thiele N, Pikora M, Quick D, Stenger P. 2009; Effect of linear frequency transposition on speech recognition and production of school-age children. J Am Acad Audiol 20 (05) 289-305
- Barrouillet P, Camos V. 2007; The time-based resource-sharing model of working memory. The Cognitive Neuroscience of Working Memory 455: 59-80
- Boersma P, Weenink D. 2011 Praat version 5.2.21. http://www.fon.hum.uva.nl/praat/download_win.html . Accessed June 24, 2017
- Bohnert A, Nyffeler M, Keilmann A. 2010; Advantages of a non-linear frequency compression algorithm in noise. Eur Arch Otorhinolaryngol 267 (07) 1045-1053
- Brainard DH. 1997; The psychophysics toolbox. Spat Vis 10 (04) 433-436
- Brennan MA, McCreery R, Kopun J, Hoover B, Alexander J, Lewis D, Stelmachowicz PG. 2014; Paired comparisons of nonlinear frequency compression, extended bandwidth, and restricted bandwidth hearing aid processing for children and adults with hearing loss. J Am Acad Audiol 25 (10) 983-998
- Casey BJ, Trainor RJ, Orendi JL, Schubert AB, Nystrom LE, Giedd JN, Castellanos FX, Haxby JV, Noll DC, Cohen JD, Forman SD, Dahl RE, Rapoport JL. 1997; A developmental functional MRI study of prefrontal activation during performance of a go-no-go task. J Cogn Neurosci 9 (06) 835-847
- Ching TY, Dillon H, Byrne D. 1998; Speech recognition of hearing-impaired listeners: predictions from audibility and the limited role of high-frequency amplification. J Acoust Soc Am 103 (02) 1128-1140
- Durston S, Thomas KM, Yang Y, Uluğ AM, Zimmerman RD, Casey BJ. 2002; A neural basis for the development of inhibitory control. Dev Sci 5 (04) F9-F16
- Ellis RJ, Munro KJ. 2013; Does cognitive function predict frequency compressed speech recognition in listeners with normal hearing and normal cognition?. Int J Audiol 52 (01) 14-22
- Glista D, Scollie S, Bagatto M, Seewald R, Parsa V, Johnson A. 2009; Evaluation of nonlinear frequency compression: clinical outcomes. Int J Audiol 48 (09) 632-644
- Glista D, Scollie S, Sulkers J. 2012; Perceptual acclimatization post nonlinear frequency compression hearing aid fitting in older children. J Speech Lang Hear Res 55 (06) 1765-1787
- Hogan CA, Turner CW. 1998; High-frequency audibility: Benefits for hearing-impaired listeners. J Acoust Soc Am 104 (01) 432-441
- Hopkins K, Khanom M, Dickinson AM, Munro KJ. 2014; Benefit from non-linear frequency compression hearing aids in a clinical setting: the effects of duration of experience and severity of high-frequency hearing loss. Int J Audiol 53 (04) 219-228
- Hornsby BW. 2013; The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear Hear 34 (05) 523-534
- Kirby BJ, Brown CJ. 2015; Effects of nonlinear frequency compression on ACC amplitude and listener performance. Ear Hear 36 (05) e261-e270
- Kleiner M, Brainard D, Pelli D. 2007; What’s new in Psychtoolbox-3? ECVP Abstract Supplement. Perception 36: 1-16
- Kuk F, Keenan D, Korhonen P, Lau CC. 2009; Efficacy of linear frequency transposition on consonant identification in quiet and in noise. J Am Acad Audiol 20 (08) 465-479
- Margolis RH, Saly GL. 2008; Distribution of hearing loss characteristics in a clinical population. Ear Hear 29 (04) 524-532
- MathWorks. 2014. MATLAB R2014a. Natick, MA: The MathWorks, Inc.
- McCreery RW, Alexander J, Brennan MA, Hoover B, Kopun J, Stelmachowicz PG. 2014; The influence of audibility on speech recognition with nonlinear frequency compression for children and adults with hearing loss. Ear Hear 35 (04) 440-447
- McCreery RW, Brennan MA, Hoover B, Kopun J, Stelmachowicz PG. 2013; Maximizing audibility and speech recognition with nonlinear frequency compression by estimating audible bandwidth. Ear Hear 34 (02) e24-e27
- Miller CW, Bates E, Brennan M. 2016; The effects of frequency lowering on speech perception in noise with adult hearing-aid users. Int J Audiol 55 (05) 305-312
- Mussoi BS, Bentler RA. 2015; Impact of frequency compression on music perception. Int J Audiol 54 (09) 627-633
- Parsa V, Scollie S, Glista D, Seelisch A. 2013; Nonlinear frequency compression: effects on sound quality ratings of speech and music. Trends Amplif 17 (01) 54-68
- Pelli DG. 1997; The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis 10 (04) 437-442
- Perreau AE, Bentler RA, Tyler RS. 2013; The contribution of a frequency-compression hearing aid to contralateral cochlear implant performance. J Am Acad Audiol 24 (02) 105-120
- Picou EM, Marcrum SC, Ricketts TA. 2015; Evaluation of the effects of nonlinear frequency compression on speech recognition and sound quality for adults with mild to moderate hearing loss. Int J Audiol 54 (03) 162-169
- Pinheiro J, Bates D, DebRoy S, Sarkar D, R Core Team. (2016). nlme: Linear and Nonlinear Mixed Effects Models. R package version 3.1-128. http://CRAN.R-project.org/package=nlme . Accessed June 10, 2016.
- Pittman AL, Stelmachowicz PG. 2003; Hearing loss in children and adults: audiometric configuration, asymmetry, and progression. Ear Hear 24 (03) 198-205
- R Core Team 2016. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing;
- Sarampalis A, Kalluri S, Edwards B, Hafter E. 2009; Objective measures of listening effort: effects of background noise and noise reduction. J Speech Lang Hear Res 52 (05) 1230-1240
- Scollie S, Seewald R, Cornelisse L, Moodie S, Bagatto M, Laurnagaray D, Beaulac S, Pumford J. 2005; The Desired Sensation Level multistage input/output algorithm. Trends Amplif 9 (04) 159-197
- Simpson A, Hersbach AA, McDermott HJ. 2005; Improvements in speech perception with an experimental nonlinear frequency compression hearing device. Int J Audiol 44 (05) 281-292
- Simpson A, Hersbach AA, McDermott HJ. 2006; Frequency-compression outcomes in listeners with steeply sloping audiograms. Int J Audiol 45 (11) 619-629
- Smith J, Dann M, Brown PM. 2009; An evaluation of frequency transposition for hearing-impaired school-age children. Deafness Educ Int 11 (02) 62-82
- Souza PE, Arehart KH, Kates JM, Croghan NB, Gehani N. 2013; Exploring the limits of frequency lowering. J Speech Lang Hear Res 56 (05) 1349-1363
- Spratford M, Hodson McLean H, McCreery RW. 2017; Relationship of grammatical context on children’s recognition of s/z-inflected words. J Am Acad Audiol 28 (09) 799-809
- Storkel HL, Hoover JR. 2010; An online calculator to compute phonotactic probability and neighborhood density on the basis of child corpora of spoken American English. Behav Res Methods 42 (02) 497-506
- Turner CW, Souza PE, Forget LN. 1995; Use of temporal envelope cues in speech recognition by normal and hearing-impaired listeners. J Acoust Soc Am 97 (04) 2568-2576
- Venables WN, Ripley BD. 2002. Modern Applied Statistics with S. 4th ed. New York: Springer; doi:10.1007/978-0-387-21706-2
- Wolfe J, John A, Schafer E, Nyffeler M, Boretzki M, Caraway T. 2010; Evaluation of nonlinear frequency compression for school-age children with moderate to moderately severe hearing loss. J Am Acad Audiol 21 (10) 618-628
- Wolfe J, John A, Schafer E, Nyffeler M, Boretzki M, Caraway T, Hudson M. 2011; Long-term effects of non-linear frequency compression for children with moderate hearing loss. Int J Audiol 50 (06) 396-404