J Am Acad Audiol 2020; 31(07): 547-550
DOI: 10.1055/s-0040-1709444
Research Article

Bilateral Cochlear Implants Allow Listeners to Benefit from Visual Information When Talker Location is Varied

Michael F. Dorman
1   Department of Speech and Hearing Science, Arizona State University, Tempe, Arizona
,
Sarah Natale
1   Department of Speech and Hearing Science, Arizona State University, Tempe, Arizona
,
Alissa Knickerbocker
1   Department of Speech and Hearing Science, Arizona State University, Tempe, Arizona
Funding This work was conducted at Arizona State University and was supported by a grant from MED-EL Corporation.
 

Abstract

Background Previous research has found that when the location of a talker was varied and an auditory prompt indicated the location of the talker, the addition of visual information produced a significant and large improvement in speech understanding for listeners with bilateral cochlear implants (CIs) but not with a unilateral CI. Presumably, the sound-source localization ability of the bilateral CI listeners allowed them to orient to the auditory prompt and benefit from visual information for the subsequent target sentence.

Purpose The goal of this project was to assess the robustness of previous research by using a different test environment, a different CI, different test material, and a different response measure.

Research Design Nine listeners fit with bilateral CIs were tested in a simulation of a crowded restaurant. Auditory–visual (AV) sentence material was presented from loudspeakers and video monitors at 0, +90, and −90 degrees. Each trial started with the presentation of an auditory alerting phrase from one of the three target loudspeakers followed by an AV target sentence from that loudspeaker/monitor. On each trial, the two nontarget monitors showed the speaker mouthing a different sentence. Sentences were presented in noise in four test conditions: one CI, one CI plus vision, bilateral CIs, and bilateral CIs plus vision.

Results Mean percent words correct for the four test conditions were: one CI, 43%; bilateral CI, 60%; one CI plus vision, 52%; and bilateral CI plus vision, 84%. Visual information did not significantly improve performance in the single CI conditions but did improve performance in the bilateral CI conditions. The magnitude of improvement for two CIs versus one CI in the AV condition was approximately twice that for two CIs versus one CI in the auditory condition.

Conclusions Our results are consistent with previous data showing the large value of bilateral implants in a complex AV listening environment. The results indicate that the value of bilateral CIs for speech understanding is significantly underestimated in standard, auditory-only, single-speaker, test environments.



It is well established that patients fit with bilateral cochlear implants (CIs) are able to locate a sound source on the horizontal plane with significantly better accuracy than patients fit with a single CI.[1] [2] It is also well established that allowing a listener to view the face of a talker adds greatly to speech understanding.[3] [4] [5] Van Hoesel[6] showed how these two effects combine to provide bilateral CI patients with a large improvement in speech understanding in noise when the location of a sound source varies.

In the study by Van Hoesel,[6] congruent auditory (A) and visual (V) information for speech was presented from four locations on the frontal plane. Noise was presented from eight locations. Each trial began with an auditory cue presented from one of the test-sentence speaker locations. This directed listeners' attention to the correct location in space. Soon after, a target sentence was presented from the same loudspeaker. Tests, with randomized location of the talker, were conducted in single and bilateral CI conditions, with and without visual cues. In the monaural conditions, the listeners could not “find” the speaker sufficiently quickly to use, and to benefit from, the visual information that accompanied the speech signal. In the bilateral CI conditions, however, the listeners could use the auditory cue to quickly locate the sound source and benefit from the visual information. Expressed in terms of signal-to-noise ratio (SNR), the difference in benefit from visual information in the unilateral and bilateral test conditions was large: 5 dB.

This experiment demonstrates the value of bilateral CIs for speech understanding in a novel way and is important for several reasons. One is that, although bilateral CI listeners report a significant increase in health-related quality of life relative to unilateral patients,[7] studies of bilateral benefit for speech understanding have generally found only modest increases in scores over unilateral scores.[8] The large effect reported by Van Hoesel[6] may be one factor underlying the improvement in quality of life with bilateral implantation.

The Van Hoesel outcome is important because, as the author suggests, it demonstrates that testing in standard audiometric test environments (with auditory-only test materials and a fixed loudspeaker location) significantly underestimates the value of bilateral CIs for speech understanding. This, in turn, alters cost/benefit analyses of bilateral CIs for health care systems. Bilateral CIs are expensive and it is critical for health care systems in developed and less developed countries to have validated data from which to estimate the relative value of bilateral CIs.

In the research reported here, we created a novel version of the Van Hoesel[6] test environment. At issue was whether the original finding was sufficiently robust to survive (1) a change in test environment, (2) a change in test material, and (3) a change in response measure.

Method

Institutional Review Board Approval

This research was reviewed and approved by the Institutional Review Board at Arizona State University.



Subjects

Nine bilateral CI listeners (four females, five males) participated in this project. All were fit with an MED-EL CI and used their everyday programs on the SONNET processor with an omni microphone setting. The listeners ranged in age from 46 to 83 years (mean = 55 years). Duration of deafness before implantation ranged from less than 1 year to 16 years (mean = 5 years). CI experience ranged from 1 year to 25 years (mean = 9 years).

All listeners could localize a wideband sound source on the frontal plane, in quiet, with between 11 and 31 degrees of error (mean = 19 degrees). For methods and procedures, see Dorman et al.[2]



Test Environment

The listeners were tested in a 3.2 m × 2.1 m sound booth using an eight-loudspeaker R-SPACE environment, with the loudspeakers arrayed in a 360-degree arc.[9] [10] Target material was presented from the loudspeakers at −90, 0, or +90 degrees. These loudspeaker locations were chosen so that listeners would need to orient to locate the target material. A video monitor was set just below the level of the loudspeaker at each of these locations. Directionally appropriate noise, i.e., noise recorded in a large restaurant with eight microphones arranged in a circular array and pointing outward at 45-degree intervals, was delivered from all eight loudspeakers, including the loudspeaker from which the target was presented.



Stimuli

The stimuli were female voice AV sentences drawn from the AzAV test corpus.[11] The sentences are a re-recording of the AV sentences created by MacLeod and Summerfield.[12] [13] There are 10 lists of 15 sentences each with equal auditory intelligibility and equal gain from the addition of visual information across lists.



Procedure

Each trial started with the auditory presentation of an alerting phrase, "she's here," from one of the three target loudspeakers. This was followed, after a 2-second interval, by a target sentence from that loudspeaker. Sentences were presented in four test conditions: one CI, one CI plus vision, bilateral CIs, and bilateral CIs plus vision. In conditions with visual stimulation, when the target was presented from one location, the monitors at the other locations showed the speaker mouthing a different sentence. The listeners were first tested in the bilateral CI, no-visual-input condition, and noise was added in a patient-specific manner to drive performance off the ceiling, i.e., to 70% correct or less. The other three test conditions were then administered at that SNR in a quasi-random order. In the one CI conditions, the patient used the CI that, in previous testing, had allowed the highest level of speech understanding. Noise was present during both the alerting and test phases of the experiment.



Results

As shown in [Fig. 1], the mean scores (and standard errors) in percent words correct for the four test conditions were: one CI, 43.4 (6.1); bilateral CI, 60.1 (6.2); one CI plus vision, 51.7 (6.9); and bilateral CI plus vision, 84.2 (3.9). For statistical analysis, the percent correct scores were converted to rationalized arcsine units (RAU)[14] and then entered into a two-way analysis of variance (ANOVA) with repeated measures. There was a significant effect of number of ears (F(1, 8) = 45.42, p < 0.001), of mode of presentation, i.e., A or AV (F(1, 8) = 80.65, p < 0.001), and of their interaction (F(1, 8) = 11.77, p < 0.01). The significant interaction was the result of the large gain in performance in the AV condition with two ears relative to the AV condition with one ear. Post hoc tests (Holm–Šidák) indicated that performance with two ears was significantly better than performance with one ear for both the A and AV conditions. However, the improvement in the AV conditions versus the A conditions was significant only for the bilateral CI condition.
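The rationalized arcsine transform of Studebaker[14] can be sketched as follows. This is a minimal illustration of the published formula, not the analysis script used in this study:

```python
import math

def rau(correct: int, total: int) -> float:
    """Rationalized arcsine transform (Studebaker, 1985).

    Maps a score of `correct` items out of `total` onto a scale that is
    close to percent correct over the mid-range (50% maps to 50 RAU) but
    stretches the extremes, stabilizing variance near 0% and 100% so the
    scores are better suited to ANOVA.
    """
    theta = (math.asin(math.sqrt(correct / (total + 1)))
             + math.asin(math.sqrt((correct + 1) / (total + 1))))
    return (146.0 / math.pi) * theta - 23.0
```

For example, 50 of 100 words correct maps to 50.0 RAU, whereas scores near the floor or ceiling fall below 0 or above 100 RAU; this stretching at the extremes is why percent correct scores are transformed before being entered into an ANOVA.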

Fig. 1 Percent words correct as a function of test condition. Each open symbol represents the performance of a single listener. Error bars indicate ± 1 SEM. CI, cochlear implant; V, visual information.


Discussion

In previous research, when the location of a talker varied, visual information for speech perception improved performance only in a bilateral CI test condition—with a single CI, visual information was of little benefit.[6]

We found a similar outcome. As shown in [Fig. 1] and described above, the availability of visual information did not significantly improve performance in the single CI condition (43 vs. 52 percent correct) but did significantly improve performance in the bilateral CI condition (60 vs. 84 percent correct). Because of differences in methodology, it is difficult to directly compare the changes in speech reception threshold (SRT) reported by Van Hoesel[6] with the changes in percent correct in this study. However, both outcome measures indicate a large effect.

In the present experiment, the mean gain in performance in the auditory-only condition for bilateral versus single CI was 17 percentage points (range: 8–28 percentage points). However, in the AV test conditions, the mean gain was 33 percentage points (range: 9–55 percentage points)—approximately twice the gain found in the auditory-only condition. Moreover, visual inspection of [Fig. 1] suggests that the gain scores for many patients in the bilateral CI plus V condition were likely constrained by the percent correct ceiling which, in turn, constrained the mean gain in performance. From this view, the difference in outcome between the unilateral and bilateral AV test conditions was large indeed.

Sound-Source Localization

In quiet, all of the listeners in this project showed, in an experiment conducted previously,[2] some ability to locate sound sources on the horizontal plane. Their mean root mean square (RMS) error was 19 degrees. In contrast, the mean RMS error for normal-hearing listeners, using the same stimulus and response measure, was approximately 6 degrees. Thus, in quiet, our patients performed more poorly than subjects with normal hearing. Moreover, localization accuracy decreases in the presence of noise.[15] How can poorer-than-normal localization accuracy be of benefit to CI patients?
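The RMS error score referred to above can be computed from paired target and response azimuths. The sketch below is illustrative only; the variable names are ours, and the cited protocol[2] should be consulted for the actual stimulus and response details:

```python
import math

def rms_error_deg(targets, responses):
    """Root mean square localization error in degrees.

    `targets` and `responses` are paired lists of the presented source
    azimuths and the listener's judged azimuths on the frontal
    horizontal plane (degrees). A perfect localizer scores 0.
    """
    if len(targets) != len(responses):
        raise ValueError("targets and responses must be paired")
    squared = [(r - t) ** 2 for t, r in zip(targets, responses)]
    return math.sqrt(sum(squared) / len(squared))
```

On this measure, a listener who is always off by 10 degrees scores an RMS error of 10 degrees; the bilateral CI listeners in this project averaged 19 degrees, versus approximately 6 degrees for normal-hearing listeners.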

In our view, sound-source localization functions to direct attention to the proper side, perhaps quadrant, of space, and then listeners use their eyes to find the “target” (for a review, see Blauert[16]). In this view, even the poorer-than-normal localization ability of bilateral CI patients should be useful. The results from the present study are consistent with this speculation.



Generality of Results

Recently, Dorman et al[11] described data on the environments in which CI patients listened to speech. Most patients, most of the time, indicated that they could see the face of the person with whom they were talking. Thus, patient performance in complex, AV test environments, like those in the current study, is likely to be relevant to performance in real-world environments. If so, then, as Van Hoesel[6] has suggested, cost/benefit analyses of unilateral and bilateral CIs should give significant weight to the greatly improved speech understanding in noise for bilateral CI listeners when the target location varies and visual information is available.



Conflict of Interest

Dr. Dorman reports a grant from MED-EL Corporation, during the conduct of the study; and personal fees from MED-EL Corporation and Advanced Bionics, outside the submitted work. Dr. Natale reports a grant from MED-EL, during the conduct of the study. Dr. Knickerbocker reports personal fees from Arizona State University and from Center for Neurosciences, outside the submitted work.

  • References

  • 1 Nopp P, Schleich P, D'Haese P. Sound localization in bilateral users of MED-EL COMBI 40/40+ cochlear implants. Ear Hear 2004; 25 (03) 205-214
  • 2 Dorman MF, Loiselle LH, Cook SJ, Yost WA, Gifford RH. Sound source localization by normal hearing listeners, hearing-impaired listeners and cochlear implant listeners. Audiol Neurotol 2016; 21 (03) 127-131
  • 3 Sumby W, Pollack I. Visual contribution to speech intelligibility in noise. J Acoust Soc Am 1954; 26: 212-215
  • 4 Summerfield Q. Some preliminaries to a comprehensive account of audio-visual speech perception. In: Dodd B, Campbell R. , eds. Hearing by Eye: The Psychology of Lipreading. Hillsdale, NJ: Lawrence Erlbaum Associates; 1987: 3-51
  • 5 Tye-Murray N, Sommers M, Spehar B. Auditory and visual lexical neighborhoods in audiovisual speech perception. Trends Amplif 2007; 11 (04) 233-241
  • 6 van Hoesel RJ. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies. J Assoc Res Otolaryngol 2015; 16 (02) 309-315
  • 7 Bichey B, Miyamoto R. Outcomes in bilateral cochlear implantation. Otolaryngol Head Neck Surg 2008; 138 (05) 655-661
  • 8 Dorman M, Yost W, Wilson B, Gifford R. Speech perception and sound localization by adults with bilateral cochlear implants. Semin Hear 2011; 32 (01) 73-89
  • 9 Compton-Conley CL, Neuman AC, Killion MC, Levitt H. Performance of directional microphones for hearing aids: real-world versus simulation. J Am Acad Audiol 2004; 15 (06) 440-455
  • 10 Revit L, Killion M, Compton-Conley C. Developing and testing a laboratory sound system that yields accurate real-world results. Hear Rev 2007; 14 (11) 54-62
  • 11 Dorman MF, Liss J, Wang S, Berisha V, Ludwig C, Natale SC. Experiments on auditory-visual perception of sentences by unilateral, bimodal and bilateral cochlear implant patients. J Speech Lang Hear Res 2016; 59 (06) 1505-1519
  • 12 MacLeod A, Summerfield Q. Quantifying the contribution of vision to speech perception in noise. Br J Audiol 1987; 21 (02) 131-141
  • 13 MacLeod A, Summerfield Q. A procedure for measuring auditory and audio-visual speech-reception thresholds for sentences in noise: rationale, evaluation, and recommendations for use. Br J Audiol 1990; 24 (01) 29-43
  • 14 Studebaker G. A “rationalized” arcsine transform. J Speech Lang Hear Res 1985; 28 (03) 455-462
  • 15 Kerber S, Seeber BU. Sound localization in noise by normal-hearing listeners and cochlear implant users. Ear Hear 2012; 33 (04) 445-457
  • 16 Blauert J. Spatial Hearing. Cambridge: MIT Press; 1997

Address for correspondence

Michael F. Dorman, PhD
Department of Speech and Hearing Science, Arizona State University
Tempe, AZ 85287-0102

Publication History

Submitted: September 26, 2019

Accepted: January 13, 2020

Article published online:
April 27, 2020

© 2020 by the American Academy of Audiology. All rights reserved.

Thieme Medical Publishers
333 Seventh Avenue, New York, NY 10001, USA.

