Laryngorhinootologie 2023; 102(S 02): S263
DOI: 10.1055/s-0043-1767324
Abstracts | DGHNOKHC
Otology/Neurootology/Audiology: Cochlear implant

Impact of AI-based scene-classifiers in cochlear implant systems on speech understanding

Andreas Büchner 1, Sven Kliesch 1, Thomas Lenarz 1

1 HNO-Klinik, MHH, Hals-Nasen-Ohrenklinik
 

Introduction Scene classification systems that combine directional microphones and signal-enhancement algorithms are becoming increasingly important in the daily use of hearing instruments. The latest developments use neural networks to decide which program is optimal when CI users enter a new listening situation. Focusing on a specific direction or enhancing speech arriving from the side or from behind, for example, can improve the hearing ability of CI patients in everyday life.
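To illustrate the principle described above, the following minimal sketch shows how a small classifier might map frame-level acoustic features to a sound-processor program. The feature set, program labels, and weights are hypothetical stand-ins; actual CI scene classifiers are proprietary and considerably more elaborate.

```python
# Illustrative sketch only: a tiny classifier that maps acoustic features
# to a sound-processor program. Features, labels, and weights are
# hypothetical, not the manufacturer's implementation.
import numpy as np

PROGRAMS = ["speech_in_quiet", "speech_in_noise", "music", "noise_only"]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_scene(features, W, b):
    """Return the most likely listening scene for one feature vector."""
    probs = softmax(W @ features + b)
    return PROGRAMS[int(np.argmax(probs))], probs

# Hypothetical features: [level_dB, estimated_SNR_dB, modulation_depth, tonality]
features = np.array([65.0, 5.0, 0.4, 0.1])

rng = np.random.default_rng(0)           # stand-in for trained weights
W = rng.normal(size=(len(PROGRAMS), 4))
b = np.zeros(len(PROGRAMS))

program, probs = classify_scene(features, W, b)
# The selected program would then switch directional-microphone and
# noise-reduction settings accordingly.
print(program, np.round(probs, 2))
```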

Material and methods Speech understanding was measured in a group of 20 study participants using the Oldenburg Sentence Test, comparing the automatic programs with manual settings selected by each patient for the given situation. The different listening situations were presented via a circular array of eight loudspeakers. In addition, a processor of the previous generation was tested, which still used a conventional scene classification system without AI.
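As background on the measurement method, sentence tests such as the Oldenburg Sentence Test typically use an adaptive track that converges on the speech reception threshold (SRT), i.e., the SNR at roughly 50% word intelligibility. The sketch below uses a simplified up-down rule with a fixed step size; it is not the published OLSA procedure, and the response data are invented for illustration.

```python
# Simplified illustration of an adaptive speech-in-noise track: the SNR is
# lowered after mostly-correct responses and raised after mostly-incorrect
# ones, converging near the 50%-intelligibility SRT. Step size and decision
# rule are simplified stand-ins, not the exact OLSA rule.

def adaptive_srt(responses, start_snr_db=0.0, step_db=2.0):
    """responses: proportion of words repeated correctly (0..1), one per sentence."""
    snr = start_snr_db
    track = []
    for p_correct in responses:
        track.append(snr)
        if p_correct >= 0.5:      # mostly understood -> make it harder
            snr -= step_db
        else:                     # mostly missed -> make it easier
            snr += step_db
    # estimate the SRT as the mean SNR over the last sentences of the track
    return sum(track[-10:]) / min(len(track), 10)

# Hypothetical listener data (fraction of the five words repeated correctly)
responses = [1.0, 0.8, 0.6, 0.4, 0.6, 0.4, 0.6, 0.4, 0.6, 0.4, 0.6, 0.4]
print(f"Estimated SRT: {adaptive_srt(responses):.1f} dB SNR")
```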

Results The automatic system of the new processor improved speech perception by up to 5 dB in noise, while the signal processing of the previous generation showed an SNR improvement of only about 2 dB.

Summary Scene classification systems in CI sound processors lead to significantly better speech perception and straightforward handling in difficult listening situations.



Publication History

Article published online:
12 May 2023

Georg Thieme Verlag
Rüdigerstraße 14, 70469 Stuttgart, Germany