CC BY-NC-ND 4.0 · Semin Hear 2021; 42(03): 295-308
DOI: 10.1055/s-0041-1735136
Review Article

Improving Speech Understanding and Monitoring Health with Hearing Aids Using Artificial Intelligence and Embedded Sensors

David A. Fabry, Achintya K. Bhowmik
Starkey Hearing Technologies, Eden Prairie, Minnesota

Abstract

This article details ways that machine learning and artificial intelligence technologies are being integrated in modern hearing aids to improve speech understanding in background noise and provide a gateway to overall health and wellness. Discussion focuses on how Starkey incorporates automatic and user-driven optimization of speech intelligibility with onboard hearing aid signal processing and machine learning algorithms, smartphone-based deep neural network processing, and wireless hearing aid accessories. The article concludes with a review of health and wellness tracking capabilities that are enabled by embedded sensors and artificial intelligence.



In recent years, hearing aids have rapidly evolved from dedicated, single-purpose devices into multipurpose, multifunction devices. By combining acoustic and biometric sensors with signal processing, hearing aids today can monitor physical activity and social engagement, automatically detect falls, and serve as an intelligent virtual assistant, in addition to improving speech intelligibility in quiet and noisy listening environments.[1]

Fundamentally, the most essential function of any hearing aid is to optimize speech intelligibility so that hearing aid users can communicate with comfort and clarity in challenging listening situations. In addition to the significant progress made toward this primary goal, embedded sensors and artificial intelligence (AI) algorithms are now also endowing modern, advanced hearing aids with important health and wellness tracking capabilities.

Since 2018, Starkey has incorporated acoustic, inertial, and biometric sensors directly into the hearing aids. Onboard signal processing algorithms based on machine learning and AI technologies use the inputs from these sensors to provide hearing aid users with optimal speech intelligibility in noise,[2] physical activity tracking,[3] fall detection,[4] and social engagement assessment.[1]

According to the latest MarkeTrak X findings,[5] natural sound quality, performance in background noise, comfort for loud sounds, and spatial awareness are the top overall contributors to hearing aid satisfaction and benefit. A benchmark study conducted by FORCE Technology SenseLab, an independent perceptual assessment laboratory, with a cohort of 20 hard-of-hearing participants listening to four noisy acoustic scenes measured speech sound quality and preference for the noise management systems (directional microphone and noise reduction algorithm) in the Starkey Livio AI and Muse iQ hearing aids along with premium hearing aids from other manufacturers. In all four noisy acoustic scenes, listeners judged the overall loudness of background noise to be lower for both Starkey hearing aids when compared with other manufacturers' premium hearing aids.[2] In addition, Livio AI and Muse iQ hearing aids were judged to have the lowest sound distortion, in terms of reverberation, across all four acoustic scenes. Starkey has continued to focus on improving performance in noise by using even more sophisticated machine learning and AI strategies to mimic—or exceed—human performance. To begin, however, the different aspects of human intelligence and AI are defined, as the latter has rapidly approached “buzzword” status in recent years.

Definitions

Intelligence

One may have a basic understanding of the meaning of the word intelligence, but many theories and approaches can describe what the word means. Robert Sternberg (2020), IBM Professor of Psychology and Dean of the School of Arts and Sciences at Tufts University, describes intelligence as the “…mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment.”[6] [Fig. 1] is a schematic of how the human perception and intelligence system uses biological sensors to collect information from the environment, processes this information in the brain to understand the world, takes actions accordingly, and learns from experience. This concept makes sense for human perception and intelligence, but what about the ways that devices and machines process and learn from information?

Figure 1 Schematic diagram depicting key processes in human perception and intelligence, characterized by sensing inputs, processing information, developing actions based on these processes, and learning based on experience.


Artificial Intelligence

This term has been used for decades and has advanced over time with technological innovations. Today, AI is designed to enable machines to simulate human intelligence and human behavior, albeit for applications in narrow domains. AI systems do not require devices to be explicitly pre-programmed; instead, they use algorithms that draw on data or sensory inputs to process, act, and learn using their own “intelligence,” often acquired through training on relevant datasets. With unprecedented advances in algorithms, computing technologies, and digital data in recent years, AI has been rapidly adopted in a wide range of devices and systems, enabling a burgeoning array of new applications.[7] The broader category of AI includes machine learning, edge computing, and deep neural networks (DNNs), as defined below. See the article by Balling et al in this issue for additional details about the use of AI in hearing aids.



Machine Learning

Machine learning is a subfield of AI concerned with building algorithms that rely on a collection of examples of some phenomenon. These examples can exist in nature, be produced by humans, or be generated by another algorithm. Machine learning may also be defined as the process of solving a problem by gathering a dataset and algorithmically building a statistical model of that dataset that may, in turn, be used to “solve” the practical problem. As a branch within AI, machine learning systems use inputs to process, act, and improve performance based on these pretrained models. Simply put, machine learning uses algorithms to parse data, learn from that data, and make informed decisions or predictions based on what it has learned. The power behind machine learning is the size and diversity of the dataset used to train the models and the number of parameters or features used to characterize the models.
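
To make this definition concrete, the following minimal sketch (in Python) walks through the loop described above: gather labeled examples, fit a simple statistical model, and use it to make a prediction. The feature names, values, and nearest-centroid model are purely illustrative assumptions and do not represent any product's actual algorithm.

```python
# A minimal, hypothetical sketch of the machine-learning loop described above:
# gather labeled examples, fit a statistical model, then use it to predict.
import numpy as np

# Toy dataset: each row is [low-band energy, high-band energy, modulation depth].
X_train = np.array([
    [0.9, 0.2, 0.8],   # "speech"
    [0.8, 0.3, 0.7],   # "speech"
    [0.4, 0.9, 0.1],   # "noise"
    [0.5, 0.8, 0.2],   # "noise"
])
y_train = np.array(["speech", "speech", "noise", "noise"])

# "Learning": summarize each class by the centroid of its training examples.
classes = np.unique(y_train)
centroids = {c: X_train[y_train == c].mean(axis=0) for c in classes}

def predict(x):
    """Assign the class whose centroid is nearest to the feature vector x."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(predict(np.array([0.85, 0.25, 0.75])))  # -> "speech"
```

In practice, as noted above, the value of such a model grows with the size and diversity of the training data and the richness of the features.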



Edge Computing

Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where they are needed, to decrease latency and save communication bandwidth. For hearing aid applications, edge computing moves computation closer to the edge of the network, relying on ear-level processing without requiring the hearing aids to be connected to a smartphone or cloud-based data centers.



Deep Neural Networks

As a special subcategory within the field of machine learning, DNN systems use multiple layers of interconnected computational nodes, referred to as “neurons.” Each layer is composed of a large number of neurons representing the “width” of the network. The number of layers defines the “depth” of the neural network. The human cerebral cortex consists of a large ensemble of interconnected biological neurons, which allows it to process a multitude of sensory information in a hierarchy of increasing sophistication. In so doing, it teases out complex patterns or correlations in that information to help people understand and navigate in the real world. Inspired by the structure and function of the human cerebral cortex, DNN-based AI systems are increasingly solving problems that were previously considered tractable only through human intelligence.[7] See the article by Andersen et al in this issue for additional details about the use of DNN in hearing aids.
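
The notions of “width” and “depth” can be illustrated with a short sketch. The layer sizes and random weights below are arbitrary assumptions used only to show the structure of a multilayer network; no training is performed.

```python
# Illustrative-only sketch of the "width" (neurons per layer) and "depth"
# (number of layers) of a deep neural network; weights are random, not trained.
import numpy as np

rng = np.random.default_rng(0)
layer_widths = [16, 32, 32, 8, 2]   # four weight layers; widths as listed

# One weight matrix and bias vector per layer of connections.
weights = [rng.standard_normal((n_in, n_out)) * 0.1
           for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:])]
biases = [np.zeros(n_out) for n_out in layer_widths[1:]]

def forward(x):
    """Propagate an input through each layer with a ReLU nonlinearity."""
    for W, b in zip(weights, biases):
        x = np.maximum(x @ W + b, 0.0)  # each "neuron" sums its inputs, then fires
    return x

print(forward(rng.standard_normal(16)).shape)  # -> (2,)
```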



Applications for Improving Speech Intelligibility Using AI

Acoustic Environmental Classification

Derived from auditory scene analysis,[8] acoustic environmental classification (AEC) is the computational process by which signal processing is used to mimic the auditory system's ability to separate individual sounds in real-world listening environments, thereby classifying them into discrete “scenes” or environments based on temporal and spectral features.[9] Modern hearing aids have used AEC to classify listening environments (e.g., quiet, speech, noise, and music) and automatically enable sound management features (e.g., directional microphones, noise reduction, and feedback control) appropriate for that environment[10] (see the article by Hayes in this issue for more information about environmental classifiers). Most AEC systems combine two processing stages: feature extraction and feature/pattern classification, followed by postprocessing and environmental sound classification ([Fig. 2]). The accuracy of any AEC system depends on the number of feature parameters, sound classes, and the type of statistical model used. Supervised machine learning models that have been trained on large, known datasets have been used to improve the classification accuracy of AEC systems. Starkey's Hearing Reality Sound AEC system features eight automated sound classes: music, speech in quiet, speech in loud noise, speech in noise, machine, wind, noise, and quiet. It prioritizes speech intelligibility in noise by making discrete adjustments in gain, compression, directionality, noise management, and other parameters appropriate for each specific class. Classification accuracy for many hearing aid systems peaks at approximately 80 to 90%; problems are most likely to arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.[11] For this reason, AEC—even with machine learning training with large amounts of data—is not always sufficient, especially in challenging listening environments.[12] These situations are better served by user-prompted, on-demand analysis and automatic adjustments for enhanced speech clarity, as described later.

Figure 2 Block diagram of an acoustic environmental classification (AEC) system incorporating feature extraction, feature/pattern classification, post-processing, and environmental sound classification.
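
For readers who want to see the pipeline of [Fig. 2] in code form, the following hypothetical sketch strings together per-frame feature extraction, classification against a pretrained statistical model, and post-processing by majority vote. The features, the nearest-prototype “model,” and the random data are placeholders, not Starkey's proprietary implementation; only the eight class names come from the text above.

```python
# A hypothetical sketch of the two-stage AEC pipeline in Fig. 2: per-frame
# feature extraction, classification, and post-processing (label smoothing).
import numpy as np
from collections import Counter

CLASSES = ["music", "speech in quiet", "speech in loud noise", "speech in noise",
           "machine", "wind", "noise", "quiet"]

def extract_features(frame, fs=16000):
    """Toy temporal/spectral features for one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    rms = np.sqrt(np.mean(frame ** 2))
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, centroid])

def classify(features, model):
    """Stand-in for a pretrained statistical model: nearest class prototype."""
    dists = np.linalg.norm(model - features, axis=1)
    return CLASSES[int(np.argmin(dists))]

def post_process(frame_labels):
    """Smooth frame-by-frame decisions by majority vote over the window."""
    return Counter(frame_labels).most_common(1)[0][0]

# Usage with random audio and a random "model" of class prototypes.
rng = np.random.default_rng(1)
model = rng.random((len(CLASSES), 2))
frames = rng.standard_normal((50, 512)) * 0.01
labels = [classify(extract_features(f), model) for f in frames]
print(post_process(labels))
```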


Edge Mode

In January 2020, Starkey introduced Edge Mode, an advanced edge AI computing solution designed to overcome some of the limitations of AEC by putting the power of AI under the hearing aid user's control. Edge Mode is designed as a simple interface where the hearing aid user initiates assistance using a control such as a double-tap or push-button when confronted with a challenging listening environment ([Fig. 3]). The recognition of a double-tap gesture is accomplished with the micro-electro-mechanical systems–based motion sensors integrated within the hearing aids.[1] The hearing aid captures an “acoustic snapshot” of the listening environment and optimizes speech intelligibility by adjusting the parameters of eight proprietary classifications comprising challenging quiet and noisy listening situations. These AI-based, on-demand adaptive tuning adjustments to the prescribed settings include gain offsets, noise management, directional-microphone, and wind noise management settings, among others. No smartphone or cloud connectivity is needed; all processing is performed “on the ear” when activated by the user via a tap or button press on the onboard controls. Earlier investigations[13] have shown that most users found Edge Mode easy to operate and preferred it over audiogram-based prescribed hearing aid settings when communicating in restaurant noise, automobiles, and reverberant listening environments ([Fig. 4]).

Figure 3 Workflow for on-demand adaptive tuning (ODAT), known as “Edge Mode.”
Figure 4 Preference count of Edge Mode versus prescribed settings from 15 hearing-impaired participants. Legend shows the acoustic scene.
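
The on-demand flow of [Fig. 3] can be sketched, under stated assumptions, as a gesture-triggered analysis that selects a set of parameter offsets to layer on top of the prescribed settings. The offset values, class names, and level/modulation heuristics below are invented for illustration; they are not Starkey's actual tuning tables or classifier.

```python
# A simplified, hypothetical sketch of on-demand adaptive tuning: a double tap
# triggers an "acoustic snapshot," the snapshot is classified, and parameter
# offsets are applied on top of the prescribed settings.
import numpy as np

# Illustrative offsets per detected condition; not actual product values.
OFFSET_TABLE = {
    "speech_in_noise": {"gain_offset_db": +3, "noise_mgmt": "max", "mic": "directional"},
    "noise_only":      {"gain_offset_db": -2, "noise_mgmt": "max", "mic": "directional"},
    "quiet":           {"gain_offset_db":  0, "noise_mgmt": "min", "mic": "omni"},
}

def classify_snapshot(snapshot, fs=16000):
    """Crude stand-in for the onboard analysis: level + modulation heuristics."""
    level_db = 20 * np.log10(np.sqrt(np.mean(snapshot ** 2)) + 1e-12)
    envelope = np.abs(snapshot).reshape(-1, fs // 100).mean(axis=1)  # 10-ms envelope
    modulation = np.std(envelope) / (np.mean(envelope) + 1e-12)
    if level_db < -50:
        return "quiet"
    return "speech_in_noise" if modulation > 0.3 else "noise_only"

def on_double_tap(snapshot):
    """Called when the IMU-based gesture detector reports a double tap."""
    condition = classify_snapshot(snapshot)
    return OFFSET_TABLE[condition]

rng = np.random.default_rng(2)
print(on_double_tap(rng.standard_normal(16000) * 0.05))
```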

During the COVID-19 pandemic, health and government officials encouraged or mandated community-wide face mask wearing to reduce potential presymptomatic or asymptomatic transmission of the virus to others. This practice, in combination with social distancing (i.e., keeping >6 feet apart), helped decrease the spread of the virus, but it also posed a barrier to clear, empathetic communication, particularly for those with hearing loss.[14]

Fabry and colleagues assessed differences in sound attenuation across face masks via acoustic measurements made on many of the latest commercially available styles.[15] [Fig. 5] illustrates the differences for a range of mask types. Data were normalized to the condition when no mask was worn (the “zero” line on the x-axis). Findings suggested that while all face masks reduced important high-frequency information, there was significant variation across fabric, medical, and paper masks, especially those equipped with a plastic window. One unexpected finding was that face masks and face shields equipped with transparent plastic panels had an enhancement of several decibels (dB) in the low/mid frequencies, as well as a reduction in the high frequencies.[16] [17] These data illustrate the challenge of using a predetermined compensation scheme with fixed high-frequency gain adjustment to account for the impact of social distancing and face mask use.

Figure 5 The acoustic impact of different face masks compared with when no face mask is worn. (Note: measurements were made using a head and torso simulator manikin.)
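
The normalization used in [Fig. 5] can be expressed compactly: each mask-on spectrum is referenced to the no-mask measurement, so positive values indicate enhancement and negative values indicate attenuation. The frequency points and levels below are made-up placeholders for illustration, not the measured data.

```python
# A minimal sketch of expressing mask-on spectra in dB relative to the no-mask
# reference condition (the "zero" line in Fig. 5). Values are illustrative only.
import numpy as np

freqs_hz = np.array([500, 1000, 2000, 4000, 8000])
no_mask_db = np.array([60.0, 62.0, 58.0, 55.0, 50.0])          # reference condition
mask_db = {
    "surgical":       np.array([60.0, 61.5, 56.0, 51.0, 45.0]),
    "plastic_window": np.array([63.0, 64.0, 55.0, 48.0, 42.0]),
}

for mask, levels in mask_db.items():
    relative_db = levels - no_mask_db   # 0 dB corresponds to "no mask"
    print(mask, dict(zip(freqs_hz.tolist(), relative_db.round(1).tolist())))
```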

These findings kindled the development of the user-activated Edge Mode for Masks in Livio Edge AI hearing aids. As noted, Edge Mode uses an onboard AI model trained with machine-learning technology to optimize speech intelligibility and sound quality in all listening environments by assessing the levels of speech and noise present. Edge Mode for Masks dynamically adjusts multiple feature parameters, including gain, output, noise management, and directional microphones. Therefore, unlike simple gain offsets used in other “Mask Mode” programs, Edge Mode for Masks is “agnostic” to which mask is worn, the distance between conversation partners, and the presence of background noise. Again, all required signal processing for Edge Mode for Masks is performed using ear-level hearing aid processing, with no connection to a smartphone or the cloud. In laboratory testing, both Edge Mode for Masks and a “manual” Mask Mode offset program were significantly preferred by hearing aid users over the “Normal” prescription targets when the talker was using a medical-grade N95 face mask. Ongoing research is evaluating whether Edge Mode for Masks will be preferred over the Mask Mode offset program when a broader array of face masks is used, similar to those depicted in [Fig. 5].

In summary, although machine learning–based AEC systems are effective for up to 90% of “real-world” listening environments, the addition of on-demand edge AI computing technology that the user controls via an effective and easy-to-use interface may provide superior control and accuracy in the remaining challenging listening environments encountered by hearing aid users.



Intellivoice Deep Neural Networks

Edge Mode can be likened to a user-initiated “acoustic snapshot” for AEC optimization and speech enhancement; by extension, a DNN can be likened to a multilayered approach for improving speech intelligibility in noisy and reverberant listening environments. Prior research at Starkey has demonstrated the use of DNN for improving speech intelligibility in a wide range of signal-to-noise ratios and noise types while maintaining speech quality.[18] [19]

In 2020, Starkey introduced IntelliVoice,[20] a DNN-based speech enhancement strategy that combines the increased computational processing power available on a smartphone with the benefits of using the smartphone microphone as an input source that is closer to the target sounds (similar to the Apple iPhone “Live Listen” feature). [Fig. 6] depicts a high-level schematic of IntelliVoice DNN. The spectrogram shown in [Fig. 7] illustrates how IntelliVoice preprocesses spectrotemporal segments for the presence of speech and/or noise to reject noise or speech at low signal-to-noise ratios (SNRs) while passing speech at higher SNRs through for amplification. [Fig. 8] illustrates field test results with IntelliVoice DNN versus hearing aid-only processing for overall preference and speech understanding in noisy listening environments based on 12 hearing aid users with hearing losses ranging from mild to profound in degree. Additional analysis revealed a positive correlation between the degree of hearing loss and IntelliVoice algorithm preference. This was most likely due to the system delays introduced by “off-boarding” processing to the smartphone for the IntelliVoice algorithms. Our findings suggest that hearing aid users with greater degrees of hearing loss tolerate increases in signal processing complexity that contribute to system delays if they improve SNRs, while those with better hearing are less likely to tolerate the additional delays. As such, IntelliVoice DNN is recommended only for users with severe-to-profound hearing loss.

Figure 6 High-level schematic of the smartphone-based IntelliVoice deep neural network implementation.
Figure 7 Spectrographic representation of a multilayered deep neural network approach that analyzes spectrotemporal segments for the presence of speech or noise and passes speech through at a criterion speech-to-noise ratio while rejecting segments that are determined to be noise.
Figure 8 Field test preference results for overall preference and speech understanding between IntelliVoice deep neural network (DNN) and hearing aid (HA)-only processing for 12 HA users with mild-to-moderate (4 participants), moderate-to-severe (3 participants), and severe-to-profound (5 participants) hearing loss. The number on the y-axis corresponds to the number of users who preferred each option.
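
The masking concept in [Fig. 7] can be sketched as a time-frequency processing chain: analyze the signal into spectrotemporal segments, score each segment for speech versus noise, and suppress segments below a criterion SNR before resynthesis. The scoring function below is a simple median-based placeholder standing in for the trained DNN, which is not public; frame sizes and thresholds are assumptions.

```python
# A conceptual sketch of spectrotemporal masking: segments judged to be noise
# (below a criterion SNR) are attenuated; segments judged to be speech pass through.
import numpy as np

def stft(x, frame=256, hop=128):
    window = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    return np.array([np.fft.rfft(window * x[i * hop:i * hop + frame])
                     for i in range(n_frames)])

def istft(X, frame=256, hop=128):
    window = np.hanning(frame)
    out = np.zeros(hop * (len(X) - 1) + frame)
    for i, spec in enumerate(X):
        out[i * hop:i * hop + frame] += window * np.fft.irfft(spec, n=frame)
    return out

def dnn_speech_mask(magnitude, criterion_snr_db=0.0):
    """Placeholder for the trained network: keep bins whose level exceeds an
    estimated noise floor by the criterion SNR, attenuate the rest."""
    noise_floor = np.median(magnitude, axis=0, keepdims=True)
    snr_db = 20 * np.log10(magnitude / (noise_floor + 1e-12) + 1e-12)
    return np.where(snr_db > criterion_snr_db, 1.0, 0.1)

rng = np.random.default_rng(3)
noisy = rng.standard_normal(16000) * 0.02                       # background noise
noisy[4000:8000] += 0.2 * np.sin(2 * np.pi * 440 * np.arange(4000) / 16000)  # "speech"

X = stft(noisy)
enhanced = istft(dnn_speech_mask(np.abs(X)) * X)
print(noisy.shape, enhanced.shape)
```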


Table Microphone Accessory

Another way that Starkey incorporates machine learning and edge computing to improve speech intelligibility in noise is through a new multipurpose wireless accessory designed in collaboration with Nuance Hearing.[21] It uses eight spatially separated microphones and sophisticated directional beamforming technology to divide the listening environment into eight 45-degree segments. In “Automatic” mode, the Table Microphone dynamically switches the direction of the beam to focus on the active speaker in a group while simultaneously reducing competing background speech or noise from other directions. In “Manual” mode, the user can select either one or two speakers to focus on in a group and can change the direction of the beam or beams by simply touching the top of the device. In “Surround” mode, all microphones are active so that sound is amplified from all directions around the user. Automatic and Manual modes are optimized for listening to speech in noise, and Surround mode is optimized for listening to speech in quiet. The Table Microphone provides the best listening benefit when placed at the center of a group or close to a single conversation partner. In the laboratory, 18 participants with hearing loss (10 females, 8 males; mean age: 66.9 years [range: 50–80 years]) completed a speech intelligibility test in three conditions: unaided, aided with Livio Edge AI custom rechargeable hearing aids alone, and aided with the Table Microphone accessory. As shown in [Fig. 9], the Table Microphone yielded a median improvement on the hearing in noise test of 7.2 dB SNR compared with hearing aids alone and 15.0 dB SNR compared with the unaided condition. The Table Microphone accessory is paired directly with the hearing aids and does not require the use of a smartphone or cloud-based computing. It may also function as a remote microphone and a multimedia streamer.

Figure 9 Speech reception thresholds (SRT) in diffuse noise for 18 hearing aid users in unaided and hearing aid–only conditions (Livio Edge AI) and when the Table Mic beamforming microphone array (left) is used.
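
As a rough illustration of the “Automatic” mode logic described above, the sketch below forms eight fixed delay-and-sum beams covering 45-degree sectors around a circular eight-microphone array and streams whichever beam currently carries the most energy. The array geometry, sample rate, and energy-based talker detector are assumptions for illustration; the commercial device uses its own proprietary beamforming.

```python
# A simplified, hypothetical illustration of sector-based beam selection with
# an 8-microphone circular array and delay-and-sum steering.
import numpy as np

FS = 16000
N_MICS = 8
RADIUS_M = 0.05           # assumed radius of the circular microphone array
SPEED_OF_SOUND = 343.0
mic_angles = np.arange(N_MICS) * 2 * np.pi / N_MICS
beam_angles = np.arange(N_MICS) * 2 * np.pi / N_MICS   # centers of 45-degree sectors

def steering_delays(beam_angle):
    """Per-microphone delays (in samples) that align a far-field source at beam_angle."""
    proj = RADIUS_M * np.cos(mic_angles - beam_angle)
    return proj / SPEED_OF_SOUND * FS

def delay_and_sum(mic_signals, beam_angle):
    """Apply integer-sample delays and average across microphones."""
    delays = np.round(steering_delays(beam_angle)).astype(int)
    out = np.zeros(mic_signals.shape[1])
    for sig, d in zip(mic_signals, delays):
        out += np.roll(sig, -d)
    return out / N_MICS

def automatic_mode(mic_signals):
    """Pick the 45-degree sector whose beam currently carries the most energy."""
    beams = [delay_and_sum(mic_signals, a) for a in beam_angles]
    energies = [np.mean(b ** 2) for b in beams]
    best = int(np.argmax(energies))
    return best, beams[best]

rng = np.random.default_rng(4)
mics = rng.standard_normal((N_MICS, FS)) * 0.01   # stand-in for captured audio
sector, output = automatic_mode(mics)
print("active sector:", sector, "beam length:", output.shape)
```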


Applications for Monitoring Health and Wellness

In addition to improvements in speech intelligibility, embedded sensors and AI are transforming hearing aids into multifunctional health and communication devices that continuously monitor and track physical activities and social engagement, and detect if the user experiences a fall so that alert messages can be sent automatically to designated contacts. Since 2018, Starkey has incorporated inertial measurement unit (IMU) sensors into hearing aids to monitor the user's movement and position. In combination with the classification of the listening environment via the AEC system, these data are used to monitor physical activity and social engagement while the hearing aids are worn; the results are then displayed in the mobile application ([Fig. 10]).

Figure 10 Body (steps, exercise, stand) and brain (use, engagement, environment) scores reported within the Thrive user control application.

Social Engagement

Hearing loss is correlated with many chronic health conditions. In recent years, significant attention has been focused on the link between hearing loss and cognitive decline. Compared with individuals with normal hearing, persons with a mild, moderate, and severe hearing impairment, respectively, had a 2-, 3-, and 5-fold increased risk of incident all-cause dementia over more than a decade of follow-up.[22] [23] The Lancet Commission[24] identified hearing loss as the largest modifiable risk factor for the prevention of dementia. Furthermore, they reported that hearing loss is a risk factor that should be addressed in midlife—not toward the end of life—for optimal benefit. A study published in the Journal of the American Medical Association [25] indicated a significant degree of memory deficit in persons with age-related hearing loss who did not use hearing aids compared with those without hearing loss. However, memory function was significantly better and much closer to the performance of those with normal hearing in a similar group of individuals matched for hearing loss who did use hearing aids.

An issue is how much hearing aid use is necessary to achieve any potential cognitive benefits. While research has demonstrated that people who use their hearing aids more than 8 hours/day are more satisfied than those who use their hearing aids less often,[26] there is little evidence as to whether the type of listening environment is important to (and predictive of) success. Many persons with hearing loss report difficulty understanding speech in the presence of background noise.[27] While communication in noisy listening environments is a top driver of success with hearing aids,[28] the majority of new hearing aid users wear them in generally favorable listening environments.[29] Hearing aid “data logging” has been recommended to identify those who are not using, or only minimally using, their aids, so that clinicians can provide appropriate rehabilitation and support, particularly for new hearing aid users.[30] Although data logging provides an objective measure that is a more accurate representation of hearing aid use than “self-report” measures, which are often over-reported,[31] it also requires clinical intervention via face-to-face or telehealth visits.

In a new approach, Starkey has incorporated measures of “social engagement” into the user-controlled “Thrive” app that automatically monitor and “gamify” (1) hours of daily hearing aid use; (2) time spent in listening environments where speech is present, either in quiet or noisy backgrounds; and (3) the diversity of listening environments encountered during each 24-hour period, as expressed by the inferred AEC classes.[32] By displaying a daily social engagement score directly in the app, this simple tool empowers hearing aid users to challenge themselves to use their hearing aids and communicate with others in a wide variety of quiet and noisy listening environments. Users can even designate family members or professional caregivers to monitor daily progress in real time via a companion application.[33] These patient-centered tools may encourage people to use their hearing aids in difficult listening environments more often. They also can provide clinicians with the information they need to better optimize the hearing aid for a wider range of situations.
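
To illustrate how the three logged quantities listed above might be combined into a single daily score, consider the following sketch. The weighting, scaling, and point caps are invented for illustration and do not reflect the actual Thrive scoring formula.

```python
# A hypothetical sketch of combining hours of use, time with speech present,
# and diversity of AEC classes into one daily "engagement" score.
def engagement_score(hours_of_use, speech_hours, classes_encountered,
                     max_points=100, n_classes=8):
    use_points = min(hours_of_use / 12.0, 1.0) * 40          # up to 40 points
    speech_points = min(speech_hours / 4.0, 1.0) * 40        # up to 40 points
    diversity_points = len(set(classes_encountered)) / n_classes * 20
    return round(min(use_points + speech_points + diversity_points, max_points))

# Example day: 10 hours of wear, 3 hours with speech present, 5 distinct classes.
print(engagement_score(10, 3, {"speech in quiet", "speech in noise",
                               "quiet", "music", "noise"}))
```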



Physical Activity

Previous research suggests that modifiable risk factors for cardiovascular disease (CVD) may play a role in developing age-related hearing loss.[34] Daily physical activity tracking has been promoted as a means to reduce cardiovascular risk, and studies have shown that achieving 10,000 steps per day reduces body mass index in aging individuals.[35] A recent study evaluated the efficacy and the effectiveness of Starkey Livio AI hearing aids in tracking step count in real-world conditions and reported that the hearing aids were more accurate than two wrist-worn activity tracking devices.[3] The hearing aids were found to be feasible, consistent, and sensitive in detecting daily step counts.

In addition to physical steps, the American Heart Association, the American College of Cardiology, and the American College of Sports Medicine, among other organizations, have emphasized that sedentary behavior and physical inactivity are major modifiable CVD risk factors, especially in the aging population. A major emphasis has been directed at reducing CVD risk by promoting 30 minutes of daily exercise and reducing sedentary behavior.[36] Additionally, the American College of Sports Medicine has recommended that daily flexibility exercises be completed to maintain joint range of movement and musculoskeletal strength.[37] To that end, the Thrive application automatically tracks and displays daily steps, exercise, and standing (for at least 1 minute within each 1-hour period) to encourage hearing aid users to be more physically active and mitigate the impact of CVD and its potential comorbidity with hearing loss.[32]
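
A simple version of the kind of step counting described in this section can be sketched from an ear-level accelerometer signal: compute the acceleration magnitude, remove the gravity component, and count upward threshold crossings with a refractory period. The thresholds and sample rate below are assumed values, not the parameters of the embedded IMU algorithm.

```python
# A minimal sketch of accelerometer-based step counting with a refractory period.
import numpy as np

def count_steps(accel_xyz, fs=50, threshold_g=0.15, min_step_interval_s=0.3):
    magnitude = np.linalg.norm(accel_xyz, axis=1)      # in units of g
    dynamic = magnitude - np.mean(magnitude)           # crude gravity removal
    refractory = int(min_step_interval_s * fs)
    steps, last_step = 0, -refractory
    for i in range(1, len(dynamic)):
        crossed = dynamic[i - 1] < threshold_g <= dynamic[i]
        if crossed and i - last_step >= refractory:
            steps += 1
            last_step = i
    return steps

# Simulated 10 seconds of walking at roughly 2 steps per second.
fs, t = 50, np.arange(0, 10, 1 / 50)
walk = np.column_stack([0.05 * np.random.randn(len(t)),
                        0.05 * np.random.randn(len(t)),
                        1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)])
print(count_steps(walk, fs=fs))   # roughly 20 steps
```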



Fall Detection

Approximately 40% of adults aged 65 years and older fall once or more per year, resulting in serious morbidities, mortality, and healthcare costs.[38] In addition, studies have reported a significant positive association between the severity of hearing loss and reports of falls, even after adjusting for demographic factors, cardiovascular factors, and vestibular balance function.[39] Forward falls, backward falls, trips, slips, and falls to the side have all been frequently observed in aging adults.[40] Starkey developed an ear-level fall detection algorithm, using IMU sensors embedded into custom or standard hearing aids, which is designed to be highly sensitive to these types of fall events. Once the hearing aids detect the occurrence of a fall, an alert message is automatically sent to previously designated contacts. If the wearer has recovered from a fall and does not need help, the alert can be cancelled within 60 seconds of the detection of the fall event. A recent study evaluated the sensitivity and specificity of the fall detection algorithm, based on acceleration rate, estimated falling distance, and impact magnitude, for bilateral hearing aids compared with a commercially available, neck-worn personal emergency response system.[4] On average, the ear-worn fall detection system had comparable or higher sensitivity and specificity rates for fall detection than the neck-worn pendant for laboratory conditions simulating forward and backward falls and near falls ([Fig. 11]). These data suggest that the ear-worn system may provide a suitable alternative to more traditional neck-worn devices for detecting falls.

Figure 11 Measured fall detection and alert accuracy {[(true positives + true negatives)/total trials] × 100}, sensitivity {[true positives/(true positives + false negatives)] × 100}, and specificity {[true negatives/(true negatives + false positives)] × 100} for a popular neck-worn pendant (AutoAlert) versus Livio Edge AI with normal and high sensitivity. Accuracy, sensitivity, and specificity were compared across the three fall detection systems with McNemar's test for paired nominal data. Livio AI (normal sensitivity) was more accurate than AutoAlert [χ 2(1) = 9.13, p = 0.002] and Livio AI (high sensitivity) [χ 2(1) = 27.03, p < 0.001]; the difference in accuracy between Livio AI (high sensitivity) and AutoAlert was not significant [χ 2(1) = 0.36, p = 0.550]. Livio AI (normal sensitivity) was significantly more sensitive than AutoAlert [χ 2(1) = 9.98, p = 0.002] and Livio AI (high sensitivity) [χ 2(1) = 29.00, p < 0.001]; the difference in sensitivity between Livio AI (high sensitivity) and AutoAlert was not significant [χ 2(1) = 0.51, p = 0.47]. Livio AI (high sensitivity) was significantly more specific than Livio AI (normal sensitivity) [χ 2(1) = 4.00, p = 0.045]. However, specificity differences were not statistically significant between Livio AI (normal sensitivity) and AutoAlert [χ 2(1) = 3.57, p = 0.059] or between Livio AI (high sensitivity) and AutoAlert [χ 2(1) = 1.00, p = 0.317].[4]
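
Two pieces of this section lend themselves to a short sketch: a simple threshold-style fall detector (a near-free-fall interval followed by a large impact) and the accuracy/sensitivity/specificity formulas quoted in the [Fig. 11] caption. The thresholds and trial counts below are invented for illustration; the embedded algorithm also uses estimated falling distance and other cues not modeled here.

```python
# Illustrative fall detection heuristic plus the detection metrics from Fig. 11.
import numpy as np

def detect_fall(accel_magnitude_g, fs=100,
                free_fall_g=0.4, impact_g=2.5, window_s=1.0):
    """Flag a fall if a near-zero-g interval is followed by a large impact."""
    window = int(window_s * fs)
    free_fall_idx = np.where(accel_magnitude_g < free_fall_g)[0]
    for i in free_fall_idx:
        if np.any(accel_magnitude_g[i:i + window] > impact_g):
            return True
    return False

def detection_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    sensitivity = tp / (tp + fn) * 100
    specificity = tn / (tn + fp) * 100
    return accuracy, sensitivity, specificity

# Simulated trace: normal wear (~1 g), brief free fall, then a 3-g impact.
trace = np.concatenate([np.full(200, 1.0), np.full(30, 0.2),
                        np.full(5, 3.0), np.full(200, 1.0)])
print(detect_fall(trace))                            # -> True
print(detection_metrics(tp=18, tn=19, fp=1, fn=2))   # illustrative trial counts
```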


Summary

This article provided an overview of Starkey's approach to incorporating AI, machine learning, edge computing, DNNs, and embedded sensors into modern, state-of-the-art hearing aids and accessories. By focusing fundamentally on improving sound quality and speech intelligibility in quiet and noisy listening environments, while also connecting hearing aid use to overall health and wellness, today's hearing aids not only help hearing-impaired individuals hear, understand speech, and communicate better but also enable them to live healthier lives by actively tracking both physical and cognitive activities.



Conflicts of Interest

D.A.F. and A.K.B. are full-time employees and officers of Starkey Hearing Technologies.


Address for correspondence

David Fabry, Ph.D.
Starkey Hearing Technologies
6600 Washington Avenue S, Eden Prairie
MN 55344   

Publication History

Article published online:
24 September 2021

© 2021. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA

