
ORIGINAL RESEARCH article

Front. Audiol. Otol.

Sec. Technology and Innovation in Auditory Implants and Hearing Aids

Volume 3 - 2025 | doi: 10.3389/fauot.2025.1677482

Effectiveness of deep neural networks in hearing aids for improving signal-to-noise ratio, speech recognition, and listener preference in background noise

Provisionally accepted
  • 1Department of Otolaryngology - Head and Neck Surgery, Stanford University, Stanford, United States
  • 2Department of Otolaryngology, School of Medicine, Stanford University, Stanford, United States
  • 3Starkey Hearing Technologies Inc, Eden Prairie, United States
  • 4California State University Sacramento, Sacramento, United States

The final, formatted version of the article will be published soon.

Traditional approaches to improving speech perception in noise (SPIN) for hearing-aid users have centered on directional microphones and remote wireless technologies. Recent advances in artificial intelligence and machine learning offer new opportunities for enhancing the signal-to-noise ratio (SNR) through adaptive signal processing. In this study, we evaluated the efficacy of a novel deep neural network (DNN)-based algorithm, commercially implemented as Edge Mode™, in improving SPIN outcomes for individuals with sensorineural hearing loss (SNHL) beyond that of conventional environmental classification approaches. The algorithm was evaluated using (1) objective KEMAR-based performance in seven real-world scenarios, (2) aided and unaided speech-in-noise performance in 20 individuals with SNHL, and (3) real-world subjective ratings via ecological momentary assessment (EMA) in 20 individuals with SNHL. Significant improvements in SPIN performance were observed on CNC+5, QuickSIN, and WIN, but not NST+5, likely due to the use of speech-shaped noise in the latter, suggesting the algorithm is optimized for multi-talker babble environments. SPIN gains were not predicted by unaided performance or degree of hearing loss, indicating individual variability in benefit, potentially due to differences in peripheral encoding or cognitive function. Furthermore, subjective EMA responses mirrored these improvements, supporting real-world utility. These findings demonstrate that DNN-based signal processing can meaningfully enhance speech understanding in complex listening environments, underscoring the potential of AI-powered features in modern hearing aids and highlighting the need for more personalized fitting strategies.
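As background for the SNR metric the abstract refers to, the following is a minimal illustrative sketch (not the authors' algorithm or the Edge Mode™ implementation) of how SNR in decibels is computed from separate signal and noise waveforms; the tone frequency, amplitudes, and sample rate are arbitrary example values:

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB, from separately available signal and noise waveforms."""
    p_signal = np.mean(np.asarray(signal, dtype=float) ** 2)  # mean signal power
    p_noise = np.mean(np.asarray(noise, dtype=float) ** 2)    # mean noise power
    return 10.0 * np.log10(p_signal / p_noise)

# Hypothetical example: a 440 Hz tone (amplitude 0.2) against Gaussian noise
# (std 0.1) yields roughly +3 dB SNR, since the tone's power (0.2**2 / 2 = 0.02)
# is about twice the noise power (~0.01).
t = np.linspace(0, 1, 16000, endpoint=False)
tone = 0.2 * np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.1, t.size)
print(round(snr_db(tone, noise), 2))
```

In practice a DNN-based enhancement algorithm does not have access to the clean signal and noise separately; benefit is instead inferred from behavioral measures (as in this study) or from estimated SNR improvement on reference recordings such as the KEMAR manikin measurements.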

Keywords: Hearing Aids, Audiology, Speech in noise, Deep Neural Network, artificial intelligence

Received: 31 Jul 2025; Accepted: 22 Sep 2025.

Copyright: © 2025 Fitzgerald, Athreya, Srour, Rejimon, Venkitakrishnan, Bhowmik, Jackler, Steenerson and Fabry. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Matthew B Fitzgerald, fitzmb@stanford.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.