Hearing aid simulation

What our baseline hearing aid simulates, with examples.

Our challenge entrants are going to use machine learning to develop better hearing aid processing for listening to speech in noise (SPIN). We’ll provide a baseline hearing aid model for entrants to improve on. The figure below shows our baseline system, where the yellow box to the left is where the simulated hearing aid sits (labelled “Enhancement model”).

The draft baseline system (where SPIN is speech in noise, HL is hearing loss, SI is speech intelligibility, and L & R are Left and Right).

We decided to base our simulated hearing aid on the open Master Hearing Aid (openMHA), which is an open-source software platform for real-time audio signal processing. This was developed by the University of Oldenburg, HörTech gGmbH, Oldenburg, and the BatAndCat Corporation, USA. The original version was developed as one of the outcomes of the Cluster of Excellence Hearing4all project. The openMHA platform includes:

  • a software development kit (C/C++ SDK) including an extensive signal processing library for algorithm development and a set of Matlab and Octave tools to support development and offline testing
  • real-time runtime environments for standard PC platforms and mobile ARM platforms
  • a set of baseline reference algorithms that forms a complete hearing aid system, including multi-band dynamic compression and amplification, directional microphones, binaural beamformers and coherence filters, single-channel noise reduction, and feedback control.

We have written a Python wrapper around the core openMHA system for ease of use within machine learning frameworks. We also developed a simple generic hearing aid configuration and translated the Camfit compressive fitting, the prescription that takes a listener's audiogram and determines the appropriate settings for the hearing aid. The fitting is based on Moore et al. (1999) as encoded by openMHA.
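To give a feel for what a fitting does, here is a deliberately simplified sketch of mapping an audiogram to per-band insertion gains. It uses a basic "half-gain rule" purely for illustration; the actual Camfit compressive prescription (Moore et al. 1999) is loudness-model based and considerably more involved, and the function and frequency set below are our own illustrative choices, not part of openMHA.

```python
# Hypothetical sketch: turning an audiogram into per-band insertion gains.
# The real Camfit compressive prescription is more sophisticated; a simple
# half-gain rule (gain = half the hearing loss) is used here for illustration.

AUDIOGRAM_FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def half_gain_fit(thresholds_db_hl):
    """Return one insertion gain (dB) per audiogram frequency,
    set to half the hearing loss at that frequency."""
    if len(thresholds_db_hl) != len(AUDIOGRAM_FREQS_HZ):
        raise ValueError("expected one threshold per audiogram frequency")
    return [0.5 * t for t in thresholds_db_hl]

# Example: a moderate, sloping (high-frequency) hearing loss.
audiogram = [25, 30, 40, 45, 55, 60]
gains = half_gain_fit(audiogram)
print(dict(zip(AUDIOGRAM_FREQS_HZ, gains)))
```

A real prescription additionally makes the gains level-dependent, which is where the multiband compression described below comes in.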

Some aspects of modern digital hearing aids that we’ve decided to simulate are:

  • differential microphones, and
  • a multiband compressor for dynamic compression.
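To illustrate the second item, here is a minimal sketch of the static gain curve in one band of a wide dynamic range compressor. This is not openMHA's actual implementation (which also handles attack/release time constants); the threshold, ratio, and gain values are illustrative assumptions.

```python
# Sketch of a single compressor band's static input/output behaviour:
# below the compression threshold the band gets a fixed gain; above it,
# output level grows by only 1 dB for every `ratio` dB of input.

def compressor_gain_db(level_db, threshold_db=50.0, ratio=2.0, gain_db=20.0):
    """Gain (dB) applied to a band whose input level is level_db."""
    if level_db <= threshold_db:
        return gain_db  # linear region: constant amplification
    # Compressive region: progressively reduce gain above the threshold.
    return gain_db - (level_db - threshold_db) * (1.0 - 1.0 / ratio)

for level in (40, 50, 60, 80):
    print(f"input {level} dB -> gain {compressor_gain_db(level)} dB")
```

In a multiband compressor, the signal is split into frequency bands and a curve like this (with prescription-derived parameters) is applied per band, so quiet sounds are amplified more than loud ones in each region of the spectrum.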

We’ve decided not to simulate the following, because these features tend to be implemented in proprietary forms that we could not replicate exactly in our open-source algorithm:

  • coordination of gross processing parameters across ears,
  • binaural processing involving some degree of signal exchange between left and right devices,
  • gain changes influenced by speech-to-noise ratio estimators,
  • frequency shifting or scaling, and
  • dual or adaptive time-constant wide dynamic range compression.

We are using the Oldenburg Hearing Device (OlHeaD) Head Related Transfer Function (HRTF) Database (Denk et al. 2018) to replicate the signals that would be received by the front and rear microphones of the hearing aid and also at the eardrums of the wearer.
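Conceptually, replicating a microphone or eardrum signal amounts to convolving the source with the measured head-related impulse response (HRIR) for that position. The sketch below shows this with stand-in arrays; the OlHeaD database itself provides measured responses for various device styles, and the function name and array shapes here are our own assumptions.

```python
# Sketch: producing a binaural signal by convolving a mono source with a
# left and right head-related impulse response (HRIR). In practice the
# HRIRs would come from the OlHeaD database; here they are stand-ins.
import numpy as np

def apply_hrir(source, hrir_left, hrir_right):
    """Return a (2, N) binaural signal from a mono source and two HRIRs."""
    left = np.convolve(source, hrir_left)
    right = np.convolve(source, hrir_right)
    n = max(len(left), len(right))
    out = np.zeros((2, n))
    out[0, :len(left)] = left
    out[1, :len(right)] = right
    return out

source = np.random.randn(1000)               # stand-in mono source
binaural = apply_hrir(source, np.array([1.0, 0.5]), np.array([0.8, 0.4]))
print(binaural.shape)  # (2, 1001)
```

Using HRIRs measured at the hearing aid's front and rear microphone positions (rather than at the eardrum) is what lets the simulation feed the directional-microphone processing realistically.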

Audio examples of hearing aid processing

Here is an example of speech in noise processed by the simulated hearing aid for a moderate level of hearing loss. We can hear that the shape of the frequency spectrum has been modified to suit the listener’s specific pattern of hearing loss.

Here’s the original noisy signal where the noise is generated by a washing machine.
Here’s the same signal processed by the simulated hearing aid for a listener with a moderate level of hearing loss (Pure Tone Average of 38 dB). For illustration purposes, this is presented here at an overall level that is similar to that of the original signal.
Here’s the noisy signal as it would be perceived by the listener wearing the hearing aid. Without the aid, the original noisy signal would be near inaudible.
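For readers unfamiliar with the Pure Tone Average quoted above: it is simply the mean of the audiogram thresholds at a small set of frequencies. The sketch below assumes the common 0.5, 1, 2, and 4 kHz convention (conventions vary), and the example thresholds are illustrative, not taken from our listener data.

```python
# Sketch: Pure Tone Average (PTA), assumed here to be the mean audiogram
# threshold (dB HL) across 0.5, 1, 2 and 4 kHz. Conventions differ; some
# definitions use only 0.5, 1 and 2 kHz.

def pure_tone_average(thresholds_db_hl):
    """Mean of the supplied audiogram thresholds, in dB HL."""
    return sum(thresholds_db_hl) / len(thresholds_db_hl)

# Illustrative thresholds at 0.5, 1, 2, 4 kHz giving a "moderate" loss.
print(pure_tone_average([30, 35, 40, 47]))  # 38.0
```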

Information about our hearing loss model can be found here.

The target speech comes from our new 40-speaker British English speech database, while the speech interferer noise comes from the SLR83 database (Demirsahin et al. 2020), which comprises recordings of male and female speakers of English from various parts of the UK and Ireland.


We are grateful to the developers of the openMHA platform for the use of their software. Special thanks are due to Hendrik Kayser and Tobias Herzke. We are also grateful to Brian Moore, Michael Stone and colleagues for the Camfit compressive prescription, and to the people involved in the preparation of the OlHeaD HRTF database (particularly Florian Denk) and the SLR83 database. The featured image is taken from Denk et al. (2018).


Demirsahin, I., Kjartansson, O., Gutkin, A., & Rivera, C. E. (2020). Open-source Multi-speaker Corpora of the English Accents in the British Isles. Available at http://www.openslr.org/83/

Denk, F., Ernst, S. M., Ewert, S. D., & Kollmeier, B. (2018). Adapting hearing devices to the individual ear acoustics: Database and target response correction functions for various device styles. Trends in Hearing, 22, 2331216518779313.

Moore, B. C. J., Alcántara, J. I., Stone, M. A., & Glasberg, B. R. (1999). Use of a loudness model for hearing aid fitting: II. Hearing aids with multi-channel compression. British Journal of Audiology, 33(3), 157-170.
