Our challenge entrants are going to use machine learning to develop better hearing aid processing for listening to speech in noise (SPIN). We’ll provide a baseline hearing aid model for entrants to improve on. The figure below shows our baseline system; the yellow box on the left, labelled “Enhancement model”, is where the simulated hearing aid sits.

We decided to base our simulated hearing aid on the open Master Hearing Aid (openMHA), which is an open-source software platform for real-time audio signal processing. This was developed by the University of Oldenburg, HörTech gGmbH, Oldenburg, and the BatAndCat Corporation, USA. The original version was developed as one of the outcomes of the Cluster of Excellence Hearing4all project. The openMHA platform includes:
- a software development kit (C/C++ SDK) including an extensive signal processing library for algorithm development and a set of Matlab and Octave tools to support development and off-line testing
- real-time runtime environments for standard PC platforms and mobile ARM platforms
- a set of baseline reference algorithms that forms a complete hearing aid system (multi-band dynamic compression and amplification, directional microphones, binaural beamformers and coherence filters, single-channel noise reduction, feedback control).
We have written a Python wrapper around the core openMHA system so that it can be used easily within machine learning frameworks. We have also developed a generic hearing aid configuration and translated the Camfit compressive fitting, the prescription that takes a listener’s audiogram and determines the appropriate settings for the hearing aid, as described by Moore et al. (1999), into a form that openMHA can apply.
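To give a feel for what such a prescription does, here is a very rough sketch that maps audiometric thresholds to per-band insertion gains. It is not the Camfit rule (which derives level-dependent gains from a loudness model) and it is not the API of our Python wrapper; the band frequencies, the capped half-gain heuristic and the function name are illustrative assumptions only.

```python
# Illustrative audiogram-to-gain mapping. NOT the Camfit formula and NOT the
# wrapper's API; frequencies, the half-gain heuristic and names are assumptions.
import numpy as np

AUDIOGRAM_FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]  # typical audiometric frequencies

def toy_prescription_gains(thresholds_db_hl, fraction=0.5, max_gain_db=40.0):
    """Map pure-tone thresholds (dB HL) to per-band insertion gains (dB).

    A real prescription such as Camfit computes level-dependent gains from a
    loudness model; here we simply apply a capped fraction of the loss per band.
    """
    thresholds = np.asarray(thresholds_db_hl, dtype=float)
    gains = np.clip(fraction * thresholds, 0.0, max_gain_db)
    return dict(zip(AUDIOGRAM_FREQS_HZ, gains))

# Example: a moderate, sloping high-frequency loss
print(toy_prescription_gains([20, 25, 35, 45, 60, 65]))
```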
Some aspects of modern digital hearing aids that we’ve decided to simulate are:
- differential microphones, and
- a multiband compressor for dynamic compression (see the sketch below).
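To make the multiband compression step concrete, here is a minimal sketch of a wide dynamic range compressor running on a crude Butterworth filter bank. The band edges, threshold, ratio and attack/release times are placeholders chosen for illustration; they are not the settings of the openMHA baseline.

```python
# Minimal multiband dynamic range compressor sketch. Band edges, threshold,
# ratio and time constants are illustrative placeholders, not baseline settings.
import numpy as np
from scipy.signal import butter, sosfilt

def envelope(x, fs, attack_ms=5.0, release_ms=50.0):
    """One-pole attack/release follower on the rectified signal."""
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = att if v > level else rel
        level = coeff * level + (1.0 - coeff) * v
        env[i] = level
    return env

def compress_band(x, fs, threshold_db=-40.0, ratio=3.0):
    """Apply downward compression to levels above the threshold."""
    env_db = 20.0 * np.log10(envelope(x, fs) + 1e-9)
    over = np.maximum(env_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

def split_bands(x, fs, edges=(250, 1000, 4000)):
    """Crude analysis filter bank: low-pass, band-passes, high-pass."""
    bands = [sosfilt(butter(4, edges[0], btype="lowpass", fs=fs, output="sos"), x)]
    for f1, f2 in zip(edges[:-1], edges[1:]):
        bands.append(sosfilt(butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos"), x))
    bands.append(sosfilt(butter(4, edges[-1], btype="highpass", fs=fs, output="sos"), x))
    return bands

def multiband_compress(x, fs, edges=(250, 1000, 4000), threshold_db=-40.0, ratio=3.0):
    """Compress each band independently, then recombine."""
    return sum(compress_band(b, fs, threshold_db, ratio) for b in split_bands(x, fs, edges))

# Example: compress a quiet 440 Hz test tone sampled at 16 kHz
fs = 16000
t = np.arange(fs) / fs
y = multiband_compress(0.1 * np.sin(2 * np.pi * 440 * t), fs)
```

In a real hearing aid the per-band gains also depend on the prescription, and the filter bank, thresholds and time constants are chosen per band; the sketch only shows the overall structure.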
We have decided not to simulate the following features, because they tend to be implemented in proprietary forms that we cannot replicate exactly in our open-source algorithm:
- coordination of gross processing parameters across ears,
- binaural processing involving some degree of signal exchange between left and right devices,
- gain changes influenced by speech-to-noise ratio estimators,
- frequency shifting or scaling, and
- dual or adaptive time-constant wide dynamic range compression.
We are using the Oldenburg Hearing Device (OlHeaD) Head Related Transfer Function (HRTF) Database (Denk et al. 2018) to replicate the signals that would be received by the front and rear microphones of the hearing aid and also at the eardrums of the wearer.
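As a rough illustration of how such a database is used, the sketch below convolves a monaural source with a pair of head-related impulse responses (HRIRs) to produce a two-channel signal. The impulse responses here are random placeholders; in practice the measured HRIRs for the relevant device style, microphone position and source direction would be loaded from the OlHeaD-HRTF database.

```python
# Sketch of spatialising a source with head-related impulse responses (HRIRs).
# The impulse responses below are random placeholders; in practice separate HRIR
# sets would be used for the front microphones, rear microphones and eardrums.
import numpy as np
from scipy.signal import fftconvolve

def apply_hrir(source, hrir_left, hrir_right):
    """Convolve a mono source with a left/right HRIR pair to give a two-channel signal."""
    left = fftconvolve(source, hrir_left, mode="full")
    right = fftconvolve(source, hrir_right, mode="full")
    return np.stack([left, right])

rng = np.random.default_rng(0)
decay = np.exp(-np.arange(256) / 32.0)
hrir_left = rng.standard_normal(256) * decay   # placeholder impulse response
hrir_right = np.roll(hrir_left, 8)             # crude interaural time difference

speech = rng.standard_normal(16000)            # stand-in for 1 s of speech at 16 kHz
front_mic_signals = apply_hrir(speech, hrir_left, hrir_right)
```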
Audio examples of hearing aid processing
Here is an example of speech in noise processed by the simulated hearing aid for a moderate level of hearing loss. We can hear that the shape of the frequency spectrum has been modified to suit the listener’s specific pattern of hearing loss.
Information about our hearing loss model can be found here.
The target speech comes from our new 40-speaker British English speech database, while the speech interferer comes from the SLR83 database (Demirsahin et al., 2020), which comprises recordings of male and female speakers of English from various parts of the UK and Ireland.
Acknowledgements
We are grateful to the developers of the openMHA platform for the use of their software. Special thanks are due to Hendrik Kayser and Tobias Herzke. We are also grateful to Brian Moore, Michael Stone and colleagues for the Camfit compressive prescription, and to the people involved in the preparation of the OlHeaD HRTF (particularly Florian Denk) and SLR83 databases. The featured image is taken from Denk et al. (2018).
References
Demirsahin, I., Kjartansson, O., Gutkin, A., & Rivera, C. E. (2020). Open-source Multi-speaker Corpora of the English Accents in the British Isles. Available at http://www.openslr.org/83/
Denk, F., Ernst, S. M., Ewert, S. D., & Kollmeier, B. (2018). Adapting hearing devices to the individual ear acoustics: Database and target response correction functions for various device styles. Trends in Hearing, 22, 2331216518779313.
Moore, B. C. J., Alcántara, J. I., Stone, M. A., & Glasberg, B. R. (1999). Use of a loudness model for hearing aid fitting: II. Hearing aids with multi-channel compression. British Journal of Audiology, 33(3), 157-170.