Core Software

The following software will shortly be available to download:

  • Scene generator
  • Hearing aid processor baseline
  • Hearing loss model
  • Speech intelligibility model

The code is a Python package with accompanying Unix shell scripts, and it can process either a single scene or the complete Clarity dataset in bulk.
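For illustration, bulk processing is essentially a loop over the per-scene step. In the sketch below, process_scene is a hypothetical stand-in for the package's per-scene entry point, and the metadata layout (one JSON description per scene) is an assumption, not a guarantee of the package.

    import json
    from pathlib import Path

    def process_scene(scene: dict) -> None:
        # Hypothetical stand-in for the package's per-scene pipeline.
        ...

    # Single scene: load one scene description and process it.
    scene = json.loads(Path("metadata/scenes/S06001.json").read_text())
    process_scene(scene)

    # Bulk: iterate over every scene description in the dataset.
    for scene_file in sorted(Path("metadata/scenes").glob("*.json")):
        process_scene(json.loads(scene_file.read_text()))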

A. Scene generator

The scene generator is fully open-source Python code for generating the hearing aid input signals for each scene (a simplified sketch of the mixing step follows the list below):

  • Inputs: target and interferer signals, BRIRs, RAVEN project (rpf) files, scene description JSON files
  • Outputs: Mixed target+interferer signals for each hearing aid channel, plus the direct path (simulating a measurement close to the eardrum). Reverberated pre-mixed signals can also optionally be generated.
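As a rough illustration of the mixing step only (not the generator's actual code), the sketch below convolves the target and interferer with their BRIRs and sums them per ear; signal loading, SNR scaling and the direct-path output are all omitted.

    import numpy as np
    from scipy.signal import fftconvolve

    def mix_scene(target, interferer, brir_target, brir_interferer):
        # target, interferer: mono source signals (1-D arrays).
        # brir_target, brir_interferer: (n_taps, 2) binaural room
        # impulse responses for the two source positions.
        ears = []
        for ch in range(2):  # left ear, right ear
            t = fftconvolve(target, brir_target[:, ch])
            i = fftconvolve(interferer, brir_interferer[:, ch])
            n = max(len(t), len(i))
            # Zero-pad to a common length, then mix target + interferer.
            ears.append(np.pad(t, (0, n - len(t))) + np.pad(i, (0, n - len(i))))
        return np.stack(ears, axis=1)  # (n_samples, 2) stereo mixture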

B. Baseline hearing aid processor

The baseline hearing aid processor is based on openMHA [1] with a Python wrapper. The Python code configures openMHA with a Camfit compressive fitting [2] for a specific listener’s audiogram.

This configuration of openMHA includes multiband dynamic compression, non-adaptive differential processing and a softclip plugin. The intention was to produce a basic hearing aid without the signal-processing features that are common in high-end hearing aids but tend to be implemented in proprietary forms and so cannot be replicated exactly.

The main inputs and outputs for the processor are as follows:

  • Inputs: Mixed scene signals for each hearing aid channel, a listener ID drawn from the scene–listener pairs identified in ‘scenes_listeners.json’, and that listener’s entry in the listener metadata file ‘listeners.json’ (see the lookup sketch after this list)
  • Outputs: The stereo hearing aid output signal, <scene>_<listener>_HA-output.wav
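A minimal sketch of that listener lookup follows, assuming a plausible metadata schema; the field names audiogram_cfs, audiogram_levels_l and audiogram_levels_r are assumptions, not guaranteed by the package.

    import json

    # Scene-to-listener assignments and per-listener metadata.
    with open("scenes_listeners.json") as f:
        scenes_listeners = json.load(f)   # e.g. {"S06001": ["L0064", ...]}
    with open("listeners.json") as f:
        listeners = json.load(f)          # keyed by listener ID

    scene_id = "S06001"                   # example scene ID
    listener_id = scenes_listeners[scene_id][0]
    listener = listeners[listener_id]

    # Assumed schema: audiogram frequencies plus per-ear thresholds (dB HL).
    cfs = listener["audiogram_cfs"]
    left_dbhl = listener["audiogram_levels_l"]
    right_dbhl = listener["audiogram_levels_r"]
    # These values would feed the Camfit fitting that configures openMHA.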

C. Hearing loss model

An open-source Python implementation of a hearing loss model developed by Brian Moore, Michael Stone and other members of the Auditory Perception Group, University of Cambridge (e.g., [3]). A sketch of its input/output contract follows the list below.

  • Inputs: A stereo wav audio signal, e.g., the output of the baseline hearing aid processor, and a pair of audiograms (left and right ears).
  • Outputs: The signal after simulating the hearing loss as specified by the set of audiograms (stereo wav file), <scene>_<listener>_HL-output.wav
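The model itself is far more elaborate than any one-line transform, but its input/output contract can be sketched as follows; apply_hearing_loss is a hypothetical placeholder for the real model, and the audiogram values are examples only.

    import numpy as np
    from scipy.io import wavfile

    def apply_hearing_loss(ear_signal, fs, cfs, dbhl):
        # Hypothetical placeholder: the real model simulates threshold
        # elevation, loudness recruitment and reduced frequency selectivity.
        return ear_signal  # identity stand-in

    cfs = [250, 500, 1000, 2000, 4000, 8000]  # example audiogram frequencies (Hz)
    left_dbhl = [20, 25, 35, 45, 60, 65]      # example left-ear thresholds (dB HL)
    right_dbhl = [15, 20, 30, 40, 55, 60]     # example right-ear thresholds (dB HL)

    fs, stereo = wavfile.read("S06001_L0064_HA-output.wav")
    out = np.stack([apply_hearing_loss(stereo[:, 0], fs, cfs, left_dbhl),
                    apply_hearing_loss(stereo[:, 1], fs, cfs, right_dbhl)], axis=1)
    wavfile.write("S06001_L0064_HL-output.wav", fs, out)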

D. Speech intelligibility model

A Python implementation of a binaural intelligibility model, the Modified Binaural Short-Time Objective Intelligibility measure (MBSTOI; [4]). This is an experimental, level-independent baseline tool. Note that MBSTOI requires the signals to be time-aligned, both broadband and within one-third octave bands (a broadband alignment sketch follows the list below).

  • Inputs: The HL-model output signals, the listener’s audiogram, the reference target signal (i.e., the pre-mixed target signal convolved with the BRIR with the reflections “turned off”, specified as ‘target_anechoic’), and the scene metadata
  • Outputs: A predicted intelligibility score
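Because MBSTOI assumes the processed and reference signals are aligned, some alignment step is needed before scoring. The sketch below shows a broadband cross-correlation alignment only; the finer alignment within one-third octave bands is omitted.

    import numpy as np

    def align(reference, processed):
        # Shift `processed` to maximise its cross-correlation with
        # `reference`, then trim or zero-pad to the reference length.
        corr = np.correlate(processed, reference, mode="full")
        lag = int(np.argmax(corr)) - (len(reference) - 1)
        if lag > 0:
            processed = processed[lag:]  # processed lags: drop leading samples
        else:
            processed = np.concatenate([np.zeros(-lag), processed])
        out = np.zeros(len(reference))
        n = min(len(processed), len(reference))
        out[:n] = processed[:n]
        return out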

References

  1. Kayser, H., Herzke, T., Maanen, P., Pavlovic, C. and Hohmann, V., 2019. Open Master Hearing Aid (openMHA): An integrated platform for hearing aid research. Journal of the Acoustical Society of America, 146(4), pp. 2879-2879.
  2. Moore, B. C. J., Alcantara, J. I., Stone, M. and Glasberg, B. R., 1999. Use of a loudness model for hearing aid fitting: II. Hearing aids with multi-channel compression. British Journal of Audiology, 33(3), pp. 157-170.
  3. Nejime, Y. and Moore, B. C., 1997. Simulation of the effect of threshold elevation and loudness recruitment combined with reduced frequency selectivity on the intelligibility of speech in noise. Journal of the Acoustical Society of America, 102(1), pp. 603-615.
  4. Andersen, A. H., de Haan, J. M., Tan, Z. H. and Jensen, J., 2018. Refinement and validation of the binaural short-time objective intelligibility measure for spatially diverse conditions. Speech Communication, 102, pp. 1-13.