One in six people in the UK has some level of hearing loss, and this number is certain to increase as the population ages. Yet only 40% of people who could benefit from hearing aids have them, and most people who have the devices don’t use them regularly. A major reason for this low uptake and use is the perception that hearing aids perform poorly.

Speech in noise is a critical problem for hearing aids, even the most sophisticated devices. A wearer may struggle to converse with family or friends while the television is on, or to hear public announcements at the train station. Such difficulties can lead to social isolation and thereby reduce emotional and physical well-being. Consequently, how hearing aids process speech in noise is crucial.

Our approach is inspired by the latest developments in automatic speech recognition and speech synthesis, two areas in which public competitions have led to rapid advancements in technology. We want to encourage more researchers to consider how their skills and technology could benefit the millions of people with hearing impairments.

Round one: the challenges

The challenges of round one feature:

  • A simulated living room;
  • One source of speech;
  • A range of reverberation times (low to moderate);
  • Real or simulated domestic noise backgrounds, e.g., noise from a washing machine or competing speech.

Transcribed utterances are provided for the supplied audio signals. A generative tool will also be supplied so that entrants can create a large database of audio signals for training machine learning models.
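The internals of the generative tool aren't described here, but the core step of simulating a noisy, reverberant living-room scene can be sketched as follows. This is a minimal illustration, not the challenge's actual tool: the `mix_scene` function, its signature, and the use of a simple impulse response are all assumptions for the sake of the example.

```python
import numpy as np

def mix_scene(speech, rir, noise, snr_db):
    """Simulate a domestic listening scene: convolve dry speech with a
    room impulse response (RIR), then add background noise scaled to a
    target signal-to-noise ratio. All arrays are 1-D float signals at
    the same sample rate."""
    # Reverberation: convolve with the RIR, truncated to the speech length.
    reverberant = np.convolve(speech, rir)[: len(speech)]
    noise = noise[: len(reverberant)]
    # Scale the noise so the mixture has the requested SNR in dB.
    speech_power = np.mean(reverberant ** 2)
    noise_power = np.mean(noise ** 2)
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return reverberant + gain * noise
```

Varying the RIR (reverberation time), the noise recording (washing machine, competing talker), and the SNR then yields an arbitrarily large training set from a modest amount of source material.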

The round comprises two challenges:

  • Enhancement: hearing aid signal processing;
  • Prediction: perception models of speech intelligibility, incorporating a model of hearing loss.

As COVID-19 restricted our ability to test people's responses to audio, we launched the Enhancement Challenge in January 2021, with the Prediction Challenge opening later in 2021.

Entrants to the Enhancement Challenge were required to provide the following:

  • Processed signals with associated information about the signals (speech material, noise sources, reverberation time, etc.);
  • System information;
  • Documentation.

Entrants to the Prediction Challenge will be required to provide:

  • Intelligibility scores and associated information about the signals and/or
  • Signals processed by the hearing loss model with associated information;
  • System information;
  • Documentation.

Entrants are also encouraged to provide their models.

Round one: evaluation

Entries to the Enhancement Challenge are evaluated as follows:

  • Initially, entries are ranked on the basis of objective speech intelligibility assessments.
  • Subsequently, a subset of the entries is ranked on the basis of real speech intelligibility scores from our listener panel.

Entries to the Prediction Challenge will be evaluated according to how well they predict real intelligibility scores from our panel of listeners.