One in six people in the UK has a hearing impairment, and this number is certain to increase as the population ages. Yet only 40% of people who could benefit from hearing aids have them, and most people who have the devices don’t use them often enough. A major reason for this low uptake and use is the perception that hearing aids perform poorly.
We are organising a series of machine learning challenges to advance hearing-aid signal processing and the modelling of speech-in-noise perception. To facilitate this we will generate open-access datasets, models and infrastructure, including:
- open-source tools for generating realistic training materials for different listening scenarios;
- baseline models of hearing impairment;
- baseline models of hearing-device speech processing;
- baseline models of speech perception;
- databases of speech-in-noise perception for hearing-impaired listeners.
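As a rough illustration of what the scenario-generation tools involve, the sketch below simulates a reverberant room by convolving dry speech with a room impulse response and then mixes in background noise at a target signal-to-noise ratio. This is a minimal NumPy example under our own assumptions; the function names and parameters are illustrative, not the challenge's actual API.

```python
import numpy as np


def simulate_room(speech, rir):
    """Apply room acoustics by convolving the dry signal with an impulse response."""
    return np.convolve(speech, rir)[: len(speech)]


def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio is `snr_db`, then mix."""
    speech_power = np.mean(speech**2)
    noise_power = np.mean(noise**2)
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise


# Toy stand-ins: random "speech", a decaying impulse response, white noise.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
rir = np.exp(-np.arange(800) / 200.0) * rng.standard_normal(800)
noise = rng.standard_normal(16000)

reverberant = simulate_room(speech, rir)
mixture = mix_at_snr(reverberant, noise, snr_db=20.0)
```

In a real pipeline the impulse response would come from measured or simulated room acoustics and the noise from recorded interferers, but the mixing arithmetic is the same.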
Over five years we will deliver three challenge rounds. Round one will focus on a living-room scenario: a person speaking in a moderately reverberant room with minimal background noise.
We expect to open a beta version of round one in November 2020, followed by a full launch in January-February 2021, with a closing date in June 2021 and results in October 2021.
Interested in getting involved? Please sign up!