
The 3rd Clarity Enhancement Challenge

CEC3 Tasks 1 and 2 are live! 🔥🔥🔥

Tasks 1 and 2 are now live. The data can be obtained from the download page, and full task descriptions are available on this site. Task 3 data will be available from 1st May.

The third Clarity Enhancement Challenge (CEC3) is about improving the performance of hearing aids for speech in noise. According to the World Health Organization, 430 million people worldwide require rehabilitation to address their hearing loss, and by 2050 one in ten people will have disabling hearing loss. Yet even in developed countries, only 40% of people who could benefit from hearing aids have them and use them enough. A major reason for this low uptake is the perception that hearing aids perform poorly.

Fig 1. Tasks 1 and 2 use a scenario with one talker, a listener with hearing loss who is wearing hearing aids, a domestic environment, and common sources of unwanted sound.

Overview of challenge

The challenge provides participants with hearing aid input signals representing scenes that contain a target speaker. Participants are asked to process these signals to produce hearing aid output signals that are intelligible to hearing-impaired listeners. Systems are evaluated both with standard objective speech intelligibility metrics and with listening tests involving hearing-impaired listeners.
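To give a feel for the objective side of the evaluation, here is a minimal sketch that scores a processed signal against its clean reference using STOI, a widely used generic intelligibility metric available in the `pystoi` package. This is only an illustration: the challenge uses its own evaluation pipeline with hearing-loss-aware metrics, and all signals here are placeholders.

```python
# Illustrative only: generic STOI scoring, NOT the challenge's official metric.
import numpy as np
from pystoi import stoi

fs = 16000  # sample rate in Hz (assumed for this sketch)
reference = np.random.randn(fs * 3)                      # stand-in for clean target speech
processed = reference + 0.1 * np.random.randn(fs * 3)    # stand-in for a hearing aid output

# STOI returns a score roughly in [0, 1]; higher indicates better predicted intelligibility.
score = stoi(reference, processed, fs, extended=False)
print(f"STOI: {score:.3f}")
```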

The challenge comprises three enhancement tasks that add realism to the fully simulated scenes used in the 2nd Clarity Enhancement Challenge (CEC2). Participants are welcome to submit to one or more tasks. We are particularly interested in systems that can handle all three cases with little or no redesign or retraining. Further details of the tasks are presented below.

Task 1: Real ambisonic room impulse responses 🔥🔥🔥

In the previous CEC1 and CEC2 challenges, hearing aid input signals were simulated by convolving pre-recorded audio sources with simulated room impulse responses, and these simulated responses were used to build both the training and evaluation data. In this first task, we are rerunning the CEC2 scenario, but with a new evaluation set that uses real impulse responses measured with an ambisonic microphone array in a real room. We are interested in how well systems trained on simulated data generalise to this new evaluation set.
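For readers new to this kind of data pipeline, the sketch below shows the core operation involved: convolving anechoic speech with a room impulse response (RIR) to place a talker in a room. The file names and single-channel setup are illustrative assumptions, not the challenge's data format.

```python
# A minimal sketch of simulating a reverberant signal from anechoic speech
# and an RIR. File names and the mono setup are hypothetical.
import soundfile as sf
from scipy.signal import fftconvolve

speech, fs = sf.read("target_anechoic.wav")   # hypothetical anechoic target recording
rir, fs_rir = sf.read("room_response.wav")    # hypothetical simulated or measured RIR
assert fs == fs_rir, "speech and RIR must share a sample rate"

# Convolution applies the room acoustics to the dry speech.
reverberant = fftconvolve(speech, rir)
sf.write("target_reverberant.wav", reverberant, fs)
```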

Task 1 Details...

Task 2: Real hearing aid signals 🔥🔥🔥

In all previous Clarity challenges, hearing aid input signals were simulated using room impulse responses and head-related transfer functions. In this task, we provide participants with real microphone signals: we have recorded scenes using the microphones of a behind-the-ear hearing aid worn by a real listener attending to a target speaker. The scenario closely follows CEC2 (the same noise interferers, etc.), but the data is more challenging because it includes real room acoustics, real head movements and real microphone characteristics. A matched training set is provided, and we are interested in how well systems cope with this inherently more complex data. Ground-truth head motion data is also provided, and we are interested in whether systems can exploit this information.

Task 2 Details...

🔜 Task 3: Real dynamic backgrounds (launching 1st May)

Fig 2. Task 3 uses a scenario with dynamic background noise, including recordings made at a railway station (with trains!).

In all previous Clarity challenges, the interfering signals have been static and carefully controlled. In this task, we will use naturally occurring, dynamic noise backgrounds. We are collecting a dataset of 64-channel ambisonic audio recordings from settings that hearing-impaired listeners find challenging. These include train stations, roadsides and large social gatherings (i.e., the 'cocktail party' scenario). Using these recordings and measured impulse responses, we will create a dataset of hearing aid input signals featuring target sentences in dynamic background noise.
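As an illustration of the mixing step, the sketch below scales a background recording so that a target sentence sits at a chosen signal-to-noise ratio (SNR). The challenge's actual mixing procedure and SNR ranges are defined in the task description; this only shows the standard RMS-based scaling, with placeholder signals.

```python
# A minimal sketch of SNR-controlled mixing; all signals are placeholders.
import numpy as np

def mix_at_snr(target: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the target-to-noise RMS ratio equals snr_db, then mix."""
    noise = noise[: len(target)]  # assumes the noise is at least as long as the target
    target_rms = np.sqrt(np.mean(target ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    gain = target_rms / (noise_rms * 10 ** (snr_db / 20))
    return target + gain * noise

fs = 16000
target = np.random.randn(fs * 2)      # stand-in for a target sentence
background = np.random.randn(fs * 2)  # stand-in for a dynamic background recording
mixture = mix_at_snr(target, background, snr_db=0.0)
```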

Task 3 Details...

All tasks

For all tasks, we will be providing standard training, development and evaluation datasets. The training and development datasets will be released at the start of the challenge. The evaluation dataset will be released shortly before the submission deadline without reference signals. Participants will then be asked to submit their processed signals for remote evaluation.

Note: if you are interested in participating, please sign up to the Clarity Challenge's Google group so that we can keep you posted on the latest developments.