
What information can I use?

Training and development

Teams can expand the provided training data using augmentation, or by supplementing it with data from other publicly available sources, excluding any dataset that may appear in the evaluation. In particular, this prohibits training on any previous Clarity challenge evaluation data, on speech from the Crowdsourced high-quality UK and Ireland English Dialect speech dataset, or on music from the MTG-Jamendo Dataset. Any additional data used must be clearly described in the technical report. Teams can also use publicly available pre-trained models, provided they were not trained on the prohibited data listed above.

Any of the CEC3 metadata can be used during training and development, but during evaluation the system will only have access to the hearing aid input signals and the listener audiograms.

Teams that augment or extend the training dataset must also submit a version of the system trained using only the standard dataset.


The only data that can be used during evaluation are:

  • The 6-channel hearing aid input signals.
  • The listener characterisation (pure-tone air-conduction audiograms and/or digit triple test results).
  • The provided clean audio examples for the target talker (these will not be the same as any of the target utterances).
  • The head-rotation signal (if used, a version of the system that does not use it should also be prepared for comparison).
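As a minimal sketch, the allowed evaluation-time inputs above could be gathered into a single container like the following. All field names, shapes, and types here are illustrative assumptions, not the challenge's actual API or data format:

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class EvaluationInputs:
    """Hypothetical container for the only data available at evaluation time.

    Field names and shapes are assumptions for illustration only.
    """
    ha_signals: np.ndarray              # (6, n_samples) hearing aid input channels
    audiogram_left: np.ndarray          # pure-tone air-conduction thresholds, dB HL
    audiogram_right: np.ndarray         # one threshold per test frequency
    dtt_result: Optional[float]         # digit triple test score, if provided
    target_clean_examples: List[np.ndarray]  # clean speech from the target talker
    head_rotation: Optional[np.ndarray] # optional; a no-rotation variant is also required
```

A system restricted to consuming only the fields of such a container cannot accidentally depend on metadata that is unavailable at evaluation time.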

Computational restrictions

  • Systems must be causal: the output of the hearing aid at time t must not use information from input samples more than 5 ms into the future (i.e., no samples beyond t + 5 ms).
  • There is no limit on computational requirements, but memory and processing requirements should be clearly stated in the technical report.
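One way to sanity-check the causality rule is an empirical probe: perturb the input only at samples more than 5 ms after time t and verify the output up to t is unchanged. The sketch below assumes a 44.1 kHz sample rate (use the actual challenge rate) and treats the system as any callable mapping an input array to an output array; it is a development aid, not part of the official evaluation:

```python
import numpy as np

SAMPLE_RATE = 44100                        # Hz; assumed rate, not specified here
MAX_LOOKAHEAD = int(0.005 * SAMPLE_RATE)   # 5 ms of allowed future samples

def check_causality(system, n=22050, t=10000, rng=np.random.default_rng(0)):
    """Probe the 5 ms causality rule empirically.

    Perturbing the input strictly beyond t + 5 ms must not change the
    output at or before time t. Returns True if the system passes.
    """
    x = rng.standard_normal(n)
    y_ref = system(x)
    x2 = x.copy()
    x2[t + MAX_LOOKAHEAD + 1:] += 1.0      # perturb only disallowed future samples
    y_new = system(x2)
    return np.allclose(y_ref[:t + 1], y_new[:t + 1])
```

For example, a short moving-average filter passes this check, while a system that shifts the signal 300 samples into the past (far more than the 220-sample allowance at 44.1 kHz) fails it. A single probe is not a proof of causality, only a quick way to catch gross violations.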

Please see this blog post for further explanation of these last two rules about latency and computation time.