The Challenge Rules

What information can I use?

Training and development

Teams should use the signals and listener responses provided in the CPC3.train.json file.

In addition, teams can use any publicly available intelligibility data except datasets that have been provided for previous Clarity prediction challenges (CPC1, CPC2). Teams can also use pre-existing intelligibility models that might help improve their training. The FAQ gives links to some models that might be used for this.

Evaluation

The only data that can be used by the prediction model(s) during evaluation are described below.

For non-intrusive methods:

  • The output of the hearing aid processor/system.
  • The listener's hearing impairment severity as indicated by the metadata.

Additionally, for intrusive methods:

  • The target reference signal, i.e. the target convolved with the anechoic BRIR (channel 1) for each ear (‘target_anechoic’).
  • The target transcript, i.e., the sentence spoken in the target signal. (Note: this could be recovered by listening to the reference signal, but it is provided for convenience.)

Baseline models and computational restrictions

  • Teams may choose to use all or some of the provided baseline models.
  • There is no limit on computational cost.
  • Models can be non-causal.

What sort of model do I create?

  • Your model should report the speech intelligibility of the whole sentence for each audio sample/listener combination, i.e. a single score that represents a prediction of the proportion of words that would be recognised correctly.
  • The model architecture is entirely up to you, e.g. you can create a model that attempts to recognise individual words and then reduces this down to a proportion, or you can estimate an intelligibility score directly from the audio. Models may have explicit hearing loss model stages or be trained directly to map signals and audiograms to predictions.
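As an illustration of the first approach described above, the sketch below reduces per-word recognition results to the single sentence-level score the challenge expects. The unordered word-matching rule here is an assumption for illustration only, not the official scoring procedure:

```python
# Hypothetical sketch: converting word-level recognition into a single
# sentence-level intelligibility score (proportion of words correct).
# The unigram-matching rule is an assumption, not the official scoring.

def words_correct(reference: str, hypothesis: str) -> float:
    """Return the proportion of reference words matched in the hypothesis."""
    ref_words = reference.lower().split()
    hyp_words = hypothesis.lower().split()
    if not ref_words:
        return 0.0
    correct = 0
    remaining = hyp_words.copy()
    for w in ref_words:
        if w in remaining:
            correct += 1
            remaining.remove(w)  # each hypothesis word can match only once
    return correct / len(ref_words)

print(words_correct("the cat sat on the mat", "the cat sat on a mat"))  # 5/6 ≈ 0.833
```

A model taking the second approach would instead regress this score directly from the (possibly hearing-loss-processed) audio, with no intermediate word recognition step.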

Submitting multiple entries

If you wish to submit multiple entries:

  • Your systems must have significant differences in their approach.
  • You must contact the organisers to discuss your plans.
  • If accepted you will be issued with multiple Team IDs to distinguish your entries.
  • In your documentation, you must make it clear how the submissions differ.

Evaluation of systems

  • Entries will be ranked according to their performance in predicting measured intelligibility scores.
  • The system score will be the root-mean-square error (RMSE) between the predicted and measured intelligibility scores across the complete test set.
  • A single ranking will be used for both intrusive and non-intrusive methods, but it will be made clear which methods are intrusive.
  • Systems will only be considered if the technical report has been submitted and the system is judged to be compliant with the challenge rules.
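The ranking metric above can be sketched in a few lines. This is a minimal illustration of RMSE over paired score lists, assuming scores are proportions in [0, 1]; the organisers' actual evaluation code may differ in detail:

```python
# Minimal RMSE sketch: error between predicted and measured intelligibility
# scores over the test set. Input format is an assumption for illustration.

import math

def rmse(predicted, measured):
    """Root-mean-square error between two equal-length score sequences."""
    if len(predicted) != len(measured):
        raise ValueError("score lists must have equal length")
    squared_error = sum((p - m) ** 2 for p, m in zip(predicted, measured))
    return math.sqrt(squared_error / len(predicted))

print(rmse([0.8, 0.5, 0.9], [0.7, 0.6, 1.0]))  # 0.1
```

Lower RMSE means better predictions, so entries are ranked in ascending order of this score.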

Teams

  • Teams must have registered and nominated a contact person.
  • Teams can be from one or more institutions.
  • The organisers, and any person forming a team with one or more organisers, may enter the challenge themselves but will not be eligible to win the cash prizes.

Transparency

  • Teams must provide a technical document of up to 2 pages describing the system/model and any external data and pre-existing tools, software and models used.
  • We will publish all technical documents on the challenge website (anonymous or otherwise).
  • Teams are encouraged – but not required – to provide us with access to the system(s)/model(s) and to make their code open source.
  • Anonymous entries are allowed but will not be eligible for cash prizes.
  • If a group of people submits multiple entries, they cannot win more than one prize in a given category.
  • All teams will be referred to using anonymous codenames if the rank ordering is published before the final results are announced.
  • Teams are strongly encouraged to submit their report for presentation at the Clarity-2025 Interspeech Satellite Workshop.

Intellectual property

The following terms apply to participation in this machine learning challenge (“Challenge”). The entrants' “Submission” will consist of a set of intelligibility predictions and an accompanying technical report. The Challenge is organised by the “Challenge Organiser”.

Entrants retain ownership of all intellectual and industrial property rights (including moral rights) in and to Submissions.

As a condition of submission, Entrant grants the Challenge Organiser, its subsidiaries, agents and partner companies, a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to use, reproduce, adapt, modify, publish, distribute, publicly perform, create a derivative work from, and publicly display the Submission.

Entrants provide Submissions on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.