Timings and session details are provided below. All times are UK local time (i.e., UTC).
| Time | Session |
| --- | --- |
| 9:10 | The Clarity CEC2 Overview [YouTube] |
| 9:40 | Challenge Papers: Session I |
| 11:30 | Invited Talk: Karolina Smeds (ORCA Europe) and Stefan Petrausch (WS Audiology) [YouTube] |
| 12:00 | CPC2 and CEC3 discussion + Future Directions |
| 13:00 | Challenge Papers: Session II |
| 14:20 | Prizes and conclusions [YouTube] |
Real-life listening through hearing aids (Karolina Smeds and Stefan Petrausch)
The presentation will introduce “real-life listening” and describe one listening situation that hearing-aid users often describe as difficult: the group conversation. Some of the special characteristics of group conversations will be presented, and ways to evaluate success in them will be discussed. The second part of the presentation will turn to the typical challenges that machine-learning applications face in this “real-life listening” context. Requirements on robustness, sound quality, latency, and computational complexity impose boundary conditions that must be considered from the very start of development. It will be shown how current applications, such as acoustic classification and own-voice detection, and current research activities deal with these restrictions.
Karolina Smeds has a background in Engineering Physics and Audiology. Her PhD work focused on loudness aspects of hearing-aid fitting, combining clinical and theoretical perspectives on the topic. For 15 years, Karolina led an external research group, ORCA Europe in Stockholm, Sweden, fully funded by the Danish hearing-aid manufacturer Widex A/S, now WS Audiology, where she still works. The group’s recent research has focused on “real-life hearing”: investigating people’s auditory reality, and developing and evaluating outcome measures of hearing-aid fitting success, both in the laboratory and in the field, that are indicative of real-life performance and preference. Recently, the group has moved into health psychology and the study of spoken conversations. At the University of Nottingham, Karolina continues to work on auditory reality, outcome measures that produce ecologically valid results, and the analysis of spoken conversations, primarily in collaboration with the Scottish Section of the Hearing Sciences group.
Stefan Petrausch studied electrical engineering at the University of Erlangen-Nuremberg, where he received his diploma degree (Dipl.-Ing.) and his doctoral degree (Dr.-Ing.) in 2002 and 2007, respectively. His PhD thesis, "Block-Based Physical Modeling", was in the field of musical signal processing and dealt with distributed methods for the numerical simulation of partial differential equations. Since 2007 he has been a member of the signal processing group at WS Audiology, where he currently leads the team and its activities in research-centred signal processing development and prototype applications. In this role he has worked on almost all signal-processing aspects of digital hearing instruments, including directional processing, adaptive feedback cancellation, and binaural signal processing, with an increasing focus on machine-learning solutions in recent years.
Challenge Papers: Session I
- Sheffield System for the Second Clarity Enhancement Challenge [Report] [YouTube]
  (University of Sheffield, Department of Computer Science, Sheffield, UK)
- Informed Target Speaker Extraction Using TCN and TCN-Conformer Architectures for the 2nd Clarity Enhancement Challenge [Report] [YouTube]
  (1 Department of Medical Physics and Acoustics and Cluster of Excellence Hearing4all, University of Oldenburg, Germany; 2 Fraunhofer Institute for Digital Media Technology IDMT, Oldenburg Branch for Hearing, Speech and Audio Technology HSA, Germany)
- CITEAR: A Two-Stage End-to-End System for Noisy-Reverberant Hearing-Aid Processing [Report] [YouTube]
  (1 Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; 2 Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan; 3 Graduate Institute of Electronics Engineering, National Taiwan University, Taipei, Taiwan; 4 Department of Computer Science and Information Engineering, National Cheng-Kung University, Tainan, Taiwan)
- LNSFEars: Low-latency Neural Spectrospatial Filtering and Equalizer for hearing aids [Report] [YouTube]
  (1 Key Laboratory of Modern Acoustics, Institute of Acoustics, Nanjing University, Nanjing 210093, China; 2 NJU-Horizon Intelligent Audio Lab, Horizon Robotics, Nanjing 210038, China)
Challenge Papers: Session II
- LLMSE: Low-latency Multi-channel Speech Enhancement for Hearing Aids [Report] [YouTube]
- DRC-NET for The 2nd Clarity Enhancement Challenge [Report] [YouTube]
  (College of Computer Science, Inner Mongolia University, China)
- Multi-channel Target Speaker Extraction with Refinement: The WAVLAB Submission to the Second Clarity Enhancement Challenge [Report] [YouTube]
  (1 Università Politecnica delle Marche, Italy; 2 Carnegie Mellon University, USA; 3 Tokyo Metropolitan University, Japan; 4 Pulse Audition, France)