recipes.cad1.task2.baseline.evaluate module¶
Evaluate the enhanced signals using the HAAQI metric.
- recipes.cad1.task2.baseline.evaluate.evaluate_scene(ref_signal: ndarray, enh_signal: ndarray, sample_rate: int, scene_id: str, current_scene: dict, listener: Listener, car_scene_acoustic: CarSceneAcoustics, hrtf: dict, config: DictConfig) tuple[float, float] [source]¶
Evaluate a single scene and return HAAQI scores for the left and right ears.
- Parameters:
ref_signal (np.ndarray) – A numpy array of shape (2, n_samples) containing the reference signal.
enh_signal (np.ndarray) – A numpy array of shape (2, n_samples) containing the enhanced signal.
sample_rate (int) – The sampling frequency of the reference and enhanced signals.
scene_id (str) – A string identifier for the scene being evaluated.
current_scene (dict) – A dictionary containing information about the scene being evaluated, including the song ID, the listener ID, the car noise type, and the split.
listener (Listener) – The Listener object (audiogram and metadata) for whom the scene is evaluated.
car_scene_acoustic (CarSceneAcoustics) – An instance of the CarSceneAcoustics class, which is used to generate car noise and add binaural room impulse responses (BRIRs) to the enhanced signal.
hrtf (dict) – A dictionary containing the head-related transfer functions (HRTFs) for the listener being evaluated. This includes the left and right HRTFs for the car and the anechoic room.
config (DictConfig) – A dictionary-like object containing various configuration parameters for the evaluation. This includes the path to the enhanced signal folder, the path to the music directory, and a flag indicating whether to set a random seed.
- Returns:
A tuple containing HAAQI scores for left and right ears.
- Return type:
Tuple[float, float]
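The per-ear scoring pattern described above can be sketched as follows. This is an illustrative standalone sketch, not the actual implementation: `placeholder_metric` is a simple stand-in for the real HAAQI metric, and `evaluate_scene_sketch` only mirrors the shape handling (stereo arrays of shape (2, n_samples), one score per ear); the genuine function also applies car noise, BRIRs, and HRTFs before scoring.

```python
import numpy as np


def placeholder_metric(ref: np.ndarray, enh: np.ndarray, sample_rate: int) -> float:
    """Stand-in for HAAQI: normalised cross-correlation of the two signals.

    The real metric models the listener's hearing loss; this is only a
    shape-compatible placeholder for illustration.
    """
    ref = ref / (np.max(np.abs(ref)) + 1e-12)
    enh = enh / (np.max(np.abs(enh)) + 1e-12)
    n = min(len(ref), len(enh))
    return float(np.corrcoef(ref[:n], enh[:n])[0, 1])


def evaluate_scene_sketch(
    ref_signal: np.ndarray, enh_signal: np.ndarray, sample_rate: int
) -> tuple[float, float]:
    """Score the left (row 0) and right (row 1) channels independently."""
    assert ref_signal.shape[0] == 2 and enh_signal.shape[0] == 2
    left = placeholder_metric(ref_signal[0], enh_signal[0], sample_rate)
    right = placeholder_metric(ref_signal[1], enh_signal[1], sample_rate)
    return left, right


# Identical reference and "enhanced" signals should score perfectly.
rng = np.random.default_rng(0)
ref = rng.standard_normal((2, 44100))
left_score, right_score = evaluate_scene_sketch(ref, ref.copy(), 44100)
```

In the real recipe, the returned pair feeds into a per-scene results table; the sketch only demonstrates the (2, n_samples) channel convention and the two-score return contract.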