recipes.cpc2.baseline.evaluate module

Evaluate the predictions against the ground truth correctness values

recipes.cpc2.baseline.evaluate.compute_scores(predictions, labels) → dict

Compute the scores for the predictions
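
A minimal, self-contained sketch of what a compute_scores-style helper could look like, bundling the four metrics documented below into one dictionary. The key names ("RMSE", "Std", "NCC", "KT") and the exact formulas are assumptions for illustration, not necessarily the recipe's own:

    import numpy as np
    from scipy.stats import kendalltau

    def compute_scores_sketch(predictions: np.ndarray, labels: np.ndarray) -> dict:
        """Bundle the evaluation metrics into one dictionary (hypothetical keys)."""
        error = predictions - labels
        return {
            "RMSE": float(np.sqrt(np.mean(error ** 2))),            # root mean squared error
            "Std": float(np.std(error) / np.sqrt(len(error))),      # standard error of the error
            "NCC": float(np.corrcoef(predictions, labels)[0, 1]),   # normalized cross correlation
            "KT": float(kendalltau(predictions, labels)[0]),        # Kendall's tau
        }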

recipes.cpc2.baseline.evaluate.evaluate(cfg: DictConfig) → None

Evaluate the predictions against the ground truth correctness values
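
A hedged sketch of how a Hydra-driven evaluate() entry point might read prediction and label files and report a score. The config fields (cfg.path.predictions_file, cfg.path.labels_file), the CSV column names, and the join key are hypothetical placeholders, not the recipe's actual schema:

    import hydra
    import numpy as np
    import pandas as pd
    from omegaconf import DictConfig

    @hydra.main(config_path=".", config_name="config", version_base=None)
    def evaluate_sketch(cfg: DictConfig) -> None:
        """Load predictions and ground-truth correctness, then report RMSE."""
        predictions = pd.read_csv(cfg.path.predictions_file)   # hypothetical config field
        labels = pd.read_csv(cfg.path.labels_file)              # hypothetical config field
        merged = predictions.merge(labels, on="signal")          # assumed join key
        x = merged["predicted"].to_numpy(dtype=float)            # assumed column name
        y = merged["correctness"].to_numpy(dtype=float)          # assumed column name
        print(f"RMSE: {np.sqrt(np.mean((x - y) ** 2)):.4f}")

    if __name__ == "__main__":
        evaluate_sketch()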

recipes.cpc2.baseline.evaluate.kt_score(x: ndarray, y: ndarray) → float

Compute the Kendall’s tau correlation between two arrays
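
One way to compute Kendall's tau, using scipy.stats.kendalltau; a sketch consistent with the docstring rather than a copy of the recipe's implementation:

    import numpy as np
    from scipy.stats import kendalltau

    def kt_score_sketch(x: np.ndarray, y: np.ndarray) -> float:
        """Kendall's tau rank correlation between x and y."""
        return float(kendalltau(x, y)[0])  # element 0 is the tau statistic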

recipes.cpc2.baseline.evaluate.ncc_score(x: ndarray, y: ndarray) → float

Compute the normalized cross correlation between two arrays
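
Normalized cross correlation of two 1-D arrays can be computed as the Pearson correlation of the mean-centred, unit-variance signals; the np.corrcoef route below is one plausible sketch and may differ in detail from the recipe's own code:

    import numpy as np

    def ncc_score_sketch(x: np.ndarray, y: np.ndarray) -> float:
        """Normalized cross correlation between x and y."""
        return float(np.corrcoef(x, y)[0, 1])  # off-diagonal entry of the 2x2 correlation matrix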

recipes.cpc2.baseline.evaluate.rmse_score(x: ndarray, y: ndarray) → float

Compute the root mean squared error between two arrays
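
A minimal sketch of the root mean squared error named in the docstring:

    import numpy as np

    def rmse_score_sketch(x: np.ndarray, y: np.ndarray) -> float:
        """RMSE = sqrt(mean((x - y)**2))."""
        return float(np.sqrt(np.mean((x - y) ** 2)))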

recipes.cpc2.baseline.evaluate.std_err(x: ndarray, y: ndarray) → float

Compute the standard error between two arrays
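
A sketch of a standard-error measure between two arrays, assumed here to be the standard deviation of the prediction error divided by sqrt(N); the precise definition used by the recipe may differ:

    import numpy as np

    def std_err_sketch(x: np.ndarray, y: np.ndarray) -> float:
        """Standard error of the difference between x and y (assumed definition)."""
        diff = x - y
        return float(np.std(diff) / np.sqrt(len(diff)))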