clarity.engine package

Submodules

clarity.engine.losses module

class clarity.engine.losses.SISNRLoss(*args, **kwargs)[source]

Bases: Module

cal_sisnr(x, s, eps=1e-08)[source]

Compute the scale-invariant SNR (SI-SNR).

Parameters:
  • x – separated signal, N x S tensor

  • s – reference signal, N x S tensor

Returns:

sisnr – N tensor
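For reference, the SI-SNR computation can be sketched in numpy. This is an illustrative re-derivation of the standard formula, not the project's torch implementation, and `cal_sisnr_sketch` is a hypothetical name:

```python
import numpy as np

def cal_sisnr_sketch(x, s, eps=1e-8):
    # Sketch of scale-invariant SNR (assumed standard definition).
    # x: separated signal, s: reference signal, both N x S arrays.
    # Returns one SI-SNR value in dB per row.
    x = x - x.mean(axis=-1, keepdims=True)
    s = s - s.mean(axis=-1, keepdims=True)
    # Project x onto s: the component of x aligned with the reference.
    dot = np.sum(x * s, axis=-1, keepdims=True)
    s_energy = np.sum(s * s, axis=-1, keepdims=True) + eps
    s_target = (dot / s_energy) * s
    e_noise = x - s_target
    return 10.0 * np.log10(
        np.sum(s_target ** 2, axis=-1) / (np.sum(e_noise ** 2, axis=-1) + eps) + eps
    )
```

Because the estimate is projected onto the reference before the ratio is taken, rescaling x leaves the result unchanged, which is the property that makes the measure scale-invariant.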

forward(x, y)[source]

Compute the SI-SNR loss for separated signal x and reference signal y.

Note

Although the forward computation must be defined in this function, call the Module instance itself rather than this method: the instance runs any registered hooks, whereas calling forward() directly silently ignores them.

class clarity.engine.losses.SNRLoss(tao=0.001)[source]

Bases: Module

forward(x, s, eps=1e-08)[source]

Compute the SNR-based loss for processed signal x and reference signal s.

l2norm(mat, keepdim=False)[source]
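A plausible numpy equivalent of this helper, assuming it takes the Euclidean norm along the last axis (`l2norm_sketch` is a hypothetical name, not the module's function):

```python
import numpy as np

def l2norm_sketch(mat, keepdim=False):
    # Euclidean (L2) norm along the last axis; keepdim retains the
    # reduced axis with size 1, mirroring the torch convention.
    return np.linalg.norm(mat, axis=-1, keepdims=keepdim)
```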
class clarity.engine.losses.STOILevelLoss(sr, alpha, block_size=0.4, overlap=0.7, gamma_a=-70)[source]

Bases: Module

alpha

rms measurement

forward(x, s)[source]

Compute the combined STOI and level (loudness) loss for processed signal x and reference signal s.

gamma_a

mse

measure_loudness(signal, eps=1e-08)[source]
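The class's block_size, overlap and gamma_a parameters suggest a gated, block-based measurement in the style of ITU-R BS.1770; the sketch below is a much simpler hypothetical proxy (full-signal RMS level in dB), shown only to illustrate the kind of quantity a loudness measure returns:

```python
import numpy as np

def measure_loudness_sketch(signal, eps=1e-8):
    # Hypothetical simplification: loudness proxied by the RMS level
    # of the whole signal, expressed in dB.  The real method likely
    # measures per-block loudness with gating below gamma_a.
    rms = np.sqrt(np.mean(np.square(signal)) + eps)
    return 20.0 * np.log10(rms + eps)
```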
class clarity.engine.losses.STOILoss(sr)[source]

Bases: Module

forward(x, s)[source]

Compute the STOI-based loss for processed signal x and reference signal s.

clarity.engine.system module

Adapted from Asteroid https://github.com/asteroid-team/asteroid/blob/master/asteroid/engine/system.py

class clarity.engine.system.System(model, optimizer, loss_func, train_loader, val_loader=None, scheduler=None, config=None)[source]

Bases: LightningModule

common_step(batch, batch_nb, train=True)[source]

Common forward step between training and validation. The function of this method is to unpack the data given by the loader, forward the batch through the model and compute the loss. Pytorch-lightning handles all the rest.

Parameters:
  • batch – the object returned by the loader (a list of torch.Tensor in most cases) but can be something else.

  • batch_nb (int) – The number of the batch in the epoch.

  • train (bool) – Whether in training mode. Needed only if the training and validation steps are fundamentally different; otherwise, pytorch-lightning handles the usual differences.

Returns:

The loss value on this batch.

Return type:

torch.Tensor

Note

This is typically the method to override when subclassing System. If the training and validation steps are somehow different (beyond loss.backward() and optimizer.step()), the argument train can be used to switch behavior. Otherwise, training_step and validation_step can be overridden.
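The unpack, forward, compute-loss pattern described above can be sketched framework-free. TinySystem and the callables passed to it are hypothetical stand-ins for illustration, not the actual LightningModule subclass:

```python
class TinySystem:
    # Minimal sketch of the common_step pattern: training_step and
    # validation_step both reduce to one shared step, so subclasses
    # only need to override common_step.
    def __init__(self, model, loss_func):
        self.model = model
        self.loss_func = loss_func

    def common_step(self, batch, batch_nb, train=True):
        # batch is whatever the loader yields; here an (inputs, targets) pair.
        inputs, targets = batch
        estimates = self.model(inputs)
        return self.loss_func(estimates, targets)

    def training_step(self, batch, batch_nb):
        return self.common_step(batch, batch_nb, train=True)

    def validation_step(self, batch, batch_nb):
        return self.common_step(batch, batch_nb, train=False)
```

In the real class the model is a torch module, loss_func one of the losses above, and Lightning drives the two step methods; the control flow, however, is exactly this.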

configure_optimizers()[source]

Initialize optimizers, batch-wise and epoch-wise schedulers.

forward(*args, **kwargs)[source]

Applies the forward pass of the model.

Returns:

torch.Tensor

on_save_checkpoint(checkpoint)[source]

Overwrite if you want to save more things in the checkpoint.

on_validation_epoch_end()[source]

Log hp_metric to tensorboard for hparams selection.

train_dataloader()[source]

Training dataloader

training_step(batch, batch_nb)[source]

Pass data through the model and compute the loss. Backprop is not performed (meaning PL will do it for you).

Parameters:
  • batch – the object returned by the loader (a list of torch.Tensor in most cases) but can be something else.

  • batch_nb (int) – The number of the batch in the epoch.

Returns:

torch.Tensor, the value of the loss.

val_dataloader()[source]

Validation dataloader

validation_step(batch, batch_nb)[source]

Overrides pytorch-lightning's validation_step to run validation.

Parameters:
  • batch – the object returned by the loader (a list of torch.Tensor in most cases) but can be something else.

  • batch_nb (int) – The number of the batch in the epoch.

Module contents