clarity.engine package¶
Submodules¶
clarity.engine.losses module¶
- class clarity.engine.losses.SISNRLoss(*args: Any, **kwargs: Any)[source]¶
Bases:
Module
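The class signature shown here is opaque (`*args, **kwargs`), so as a sketch of what an SI-SNR loss presumably computes, here is a minimal plain-PyTorch implementation of the scale-invariant SNR metric; the function name `si_snr` and the `eps` parameter are illustrative, not part of the documented API (a loss would typically return the negated value):

```python
import torch


def si_snr(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Scale-invariant SNR in dB over the last dimension (hypothetical helper)."""
    # Remove per-signal DC offset.
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # Project the estimate onto the reference to get the scaled target.
    dot = (est * ref).sum(dim=-1, keepdim=True)
    energy = (ref**2).sum(dim=-1, keepdim=True) + eps
    target = dot / energy * ref
    noise = est - target
    # Ratio of target energy to residual noise energy, in dB.
    return 10 * torch.log10((target**2).sum(dim=-1) / ((noise**2).sum(dim=-1) + eps))
```

Because the estimate is projected onto the reference before comparing energies, the metric is invariant to rescaling the estimate, which is the defining property of SI-SNR.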
clarity.engine.system module¶
Adapted from Asteroid https://github.com/asteroid-team/asteroid/blob/master/asteroid/engine/system.py
- class clarity.engine.system.System(*args: Any, **kwargs: Any)[source]¶
Bases:
LightningModule
- common_step(batch, batch_nb, train=True)[source]¶
Common forward step between training and validation. The function of this method is to unpack the data given by the loader, forward the batch through the model and compute the loss. Pytorch-lightning handles all the rest.
- Parameters:
batch – the object returned by the loader (a list of torch.Tensor in most cases) but can be something else.
batch_nb (int) – The number of the batch in the epoch.
train (bool) – Whether in training mode. Needed only if the training and validation steps are fundamentally different, otherwise, pytorch-lightning handles the usual differences.
- Returns:
The loss value on this batch.
- Return type:
torch.Tensor
Note
This is typically the method to overwrite when subclassing System. If the training and validation steps are somehow different (except for loss.backward() and optimizer.step()), the argument train can be used to switch behavior. Otherwise, training_step and validation_step can be overwritten.
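As a minimal sketch of overriding common_step, the class below uses a plain torch.nn.Module stand-in for System so the snippet runs on its own; a real subclass would inherit from clarity.engine.system.System, and the (noisy, clean) batch layout is an assumption about what the loader yields:

```python
import torch
import torch.nn as nn


class ToySystem(nn.Module):
    """Stand-in for clarity.engine.system.System (the real base is a LightningModule)."""

    def __init__(self, model: nn.Module, loss_func: nn.Module):
        super().__init__()
        self.model = model
        self.loss_func = loss_func

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.model(x)

    def common_step(self, batch, batch_nb: int, train: bool = True) -> torch.Tensor:
        # Unpack the loader output, forward through the model, compute the loss.
        noisy, clean = batch
        estimate = self(noisy)
        return self.loss_func(estimate, clean)
```

The train flag is unused here; it only matters when training and validation need genuinely different behavior, as the note above explains.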
- on_save_checkpoint(checkpoint)[source]¶
Overwrite if you want to save more things in the checkpoint.
- training_step(batch, batch_nb)[source]¶
Pass data through the model and compute the loss. Backprop is not performed (meaning PL will do it for you).
- Parameters:
batch – the object returned by the loader (a list of torch.Tensor in most cases) but can be something else.
batch_nb (int) – The number of the batch in the epoch.
- Returns:
torch.Tensor, the value of the loss.