recipes.cec1.e009_sheffield.train module

class recipes.cec1.e009_sheffield.train.AmpModule(*args: Any, **kwargs: Any)[source]

Bases: System

common_step(batch, batch_nb, train=True)[source]

Common forward step between training and validation. This method unpacks the data given by the loader, forwards the batch through the model and computes the loss. PyTorch Lightning handles all the rest.

Parameters:
  • batch – the object returned by the loader (a list of torch.Tensor in most cases), but it can be something else.

  • batch_nb (int) – The number of the batch in the epoch.

  • train (bool) – Whether in training mode. Needed only if the training and validation steps are fundamentally different; otherwise, PyTorch Lightning handles the usual differences.

Returns:

The loss value on this batch.

Return type:

torch.Tensor

Note

This is typically the method to override when subclassing System. If the training and validation steps are somehow different (except for loss.backward() and optimizer.step()), the train argument can be used to switch behavior. Otherwise, training_step and validation_step can be overridden.
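A minimal sketch of overriding common_step in a subclass (the asteroid.engine import path, the class name, and the (mixture, target) batch layout are assumptions, not taken from this recipe)::

    from asteroid.engine import System  # assumed import path for the base class

    class MyAmp(System):
        def common_step(self, batch, batch_nb, train=True):
            mixture, target = batch        # unpack what the loader yields
            estimate = self(mixture)       # forward through the wrapped model
            loss = self.loss_func(estimate, target)
            return loss                    # Lightning handles backward/step

The base class routes both training_step and validation_step through this method, so a single override covers both loops.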

class recipes.cec1.e009_sheffield.train.DenModule(*args: Any, **kwargs: Any)[source]

Bases: System

common_step(batch, batch_nb, train=True)[source]

Common forward step between training and validation. This method unpacks the data given by the loader, forwards the batch through the model and computes the loss. PyTorch Lightning handles all the rest.

Parameters:
  • batch – the object returned by the loader (a list of torch.Tensor in most cases), but it can be something else.

  • batch_nb (int) – The number of the batch in the epoch.

  • train (bool) – Whether in training mode. Needed only if the training and validation steps are fundamentally different; otherwise, PyTorch Lightning handles the usual differences.

Returns:

The loss value on this batch.

Return type:

torch.Tensor

Note

This is typically the method to override when subclassing System. If the training and validation steps are somehow different (except for loss.backward() and optimizer.step()), the train argument can be used to switch behavior. Otherwise, training_step and validation_step can be overridden.
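When the two loops must diverge, the train flag can gate the difference inside a single common_step. A hedged sketch (the extra noise injection is purely illustrative and not part of DenModule)::

    import torch
    from asteroid.engine import System  # assumed import path for the base class

    class MyDen(System):
        def common_step(self, batch, batch_nb, train=True):
            noisy, clean = batch
            if train:
                # hypothetical: perturb inputs only during training
                noisy = noisy + 0.01 * torch.randn_like(noisy)
            estimate = self(noisy)
            return self.loss_func(estimate, clean)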

recipes.cec1.e009_sheffield.train.run(cfg: DictConfig) → None[source]
recipes.cec1.e009_sheffield.train.train_amp(cfg, ear)[source]
recipes.cec1.e009_sheffield.train.train_den(cfg, ear)[source]
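
These module-level functions are the training entry points. A hedged sketch of how they might be invoked with Hydra (the config name, ear labels, and call order are assumptions)::

    import hydra
    from omegaconf import DictConfig

    from recipes.cec1.e009_sheffield import train as e009_train

    @hydra.main(config_path=".", config_name="config", version_base=None)
    def main(cfg: DictConfig) -> None:
        # run() drives the full recipe; the per-stage helpers can also be
        # called directly, one ear at a time (ear labels assumed):
        # for ear in ("left", "right"):
        #     e009_train.train_den(cfg, ear)  # denoising stage
        #     e009_train.train_amp(cfg, ear)  # amplification stage
        e009_train.run(cfg)

    if __name__ == "__main__":
        main()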