clarity.utils.source_separation_support module

Module that contains functions for source separation.

clarity.utils.source_separation_support.get_device(device: str) tuple[source]

Get the Torch device.

Parameters:

device (str) – device type, e.g. “cpu”, “gpu0”, “gpu1”, etc.

Returns:

A tuple of the torch.device appropriate to the available hardware and the device type selected as a str, e.g. “cpu”, “cuda”.

Return type:

tuple[torch.device, str]

clarity.utils.source_separation_support.separate_sources(model: torch.nn.Module, mix: torch.Tensor, sample_rate: int, segment: float = 10.0, overlap: float = 0.1, device: torch.device | str | None = None)[source]

Apply model to a given mixture. The mixture is split into overlapping segments, fades are applied at the segment boundaries, and the faded segments are summed, so the model can process the signal segment by segment.

Parameters:
  • model (torch.nn.Module) – model to use for separation

  • mix (torch.Tensor) – mixture to separate, shape (batch, channels, time)

  • sample_rate (int) – sampling rate of the mixture

  • segment (float) – segment length in seconds

  • overlap (float) – overlap between segments, between 0 and 1

  • device (torch.device, str, or None) – if provided, the device on which to execute the computation; otherwise mix.device is assumed. When device differs from mix.device, only local computations are performed on device, while the entire tracks are stored on mix.device.

Returns:

estimated sources

Return type:

torch.Tensor

Based on https://pytorch.org/audio/main/tutorials/hybrid_demucs_tutorial.html
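The segment/overlap/fade scheme described above can be illustrated with a toy overlap-add loop. This is a sketch only, not the library's implementation: it uses NumPy and a linear crossfade in place of torch tensors and torchaudio's fades, `process` stands in for the separation model, and `overlap` is treated here as a fraction of the segment length (an assumption):

```python
import numpy as np


def overlap_add_sketch(process, mix, sample_rate, segment=10.0, overlap=0.1):
    """Toy overlap-add separation loop (sketch, not the library code).

    `process` stands in for the model: it maps a (channels, time) chunk
    to an array of the same shape. Adjacent chunks share `fade_len`
    samples, crossfaded with linear ramps that sum to one.
    """
    _, length = mix.shape
    seg_len = int(segment * sample_rate)
    hop = int(seg_len * (1 - overlap))
    fade_len = seg_len - hop            # samples shared by adjacent chunks
    out = np.zeros(mix.shape, dtype=float)

    start = 0
    while True:
        end = min(start + seg_len, length)
        window = np.ones(end - start)
        if start > 0 and fade_len > 0:  # fade in (skipped for first chunk)
            window[:fade_len] = np.linspace(0.0, 1.0, fade_len)
        if end < length and fade_len > 0:  # fade out (skipped for last chunk)
            window[-fade_len:] = np.linspace(1.0, 0.0, fade_len)
        out[:, start:end] += process(mix[:, start:end]) * window
        if end == length:
            break
        start += hop
    return out
```

With an identity `process`, the crossfaded chunks reconstruct the input exactly, since the fade-out ramp of one chunk and the fade-in ramp of the next sum to one over the shared samples; the real function applies the same windowing per segment but stacks an extra sources dimension in its output.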