Utils module

utils.generate_local_map_mask(chunk_size, attention_size, mask_future=False, device='cpu')

Compute an attention mask shaped as a diagonal band of width attention_size.

Parameters
  • chunk_size (int) – Time dimension size.

  • attention_size (int) – Number of past elements each position is allowed to attend to.

  • mask_future (bool) – If True, also mask future elements. Default is False.

  • device (device) – torch device on which to create the mask. Default is 'cpu'.

Return type

BoolTensor

Returns

Boolean mask of shape (chunk_size, chunk_size).
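
The library's exact implementation is not shown here, but the following is a minimal sketch of the banded mask described above, assuming the PyTorch convention that True marks positions excluded from attention (the helper name local_map_mask_sketch is hypothetical):

    import torch

    def local_map_mask_sketch(chunk_size, attention_size, mask_future=False, device='cpu'):
        # Signed distance between query position i and key position j;
        # dist > 0 means j lies in the past of i.
        i = torch.arange(chunk_size).unsqueeze(1)  # (chunk_size, 1)
        j = torch.arange(chunk_size).unsqueeze(0)  # (1, chunk_size)
        dist = i - j
        if mask_future:
            # Block future positions and anything more than attention_size steps back.
            mask = (dist < 0) | (dist > attention_size)
        else:
            # Block positions more than attention_size steps away in either direction.
            mask = dist.abs() > attention_size
        return mask.to(device)  # BoolTensor, True = masked

For chunk_size=5 and attention_size=1 this yields a tridiagonal band of False (attended) entries surrounded by True (masked) entries.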

utils.generate_original_PE(length, d_model)

Generate the sinusoidal positional encoding described in the original Transformer paper (Vaswani et al., 2017).

Parameters
  • length (int) – Time window length, i.e. K.

  • d_model (int) – Dimension of the model vector.

Return type

Tensor

Returns

Tensor of shape (K, d_model).
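
As a reference for the formula from "Attention Is All You Need" — PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)) — here is a minimal sketch; the helper name original_pe_sketch is hypothetical and an even d_model is assumed:

    import torch

    def original_pe_sketch(length, d_model):
        # Sinusoidal positional encoding; assumes d_model is even.
        pe = torch.zeros(length, d_model)
        pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)  # (K, 1)
        # Geometric progression of wavelengths: 10000^(2i / d_model).
        div = torch.pow(10000.0,
                        torch.arange(0, d_model, 2, dtype=torch.float32) / d_model)
        pe[:, 0::2] = torch.sin(pos / div)  # even dimensions
        pe[:, 1::2] = torch.cos(pos / div)  # odd dimensions
        return pe  # (K, d_model)

The geometric progression of wavelengths lets relative offsets be expressed as linear functions of the encodings, which is the motivation given in the original paper.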

utils.generate_regular_PE(length, d_model, period=24)

Generate positional encoding with a given period.

Parameters
  • length (int) – Time window length, i.e. K.

  • d_model (int) – Dimension of the model vector.

  • period (Optional[int]) – Length of the repeating pattern, in time steps. Default is 24.

Return type

Tensor

Returns

Tensor of shape (K, d_model).
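
One plausible construction for a fixed-period encoding, offered as a sketch only (the library's exact choice of waveform may differ, and the helper name regular_pe_sketch is hypothetical): a single sine wave that completes one cycle every period time steps, broadcast across the model dimension.

    import math
    import torch

    def regular_pe_sketch(length, d_model, period=24):
        # One sine wave with a full cycle every `period` steps,
        # repeated across the model dimension.
        pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)  # (K, 1)
        pe = torch.sin(pos * 2 * math.pi / period)                    # (K, 1)
        return pe.repeat(1, d_model)                                  # (K, d_model)

Such an encoding suits data with a known seasonality, e.g. hourly series with a daily (period=24) cycle.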