Utils module¶
- utils.generate_local_map_mask(chunk_size, attention_size, mask_future=False, device='cpu')¶
Compute an attention mask shaped as a diagonal band attention_size wide, so each position can only attend to nearby positions; with mask_future=True, positions in the future are masked as well.
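A minimal sketch of what such a banded mask could look like. The convention that True marks a blocked position, and the exact band edges, are assumptions inferred from the signature, not confirmed by the docstring:

```python
import torch

def generate_local_map_mask(chunk_size: int,
                            attention_size: int,
                            mask_future: bool = False,
                            device: str = "cpu") -> torch.Tensor:
    # Query indices as a column, key indices as a row (broadcast to a grid).
    i = torch.arange(chunk_size, device=device).unsqueeze(1)
    j = torch.arange(chunk_size, device=device).unsqueeze(0)
    # Block keys farther than `attention_size` from the query (diagonal band).
    mask = (i - j).abs() > attention_size
    if mask_future:
        # Also block keys that lie in the future of the query.
        mask = mask | (j > i)
    return mask  # shape (chunk_size, chunk_size), True = masked out
```

A mask like this is typically applied to the attention scores before the softmax, e.g. scores.masked_fill(mask, float('-inf')).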
- utils.generate_original_PE(length, d_model)¶
Generate the sinusoidal positional encoding described in the original Transformer paper, "Attention Is All You Need" (Vaswani et al., 2017). Returns a torch.Tensor.
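For reference, the paper defines PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). A sketch of that standard encoding follows; treat it as an illustration rather than this module's exact implementation:

```python
import math
import torch

def generate_original_PE(length: int, d_model: int) -> torch.Tensor:
    pe = torch.zeros(length, d_model)
    pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)
    # Geometric frequency schedule: 1 / 10000^(2i / d_model) for even dims 2i.
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))
    pe[:, 0::2] = torch.sin(pos * div)  # even dimensions
    pe[:, 1::2] = torch.cos(pos * div)  # odd dimensions (assumes even d_model)
    return pe
```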
- utils.generate_regular_PE(length, d_model, period=24)¶
Generate a positional encoding with a given period (default 24).
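One plausible reading of a fixed-period encoding, in which a single sine wave with the given period (default 24, e.g. hourly data with a daily cycle) is broadcast across all model dimensions. The per-dimension layout is an assumption:

```python
import math
import torch

def generate_regular_PE(length: int, d_model: int, period: int = 24) -> torch.Tensor:
    pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)
    # One full cycle every `period` positions, identical in every dimension.
    pe = torch.sin(pos * 2 * math.pi / period)   # shape (length, 1)
    return pe.repeat(1, d_model)                 # shape (length, d_model)
```

Unlike the original sinusoidal encoding, whose wavelengths span many scales, this variant encodes a single known seasonality of the input series.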