deepdow.layers.collapse module

Collection of layers that decrease the number of dimensions.

class AttentionCollapse(n_channels)[source]

Bases: Module

Collapse the lookback dimension using attention weights computed from the channels.

Parameters:

n_channels (int) – Number of input channels.

affine

Fully connected layer performing linear mapping.

Type:

nn.Module

context_vector

Fully connected layer encoding direction importance.

Type:

nn.Module

forward(x)[source]

Perform forward pass.

Parameters:

x (torch.Tensor) – Tensor of shape (n_samples, n_channels, lookback, n_assets).

Returns:

Tensor of shape (n_samples, n_channels, n_assets).

Return type:

torch.Tensor

training: bool
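
Example – a minimal usage sketch based on the documented signature and shapes; the tensor sizes are illustrative only:

>>> import torch
>>> from deepdow.layers.collapse import AttentionCollapse
>>> x = torch.rand(4, 3, 20, 10)  # (n_samples, n_channels, lookback, n_assets)
>>> layer = AttentionCollapse(n_channels=3)
>>> layer(x).shape  # the lookback dimension is collapsed
torch.Size([4, 3, 10])
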
class AverageCollapse(collapse_dim=2)[source]

Bases: Module

Global average collapsing over a specified dimension.

forward(x)[source]

Perform forward pass.

Parameters:

x (torch.Tensor) – N-dimensional tensor of shape (d_0, d_1, …, d_{N-1}).

Returns:

{N-1}-dimensional tensor of shape (d_0, …, d_{collapse_dim - 1}, d_{collapse_dim + 1}, …, d_{N-1}). Average over the removed dimension.

Return type:

torch.Tensor

training: bool
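
Example – a minimal sketch, assuming the layer is equivalent to a plain mean over collapse_dim (with the default collapse_dim=2 this is the lookback dimension of a (n_samples, n_channels, lookback, n_assets) input):

>>> import torch
>>> from deepdow.layers.collapse import AverageCollapse
>>> x = torch.rand(4, 3, 20, 10)
>>> out = AverageCollapse(collapse_dim=2)(x)
>>> out.shape
torch.Size([4, 3, 10])
>>> torch.allclose(out, x.mean(dim=2))  # equivalence assumed, not documented
True
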
class ElementCollapse(collapse_dim=2, element_ix=-1)[source]

Bases: Module

Select a single element along a specified dimension.

forward(x)[source]

Perform forward pass.

Parameters:

x (torch.Tensor) – N-dimensional tensor of shape (d_0, d_1, …, d_{N-1}).

Returns:

{N-1}-dimensional tensor of shape (d_0, …, d_{collapse_dim - 1}, d_{collapse_dim + 1}, …, d_{N-1}). Contains the element at index self.element_ix of the removed dimension.

Return type:

torch.Tensor

training: bool
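
Example – a minimal sketch, assuming that with the default element_ix=-1 the layer keeps the last element along collapse_dim (e.g. the most recent timestep of the lookback dimension):

>>> import torch
>>> from deepdow.layers.collapse import ElementCollapse
>>> x = torch.rand(4, 3, 20, 10)
>>> out = ElementCollapse(collapse_dim=2, element_ix=-1)(x)
>>> out.shape
torch.Size([4, 3, 10])
>>> torch.equal(out, x[:, :, -1, :])  # equivalence assumed, not documented
True
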
class ExponentialCollapse(collapse_dim=2, forgetting_factor=None)[source]

Bases: Module

Exponential weighted collapsing over a specified dimension.

The unscaled weights are defined recursively with the following rules:
  • w_{0} = 1

  • w_{t+1} = forgetting_factor * w_{t} + 1

Parameters:
  • collapse_dim (int) – What dimension to remove.

  • forgetting_factor (float or None) – If a float, used as a fixed constant. If None, the factor becomes a learnable parameter.

forward(x)[source]

Perform forward pass.

Parameters:

x (torch.Tensor) – N-dimensional tensor of shape (d_0, d_1, …, d_{N-1}).

Returns:

{N-1}-dimensional tensor of shape (d_0, …, d_{collapse_dim - 1}, d_{collapse_dim + 1}, …, d_{N-1}). Exponentially weighted average over the removed dimension.

Return type:

torch.Tensor

training: bool
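
For a fixed forgetting_factor = 0.5 the recursion above gives unscaled weights w = (1, 1.5, 1.75, …), so later positions along collapse_dim receive more weight once the weights are normalised into an average. Example – a minimal usage sketch (leaving forgetting_factor=None would instead make the factor learnable):

>>> import torch
>>> from deepdow.layers.collapse import ExponentialCollapse
>>> x = torch.rand(4, 3, 20, 10)
>>> layer = ExponentialCollapse(collapse_dim=2, forgetting_factor=0.5)
>>> layer(x).shape
torch.Size([4, 3, 10])
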
class MaxCollapse(collapse_dim=2)[source]

Bases: Module

Global max collapsing over a specified dimension.

forward(x)[source]

Perform forward pass.

Parameters:

x (torch.Tensor) – N-dimensional tensor of shape (d_0, d_1, …, d_{N-1}).

Returns:

{N-1}-dimensional tensor of shape (d_0, …, d_{collapse_dim - 1}, d_{collapse_dim + 1}, …, d_{N-1}). Maximum over the removed dimension.

Return type:

torch.Tensor

training: bool
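
Example – a minimal sketch, assuming the layer is equivalent to an element-wise maximum over collapse_dim:

>>> import torch
>>> from deepdow.layers.collapse import MaxCollapse
>>> x = torch.rand(4, 3, 20, 10)
>>> out = MaxCollapse(collapse_dim=2)(x)
>>> out.shape
torch.Size([4, 3, 10])
>>> torch.allclose(out, x.max(dim=2).values)  # equivalence assumed, not documented
True
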
class SumCollapse(collapse_dim=2)[source]

Bases: Module

Global sum collapsing over a specified dimension.

forward(x)[source]

Perform forward pass.

Parameters:

x (torch.Tensor) – N-dimensional tensor of shape (d_0, d_1, …, d_{N-1}).

Returns:

{N-1}-dimensional tensor of shape (d_0, …, d_{collapse_dim - 1}, d_{collapse_dim + 1}, …, d_{N-1}). Sum over the removed dimension.

Return type:

torch.Tensor

training: bool
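
Example – a minimal sketch, assuming the layer is equivalent to a plain sum over collapse_dim:

>>> import torch
>>> from deepdow.layers.collapse import SumCollapse
>>> x = torch.rand(4, 3, 20, 10)
>>> out = SumCollapse(collapse_dim=2)(x)
>>> out.shape
torch.Size([4, 3, 10])
>>> torch.allclose(out, x.sum(dim=2))  # equivalence assumed, not documented
True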