deepdow.layers.transform module
Collection of layers focusing on transforming tensors while keeping the number of dimensions constant.
class Conv(n_input_channels, n_output_channels, kernel_size=3, method='2D')
Bases: torch.nn.modules.module.Module

Convolutional layer.
- Parameters
n_input_channels (int) – Number of input channels.
n_output_channels (int) – Number of output channels.
kernel_size (int) – Size of the kernel.
method (str, {'2D', '1D'}) – What type of convolution is used in the background.
forward(x)
Perform forward pass.
- Parameters
x (torch.Tensor) – Tensor of shape (n_samples, n_input_channels, lookback, n_assets) if self.method='2D'. Otherwise (n_samples, n_input_channels, lookback).
- Returns
Tensor of shape (n_samples, n_output_channels, lookback, n_assets) if self.method='2D'. Otherwise (n_samples, n_output_channels, lookback).
- Return type
torch.Tensor
training: bool
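The documented shapes can be reproduced with a thin wrapper around torch.nn.Conv1d / torch.nn.Conv2d. The sketch below is an assumed implementation, not the deepdow source: it uses 'same'-style padding for odd kernel sizes so that lookback (and n_assets in the '2D' case) are preserved, matching the shapes stated above.

```python
import torch
import torch.nn as nn


class Conv(nn.Module):
    """Sketch of a shape-preserving convolution (assumed, not deepdow's source)."""

    def __init__(self, n_input_channels, n_output_channels, kernel_size=3, method='2D'):
        super().__init__()
        padding = kernel_size // 2  # 'same' padding for odd kernel sizes
        if method == '2D':
            self.conv = nn.Conv2d(n_input_channels, n_output_channels,
                                  kernel_size, padding=padding)
        elif method == '1D':
            self.conv = nn.Conv1d(n_input_channels, n_output_channels,
                                  kernel_size, padding=padding)
        else:
            raise ValueError("method must be '1D' or '2D'")

    def forward(self, x):
        return self.conv(x)


x = torch.randn(2, 3, 20, 10)  # (n_samples, n_input_channels, lookback, n_assets)
layer = Conv(3, 8)
print(layer(x).shape)  # torch.Size([2, 8, 20, 10])
```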
class RNN(n_channels, hidden_size, cell_type='LSTM', bidirectional=True, n_layers=1)
Bases: torch.nn.modules.module.Module
Recurrent neural network layer.
- Parameters
n_channels (int) – Number of input channels.
hidden_size (int) – Hidden state size. Alternatively, one can see it as the number of output channels.
cell_type (str, {'LSTM', 'RNN'}) – Type of the recurrent cell.
bidirectional (bool) – If True, then bidirectional. Note that hidden_size already takes this parameter into account.
n_layers (int) – Number of stacked layers.
forward(x)
Perform forward pass.
- Parameters
x (torch.Tensor) – Tensor of shape (n_samples, n_channels, lookback, n_assets).
- Returns
Tensor of shape (n_samples, self.hidden_size, lookback, n_assets).
- Return type
torch.Tensor
training: bool
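One way to obtain the documented shapes is to fold n_assets into the batch dimension and run the recurrent cell over lookback for each asset independently. Since hidden_size already accounts for bidirectionality, the per-direction hidden size is hidden_size // 2 when bidirectional=True, so the concatenated output still has hidden_size channels. This is a hedged sketch consistent with the shapes above, not the deepdow source.

```python
import torch
import torch.nn as nn


class RNN(nn.Module):
    """Sketch of a per-asset recurrent layer (assumed, not deepdow's source)."""

    def __init__(self, n_channels, hidden_size, cell_type='LSTM',
                 bidirectional=True, n_layers=1):
        super().__init__()
        if bidirectional and hidden_size % 2 != 0:
            raise ValueError("hidden_size must be even when bidirectional")
        # hidden_size already accounts for bidirectionality (see docs above).
        hidden = hidden_size // 2 if bidirectional else hidden_size
        cell_cls = {'LSTM': nn.LSTM, 'RNN': nn.RNN}[cell_type]
        self.cell = cell_cls(n_channels, hidden, num_layers=n_layers,
                             bidirectional=bidirectional, batch_first=True)

    def forward(self, x):
        n_samples, n_channels, lookback, n_assets = x.shape
        # Fold assets into the batch: (n_samples * n_assets, lookback, n_channels).
        x = x.permute(0, 3, 2, 1).reshape(n_samples * n_assets, lookback, n_channels)
        out, _ = self.cell(x)  # (n_samples * n_assets, lookback, hidden_size)
        return out.reshape(n_samples, n_assets, lookback, -1).permute(0, 3, 2, 1)


x = torch.randn(2, 3, 20, 10)  # (n_samples, n_channels, lookback, n_assets)
layer = RNN(3, 16)
print(layer(x).shape)  # torch.Size([2, 16, 20, 10])
```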
class Warp(mode='bilinear', padding_mode='reflection')
Bases: torch.nn.modules.module.Module
Custom warping layer.
forward(x, tform)
Warp the tensor x with tform along the time dimension.
- Parameters
x (torch.Tensor) – Tensor of shape (n_samples, n_channels, lookback, n_assets).
tform (torch.Tensor) – Tensor of shape (n_samples, lookback) or (n_samples, lookback, n_assets). In the first case the same transformation is used for all assets. To prevent folding, the transformation should be increasing along the time dimension. It should range from -1 (beginning of the series) to 1 (end of the series).
- Returns
x_warped – Warped version of the input x under the transformation tform. The shape is the same as the input shape: (n_samples, n_channels, lookback, n_assets).
- Return type
torch.Tensor
training: bool
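A plausible implementation builds a sampling grid from tform and delegates to torch.nn.functional.grid_sample, which also explains the mode and padding_mode options. The function below is an assumed sketch, not the deepdow source: the asset coordinate is the identity, and only the time coordinate is warped.

```python
import torch
import torch.nn.functional as F


def warp(x, tform, mode='bilinear', padding_mode='reflection'):
    """Sketch of time-dimension warping via grid_sample (assumed, not deepdow's source)."""
    n_samples, n_channels, lookback, n_assets = x.shape
    if tform.dim() == 2:
        # Same transformation reused for every asset.
        ty = tform[:, :, None].repeat(1, 1, n_assets)       # (n_samples, lookback, n_assets)
    else:
        ty = tform
    tx = torch.linspace(-1, 1, n_assets, device=x.device)   # identity over assets
    tx = tx[None, None, :].repeat(n_samples, lookback, 1)
    grid = torch.stack([tx, ty], dim=-1)                    # (n_samples, lookback, n_assets, 2)
    # grid[..., 0] indexes assets, grid[..., 1] indexes time, both in [-1, 1].
    return F.grid_sample(x, grid, mode=mode, padding_mode=padding_mode,
                         align_corners=True)


x = torch.randn(2, 4, 20, 10)
identity = torch.linspace(-1, 1, 20)[None, :].repeat(2, 1)  # identity warp
print(warp(x, identity).shape)  # torch.Size([2, 4, 20, 10])
```

With align_corners=True, the identity transformation samples exactly at the original grid points, so it reproduces the input.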
class Zoom(mode='bilinear', padding_mode='reflection')
Bases: torch.nn.modules.module.Module
Zoom in and out.
It can dynamically zoom into more recent timesteps and disregard older ones. Conversely, it can collapse multiple timesteps into one. Based on the Spatial Transformer Network [1].
- Parameters
mode (str, {'bilinear', 'nearest'}) – What interpolation to perform.
padding_mode (str, {'zeros', 'border', 'reflection'}) – How to fill in values that fall outside of the grid. Relevant when zooming out.
References
[1] Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. "Spatial transformer networks." Advances in Neural Information Processing Systems. 2015.
forward(x, scale)
Perform forward pass.
- Parameters
x (torch.Tensor) – Tensor of shape (n_samples, n_channels, lookback, n_assets).
scale (torch.Tensor) – Tensor of shape (n_samples,) representing how much to zoom in (scale < 1) or zoom out (scale > 1).
- Returns
Tensor of shape (n_samples, n_channels, lookback, n_assets) that is a zoomed version of the input. Note that the shape is identical to the input.
- Return type
torch.Tensor
training: bool
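Zooming can be expressed as a special case of warping: an affine rescaling of the time coordinate fed to grid_sample. The sketch below anchors the grid at the most recent timestep, so scale < 1 samples only the most recent part of the series; both the function and the anchoring choice are assumptions for illustration, not taken from the deepdow source.

```python
import torch
import torch.nn.functional as F


def zoom(x, scale, mode='bilinear', padding_mode='reflection'):
    """Sketch of zooming via grid_sample (assumed, not deepdow's source)."""
    n_samples, n_channels, lookback, n_assets = x.shape
    base = torch.linspace(-1, 1, lookback, device=x.device)  # identity time grid
    # Anchor at the most recent timestep (y = 1): y' = 1 - scale * (1 - y).
    ty = 1 - scale[:, None] * (1 - base)[None, :]            # (n_samples, lookback)
    ty = ty[:, :, None].repeat(1, 1, n_assets)
    tx = torch.linspace(-1, 1, n_assets, device=x.device)    # identity asset grid
    tx = tx[None, None, :].repeat(n_samples, lookback, 1)
    grid = torch.stack([tx, ty], dim=-1)                     # (n_samples, lookback, n_assets, 2)
    return F.grid_sample(x, grid, mode=mode, padding_mode=padding_mode,
                         align_corners=True)


x = torch.randn(2, 4, 20, 10)
out = zoom(x, torch.full((2,), 0.5))  # keep only the most recent half
print(out.shape)  # torch.Size([2, 4, 20, 10])
```

Note that scale = 1 is the identity, matching the documented behaviour that the output shape always equals the input shape.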