deepdow.layers.allocate module

Collection of layers that produce weight allocations.

class AnalyticalMarkowitz[source]

Bases: Module

Minimum variance and maximum Sharpe ratio portfolios with no constraints.

Known analytical solutions exist, so no numerical solver is necessary.

References

[1] http://faculty.washington.edu/ezivot/econ424/portfolioTheoryMatrix.pdf

forward(covmat, rets=None)[source]

Perform forward pass.

Parameters:
  • covmat (torch.Tensor) – Covariance matrix of shape (n_samples, n_assets, n_assets).

  • rets (torch.Tensor or None) – If a tensor, then of shape (n_samples, n_assets) representing expected returns; providing it triggers computation of the maximum Sharpe ratio portfolio. If None, the minimum variance portfolio is computed.

Returns:

weights – Of shape (n_samples, n_assets) representing the optimal weights. If rets is provided, this is the maximum Sharpe ratio (tangency) portfolio; otherwise the minimum variance portfolio.

Return type:

torch.Tensor

training: bool
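
To illustrate the closed-form solutions from [1] that this layer evaluates, here is a minimal PyTorch sketch (not the layer's internal code) of the minimum variance weights, proportional to inv(covmat) @ 1, and the tangency weights, proportional to inv(covmat) @ rets, each normalized to sum to one:

    import torch

    def min_var_weights(covmat):
        # minimum variance: w proportional to inv(covmat) @ 1, normalized to sum to one
        ones = torch.ones(covmat.shape[0], covmat.shape[1], 1, dtype=covmat.dtype)
        w = torch.linalg.solve(covmat, ones)
        return (w / w.sum(dim=1, keepdim=True)).squeeze(-1)

    def tangency_weights(covmat, rets):
        # maximum Sharpe ratio: w proportional to inv(covmat) @ rets, normalized to sum to one
        w = torch.linalg.solve(covmat, rets.unsqueeze(-1))
        return (w / w.sum(dim=1, keepdim=True)).squeeze(-1)
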
class NCO(n_clusters, n_init=10, init='random', random_state=None)[source]

Bases: Module

Nested cluster optimization.

This optimization algorithm performs the following steps:

  1. Divide all assets into clusters

  2. Run standard optimization inside of each of these clusters (intra step)

  3. Run standard optimization on the resulting portfolios (inter step)

  4. Compute the final weights

Parameters:
  • n_clusters (int) – Number of clusters to find in the data. Note that the underlying clustering model is KMeans - deepdow.layers.KMeans.

  • n_init (int) – Number of runs of the clustering algorithm.

  • init (str, {'random', 'k-means++'}) – Initialization strategy of the clustering algorithm.

  • random_state (int or None) – Random state passed to the stochastic k-means clustering.

See also

deepdow.layers.KMeans

k-means clustering algorithm

References

[1] M. Lopez de Prado. “A Robust Estimator of the Efficient Frontier.” Available at SSRN 3469961, 2019.

forward(covmat, rets=None)[source]

Perform forward pass.

Parameters:
  • covmat (torch.Tensor) – Covariance matrix of shape (n_samples, n_assets, n_assets).

  • rets (torch.Tensor or None) – If a tensor, then of shape (n_samples, n_assets) representing expected returns; providing it triggers computation of the maximum Sharpe ratio portfolio. If None, the minimum variance portfolio is computed.

Returns:

weights – Of shape (n_samples, n_assets) representing the optimal weights. If rets is provided, the maximum Sharpe ratio (tangency) portfolio is used at both the intra- and inter-cluster level; otherwise the minimum variance portfolio.

Return type:

torch.Tensor

Notes

Currently there is no batching over the sample dimension; a simple for loop is used.

training: bool
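
A minimal usage sketch is shown below, assuming the class is importable from deepdow.layers (as the See also entry suggests) and using dummy identity covariance matrices; since rets is passed, tangency portfolios are used at both cluster levels:

    import torch
    from deepdow.layers import NCO  # assumed public import path

    n_samples, n_assets = 2, 8
    covmat = torch.eye(n_assets).repeat(n_samples, 1, 1)  # dummy positive definite covariances
    rets = torch.rand(n_samples, n_assets)                # dummy expected returns

    nco = NCO(n_clusters=3, n_init=10, init='random', random_state=42)
    weights = nco(covmat, rets)  # shape (n_samples, n_assets), rows sum to one
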
class NumericalMarkowitz(n_assets, max_weight=1)[source]

Bases: Module

Convex optimization layer stylized as a portfolio optimization problem.

Parameters:
  • n_assets (int) – Number of assets.

  • max_weight (float) – A float between (0, 1] representing the maximum weight per asset.

cvxpylayer

Custom layer provided by the third-party package cvxpylayers.

Type:

CvxpyLayer

References

[1] https://github.com/cvxgrp/cvxpylayers

forward(rets, covmat_sqrt, gamma_sqrt, alpha)[source]

Perform forward pass.

Parameters:
  • rets (torch.Tensor) – Of shape (n_samples, n_assets) representing expected returns (or whatever the feature extractor decided to encode).

  • covmat_sqrt (torch.Tensor) – Of shape (n_samples, n_assets, n_assets) representing the square root of the covariance matrix.

  • gamma_sqrt (torch.Tensor) – Of shape (n_samples,) representing the tradeoff between risk and return, i.e. where on the efficient frontier we are.

  • alpha (torch.Tensor) – Of shape (n_samples,) representing how much L2 regularization is applied to the weights. Note that the absolute value of this variable is passed to the optimizer, since the problem was constructed under the assumption that it is nonnegative.

Returns:

weights – Of shape (n_samples, n_assets) representing the optimal weights as determined by the convex optimizer.

Return type:

torch.Tensor

training: bool
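
A minimal usage sketch with dummy inputs, assuming the public deepdow.layers import path and an installed cvxpylayers backend:

    import torch
    from deepdow.layers import NumericalMarkowitz  # assumed public import path

    n_samples, n_assets = 2, 5
    rets = torch.rand(n_samples, n_assets)
    covmat_sqrt = torch.eye(n_assets).repeat(n_samples, 1, 1)  # square root of an identity covariance
    gamma_sqrt = torch.ones(n_samples)                         # risk/return tradeoff
    alpha = 0.01 * torch.ones(n_samples)                       # strength of the L2 penalty

    layer = NumericalMarkowitz(n_assets, max_weight=0.5)
    weights = layer(rets, covmat_sqrt, gamma_sqrt, alpha)      # shape (n_samples, n_assets)
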
class NumericalRiskBudgeting(n_assets, max_weight=1)[source]

Bases: Module

Convex optimization layer stylized as a portfolio optimization problem.

Parameters:
  • n_assets (int) – Number of assets.

  • max_weight (float) – A float between (0, 1] representing the maximum weight per asset.

cvxpylayer

Custom layer provided by the third-party package cvxpylayers.

Type:

CvxpyLayer

References

[1] https://github.com/cvxgrp/cvxpylayers

[2] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2297383

[3] https://mpra.ub.uni-muenchen.de/37749/2/MPRA_paper_37749.pdf

forward(covmat_sqrt, b)[source]

Perform forward pass.

Parameters:
  • covmat_sqrt (torch.Tensor) – Of shape (n_samples, n_assets, n_assets) representing the square root of the covariance matrix.

  • b (torch.Tensor) – Of shape (n_samples, n_assets) representing the risk budget; the risk contribution of each component (asset) is constrained to equal its budget, see [3].

Returns:

weights – Of shape (n_samples, n_assets) representing the optimal weights as determined by the convex optimizer.

Return type:

torch.Tensor

training: bool
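
The sketch below (same import-path and backend assumptions as above) requests equal risk budgets, i.e. a risk parity portfolio:

    import torch
    from deepdow.layers import NumericalRiskBudgeting  # assumed public import path

    n_samples, n_assets = 2, 4
    covmat_sqrt = torch.eye(n_assets).repeat(n_samples, 1, 1)  # square root of the covariance matrix
    b = torch.full((n_samples, n_assets), 1 / n_assets)        # equal budgets -> equal risk contributions

    layer = NumericalRiskBudgeting(n_assets)
    weights = layer(covmat_sqrt, b)  # shape (n_samples, n_assets)
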
class Resample(allocator, n_draws=None, n_portfolios=5, sqrt=False, random_state=None)[source]

Bases: Module

Meta allocator that bootstraps the input expected returns and covariance matrix.

The idea is to take the input covmat and expected returns and view them as parameters of a Multivariate Normal distribution. After that, we iterate the below steps n_portfolios times:

  1. Sample n_draws from the distribution

  2. Estimate expected_returns and covariance matrix

  3. Use the allocator to compute weights.

This results in n_portfolios portfolios, which are simply averaged to obtain the final weights.

Parameters:
  • allocator (AnalyticalMarkowitz or NCO or NumericalMarkowitz) – Instance of an allocator.

  • n_draws (int or None) – Number of draws. If None, then set equal to the number of assets to prevent numerical problems.

  • n_portfolios (int) – Number of resampled portfolios to compute and average.

  • sqrt (bool) – If True, then the input matrix represents the square root of the covariance matrix; otherwise it is the covariance matrix itself.

  • random_state (int or None) – Random state (forward passes with same parameters will have same results).

References

[1] Michaud, Richard O., and Robert Michaud. “Estimation error and portfolio optimization: a resampling solution.” Available at SSRN 2658657, 2007.

forward(matrix, rets=None, **kwargs)[source]

Perform forward pass.

Only accepts keyword arguments to avoid ambiguity.

Parameters:
  • matrix (torch.Tensor) – Of shape (n_samples, n_assets, n_assets) representing the square root of the covariance matrix if self.sqrt=True, else the covariance matrix itself.

  • rets (torch.Tensor or None) – Of shape (n_samples, n_assets) representing expected returns (or whatever the feature extractor decided to encode). Note that NCO and AnalyticalMarkowitz allow for rets=None (using only minimum variance).

  • kwargs (dict) – All additional input arguments the self.allocator needs to perform forward pass.

Returns:

weights – Of shape (n_samples, n_assets) representing the optimal weights.

Return type:

torch.Tensor

training: bool
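
For illustration, a sketch of wrapping AnalyticalMarkowitz in Resample (assuming the public deepdow.layers import path); the result is the average of ten tangency portfolios fitted on resampled data:

    import torch
    from deepdow.layers import AnalyticalMarkowitz, Resample  # assumed public import path

    n_samples, n_assets = 2, 6
    covmat = torch.eye(n_assets).repeat(n_samples, 1, 1)  # plain covariance matrices (sqrt=False)
    rets = torch.rand(n_samples, n_assets)

    resample = Resample(AnalyticalMarkowitz(), n_draws=100, n_portfolios=10, random_state=0)
    weights = resample(covmat, rets=rets)  # average of 10 resampled tangency portfolios
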
class SoftmaxAllocator(temperature=1, formulation='analytical', n_assets=None, max_weight=1)[source]

Bases: Module

Portfolio creation by computing a softmax over the asset dimension with temperature.

Parameters:
  • temperature (None or float) – If None, the temperature needs to be provided per sample during the forward pass. If a float, it is assumed to be the same for all samples.

  • formulation (str, {'analytical', 'variational'}) – Controls how the problem is solved. If 'analytical', an explicit formula is used; however, max_weight cannot differ from 1. If 'variational', the problem is solved via convex optimization and any max_weight can be set.

  • n_assets (None or int) – Only required and used if formulation='variational'.

  • max_weight (float) – A float between (0, 1] representing the maximum weight per asset.

forward(x, temperature=None)[source]

Perform forward pass.

Parameters:
  • x (torch.Tensor) – Tensor of shape (n_samples, n_assets).

  • temperature (None or torch.Tensor) – If None, the temperature provided at construction time is used. Otherwise a torch.Tensor of shape (n_samples,) representing a per-sample temperature.

Returns:

weights – Tensor of shape (n_samples, n_assets).

Return type:

torch.Tensor

training: bool
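
To show the effect of temperature in the analytical formulation (assuming the usual softmax of the input divided by the temperature), here is a self-contained sketch that does not use the layer itself:

    import torch

    x = torch.tensor([[1.0, 2.0, 3.0]])  # (n_samples, n_assets) scores from a feature extractor
    for temperature in (0.1, 1.0, 10.0):
        weights = torch.softmax(x / temperature, dim=-1)
        print(temperature, weights)  # low temperature -> concentrated, high temperature -> near uniform
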
class SparsemaxAllocator(n_assets, temperature=1, max_weight=1)[source]

Bases: Module

Portfolio creation by computing a sparsemax over the asset dimension with temperature.

Parameters:
  • n_assets (int) – Number of assets. Note that we require this quantity at construction to make sure the underlying cvxpylayer does not need to be reinitialized every forward pass.

  • temperature (None or float) – If None, the temperature needs to be provided per sample during the forward pass. If a float, it is assumed to be the same for all samples.

  • max_weight (float) – A float between (0, 1] representing the maximum weight per asset.

References

[1] Martins, Andre, and Ramon Astudillo. “From softmax to sparsemax: A sparse model of attention and multi-label classification.” International Conference on Machine Learning. 2016.

[2] Malaviya, Chaitanya, Pedro Ferreira, and André FT Martins. “Sparse and constrained attention for neural machine translation.” arXiv preprint arXiv:1805.08241 (2018)

forward(x, temperature=None)[source]

Perform forward pass.

Parameters:
  • x (torch.Tensor) – Tensor of shape (n_samples, n_assets).

  • temperature (None or torch.Tensor) – If None, the temperature provided at construction time is used. Otherwise a torch.Tensor of shape (n_samples,) representing a per-sample temperature.

Returns:

weights – Tensor of shape (n_samples, n_assets).

Return type:

torch.Tensor

training: bool
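
As an illustration of what sparsemax computes, below is a from-scratch PyTorch sketch of the plain sparsemax projection from [1] (no max_weight constraint, unlike the layer's cvxpylayers-based implementation):

    import torch

    def sparsemax(z):
        # Euclidean projection of each row of z onto the probability simplex
        z_sorted, _ = torch.sort(z, dim=-1, descending=True)
        k = torch.arange(1, z.shape[-1] + 1, dtype=z.dtype)
        cumsum = z_sorted.cumsum(dim=-1)
        support = 1 + k * z_sorted > cumsum               # coordinates that stay nonzero
        k_max = support.sum(dim=-1, keepdim=True)
        tau = (cumsum.gather(-1, k_max - 1) - 1) / k_max  # threshold subtracted from every entry
        return torch.clamp(z - tau, min=0)

    weights = sparsemax(torch.tensor([[0.5, 2.0, -1.0]]) / 1.0)  # temperature-scaled scores; rows sum to one
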
class WeightNorm(n_assets)[source]

Bases: Module

Allocation via weight normalization.

We learn a single weight for each asset and make sure that they sum up to one.

forward(x)[source]

Perform forward pass.

Parameters:

x (torch.Tensor) – Tensor of shape (n_samples, dim_1, ..., dim_N).

Returns:

weights – Tensor of shape (n_samples, n_assets).

Return type:

torch.Tensor

training: bool
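
A minimal re-implementation sketch of the idea (one learnable scalar per asset, normalized so the weights sum to one; the softmax normalization below is an assumption for illustration, and the actual layer may normalize differently):

    import torch

    class WeightNormSketch(torch.nn.Module):
        def __init__(self, n_assets):
            super().__init__()
            self.raw = torch.nn.Parameter(torch.ones(n_assets))  # one learnable weight per asset

        def forward(self, x):
            w = torch.softmax(self.raw, dim=0)            # one way to guarantee nonnegativity and sum one
            return w.unsqueeze(0).expand(x.shape[0], -1)  # x only supplies the batch size
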