
Hierarchical Risk Parity

Machine learning framework for portfolio construction


Hierarchical Risk Parity (HRP) is an advanced investment portfolio optimization framework developed in 2016 by Marcos López de Prado at Guggenheim Partners and Cornell University. HRP is a probabilistic graph-based alternative to the prevailing mean-variance optimization (MVO) framework developed by Harry Markowitz in 1952, for which he received the Nobel Memorial Prize in Economic Sciences.[1] HRP algorithms apply discrete mathematics and machine learning techniques to create diversified and robust investment portfolios that outperform MVO methods out-of-sample.[2] HRP aims to address the limitations of traditional portfolio construction methods, particularly when dealing with highly correlated assets.[3] Following its publication, HRP has been implemented in numerous open-source libraries and has received multiple extensions.[4][5][6][7][8]


Key Features


HRP portfolios have been proposed as a robust alternative to traditional quadratic optimization methods, including the Critical Line Algorithm (CLA) of Markowitz. HRP addresses three central issues commonly associated with quadratic optimizers: numerical instability, excessive concentration in a small number of assets, and poor out-of-sample performance.[1]

HRP leverages techniques from graph theory and machine learning to construct diversified portfolios using only the information embedded in the covariance matrix. Unlike quadratic programming methods, HRP does not require the covariance matrix to be invertible. Consequently, HRP remains applicable even in cases where the covariance matrix is ill-conditioned or singular—conditions under which standard optimizers fail.[9]

Monte Carlo simulations indicate that HRP achieves lower out-of-sample variance than CLA, despite the fact that minimizing variance is the explicit optimization objective of CLA. Furthermore, HRP portfolios exhibit lower realized risk compared to those generated by traditional risk parity methodologies. Empirical backtests have demonstrated that HRP would have historically outperformed conventional portfolio construction techniques.[10][11]

Algorithms within the HRP framework are characterized by the following features:

  • Machine Learning Approach: HRP employs hierarchical clustering, a machine learning technique, to group similar assets based on their correlations.[2][12] This allows the algorithm to identify the underlying hierarchical structure of the portfolio and prevents errors from spreading through the entire network.
  • Risk-Based Allocation: The algorithm allocates capital based on risk, ensuring that assets only compete with similar assets for representation in the portfolio.[4] This approach leads to better diversification across different risk sources, while avoiding the instability associated with noisy return estimates.
  • Covariance Matrix Handling: Unlike traditional methods like Mean-Variance Optimization, HRP does not require inverting the covariance matrix.[3] This makes it more stable and applicable to portfolios with a large number of assets, particularly when the covariance matrix's condition number is high.
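
As a small illustration of this last point (a toy example, not drawn from the HRP literature): a covariance matrix containing a duplicated asset is singular, so quadratic optimizers that invert it fail outright, while the quantities HRP relies on, such as diagonal variances and pairwise correlations, remain well-defined:

import numpy as np

# Asset 2 duplicates asset 1, making the covariance matrix exactly singular
cov = np.array([[0.04, 0.04, 0.01],
                [0.04, 0.04, 0.01],
                [0.01, 0.01, 0.09]])
try:
    np.linalg.inv(cov)  # MVO-style inversion fails on a singular matrix
except np.linalg.LinAlgError as err:
    print("inversion fails:", err)
print(1.0 / np.diag(cov))  # HRP's inverse-variance inputs remain well-defined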

The Problem: Markowitz's Curse


Portfolio construction is perhaps the most recurrent financial problem. On a daily basis, investment managers must build portfolios that incorporate their views and forecasts on risks and returns. Despite the theoretical elegance of Markowitz's mean-variance framework, its practical implementation is hindered by several limitations that undermine the reliability of solutions derived from the Critical Line Algorithm (CLA). A principal concern is the high sensitivity of optimal portfolios to small perturbations in expected returns: even minor forecasting errors can result in significantly different allocations (Michaud, 1998).[13] Given the inherent difficulty of producing accurate return forecasts, numerous researchers have advocated for approaches that forgo expected returns entirely and instead rely solely on the covariance structure of asset returns. This has given rise to risk-based allocation methods, among which risk parity is a widely cited example (Jurczenko, 2015).[14]

While eliminating return forecasts mitigates some instability, it does not eliminate it. Quadratic programming techniques employed in portfolio optimization require the inversion of a positive-definite covariance matrix, meaning all eigenvalues must be strictly positive. When the matrix is numerically ill-conditioned—that is, when the ratio of its largest to smallest eigenvalue (its condition number) is large—matrix inversion becomes unreliable and prone to significant numerical errors (Bailey and López de Prado, 2012).[15]

Figure: A diagonal correlation matrix has the lowest condition number. As correlated investments are added, the largest eigenvalue grows and the smallest eigenvalue shrinks, so the condition number rises quickly, leading to unstable inverse correlation matrices. At some point, the benefits of diversification are more than offset by estimation errors.

The condition number of a covariance, correlation, or any symmetric (and thus diagonalizable) matrix is defined as the ratio, in modulus, between its largest and smallest eigenvalues: $\kappa = |\lambda_{\max}| / |\lambda_{\min}|$. The figure above presents the sorted eigenvalues of several correlation matrices; the condition number is the ratio of the first to the last eigenvalue in each sequence. A diagonal correlation matrix, which is equal to its own inverse, exhibits the minimum possible condition number.

As the number of correlated (or multicollinear) assets in a portfolio increases, the condition number rises. At high levels, this leads to severe numerical instability, whereby slight modifications in any matrix entry may result in drastically different inverses. This phenomenon, often referred to as Markowitz’s curse, encapsulates the paradox wherein increased correlation among assets heightens the theoretical need for diversification, yet simultaneously increases the likelihood of unstable optimization outcomes. Consequently, the potential benefits of diversification are frequently overshadowed by estimation errors.
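
This effect is easy to reproduce numerically. The following sketch (an illustration, not part of the original paper) computes the condition number of equicorrelated correlation matrices of 50 assets for increasing correlation levels:

import numpy as np

def condition_number(m):
    # Modulus of the ratio between the largest and smallest eigenvalues
    eigvals = np.linalg.eigvalsh(m)  # ascending order for symmetric matrices
    return abs(eigvals[-1] / eigvals[0])

N = 50
for rho in (0.0, 0.5, 0.9, 0.99):
    # Equicorrelated matrix: ones on the diagonal, rho off the diagonal
    corr = np.full((N, N), rho) + (1.0 - rho) * np.eye(N)
    print(f"rho = {rho:.2f}  condition number = {condition_number(corr):.1f}")

For this family the eigenvalues are known in closed form, $1 + (N-1)\rho$ and $1 - \rho$, so the condition number equals $(1 + 49\rho)/(1 - \rho)$ for $N = 50$: it rises from 1 in the diagonal case to 4,951 at $\rho = 0.99$.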

These problems are exacerbated as the dimensionality of the covariance matrix increases. The estimation of each covariance term consumes degrees of freedom, and in general, a minimum of $\tfrac{1}{2}N(N+1)$ independent and identically distributed (IID) observations is required to estimate a non-singular covariance matrix of dimension $N$. For example, constructing an invertible covariance matrix of dimension 50 necessitates at least $\tfrac{1}{2} \cdot 50 \cdot 51 = 1{,}275$ daily IID observations, i.e., roughly five years of data. However, empirical evidence suggests that the correlation structure of financial assets is highly unstable over such extended periods. These difficulties are highlighted by the observation that even naïve allocation strategies—such as equally weighted portfolios—have frequently outperformed both mean-variance and risk-based optimizations in out-of-sample tests (De Miguel et al., 2009).[16]


The Solution: Hierarchical Risk Parity


The HRP algorithm addresses Markowitz's curse in three steps:[17]

  1. Hierarchical Clustering: Assets are grouped into clusters based on their correlations, forming a hierarchical tree structure.
  2. Quasi-Diagonalization: The correlation matrix is reordered based on the clustering results, revealing a block diagonal structure.
  3. Recursive Bisection: Weights are assigned to assets through a top-down approach, splitting the portfolio into smaller sub-portfolios and allocating capital based on inverse variance.

Step 1: Hierarchical Clustering

Figure: Sample correlation matrix before applying Step 1 of HRP.

Given a $T \times N$ matrix of asset returns $X$, where each column represents the time series of returns for one of $N$ assets over $T$ time periods, a hierarchical clustering process can be used to construct a tree-based representation of asset relationships. First, we compute the correlation matrix $\rho$, with entries $\rho_{i,j} = \operatorname{corr}(x_i, x_j)$. From this, a pairwise distance matrix $D$ is defined using the transformation:

$d_{i,j} = \sqrt{\tfrac{1}{2}\left(1 - \rho_{i,j}\right)}$

This distance function defines a proper metric space, satisfying non-negativity, identity of indiscernibles, symmetry, and the triangle inequality.

Next, a secondary distance matrix $\tilde{D}$ is computed, where each entry $\tilde{d}_{i,j}$ measures the Euclidean distance between the distance profiles (columns of $D$) of two assets:

$\tilde{d}_{i,j} = \sqrt{\sum_{k=1}^{N} \left( d_{k,i} - d_{k,j} \right)^2}$

While $d_{i,j}$ reflects correlation-based proximity between two assets, $\tilde{d}_{i,j}$ quantifies dissimilarity across the entire system, as it depends on all pairwise distances.
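
A minimal sketch of these two transformations in Python, assuming the returns are held in a pandas DataFrame (the data and variable names here are illustrative):

import numpy as np
import pandas as pd

def correlDist(corr):
    # d_{i,j} = sqrt((1 - rho_{i,j}) / 2), a proper distance metric
    return np.sqrt(0.5 * (1.0 - corr))

returns = pd.DataFrame(np.random.randn(520, 10))  # T x N returns (dummy data)
corr = returns.corr()    # correlation matrix rho
dist = correlDist(corr)  # primary distance matrix D
# Secondary distance: Euclidean distance between the columns of D
d = dist.values
dist_bar = pd.DataFrame(np.sqrt(((d[:, :, None] - d[:, None, :]) ** 2).sum(axis=0)),
                        index=dist.index, columns=dist.columns)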

Figure: The clustering procedure has correctly identified that series 9 and 10 were perturbations of series 2, hence (9, 2, 10) are clustered together. Similarly, 7 is a perturbation of 1, 6 is a perturbation of 3, and 8 is a perturbation of 5. The only original item that was not perturbed is 4, and that is the one item for which the clustering algorithm found no similarity.

Hierarchical clustering proceeds by identifying the pair $(i^*, j^*) = \operatorname{argmin}_{i \neq j} \tilde{d}_{i,j}$ and forming a new cluster $u = (i^*, j^*)$. The distance between this new cluster and each remaining item $k$ is updated using a linkage criterion. One example is the single linkage (nearest neighbor) method:

$\tilde{d}_{u,k} = \min\{\tilde{d}_{i^*,k},\; \tilde{d}_{j^*,k}\}$

The algorithm is repeated recursively: the pair with minimum distance is clustered, the matrix is updated, and clustering continues until all items belong to a single cluster. This requires $N-1$ merge steps and produces a binary tree (dendrogram) that encodes the hierarchical structure.

The clustering process is recorded in an $(N-1) \times 4$ linkage matrix $Y$, where each row $(y_{m,1}, y_{m,2}, y_{m,3}, y_{m,4})$ represents a clustering step: $y_{m,1}$ and $y_{m,2}$ are the merged clusters, $y_{m,3}$ is the distance between them, and $y_{m,4}$ is the total number of original items included in the cluster. HRP accepts a wide range of clustering metrics and linkage criteria. For further discussion, see Rokach and Maimon (2005), Brualdi (2010), and the documentation for SciPy's hierarchical clustering tools.[18][19]
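
In practice this step is usually delegated to SciPy. A sketch continuing from the dist_bar matrix above (single linkage to match the example in the text; other criteria are equally admissible):

import scipy.cluster.hierarchy as sch
from scipy.spatial.distance import squareform

# SciPy expects a condensed (flattened upper-triangular) distance matrix
condensed = squareform(dist_bar.values, checks=False)
link = sch.linkage(condensed, method='single')  # the (N-1) x 4 linkage matrix Y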

Step 2: Quasi-Diagonalization

Figure: Step 2 quasi-diagonalizes the correlation matrix, in the sense that the largest values lie along the diagonal. However, unlike PCA or similar procedures, HRP does not require a change of basis. HRP solves the allocation problem robustly while working with the original investments.

At this stage, the rows and columns of the covariance matrix are reordered such that the largest values are concentrated along the diagonal. This process is referred to as a quasi-diagonalization of the covariance matrix. Unlike full diagonalization, this method does not require a change of basis. The resulting structure places similar investments near each other and dissimilar investments farther apart, facilitating block-wise portfolio construction (see the figure above).

The procedure relies on the structure of the linkage matrix generated during hierarchical clustering. Each row in the linkage matrix represents the merger of two branches into a cluster. To obtain the ordering of original items, the final row is recursively expanded by replacing clusters with their constituents, in the order dictated by the hierarchical tree. This produces a sorted list of indices corresponding to the unclustered items.

This ordering is used in the final step of the HRP algorithm to allocate capital proportionally to the hierarchical structure of asset relationships.
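
Using SciPy, the sorted order can be read directly off the linkage matrix produced in Step 1: leaves_list returns the dendrogram's leaf ordering, which is equivalent to recursively expanding the final row of the linkage matrix. A sketch continuing the example above:

import scipy.cluster.hierarchy as sch

order = sch.leaves_list(link)          # leaf positions 0..N-1
sortIx = list(corr.index[order])       # map positions back to asset labels
quasi_diag = corr.loc[sortIx, sortIx]  # reordered (quasi-diagonal) matrix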

Step 3: Recursive Bisection

This final stage of the Hierarchical Risk Parity (HRP) algorithm computes portfolio weights using the quasi-diagonal covariance matrix. When the covariance matrix is diagonal, inverse-variance weighting is the optimal (minimum-variance) allocation (López de Prado, 2018).[21] HRP uses this insight in both bottom-up and top-down directions:

Bottom-up: estimate the variance of a cluster using inverse-variance weights.

Top-down: split capital between clusters inversely proportional to their estimated variances.

The recursive algorithm proceeds as follows:

  1. Initialize a list L with all asset indices: L = {1, 2, ..., N}.
  2. Assign initial weights: w_n = 1 for all assets.
  3. While any subset in L contains more than one element:
    1. Bisect the subset into two halves: L1 and L2.
    2. For each subset Lj, compute its covariance matrix Vj and define weights proportional to the inverse of its diagonal entries.
    3. Normalize those weights so they sum to 1, and compute the variance of the cluster as w' V w.
    4. Compute a split factor α = 1 - (V1 / (V1 + V2)).
    5. Rescale the weights of L1 by α and of L2 by 1 - α.

This ensures that all weights remain between 0 and 1 and sum to 1 at all times.

The following Python function implements this process, where sortIx is the list of ordered assets returned by Step 2 (Quasi-Diagonalization), and cov is the covariance matrix (a pandas DataFrame indexed and labeled by asset):

import numpy as np
import pandas as pd

def getRecBipart(cov, sortIx):
    # Compute the HRP allocation by recursive bisection
    w = pd.Series(1.0, index=sortIx)
    clusters = [sortIx]  # start with all (sorted) assets in one cluster
    while len(clusters) > 0:
        # Bisect each cluster that has more than one element
        new_clusters = []
        for cluster in clusters:
            if len(cluster) > 1:
                split = len(cluster) // 2
                new_clusters.append(cluster[:split])
                new_clusters.append(cluster[split:])
        clusters = new_clusters
        # Halves were appended in pairs, so adjacent entries share a parent
        for i in range(0, len(clusters), 2):
            c1 = clusters[i]
            c2 = clusters[i + 1]
            var1 = getClusterVar(cov, c1)
            var2 = getClusterVar(cov, c2)
            alpha = 1.0 - var1 / (var1 + var2)
            w[c1] *= alpha        # split capital inversely to cluster variance
            w[c2] *= 1.0 - alpha
    return w
#------------------------------------------------------------------------------
def getClusterVar(cov, cItems):
    # Compute the variance of a cluster under inverse-variance weighting
    cov_ = cov.loc[cItems, cItems]  # matrix slice for the cluster
    w_ = getIVP(cov_).reshape(-1, 1)
    cVar = np.dot(np.dot(w_.T, cov_), w_)[0, 0]  # w' V w
    return cVar
#------------------------------------------------------------------------------
def getIVP(cov):
    # Compute inverse-variance portfolio weights from the diagonal of cov
    ivp = 1.0 / np.diag(cov)
    ivp /= ivp.sum()
    return ivp
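
For illustration, the three steps can be chained on simulated data as follows (a sketch reusing the functions above; the data and parameters are arbitrary):

import scipy.cluster.hierarchy as sch
from scipy.spatial.distance import squareform

# Dummy data: 520 daily observations for 10 assets
returns = pd.DataFrame(np.random.randn(520, 10),
                       columns=['A%d' % i for i in range(10)])
cov, corr = returns.cov(), returns.corr()
# Steps 1-2: cluster on the secondary distance matrix, then sort the leaves
d = np.sqrt(0.5 * (1.0 - corr)).values
dbar = np.sqrt(((d[:, :, None] - d[:, None, :]) ** 2).sum(axis=0))
link = sch.linkage(squareform(dbar, checks=False), method='single')
sortIx = list(corr.index[sch.leaves_list(link)])
# Step 3: recursive bisection over the sorted assets
hrp = getRecBipart(cov, sortIx)
print(hrp.sort_values(ascending=False))  # weights are in [0, 1] and sum to 1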

This stage completes the HRP algorithm. It runs in $\mathcal{O}(\log_2 N)$ time in the best case and $\mathcal{O}(N)$ time in the worst case. The result is a portfolio allocation that respects hierarchical structure while being robust to estimation error.


Numerical Example

Figure: A characteristic outcome of the three methods studied: CLA concentrates weights on a few investments, hence becoming exposed to idiosyncratic shocks. IVP evenly spreads weights through all investments, ignoring the correlation structure, which makes it vulnerable to systemic shocks. HRP finds a compromise between diversifying across all investments and diversifying across clusters, which makes it more resilient against both types of shocks.

In this section, we compute HRP portfolio allocations on the sample correlation matrix used earlier, and compare them to two alternative standard methods:

  • A minimum-variance portfolio computed using quadratic optimization, specifically the Critical Line Algorithm (CLA). This is the only solution on the efficient frontier that does not depend on expected returns. See Bailey and López de Prado (2013) for an implementation of CLA.[20]
  • A traditional risk parity portfolio based on inverse-variance weights (IVP).

All portfolios are subject to standard constraints: non-negativity ($w_i \geq 0$ for all $i$) and full investment ($\sum_{i=1}^{N} w_i = 1$). The condition number of the covariance matrix used in this example is approximately 150.93, which is not particularly high and thus does not unduly penalize CLA.

A comparison of the resulting allocations shows several notable results. CLA allocates 92.66% of the total portfolio weight to the top five assets, whereas HRP allocates 62.57% to the same group. CLA assigns zero weight to three assets; without the non-negativity constraint, those allocations would have been negative. HRP offers an intermediate allocation structure between the concentrated nature of CLA and the equalizing tendency of IVP.

The concentration observed in CLA is a result of its objective to minimize total portfolio variance. However, the resulting portfolios display similar risk profiles, with the in-sample standard deviation of the CLA portfolio only marginally below that of HRP. Thus, CLA reduces risk only marginally while significantly reducing diversification. In this example, the top five positions dominate the CLA portfolio, exposing it to greater vulnerability in adverse market conditions compared to the more balanced HRP allocation.


Monte Carlo Simulations


While the in-sample variance of the Critical Line Algorithm (CLA) portfolio is lower than that of the Hierarchical Risk Parity (HRP) portfolio in the earlier numerical example, the portfolio with minimum in-sample variance is not necessarily optimal out-of-sample. To evaluate robustness, a Monte Carlo simulation can be conducted, consistent with the methodology in López de Prado (2018).[21]

The simulation procedure consists of the following steps:

  1. Generate 10 time series of Gaussian returns, each with 520 observations (equivalent to two years of daily data), zero mean, and a standard deviation of 10%. To reflect realistic market behavior, random shocks and a random correlation structure are applied to the data, consistent with jump-diffusion models such as Merton (1976).[22]
  2. Using a rolling window of 260 observations (one year), compute portfolio weights under three methodologies: HRP, CLA (minimum-variance), and the Inverse-Variance Portfolio (IVP). Portfolios are re-estimated and rebalanced every 22 observations (monthly frequency).
  3. Calculate the out-of-sample returns of the three portfolios over the subsequent periods. This process is repeated 10,000 times.
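
The following is a simplified sketch of such an experiment, comparing only HRP and IVP and omitting the jump shocks for brevity; the factor structure, parameters, and helper names (hrpWeights, oneTrial) are illustrative assumptions rather than the exact specification of the original study:

import numpy as np
import pandas as pd
import scipy.cluster.hierarchy as sch
from scipy.spatial.distance import squareform

def hrpWeights(cov, corr):
    # Steps 1-3 of HRP, reusing getRecBipart/getIVP defined earlier
    d = np.sqrt(0.5 * (1.0 - corr)).values
    dbar = np.sqrt(((d[:, :, None] - d[:, None, :]) ** 2).sum(axis=0))
    link = sch.linkage(squareform(dbar, checks=False), method='single')
    return getRecBipart(cov, list(corr.index[sch.leaves_list(link)]))

def oneTrial(nObs=520, nAssets=10, window=260, rebal=22, seed=0):
    rng = np.random.default_rng(seed)
    # Correlated Gaussian returns: common factors plus idiosyncratic noise
    factors = rng.standard_normal((nObs, 3)) @ rng.standard_normal((3, nAssets))
    x = pd.DataFrame(0.01 * (factors + rng.standard_normal((nObs, nAssets))))
    rHRP, rIVP = [], []
    for t in range(window, nObs - rebal + 1, rebal):
        train = x.iloc[t - window:t]  # one-year estimation window
        cov, corr = train.cov(), train.corr()
        wH = hrpWeights(cov, corr)
        wI = pd.Series(getIVP(cov), index=cov.index)
        hold = x.iloc[t:t + rebal]    # out-of-sample holding period
        rHRP.append(hold @ wH)
        rIVP.append(hold @ wI)
    return pd.concat(rHRP).var(), pd.concat(rIVP).var()

# Averaging oneTrial over many seeds estimates the out-of-sample variances
print(oneTrial(seed=42))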

As expected, the average out-of-sample returns are close to zero across all methods. The key differences emerge in the variances of the out-of-sample returns, with CLA exhibiting the highest variance, IVP an intermediate one, and HRP the lowest.

Although CLA is designed to minimize portfolio variance, its out-of-sample performance shows the highest variance, exceeding that of HRP by 72.47%. This finding aligns with earlier results from De Miguel et al. (2009).[16] Based on these figures, HRP improves the out-of-sample Sharpe ratio of a CLA strategy by approximately 31.3%: with comparable mean returns, the Sharpe ratio scales inversely with the standard deviation, and $\sqrt{1.7247} \approx 1.313$. While IVP benefits from assuming a diagonal covariance matrix, its variance remains 38.24% higher than HRP's. These differences are especially important for risk parity strategies, which often employ significant leverage (see Bailey et al., 2014).[23]

The experiment can be repeated with different parameters, including larger asset universes, more frequent or severe shocks, varying correlation structures, or the inclusion of rebalancing costs. Notably, the cumulative transaction costs associated with frequent CLA rebalances can materially degrade performance over time.

The mathematical explanation for HRP's outperformance over CLA and IVP is non-trivial. However, an intuitive rationale can be offered: CLA's concentration makes it vulnerable to idiosyncratic shocks, while IVP’s disregard for asset correlations makes it sensitive to systematic shocks involving correlated groups. HRP mitigates both risks by balancing diversification across individual assets and across hierarchical clusters. This results in more robust allocations. For a detailed mathematical proof, see Antonov et al. (2024).[24]


Advantages

HRP algorithms offer several advantages over MVO methods, which were the state of the art at the time of HRP's publication:

  • Improved diversification: HRP creates portfolios that are well-diversified across different risk sources.
  • Robustness: The algorithm has been shown to generate portfolios with robust out-of-sample properties.[3][25]
  • Flexibility: HRP can handle singular covariance matrices and incorporate various constraints.[2]
  • Intuitive approach: The clustering-based method provides an intuitive understanding of the portfolio structure.

By combining elements of machine learning, risk parity, and traditional portfolio theory, HRP offers a sophisticated approach to portfolio construction that aims to overcome the limitations of conventional methods.


References
