Black-Litterman allocation in algorithmic trading

[ quant ]

In December 2019, I released a major update to PyPortfolioOpt, my Python portfolio optimisation package. The most significant addition was an implementation of the Black-Litterman (BL) method. Although BL optimisation is commonly used as part of a pipeline to optimise a multi-asset or equity portfolio, in this post I argue that BL is particularly well suited to the problem of optimally weighting signals in an algorithmic trading context.

Overview of the Black-Litterman model

The “standard” way of doing portfolio optimisation is to start with a group of assets from which you’d like to make a portfolio, construct a vector of expected returns (usually by taking the mean historical return of each asset), then feed this vector along with the covariance matrix into a mean-variance optimiser (MVO). This sounds great in theory, except that the resulting portfolios tend to perform shockingly badly in practice – often worse than if you were to just equally weight the assets. As it happens, the major cause of these problems is the use of mean historical returns to estimate future returns (see this post for more). The Black-Litterman model attempts to improve on this by using a clever Bayesian scheme to construct the expected returns vector.

BL allocation revolves around the concept of a “view”. A view is just a forecast you have of the future returns of an asset. It differs from the expected returns vector required by efficient frontier optimisation in two key ways:

  1. You don’t need to specify views on all the assets. This is much more realistic, since you may only have an opinion on a handful of assets, while still wanting to construct a diverse portfolio.
  2. BL lets you input view confidences, which translate predictably into the resulting allocation. For example, for a positive view, a higher confidence (ceteris paribus) will result in a greater percentage allocation to that stock (if this sounds trivially obvious, note that the same isn’t always true in efficient frontier optimisation due to instability).

Formally, BL is a Bayesian model which essentially provides a way of combining your views with some prior estimate of returns (“prior” has a particular meaning in the context of Bayesian statistics). Part of the insight of Black and Litterman’s 1991 paper was that the current market capitalisations of stocks can be used to construct a reasonable prior estimate of returns. For example, if company A has double the market cap of company B (assuming they have the same volatility), it can be argued that company A has double the expected return of company B. Maybe this isn’t a great estimate, but it doesn’t have to be – it is just a prior, which we will later update with our sophisticated views. The Black-Litterman expected returns are then a Bayesian weighted average of the prior and your views, weighted by your confidence in those views. That is all I want to say about the theory behind BL – the interested reader should refer to PyPortfolioOpt’s documentation and the links therein (for the more adventurous, I have presented the mathematical derivation here). This post will be kept relatively non-technical, or at least not explicitly mathematical, though it is clearly best supplemented with an understanding of the mathematics.
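To make the “Bayesian weighted average” concrete, here is a minimal NumPy sketch of the BL posterior-return formula on a toy three-asset universe. Every number below (the prior vector, the covariance matrix, the single view and its uncertainty) is invented for illustration, and τ = 0.05 is just a conventional default rather than a recommendation:

```python
import numpy as np

# Toy three-asset universe: prior expected returns (e.g. market-implied)
# and an annualised covariance matrix -- all numbers are invented.
pi = np.array([0.05, 0.07, 0.10])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
tau = 0.05  # scales our uncertainty in the prior

# A single view: "asset 1 will return 12%", held with variance 0.01.
P = np.array([[0.0, 1.0, 0.0]])  # picking matrix, one row per view
Q = np.array([0.12])             # view returns
Omega = np.diag([0.01])          # view uncertainty (smaller = more confident)

# Posterior expected returns:
#   E[R] = [(tau Sigma)^-1 + P' Omega^-1 P]^-1 [(tau Sigma)^-1 pi + P' Omega^-1 Q]
A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
b = np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ Q
bl_returns = np.linalg.solve(A, b)
```

The posterior return for asset 1 lands between the prior (7%) and the view (12%), and the other assets shift too, via the covariance matrix – the “predictable” behaviour described above.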

Black-Litterman in algorithmic trading

Although BL is typically used to optimise multiasset or equity portfolios, during a quantitative research internship this past summer I became quite interested in the applications of Black-Litterman allocation to algo-trading. Having done a little bit more research, I found that BL provides a very natural way of optimally weighting trading signals. A signal, in the most general sense, is a trigger for some marketplace action, usually based on the output of some model. For example, we might have multiple time series that we have found to be predictive of future price movements, or a machine learning model that predicts the performance of all the stocks in a universe.

There is a subtle difference between these two examples. In the time series case, it is possible that we have multiple signals on the same asset. It could be the case that we have found many factors that all seem to be predictive (to different degrees) of the GBPUSD exchange rate – some technical factors (e.g. a moving average indicator, an oscillator) as well as some “fundamental” ones (e.g. yield curves, consumer spending). By contrast, in the machine learning recommender case we have a single model that makes forecasts on many assets.

What is common across both of these scenarios is that we need a way of combining the individual signals into a resulting portfolio. As far as I can see, there are two obvious configurations in which we can use Black-Litterman to attack this problem, each better suited to one of the two scenarios.

Method 1: treat the signals as assets

The first method, which is more suitable for signals based on time series, is quite simple – we model each signal as an asset with a certain expected return and volatility, then optimise a portfolio of these assets. In fact, this approach can be done without the Black-Litterman model, using standard mean-variance optimisation. We would just have to construct a covariance matrix of signal returns as well as an estimate for expected returns (most likely based on historical returns), before optimising some objective function. The covariance matrix is there to ensure that we consider how signals move together, cognisant that we don’t want to allocate excessively to signals that move in the same way. That said, we have already seen that a pitfall of MVO is that it is highly sensitive to the return estimates, because it implicitly treats them with 100% confidence. Black-Litterman does not.
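As a sketch of this first approach, using synthetic signal P&L in place of real backtest output, and simple unconstrained mean-variance weights rather than a full optimiser:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily P&L series for three signals, standing in for the
# return streams we would measure in a backtest.
n_days = 500
signal_returns = rng.normal(loc=[0.0004, 0.0002, 0.0003],
                            scale=[0.010, 0.008, 0.012],
                            size=(n_days, 3))

mu = signal_returns.mean(axis=0)               # historical mean signal returns
Sigma = np.cov(signal_returns, rowvar=False)   # covariance of signal returns

# Unconstrained mean-variance weights are proportional to Sigma^-1 mu;
# normalise so the absolute allocations sum to one.
raw = np.linalg.solve(Sigma, mu)
weights = raw / np.abs(raw).sum()
```

Because the covariance matrix appears in `Sigma^-1 mu`, two signals that move together share their allocation rather than being double-counted.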

A fair criticism of BL is that it is usually difficult to quantify the confidence in views – formally, BL requires the confidence to be specified as the variance of your estimate of expected returns (this is different to the volatility of returns). There are some solutions to this: for example, Idzorek (2003) provides a method for mapping percentage confidence estimates to the required form of the BL input, though this is still rather unwieldy in practice. However, this is much less of a problem when it comes to algorithmic trading, because we normally collect a wealth of data on various performance metrics during backtesting. In many cases, we can provide an explicit confidence estimate, leading to a rigorous input for the BL model.
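One simple way to turn backtest data into the variance BL expects is to use the squared standard error of the signal’s mean return. The `view_variance_from_backtest` helper below is a hypothetical illustration of this idea, not a standard recipe from the BL literature:

```python
import numpy as np

def view_variance_from_backtest(pnl):
    """Hypothetical helper: estimate the variance of a signal's *expected*
    return (not the volatility of its returns) as the squared standard
    error of its mean backtest return."""
    pnl = np.asarray(pnl, dtype=float)
    return pnl.var(ddof=1) / len(pnl)

# A longer backtest pins down the expected return more tightly,
# i.e. it justifies a smaller view variance (higher confidence).
short_run = [0.010, -0.020, 0.015, 0.005, -0.010]
long_run = short_run * 20
v_short = view_variance_from_backtest(short_run)
v_long = view_variance_from_backtest(long_run)
```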

One seemingly thorny issue is what we should use for a prior. The prior is meant to be the estimate of returns for each asset in the absence of any particular information. I would argue that the most appropriate prior for a time series signal is zero; this is perfectly acceptable, since the BL formula still works with a zero prior vector. Some might say that this defeats the whole point of BL, but I would counter that in this case BL still provides a logical way of combining views that have different confidence levels.
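In the one-asset case, the zero-prior update collapses to a simple precision-weighted average, which makes the shrinkage behaviour easy to see (all numbers below are illustrative):

```python
# One asset, one view, zero prior -- the BL update collapses to a
# precision-weighted average of 0 and the view.
tau_sigma = 0.0045  # tau * asset variance: uncertainty around the zero prior
q = 0.10            # the view: this signal will return 10%
omega = 0.01        # variance of the view (smaller = more confident)

# posterior = (prior_precision * 0 + view_precision * q) / total_precision
posterior = (q / omega) / (1 / tau_sigma + 1 / omega)

# With a much more confident view, less of the forecast is shrunk away.
posterior_confident = (q / 0.001) / (1 / tau_sigma + 1 / 0.001)
```

A low-confidence view is pulled strongly towards the zero prior, while a high-confidence view retains most of its forecast – exactly the “logical combination” behaviour claimed above.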

Method 2: construct a portfolio on signal recommendations

Method 1 is best suited to constructing portfolios of signals where each signal pertains to a small basket of assets. For the second scenario we mentioned, in which we have a model that looks through an asset universe and identifies a subset that may outperform, the application of Black-Litterman is a lot more straightforward. We use the model’s estimate of returns along with its confidence (mapped to a variance) as inputs to the Black-Litterman formula and construct a portfolio.

For this method, there is more choice for the prior. You can either use the relevant market prior for that universe (e.g. the S&P 500 if your universe is S&P 500 equities) or go with no prior. An interesting consequence is that BL becomes a fantastic way to mix the predictions of different models. There is nothing stopping you from combining multiple views (each with their own confidences) on the same asset – model A might think AAPL will return 20%, while model B thinks it will only return 2%. Who should you trust? A naive meta-model might just take the mean, but the BL machinery allows us to take a Bayesian weighted average, which incorporates your estimates of confidence.
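A small numerical sketch of this meta-model idea, with made-up forecasts and confidences for the two hypothetical models (and, for simplicity, no market prior):

```python
import numpy as np

# Hypothetical forecasts for the same asset from two models.
views = np.array([0.20, 0.02])      # model A and model B
view_vars = np.array([0.04, 0.01])  # model B is the more confident of the two

# Naive meta-model: the simple mean, ignoring confidence entirely.
naive = views.mean()

# Bayesian weighted average with no prior: weight each view by its precision.
precisions = 1.0 / view_vars
combined = (precisions * views).sum() / precisions.sum()
```

The combined forecast (about 5.6%) sits much closer to the confident model’s 2% than the naive mean (11%) does.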


This post has shown how Black-Litterman allocation finds a natural application in algorithmic trading, since backtests give us information on both expected returns and the confidence in those estimates. One major downside with BL (which is also a downside for all kinds of mean-variance optimisation) is that it is naturally a single-period optimiser, meaning that it doesn’t automatically adjust to new information. You will need to decide on some period over which views will be aggregated and return vectors constructed, then use BL to make an allocation. At the end of this period, you will have to repeat the process (including your new data) and rebalance positions. This introduces another degree of freedom into your resulting algo-trading pipeline, which is never a good thing. However, making the switch from single-period optimisation to multi-period optimisation is a massive step up and requires significantly more theoretical machinery.