Saturday, November 30, 2019

How do you mix quantitative asset allocation with qualitative views?


Usually in asset allocation you have a quantitative approach (which can be, for example, mean-variance), but you (or your firm) also have a more qualitative approach based on market conditions, economic outlooks, or tactical indicators.


Hence, you will eventually come up with two allocations: the one strictly dictated by the numbers, $w^*$, which is the result of your quantitative algorithm, and the one you have in mind from your personal expectations, $\bar{w}$.


What are the common ways $f$ to mix them together such that $w=f(w^*,\bar{w})$ is your "final" allocation?



Answer



There are some cases where you can blend your portfolios using weights directly. One case involves corner portfolios, where a linear combination of weights is also efficient. Another case is where you treat the two sets of weights you have produced each as a distinct portfolio, under the assumption that the correlation between these portfolios is relatively stable. In this scenario, the problem reduces to a two-asset portfolio optimization problem (each "asset" is simply one of the portfolios defined by the weights from your two methods).
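For concreteness, here is a minimal sketch (in Python) of that second case: treating the two weight vectors as two "assets" and solving the resulting two-asset mean-variance problem. The expected returns, covariance matrix, weight vectors, and risk-aversion value below are all made up for illustration.

```python
# Sketch of the "treat each weight vector as an asset" idea (illustrative only).
# mu, Sigma, the two weight vectors, and lam are assumed inputs for this example.
import numpy as np

mu = np.array([0.05, 0.07, 0.04])            # assumed expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.03]])       # assumed covariance matrix

w_quant = np.array([0.6, 0.3, 0.1])          # weights from the quantitative model
w_qual  = np.array([0.2, 0.5, 0.3])          # weights from qualitative views

# Expected return and covariance of the two "portfolio assets"
W = np.column_stack([w_quant, w_qual])       # 3 x 2
mu_p = W.T @ mu                              # 2-vector of portfolio returns
Sigma_p = W.T @ Sigma @ W                    # 2 x 2 portfolio covariance

# Unconstrained mean-variance solution for the two portfolio assets
# (risk aversion lam), normalized to sum to one.
lam = 3.0
alpha = np.linalg.solve(lam * Sigma_p, mu_p)
alpha = alpha / alpha.sum()

w_final = W @ alpha                          # final blended asset weights
print(alpha, w_final)
```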



The other class of methods involves blending via the expected returns.


If you arrived at the weights via a mean-variance utility optimization, you can back out the implied expected returns from those weights and a risk-aversion parameter. (Indeed, this is the approach Black and Litterman took to back out the implied expected returns from a set of benchmark weights, and Jay Walters shows the simple linear algebra for this in the paper cited below.)
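For example, assuming a mean-variance investor with risk aversion $\delta$, the implied expected excess returns are $\Pi = \delta \Sigma w$. A minimal sketch of this reverse optimization with made-up inputs:

```python
# Reverse-optimization sketch: back out implied expected returns from weights,
# as in the Black-Litterman prior (Pi = delta * Sigma * w). Inputs are assumed.
import numpy as np

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])     # assumed covariance of asset returns
w = np.array([0.7, 0.3])             # weights you want to "explain"
delta = 2.5                          # assumed risk-aversion parameter

Pi = delta * Sigma @ w               # implied expected excess returns
print(Pi)
```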


The approaches below require that you blend views on expected returns rather than weights. This is more natural since weights are the product of some optimization (one might be short a security for hedging purposes despite having a positive expected-return view on the security). Two sets of portfolio weights may each be on the efficient frontier, but a generic convex blend of the two may be inefficient.


To blend your qualitative scores with quantitative views in return space you can:


Convert qualitative factors into quantitative scores. Grinold & Kahn discuss various techniques in Active Portfolio Management, 2nd ed.; check out the section "Information Processing". One straightforward technique: if you have a rating system such as "Sell, Hold, Buy, Strong Buy", associate each rating with a dummy variable and build a linear (or non-linear) factor model that includes your quantitative forecasts as other factors. (Note: there is a more general question of "signal weighting - how do I blend quantitative information efficiently?" which might be worthy of another post.)
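As a rough illustration of the dummy-variable idea (not Grinold & Kahn's specific procedure), here is a sketch that regresses forward returns on rating dummies plus a quantitative score. All data and coefficients are simulated for the example.

```python
# Illustrative sketch: encode ratings as dummy variables and fit a linear model
# alongside a quantitative forecast. Data here is simulated; in practice you
# would use your own cross-sectional history of ratings, forecasts, and returns.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
ratings = rng.choice(["Sell", "Hold", "Buy", "Strong Buy"], size=n)
quant_score = rng.normal(size=n)                          # quantitative forecast

# Simulated forward returns with an assumed effect per rating
rating_effect = {"Sell": -0.02, "Hold": 0.0, "Buy": 0.01, "Strong Buy": 0.02}
fwd_return = (np.vectorize(rating_effect.get)(ratings)
              + 0.01 * quant_score
              + rng.normal(scale=0.05, size=n))

X = pd.get_dummies(pd.Series(ratings), dtype=float)       # rating dummies
X["quant_score"] = quant_score

# Ordinary least squares: forward returns on rating dummies + quant forecast
beta, *_ = np.linalg.lstsq(X.values, fwd_return, rcond=None)
print(dict(zip(X.columns, beta)))
```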


OR


Express qualitative views in the form of confidences via Black-Litterman (e.g. MSFT will rise more than AAPL with 20% confidence). A Black-Litterman model - specifically the Idzorek variation, which uses % confidences - is a good way to do this. Jay Walters has a nice reference paper on Black-Litterman. Also there is a package in R called BLCOP that you can toy with.
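A minimal sketch of the standard Black-Litterman posterior ("master formula") for a single relative view. The covariance, prior returns, tau, and Omega below are assumed purely for illustration; in the Idzorek variation, Omega would instead be derived from the stated % confidence.

```python
# Minimal Black-Litterman posterior sketch (standard master formula).
# Pi is the implied prior from reverse optimization; P and Q encode a single
# relative view "asset 0 will outperform asset 1 by 2%". Omega is assumed here.
import numpy as np

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])      # assumed asset covariance
Pi = np.array([0.05, 0.06])           # prior (implied) expected returns
tau = 0.05                            # assumed scaling of prior uncertainty

P = np.array([[1.0, -1.0]])           # pick matrix: long asset 0, short asset 1
Q = np.array([0.02])                  # view: outperformance of 2%
Omega = np.array([[0.001]])           # assumed view uncertainty

tS_inv = np.linalg.inv(tau * Sigma)
Om_inv = np.linalg.inv(Omega)
post_cov = np.linalg.inv(tS_inv + P.T @ Om_inv @ P)
post_mu = post_cov @ (tS_inv @ Pi + P.T @ Om_inv @ Q)
print(post_mu)                        # blended (posterior) expected returns
```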


The Black-Litterman model has been refined over the last several years. Read the papers from Wing Cheung (Nomura) on the "Augmented Black-Litterman model" if you want to see another explanation. His implementation is quite flexible as it supports generalized factor-view blending as well as other features.


OR


A yet more general technique is entropy pooling. Whereas Black-Litterman allows you to create views on expectations of asset performance (MSFT will return 8%) or relative views (MSFT will outperform AAPL), you might have views on correlations, on variances, on rankings of securities, or on underlying risk factors that are statistically related to your securities of interest. These views cannot be satisfied by the "pick matrix"/Omega construction in Black-Litterman. In this case Attilio Meucci's implementation of entropy pooling is the way to go. He has MATLAB code demonstrating the approach. The entropy-pooling framework applies to parametric or non-parametric problems.



The non-parametric version of entropy pooling can handle scenarios corresponding to arbitrary probability distributions. Entropy pooling will process a view and update the probabilities of each scenario in a way that imposes the least amount of spurious structure on the original probabilities assigned to the scenarios. In this sense entropy pooling is perfectly Bayesian.


Essentially you have a prior -- a JxN panel of data furnished from historical data, a reference model, or a Monte Carlo simulation (J = number of scenarios; N = asset returns or risk factors -- anything you could take a view on). This JxN panel ties to a vector 'p' of probabilities, one probability for each scenario. (If you are using historical data, the vector of probabilities could simply be 1/length(data), or exponentially weighted.)
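A small sketch of what such a prior might look like, with a simulated panel and exponentially decayed probabilities; the half-life is an arbitrary choice for illustration.

```python
# Sketch of a J x N prior panel and its probability vector. The panel here is
# simulated; with historical data, p can be uniform (1/J) or exponentially
# decayed toward the most recent scenarios.
import numpy as np

rng = np.random.default_rng(1)
J, N = 1000, 4                            # J scenarios, N assets/risk factors
panel = rng.normal(scale=0.02, size=(J, N))   # rows ordered oldest to newest

half_life = 250                           # assumed half-life in scenarios
decay = np.log(2) / half_life
p = np.exp(-decay * np.arange(J)[::-1])   # more weight on the most recent rows
p = p / p.sum()                           # probabilities sum to one
print(p[:3], p[-3:])
```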


Then you can create a view which contains your current qualitative scores. These views are expressed as constraints on probabilities. So you can set up a constraint which is interpreted as "Buy implies the security is in the top quantile of returns". Or perhaps you aren't sure exactly what the labels imply about expected returns, but you believe they will be consistent with the prior. In this case you can assign the qualitative scores from the past to the historical empirical data (even if you only have partial coverage of the investment universe) and then create views consisting of your qualitative categorical assessments.
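As an illustration of the mechanics (a simplified special case, not Meucci's code): for a single equality view on an expectation, the entropy-pooling posterior that minimizes relative entropy to the prior is an exponential tilting of the prior probabilities, and the tilt can be found with a one-dimensional root search. All inputs below are simulated.

```python
# Minimal entropy-pooling sketch for a single equality view on an expectation
# (illustrative special case of the "Fully Flexible Views" idea). The posterior
# q minimizes relative entropy to the prior p subject to E_q[x] = view.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
J = 1000
x = rng.normal(loc=0.03, scale=0.05, size=J)   # simulated scenarios for one asset
p = np.full(J, 1.0 / J)                        # uniform prior probabilities

view = 0.05                                    # view: expected return is 5%

def tilted(lmbda):
    """Exponentially tilted probabilities q_j proportional to p_j * exp(lmbda * x_j)."""
    q = p * np.exp(lmbda * x)
    return q / q.sum()

# Solve for the tilt that makes the posterior mean hit the view.
lmbda = brentq(lambda l: tilted(l) @ x - view, -1000.0, 1000.0)
q = tilted(lmbda)

print("prior mean:", p @ x, "posterior mean:", q @ x)
```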


The entropy-pooling procedure will generate a revised set of probabilities for the scenarios. You can then take expectations (probability-weighted averages) with the new probabilities to get expected portfolio returns, expected security returns, correlations, etc. You would then proceed to optimization with your revised expectations of return and risk.
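A sketch of that last step: computing probability-weighted moments from a panel and a vector of revised probabilities (here a random stand-in for the entropy-pooling output), which would then feed a standard optimizer.

```python
# Probability-weighted moments from a J x N scenario panel with revised
# probabilities q (e.g. the output of entropy pooling). Inputs are simulated.
import numpy as np

rng = np.random.default_rng(3)
J, N = 1000, 3
panel = rng.normal(scale=0.04, size=(J, N))     # simulated scenarios
q = rng.dirichlet(np.ones(J))                   # stand-in for revised probabilities

post_mean = q @ panel                            # probability-weighted mean
centered = panel - post_mean
post_cov = (centered * q[:, None]).T @ centered  # probability-weighted covariance
print(post_mean, post_cov)
```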

