The instability and high sensitivity of optimisation results can be augmented by adding another layer of quantitative methodology in the form of Monte Carlo Simulation. The name Monte Carlo alludes to the nature of the simulation procedure, which, in essence, involves drawing random numbers from a distribution, and then using the random numbers as inputs for a mathematical process, in this case portfolio optimisation. [Quantitative Portfolio Optimisation, Asset Allocation and Risk Management - Mikkel Rasmussen - 2003]
I'm currently trying to apply Monte Carlo techniques in the context of mean-variance portfolio optimization.
According to what I have learned so far, the most basic and simplest model is "resampling", which consists of the following steps (a short code sketch follows the list):
- For each asset, fit the historical returns (daily, weekly or monthly data) with a parametric distribution (normal, Student's t, etc.) and obtain its parameters (mean, variance).
- For each asset, generate a random expected return from its fitted distribution.
- Perform mean-variance optimization (tangency portfolio, i.e. Sharpe-ratio maximization) using the generated expected returns and the covariance matrix (this is computed once with the preferred method).
- Repeat steps 2 and 3 n times.
- Average the weights of all portfolios.
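For concreteness, here is a minimal Python sketch of the steps above. The normal fit per asset and the unconstrained analytic tangency weights $w \propto \Sigma^{-1}(\mu - r_f)$ are simplifying assumptions of the sketch, not part of the procedure itself; a real implementation would add constraints (long-only, fully invested, etc.).

```python
import numpy as np

def resampled_tangency_weights(hist_returns, rf=0.0, n_sims=1000, seed=0):
    """Basic resampling: redraw expected returns, re-optimize, average weights.

    hist_returns : (T, N) array of historical periodic returns.
    Assumes a normal fit per asset and unconstrained analytic tangency
    weights w ∝ Σ⁻¹(μ − rf); constraints are deliberately omitted.
    """
    rng = np.random.default_rng(seed)
    _, n_assets = hist_returns.shape

    # Step 1: fit a parametric (here: normal) distribution per asset.
    mu_hat = hist_returns.mean(axis=0)
    sigma_hat = hist_returns.std(axis=0, ddof=1)
    cov = np.cov(hist_returns, rowvar=False)      # covariance computed once
    cov_inv = np.linalg.inv(cov)

    weights = np.empty((n_sims, n_assets))
    for i in range(n_sims):
        # Step 2: draw a random expected-return vector from the fitted distributions.
        mu_sim = rng.normal(mu_hat, sigma_hat)
        # Step 3: tangency (max-Sharpe) weights for the drawn expected returns.
        w = cov_inv @ (mu_sim - rf)
        weights[i] = w / w.sum()

    # Step 5: average the weights across simulations.
    return weights.mean(axis=0)
```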
My questions are the following:
- How should one correctly compute the statistics (expected return, expected volatility) of the final averaged optimized portfolio?
- It is not clear to me whether the weights of all portfolios (point 5) should be averaged with some particular technique or just with a simple mean. If the former, which techniques are these?
- Are there ways to improve the resampling other than trying different probability distributions (e.g. generating expected returns not directly from a probability distribution but via a single-index model, $R_{it}=\alpha_i+\beta_i \cdot R_{mt} + \epsilon_{it}$, where the random component would be the noise $\epsilon_{it}$)?
- Does it make sense to generate random returns from a multivariate distribution (the mean being each asset's mean and the variance being the covariance matrix)? Doing so, I noticed that all assets always end up in the portfolio.
Answer
There might be some differences in how we define things, but there should be only one set of assumptions (i.e., for each asset, there should be only one expected return and expected volatility). Your simulations, which generate potential realizations of returns, should conform to these expected returns and volatilities.
It's also not necessary to run multiple simulations (although it's an option for sure). Instead, you could run one simulation and simply divide the simulated returns into multiple samples. So I'd modify the procedure as follows:
- Set assumptions (returns, vols & correlations) for the assets.
- Fit distributions for each asset.
- Generate random returns for the assets based on the assumptions and distributions. To simplify the discussion, let's assume there are two assets and you decide to simulate 100 years of monthly data, so now you have a 1200 x 2 matrix of returns.
- Divide these into subsamples. Let's say you decide to use 10 subsamples, then each sample is a 120 x 2 matrix of returns.
- For each sample, given the weights, you can easily compute the cumulative return and volatility. This allows you to use standard mean-variance techniques to compute the optimal weights. Of course, given the optimal weights, you have the return and volatility of the portfolio as well.
- Average the weights/other statistics from the samples (see the code sketch below).
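As a rough Python sketch of this modified procedure (assuming, for simplicity, a multivariate normal return model and the same unconstrained analytic tangency weights as in the question's setup; both are illustrative choices):

```python
import numpy as np

def subsample_resampling(mu, cov, rf=0.0, n_years=100, n_subsamples=10, seed=0):
    """Simulate one long multivariate sample, split it, optimize per subsample.

    mu, cov : assumed expected (monthly) returns and covariance matrix.
    n_years * 12 must be divisible by n_subsamples.
    Returns the averaged weights, return and volatility across subsamples.
    """
    rng = np.random.default_rng(seed)
    n_months = 12 * n_years

    # One long simulation, e.g. a 1200 x 2 matrix for two assets and 100 years.
    sims = rng.multivariate_normal(mu, cov, size=n_months)

    # Split into subsamples, e.g. 10 samples of 120 x 2.
    results = []
    for sample in np.split(sims, n_subsamples):
        mu_s = sample.mean(axis=0)
        cov_s = np.cov(sample, rowvar=False)
        w = np.linalg.solve(cov_s, mu_s - rf)     # tangency direction
        w /= w.sum()
        port_ret = w @ mu_s                       # sample portfolio return
        port_vol = np.sqrt(w @ cov_s @ w)         # sample portfolio volatility
        results.append((w, port_ret, port_vol))

    weights = np.array([r[0] for r in results])
    rets = np.array([r[1] for r in results])
    vols = np.array([r[2] for r in results])
    # Average the weights and statistics across subsamples.
    return weights.mean(axis=0), rets.mean(), vols.mean()
```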
Regarding your questions:
- How should one correctly compute the statistics (expected return, expected volatility) of the final averaged optimized portfolio?
As you can see from the procedure outlined above, you can compute the relevant metrics (returns, vols, etc.) for each sample. You can then take the average/median.
- It is not clear to me whether the weights of all portfolios (point 5) should be averaged with some particular technique or just with a simple mean. If the former, which techniques are these?
Usually a simple average/median is used. It's not clear to me that a more sophisticated technique would add much value, but I'd be interested in hearing other perspectives.
- Are there ways to improve the resampling other than trying different probability distributions (e.g. generating expected returns not directly from a probability distribution but via a single-index model, $R_{it}=\alpha_i+\beta_i \cdot R_{mt} + \epsilon_{it}$, where the random component would be the noise $\epsilon_{it}$)?
There's a lot of room to incorporate more realistic return models. Typically you'd want to model the skewness of the returns, capture fat tails, etc. You could also account for time-varying correlations amongst assets. Indeed, you could also simulate some underlying factor returns and then map asset returns to these factors (I think this might be what you're alluding to). The possibilities are endless. It's a matter of what your institution prioritizes in the asset allocation process.
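For instance, a small sketch of the factor-based idea, with a single market factor, normally distributed factor returns and i.i.d. residuals; all parameters here are placeholders rather than recommendations:

```python
import numpy as np

def simulate_single_index_returns(alpha, beta, resid_vol, mkt_mu, mkt_vol,
                                  n_periods, seed=0):
    """Simulate asset returns R_it = alpha_i + beta_i * R_mt + eps_it.

    alpha, beta, resid_vol : per-asset parameters (illustrative inputs).
    mkt_mu, mkt_vol        : assumed market-factor moments.
    """
    rng = np.random.default_rng(seed)
    # Simulate the market factor first...
    r_mkt = rng.normal(mkt_mu, mkt_vol, size=n_periods)
    # ...then the idiosyncratic noise eps_it for each asset...
    eps = rng.normal(0.0, resid_vol, size=(n_periods, len(beta)))
    # ...and map asset returns to the factor.
    return alpha + np.outer(r_mkt, beta) + eps
```

The simulated returns can then be fed into the same subsample-and-optimize loop as above.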
- Does it make sense to generate random returns from a multivariate distribution (the mean being each asset's mean and the variance being the covariance matrix)? Doing so, I noticed that all assets always end up in the portfolio.
Yes, a multivariate approach should be used, since the dependency amongst the assets is an important aspect of asset allocation. Hitting corner solutions is not unusual even for a resampling exercise. I recommend that you look at whether there's anything you can do in your assumptions.
I also recommend this report: Non-normality of market returns. It doesn't specifically address resampling, but has a lot of good ideas that are highly relevant.