A fairly naive approach to estimating the probability of drawdown / ruin is to calculate the probabilities of all the permutations of your sample returns, keeping track of those that hit your drawdown / ruin level (as I've written about). However, that assumes returns are independent and identically distributed, which is unlikely.
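For concreteness, here is a minimal sketch of that permutation idea in R. Enumerating every permutation is infeasible beyond a handful of observations, so this samples random shuffles instead; the returns vector and the -20% ruin level are made-up placeholders, not anything from the original post.

    set.seed(42)
    returns <- rnorm(250, mean = 0.0005, sd = 0.01)  # placeholder daily returns
    ruin_level <- -0.20                              # hypothetical ruin threshold

    # TRUE if the equity curve ever breaches the drawdown level
    hit_ruin <- function(r, level) {
      equity <- cumprod(1 + r)
      drawdown <- equity / cummax(equity) - 1
      any(drawdown <= level)
    }

    # Sample random permutations rather than enumerating all of them
    hits <- replicate(10000, hit_ruin(sample(returns), ruin_level))
    mean(hits)  # estimated probability of hitting the ruin level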
In response to my blog post, Brian Peterson suggested a block bootstrap to attempt to preserve any dependence that may exist. (Extra credit: is there a method / heuristic to choose the optimal block size?)
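As a rough sketch of what that might look like (not Brian's own code), boot::tsboot resamples fixed-length blocks of the series so that short-range dependence survives within each block. The block length l = 20 is an arbitrary choice; on the extra-credit question, the b.star function in the np package implements the Politis-White automatic block-length heuristic. This continues with the returns and ruin_level placeholders from the sketch above.

    library(boot)

    # Maximum drawdown (most negative point of the drawdown curve)
    max_dd <- function(r) {
      equity <- cumprod(1 + r)
      min(equity / cummax(equity) - 1)
    }

    # Fixed block bootstrap: resample contiguous blocks of length l
    bb <- tsboot(returns, statistic = max_dd, R = 1000, l = 20, sim = "fixed")
    mean(bb$t <= ruin_level)  # fraction of replicates breaching the level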
What other methods are there?
Answer
I highly recommend the Maximum Entropy Bootstrap for time series, implemented by the meboot package in R. In my work, I've stopped using both the block bootstrap and residuals bootstrap in favor of meboot, and I am pleased with the results.
Hrishikesh Vinod, the researcher behind meboot, described it in his talk at useR! 2010 last year. The algorithm is quite clever: it creates bootstrap replications that remain consistent with the correlation structure of the original time series. The algorithm is outlined in the package documentation.
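A quick way to see that claim in action (my own toy check, not the package's example, assuming meboot's default arguments) is to compare the lag-1 autocorrelation of an AR(1) series to that of its meboot replicates:

    library(meboot)

    set.seed(42)
    x <- arima.sim(model = list(ar = 0.6), n = 250)  # toy AR(1) series
    ens <- meboot(x, reps = 99)$ensemble             # one replicate per column

    # Lag-1 autocorrelation of the original vs. the replicates
    acf1 <- function(v) cor(v[-1], v[-length(v)])
    acf1(x)
    summary(apply(ens, 2, acf1))  # should cluster near the original value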
The package does the heavy lifting for you, creating all the bootstrap replications of your time series. Unfortunately, it does not take the final step of calculating the estimated confidence intervals from the replications. Read the vignette to see how to calculate the CIs from the replications. (In contrast, the boot package supplies the boot.ci function, which performs that final step.)
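Absent a boot.ci equivalent, one simple option is a percentile interval: apply your statistic to each column of the ensemble and take quantiles. A sketch, reusing the placeholder returns and the max_dd statistic from the earlier sketches:

    reps <- meboot(returns, reps = 999)$ensemble

    stats <- apply(reps, 2, max_dd)    # max drawdown of each replicate
    quantile(stats, c(0.025, 0.975))   # 95% percentile interval
    mean(stats <= ruin_level)          # estimated probability of ruin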
A final note: I trust statistical methods to estimate reasonable outcomes based on historical data. I have absolutely no faith they can predict the full range of possible outcomes, however. Please remember the false sense of security induced by VaR analysis... right up to the great financial meltdown. Now, how will you protect yourself from ruin?