My question has many parts, but I will try to keep it focused. I am primarily looking for a framework for evaluating the accuracy of a stock-focused Option Pricing Model (OPM). One of the hardest questions seems to be determining what the right answer is, even in hindsight.
Most common pricing models build on the basic framework established by Black, Scholes, and Merton: an interest rate component, some base probability distribution, and a way to relate those to time and to changes in the underlying price. The models give roughly similar answers but can differ in many details.
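For concreteness, here is the textbook Black-Scholes European call formula as a minimal R sketch, just to show where the interest rate, the lognormal distribution assumption, and time enter. This is the baseline formula, not any particular model under test:

```r
# Textbook Black-Scholes price of a European call.
# S: spot, K: strike, r: risk-free rate, sigma: volatility, tte: years to expiry
bs_call <- function(S, K, r, sigma, tte) {
  d1 <- (log(S / K) + (r + 0.5 * sigma^2) * tte) / (sigma * sqrt(tte))
  d2 <- d1 - sigma * sqrt(tte)
  S * pnorm(d1) - K * exp(-r * tte) * pnorm(d2)
}

bs_call(S = 100, K = 105, r = 0.03, sigma = 0.25, tte = 0.5)
```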
I am looking for generally accepted approaches to evaluate the output of these models vs reality. My goal is to be able to run the models on a moderately large batch of stocks (~top 300 by options liquidity) for the last 15 years and generate quantitative measurements about their effectiveness.
These measurements would let me rank the models relative to each other. The rankings would then be compared against market factors to gauge each model's overall quality and how that quality changes under particular market conditions.
At this point, my main focus is comparing the Implied Volatility output of the model to the subsequently realized reality. My assumption, and I am specifically asking for feedback on this, is that an OPM with a lower Implied Volatility vs Realized Volatility error would be the more accurate model.
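To make "future reality" concrete: the usual proxy is the annualized standard deviation of daily log returns over the option's remaining life. A minimal sketch, assuming a vector of daily closing prices covering that window and 252 trading days per year:

```r
# Realized volatility: annualized standard deviation of daily log returns
# over the option's remaining life. `prices` is a vector of daily closes.
realized_vol <- function(prices, trading_days = 252) {
  sd(diff(log(prices))) * sqrt(trading_days)
}
```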
This would not be used to rate an OPM directly. I am honestly not sure what the "correct" error rate would be or what it would mean. The goal is to compare error rates between different models side by side over the same data set, which should let me determine the comparatively better model by its error ranking.
I am following the basic theory that all option prices represent the market's pricing of future risk. The information traders use to assess that risk is incomplete and therefore carries a natural (but possibly varying) level of prediction error. Option Pricing Models add a further layer of error based on their underlying assumptions. A "better" model would be one that adds less error relative to realized volatility.
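One way this ranking could be operationalized: for each option observation, record the model's implied volatility at quote time and the realized volatility over the option's remaining life, then aggregate the errors per model, e.g. as RMSE. A minimal sketch assuming a data frame `obs` with hypothetical columns `model`, `iv`, and `rv`:

```r
# RMSE of implied minus realized volatility, grouped by model.
# obs: data frame with columns model (model name), iv, rv (vols as decimals)
rank_models <- function(obs) {
  rmse <- tapply(obs$iv - obs$rv, obs$model, function(e) sqrt(mean(e^2)))
  sort(rmse)  # lowest error ranks first
}
```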
So here are the specifics:
- Does comparing Implied Volatility to Realized Volatility make sense as a quantitative measurement of an OPM's effectiveness?
- Is there anything fundamentally wrong with this concept I should be aware of?
- Are there any generally accepted methodologies for doing this analysis?
- Are there other measurements of an OPM's effectiveness that are better or should be considered?
- What comparisons would make the results of this analysis useful for model selection? (e.g. vs interest rates, vs historic volatility, vs market shocks, vs time to expiration, vs strike distance, etc.) A sketch of one such slicing follows this list.
- What OPMs should be included in the analysis? (Taking nominations and I will share results.)
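On the comparisons point above, here is a sketch of how errors could be sliced by condition, bucketing squared IV-vs-RV errors by time to expiration and strike distance (moneyness) in base R. The column names and bucket boundaries are hypothetical placeholders:

```r
# Slice squared IV-vs-RV errors by time-to-expiry and moneyness buckets.
# obs: data frame with columns iv, rv, tte (years to expiry), moneyness (K/S)
error_by_bucket <- function(obs) {
  obs$sq_err     <- (obs$iv - obs$rv)^2
  obs$tte_bucket <- cut(obs$tte, breaks = c(0, 1/12, 0.25, 0.5, 1, Inf))
  obs$mny_bucket <- cut(obs$moneyness, breaks = c(0, 0.90, 0.97, 1.03, 1.10, Inf))
  agg <- aggregate(sq_err ~ tte_bucket + mny_bucket, data = obs, FUN = mean)
  agg$rmse <- sqrt(agg$sq_err)
  agg[, c("tte_bucket", "mny_bucket", "rmse")]
}
```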
I would appreciate constructive comments on the overall concept, but analysis approaches, prior papers, and tool suggestions would be even more helpful. I am using R, MATLAB, and C# (or .NET in general) as my toolkit for doing the work. Any pre-existing tools that work with those would be greatly appreciated.
The computational efficiency of the model isn't as relevant in this situation, but I may want to compare that at a different time.
If the results are meaningful in any way, I am happy to share them, and I would welcome collaboration if anyone is interested (leave a note in the comments).