Given a time series $u_i$ of returns (where $i = 1, \dots, t$), $\sigma_i$ is calculated from GARCH(1,1) as $\sigma_i^2 = \omega + \alpha u_{i-1}^2 + \beta \sigma_{i-1}^2$.
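For concreteness, here is a minimal Python sketch of that recursion. The seed value for $\sigma_1^2$, the placeholder return series, and the parameter values are illustrative assumptions, not taken from Hull:

```python
import numpy as np

def garch_variance(u, omega, alpha, beta):
    """GARCH(1,1) recursion: sigma_i^2 = omega + alpha * u_{i-1}^2 + beta * sigma_{i-1}^2."""
    sigma2 = np.empty_like(u, dtype=float)
    sigma2[0] = np.var(u)  # illustrative seed: the sample variance
    for i in range(1, len(u)):
        sigma2[i] = omega + alpha * u[i - 1] ** 2 + beta * sigma2[i - 1]
    return sigma2

# Illustrative parameters and a placeholder return series, not real data
rng = np.random.default_rng(0)
u = rng.standard_normal(1000) * 0.01
sigma2 = garch_variance(u, omega=1e-6, alpha=0.08, beta=0.9)
```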
Hull's book "Options, Futures and Other Derivatives" is an excellent reference. In the 6th edition, p. 470, under "How Good Is the Model?", he states:
If a GARCH model is working well, it should remove the autocorrelation. We can test whether it has done so by considering the autocorrelation structure for the variables $u_i^2/\sigma_i^2$. If these show very little autocorrelation, our model for $\sigma_i$ has succeeded in explaining autocorrelation in the $u_i^2$.
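One common way to quantify "very little autocorrelation" is a Ljung-Box test on the squared returns before and after standardizing. A sketch, assuming statsmodels and reusing `u` and `sigma2` from the sketch above:

```python
from statsmodels.stats.diagnostic import acorr_ljungbox

# Standardized squared residuals u_i^2 / sigma_i^2
z2 = u ** 2 / sigma2

# Ljung-Box test: large p-values for z2 (relative to those for u^2) suggest
# the GARCH fit has removed the autocorrelation in the squared returns
print(acorr_ljungbox(u ** 2, lags=[5, 10], return_df=True))
print(acorr_ljungbox(z2, lags=[5, 10], return_df=True))
```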
Maximum likelihood estimation for the variance ends with maximizing $\sum_{i=1}^{t} \left[ -\ln(v_i) - u_i^2/v_i \right]$, where $v_i = \sigma_i^2$.
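A sketch of that estimation, minimizing the negative of this objective with scipy and the recursion sketched above; the starting values and bounds are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, u):
    omega, alpha, beta = params
    v = garch_variance(u, omega, alpha, beta)  # recursion from the sketch above
    # Negative of sum_i [-ln(v_i) - u_i^2 / v_i]
    return np.sum(np.log(v) + u ** 2 / v)

# Illustrative starting values and bounds; the stationarity constraint
# alpha + beta < 1 is not enforced here
res = minimize(neg_log_likelihood, x0=[1e-6, 0.1, 0.8], args=(u,),
               bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)])
omega_hat, alpha_hat, beta_hat = res.x
```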
Maximizing this function is not the same as simply minimizing $u_i^2/v_i$: as $v_i$ gets smaller, $-\ln(v_i)$ gets larger, but so does $u_i^2/v_i$. Still, it makes intuitive sense that dividing the return $u_i$ by its (instantaneous or regime) volatility explains away the volatility-related component of the time series. I am looking for a mathematical or logical explanation of this.
I think Hull is not very accurate here, as the time series may have trends etc.; also, there are better approaches to extracting i.i.d. residuals from the time series than using $u_i^2/\sigma_i^2$ alone. I particularly like "Filtering Historical Simulation - Backtest Analysis" by Barone-Adesi (2000).