Saturday, December 2, 2017

simulations - Does GARCH-derived variance explain the autocorrelation in a time series?


Given a time series $u_i$ of returns (where $i = 1, \dots, t$), $\sigma_i$ is calculated from GARCH(1,1) as $\sigma_i^2 = \omega + \alpha u_{i-1}^2 + \beta \sigma_{i-1}^2$.

What is the mathematical basis to say that $u_i^2/\sigma_i^2$ will exhibit little autocorrelation in the series?
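
For concreteness, here is a minimal numerical sketch in Python (the question names no language, so this is an assumption; the parameter values for $\omega$, $\alpha$, $\beta$ and the simulated data are purely illustrative). It runs the GARCH(1,1) recursion above and compares the sample autocorrelation of $u_i^2$ with that of $u_i^2/\sigma_i^2$.

```python
# Sketch: run the GARCH(1,1) recursion sigma_i^2 = omega + alpha*u_{i-1}^2 + beta*sigma_{i-1}^2
# and compare the autocorrelation of u_i^2 with that of u_i^2 / sigma_i^2.
import numpy as np

def garch_variance(u, omega, alpha, beta):
    """Conditional variances from the GARCH(1,1) recursion."""
    var = np.empty_like(u)
    var[0] = u.var()                       # a common choice for the initial variance
    for i in range(1, len(u)):
        var[i] = omega + alpha * u[i - 1] ** 2 + beta * var[i - 1]
    return var

def sample_acf(x, nlags=10):
    """Sample autocorrelations of x at lags 1..nlags."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

# Simulate returns with volatility clustering, so that u_i^2 is autocorrelated.
rng = np.random.default_rng(0)
omega, alpha, beta = 1e-6, 0.08, 0.90      # illustrative parameter values
n = 2000
u = np.empty(n)
v = omega / (1 - alpha - beta)             # start at the long-run variance
for i in range(n):
    u[i] = np.sqrt(v) * rng.standard_normal()
    v = omega + alpha * u[i] ** 2 + beta * v

var = garch_variance(u, omega, alpha, beta)
print("ACF of u^2:           ", np.round(sample_acf(u ** 2, 5), 3))
print("ACF of u^2 / sigma^2: ", np.round(sample_acf(u ** 2 / var, 5), 3))
```

With data simulated from the model itself, the raw squared returns should show clearly positive autocorrelation at the first few lags, while the standardized series should be close to white noise.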


Hull's book "Options, Futures and Other Derivatives" is an excellent reference. In the 6th ed., p. 470, under "How Good Is the Model?", he states that



If a GARCH model is working well, it should remove the autocorrelation. We can test whether it has done so by considering the autocorrelation structure for the variables $u_i^2/\sigma_i^2$. If these show very little autocorrelation, our model for $\sigma_i$ has succeeded in explaining autocorrelation in the $u_i^2$.
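
In practice this check is often carried out with a Ljung-Box test on $u_i^2/\sigma_i^2$. Here is a sketch that computes the statistic by hand, reusing u, var and sample_acf from the snippet above; scipy is assumed to be available for the chi-squared p-value.

```python
# Ljung-Box statistic for remaining autocorrelation in the squared series,
# computed from the sample ACF. A large p-value for u^2 / sigma^2 indicates
# the GARCH model has removed the autocorrelation that is present in u^2.
import numpy as np
from scipy.stats import chi2

def ljung_box(x, nlags=10):
    n = len(x)
    rho = sample_acf(x, nlags)
    q = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, nlags + 1)))
    return q, chi2.sf(q, df=nlags)

print("Ljung-Box (stat, p-value), u^2:          ", ljung_box(u ** 2))
print("Ljung-Box (stat, p-value), u^2 / sigma^2:", ljung_box(u ** 2 / var))
```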




Maximum likelihood estimation of the variance ends with maximizing $\sum_{i=1}^{t}\left[-\ln(v_i) - \frac{u_i^2}{v_i}\right]$,

where $v_i$ is the variance, i.e. $v_i = \sigma_i^2$.
Maximizing this function does not mean that $u_i^2/v_i$ is simply being minimized: as $v_i$ gets smaller, $-\ln(v_i)$ gets larger but so does $u_i^2/v_i$, so the two terms work against each other. However, it makes intuitive sense that dividing the return $u_i$ by its (instantaneous or regime) volatility explains away the volatility-related component of the time series. I am looking for a mathematical or logical explanation of this.
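
To make the estimation step concrete, here is a sketch that maximizes this quasi-log-likelihood by minimizing its negative over $(\omega, \alpha, \beta)$. It reuses u and garch_variance from the first snippet; the starting values, bounds and the choice of scipy's L-BFGS-B optimizer are illustrative assumptions, not part of the original question.

```python
# Sketch of the maximum likelihood step: minimize the negative of
# sum_i [ -ln(v_i) - u_i^2 / v_i ] over (omega, alpha, beta).
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, u):
    omega, alpha, beta = params
    v = garch_variance(u, omega, alpha, beta)
    if not np.all(np.isfinite(v)) or np.any(v <= 0):
        return 1e10                                   # penalize invalid parameter regions
    return np.sum(np.log(v) + u ** 2 / v)

x0 = np.array([u.var() * 0.05, 0.05, 0.90])           # rough starting values
bounds = [(1e-12, None), (0.0, 1.0), (0.0, 1.0)]      # keep the variances positive
fit = minimize(neg_loglik, x0, args=(u,), bounds=bounds, method="L-BFGS-B")
print("estimated (omega, alpha, beta):", fit.x)
```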


I think Hull is not being very precise here, as the time series may have trends etc.; also, there are better approaches to extracting i.i.d. residuals from the time series than using $u_i^2/\sigma_i^2$ alone. I particularly like "Filtering Historical Simulation - Backtest Analysis" by Barone-Adesi (2000).
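
For reference, the core idea there can be sketched as follows. This is only a minimal illustration of filtered historical simulation, not the exact procedure of the cited paper; it reuses u, var and the GARCH parameters from the snippets above, and the horizon and quantile are arbitrary choices.

```python
# Filtered historical simulation, in outline: standardize returns by the fitted
# GARCH volatility to get roughly i.i.d. residuals, bootstrap those residuals,
# and rescale them by the simulated future volatility along each path.
import numpy as np

rng = np.random.default_rng(1)
z = u / np.sqrt(var)                              # standardized ("filtered") residuals

horizon, n_paths = 10, 5000
eps = rng.choice(z, size=(n_paths, horizon))      # bootstrap historical residuals
paths = np.empty((n_paths, horizon))
v = np.full(n_paths, omega + alpha * u[-1] ** 2 + beta * var[-1])  # 1-step-ahead variance
for h in range(horizon):
    paths[:, h] = np.sqrt(v) * eps[:, h]          # rescale by the simulated volatility
    v = omega + alpha * paths[:, h] ** 2 + beta * v

cum = paths.sum(axis=1)                           # simulated 10-day returns
print("1% VaR from filtered historical simulation:", -np.quantile(cum, 0.01))
```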




