Saturday, December 2, 2017

simulations - Does GARCH derived variance explain the autocorrelation in a time series?


Given a time series $u_i$ of returns (where $i=1,\dotsc,t$), $\sigma_i$ is calculated from GARCH(1,1) as $$ \sigma_i^2=\omega+\alpha u_{i-1}^2 +\beta \sigma_{i-1}^2. $$ What is the mathematical basis to say that $u_i^2/\sigma_i^2$ will exhibit little autocorrelation in the series?


Hull's book "Options, Futures and Other Derivatives" is an excellent reference. In the 6th edition, p. 470, under "How Good Is the Model?", he states:



If a GARCH model is working well, it should remove the autocorrelation. We can test whether it has done so by considering the autocorrelation structure for the variables $u_i^2/\sigma_i^2$. If these show very little autocorrelation, our model for $\sigma_i$ has succeeded in explaining autocorrelation in the $u_i^2$.
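To make Hull's diagnostic concrete, here is a minimal sketch in Python, assuming the `arch` package for the fit and `statsmodels` for the Ljung-Box test; the simulated series and its parameters are purely illustrative, not from any real market:

```python
import numpy as np
from arch import arch_model
from statsmodels.stats.diagnostic import acorr_ljungbox

# Simulate a GARCH(1,1) return series (in percent) so that u_i^2 is
# genuinely autocorrelated; omega, alpha, beta are illustrative values.
rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.10, 0.85
n = 3000
u = np.empty(n)
v = omega / (1.0 - alpha - beta)          # start at the unconditional variance
for i in range(n):
    u[i] = np.sqrt(v) * rng.standard_normal()
    v = omega + alpha * u[i] ** 2 + beta * v

# Fit GARCH(1,1) with a zero mean, so the residuals are the returns themselves.
res = arch_model(u, mean="Zero", vol="Garch", p=1, q=1).fit(disp="off")
sigma = res.conditional_volatility        # fitted sigma_i

# Ljung-Box test: u_i^2 should show autocorrelation, u_i^2/sigma_i^2 should not.
print(acorr_ljungbox(u**2, lags=[10], return_df=True))
print(acorr_ljungbox(u**2 / sigma**2, lags=[10], return_df=True))
```

On the simulated series the first test should reject strongly while the second should not; that same pattern on real returns is what Hull's "working well" means in practice.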




Maximum likelihood estimation of the variance ends with maximizing $$ \sum_{i=1}^{t}\left[-\ln(v_i) - \frac{u_i^2}{v_i}\right], $$ where $v_i = \sigma_i^2$ is the variance.
Maximizing this does not simply mean minimizing $u_i^2/v_i$: as $v_i$ gets smaller, $-\ln(v_i)$ grows while $-u_i^2/v_i$ shrinks, so the two terms pull in opposite directions. Still, it makes intuitive sense that dividing the return $u_i$ by its (instantaneous or regime) volatility explains away the volatility-related component of the time series. I am looking for a mathematical or logical explanation of this.
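As a concrete illustration, here is a small sketch of this maximization with `scipy.optimize`, assuming the same return series $u$ as above; the initial-variance choice and the starting values are common conventions, not part of the question:

```python
import numpy as np
from scipy.optimize import minimize

def garch_neg_loglik(params, u):
    """Negative of  sum_i [ -ln(v_i) - u_i^2 / v_i ]  for GARCH(1,1)."""
    omega, alpha, beta = params
    v = np.empty_like(u)
    v[0] = u.var()                            # conventional initial variance
    for i in range(1, len(u)):
        v[i] = omega + alpha * u[i - 1] ** 2 + beta * v[i - 1]
    return np.sum(np.log(v) + u ** 2 / v)     # minimize = maximize the original

# u: a return series, e.g. the simulated one above.
x0 = np.array([0.05, 0.10, 0.80])             # illustrative starting values
bounds = [(1e-8, None), (0.0, 1.0), (0.0, 1.0)]
fit = minimize(garch_neg_loglik, x0, args=(u,), bounds=bounds, method="L-BFGS-B")
omega_hat, alpha_hat, beta_hat = fit.x
```

With these bounds, $v_i \ge \omega > 0$ throughout the recursion, so the logarithm is always defined.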


I think Hull is not very accurate here, since the time series may have trends etc.; there are also better approaches to extracting i.i.d. residuals from a time series than using $u_i^2/\sigma_i^2$ alone. I particularly like "Filtering Historical Simulation - Backtest Analysis" by Barone-Adesi (2000).
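That said, the usual mathematical basis for Hull's claim, assuming the standard GARCH innovation structure, is simple: the model posits $u_i = \sigma_i z_i$, where the $z_i$ are i.i.d. with zero mean and unit variance, so that $$ \frac{u_i^2}{\sigma_i^2} = z_i^2, $$ which is an i.i.d. sequence and therefore has no autocorrelation. All of the autocorrelation in $u_i^2$ comes from the persistence of $\sigma_i^2$; dividing by a correctly specified $\sigma_i^2$ removes it, and any autocorrelation remaining in $u_i^2/\hat{\sigma}_i^2$ signals misspecification (e.g. the trends mentioned above), which is exactly what the Ljung-Box diagnostic detects.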




