Wednesday, October 5, 2016

optimization - Algorithm to fit AR(1)/GARCH(1,1) model of log-returns


I am numerically fitting an AR(1)/GARCH(1,1) process to index and stock log-returns, $r_t=\log(P_t/P_{t-1})$, where $P_t$ is the price at time $t$, and thus far I am not clear on where the observed log-returns would enter an algorithm. Several author groups have described (at least in part) the components of the AR(1)/GARCH(1,1) approach, for example:


E. Zivot: $$ r_t=\mu + \phi (r_{t-1} - \mu ) + \epsilon_t $$


Rachev et al: $$ \begin{split} r_t&=\mu+\phi r_{t-1}+\epsilon_t\\ \epsilon_t&=\sigma_t \delta_t \quad \quad (\delta_t \textrm{ is an innovation})\\ \sigma_t &= \sqrt{\alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2}\\ \end{split} $$


Brummelhuis & Kaufman: $$ \begin{split} X_t&=\mu_t+ \sigma_t \epsilon_t\\ \mu_t&=\lambda X_{t-1}\\ \sigma_t &= \sqrt{\alpha_0 + \alpha_1 (X_{t-1}-\mu_{t-1})^2 + \beta_1 \sigma_{t-1}^2}\\ \end{split} $$


Jalal & Rockinger: $$ \begin{split} \mu_t&=\phi X_{t-1}\\ \epsilon_t &=X_t - \mu_t\\ \sigma_t &= \sqrt{\alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2}\\ \end{split} $$


My approach:


$$ \begin{split} \mu_t&=\mu+\phi r_{t-1}\\ \epsilon_t&=r_t - \mu_t\\ \sigma_t &= \sqrt{\alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2}\\ \end{split} $$
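One difference among these formulations seems worth spelling out: in Zivot's version $\mu$ is the unconditional mean, whereas in my approach it acts as a regression intercept. If I expand Zivot's mean equation,

$$ r_t=\mu + \phi (r_{t-1} - \mu ) + \epsilon_t = \mu(1-\phi) + \phi r_{t-1} + \epsilon_t, $$

then the $\mu$ in my parameterization corresponds to Zivot's $\mu(1-\phi)$, and the implied unconditional mean of my model is $\mu/(1-\phi)$ for $|\phi|<1$. The two forms are otherwise the same model.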



Given the multiple descriptions above, my interpretation for an algorithm would be:


Algorithm for AR(1)/GARCH(1,1):



  1. Initialize $\sigma_1 = 1$, $\epsilon_1=0$, and draw $\mu,\phi,\alpha_0,\alpha_1,\beta_1 \sim U(0,1)\times 0.01$

  2. For $t$ = 2 to $T$:

  3. $\quad \mu_t = \mu + \phi r_{t - 1}\quad \quad (\textrm{Log-returns lag-1 input here})$

  4. $\quad \epsilon_t = r_t - \mu_t \quad \quad (\textrm{Log-returns lag-0 input here})$


  5. $\quad \sigma_t= \sqrt{\alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2}$

  6. $\quad \hat{r}_t = \phi \mu_{t-1} + \sigma_t \epsilon_t \quad \quad \textrm{OR} \quad \hat{r}_t = \mu + \phi \mu_{t-1} + \sigma_t \epsilon_t \quad \quad ???$

  7. Next $t$

  8. Calculate residuals, $e_t=r_t-\hat{r}_t$

  9. Determine $MSE=\frac{1}{T}\sum_t e_t^2$
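The steps above can be sketched in Python as a single pass that returns the MSE for a candidate parameter vector. This is only my reading of the recursion (using the first variant of step 6, without $\mu$); the handling of $t=1$, where no lagged return exists, is an assumption on my part:

```python
import numpy as np

def ar1_garch11_recursion(params, r):
    """Steps 1-9 as written (first variant of step 6).

    params = (mu, phi, alpha0, alpha1, beta1); r holds the observed
    log-returns. Returns the MSE of the one-step predictions.
    """
    mu, phi, alpha0, alpha1, beta1 = params
    T = len(r)
    mu_t = np.zeros(T)
    eps = np.zeros(T)
    sigma = np.zeros(T)
    r_hat = np.zeros(T)
    sigma[0], eps[0] = 1.0, 0.0        # step 1: sigma_1 = 1, eps_1 = 0
    mu_t[0] = mu                       # assumption: no lagged return at t = 1
    for t in range(1, T):              # step 2: t = 2..T (0-based indexing here)
        mu_t[t] = mu + phi * r[t - 1]  # step 3: lag-1 log-return enters
        eps[t] = r[t] - mu_t[t]        # step 4: lag-0 log-return enters
        sigma[t] = np.sqrt(alpha0 + alpha1 * eps[t - 1] ** 2
                           + beta1 * sigma[t - 1] ** 2)       # step 5
        r_hat[t] = phi * mu_t[t - 1] + sigma[t] * eps[t]      # step 6 (no mu)
    e = r[1:] - r_hat[1:]              # step 8: residuals
    return float(np.mean(e ** 2))      # step 9: MSE
```

The first observation only seeds the recursion, so it is excluded from the residual sum.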


The algorithm proposed above is essentially the recursive part that calculates the predicted log-returns $\hat{r}_t$ from the input observed returns $r_t$. Innovations or random quantiles from a probability distribution [such as $N(0,1)$ or $t(\nu)$] would not be employed here, since we are fitting a model, not simulating one. At each iteration, the goodness-of-fit of the proposed parameters would be measured by the objective $MSE=\frac{1}{T}\sum_t e_t^2$, which would be minimized via an optimization technique such as non-linear regression, finite differencing, or MLE. Metaheuristics could be used as well, with chromosome (particle) values for $(\mu,\phi,\alpha_0,\alpha_1,\beta_1)$ initialized at the first generation.
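As one hedged illustration of the optimization step, the recursion can be wrapped as an objective and handed to a generic derivative-free optimizer; here I use SciPy's Nelder-Mead on simulated stand-in returns, with the same $U(0,1)\times 0.01$ initialization as step 1. The function name, the simulated data, and the variance floor are my own illustrative choices, not part of the algorithm above:

```python
import numpy as np
from scipy.optimize import minimize

def mse_objective(params, r):
    # Compact scalar restatement of steps 2-9 (first variant of step 6).
    mu, phi, alpha0, alpha1, beta1 = params
    sigma_prev, eps_prev, mu_prev = 1.0, 0.0, mu
    sse = 0.0
    for t in range(1, len(r)):
        mu_t = mu + phi * r[t - 1]                 # step 3
        eps_t = r[t] - mu_t                        # step 4
        var = alpha0 + alpha1 * eps_prev ** 2 + beta1 * sigma_prev ** 2
        sigma_t = np.sqrt(max(var, 1e-12))         # step 5, floored for safety
        r_hat = phi * mu_prev + sigma_t * eps_t    # step 6
        sse += (r[t] - r_hat) ** 2                 # steps 8-9
        sigma_prev, eps_prev, mu_prev = sigma_t, eps_t, mu_t
    return sse / (len(r) - 1)

rng = np.random.default_rng(1)
r = rng.normal(0.0005, 0.01, size=500)   # stand-in for observed log-returns
x0 = rng.uniform(0.0, 1.0, 5) * 0.01     # step-1 random initialization
res = minimize(mse_objective, x0, args=(r,), method="Nelder-Mead",
               options={"maxiter": 500})
```

Passing `method="BFGS"` instead would estimate gradients by finite differencing, matching the finite-differencing idea above; constraints such as $\alpha_1+\beta_1<1$ would need a constrained method or a penalty term.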


In terms of comparing results with R, MATLAB, SAS, etc., the parameterization would be:


mu = $\mu$

ar1 = $\phi$

garch0 = $\alpha_0$

garch1 = $\alpha_1$

garch2 = $\beta_1$


I am not sure, however, whether the unconditional mean $\mu$ is needed in line 6 of the algorithm. Please comment on the correctness of the algorithm, and possibly suggest coding changes. Again, the goal is to solve for the parameters algorithmically using numerical methods, not with R, MATLAB, SAS, etc.



