5.6 AR(p) as MA(\(\infty\))
We have already seen that an MA model can be written as an AR(\(\infty\)) model. Now let’s write an AR model as an MA(\(\infty\)) model. This is the main connection between the two classes of models.
5.6.1 AR(1) Model
Let’s return to an AR(1) model momentarily.
\[Y_t = \phi_1Y_{t-1} + W_t\] By successive substitution, we can rewrite this model as an infinite-order MA model,
\[Y_t = \phi_1(\phi_1Y_{t-2} + W_{t-1}) + W_t =\phi_1^2Y_{t-2} + \phi_1W_{t-1} + W_t \] \[ =\phi_1^2(\phi_1Y_{t-3} + W_{t-2}) + \phi_1W_{t-1} + W_t =\phi_1^3Y_{t-3} + \phi_1^2W_{t-2} + \phi_1W_{t-1} + W_t \] \[ =W_t + \phi_1W_{t-1} +\phi_1^2W_{t-2} +\phi_1^3W_{t-3}+\cdots \] This can also be shown with a backward shift operator,
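The successive substitution above can be checked numerically: simulate an AR(1) path and compare the last observation with a truncated version of the infinite sum. A minimal sketch (here in Python with NumPy as a stand-in for the R examples that come later; the value of \(\phi_1\) and the sample size are illustrative choices, not from the text):

```python
import numpy as np

# Numerical check of the successive substitution.
# phi1 and the sample size are illustrative choices, not from the text.
rng = np.random.default_rng(0)
phi1, n = 0.6, 1000
w = rng.normal(size=n)

# Simulate the AR(1) recursion Y_t = phi1 * Y_{t-1} + W_t.
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi1 * y[t - 1] + w[t]

# Truncated MA(infinity) reconstruction of the last observation:
# Y_t ~ W_t + phi1 * W_{t-1} + ... + phi1^(J-1) * W_{t-J+1}.
J = 100  # phi1^J is negligible here
y_ma = sum(phi1 ** j * w[n - 1 - j] for j in range(J))
print(y[-1], y_ma)  # the two values agree to near machine precision
```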
\[(1-\phi_1 B)Y_t = W_t \implies Y_t = (1-\phi_1 B)^{-1}W_t\] \[Y_t = (1-\phi_1 B)^{-1}W_t= (1+\phi_1 B + \phi_1^2 B^2 + \cdots )W_t\] This converges when \(|\phi_1|<1\) by the geometric series rule that \(\sum^\infty_{i=0}r^i = (1-r)^{-1}\) if \(|r|<1\).
With this format, it is clear that for an AR(1) process,
\[E(Y_t) = E(W_t +\phi_1W_{t-1} + \phi_1^2 W_{t-2} + \cdots ) = 0\]
\[Var(Y_t) = Var(W_t +\phi_1W_{t-1} + \phi_1^2 W_{t-2} + \cdots ) = \sigma^2_w(1+\phi_1^2 + \phi_1^4 + \cdots) =\frac{\sigma^2_w}{1-\phi_1^2}\text{ if } |\phi_1|<1\]
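These moment formulas can be sanity-checked against the truncated MA(\(\infty\)) weights. A quick sketch (Python/NumPy; \(\phi_1\) and \(\sigma^2_w\) are illustrative values, not from the text):

```python
import numpy as np

# Check Var(Y_t) = sigma2_w / (1 - phi1^2) against the truncated MA weights.
# phi1 and sigma2_w are illustrative values, not from the text.
phi1, sigma2_w = 0.6, 1.0

J = 200
psi = phi1 ** np.arange(J)               # MA weights 1, phi1, phi1^2, ...
var_trunc = sigma2_w * np.sum(psi ** 2)  # sigma2_w * (1 + phi1^2 + phi1^4 + ...)
var_exact = sigma2_w / (1 - phi1 ** 2)
print(var_trunc, var_exact)              # essentially identical
```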
5.6.2 AR(p) Model
A general AR(p) model can be written with a backward shift operator,
\[(1-\phi_1 B-\phi_2 B^2 -\cdots - \phi_p B^p)Y_t = W_t \implies Y_t = (1-\phi_1 B-\phi_2 B^2 -\cdots - \phi_p B^p)^{-1}W_t\]
where \((1-\phi_1 B-\phi_2 B^2 -\cdots - \phi_p B^p)^{-1} = (1 + \beta_1 B + \beta_2B^2 +\cdots)\) (this mathematical statement can be proved, but we won’t get into that in this course). So,
\[Y_t = (1 + \beta_1 B + \beta_2B^2 +\cdots)W_t\]
With this format, it is clear that for an AR(p) process,
\[E(Y_t) = 0\]
\[Var(Y_t) = \sigma^2_w(1+\beta_1^2 + \beta_2^2 + \cdots)\text{ which will be finite if } \sum \beta_i^2\text{ converges}\]
The autocovariance function is given by
\[\Sigma_Y(k) = \sigma^2_w \sum^{\infty}_{i=0}\beta_i \beta_{i+k}\text{ where } \beta_0=1\]
which will converge if \(\sum |\beta_i|\) converges. But figuring out what the \(\beta_i\)’s should be is hard. There is an easier way to do this.
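For the curious, the \(\beta_i\)’s can be generated numerically: matching powers of \(B\) in \((1-\phi_1 B-\cdots-\phi_p B^p)(1 + \beta_1 B + \beta_2B^2 +\cdots) = 1\) gives the recursion \(\beta_j = \sum_{i=1}^{\min(j,p)}\phi_i\beta_{j-i}\) with \(\beta_0 = 1\). A sketch (Python/NumPy; the AR(2) coefficients are illustrative choices, not from the text):

```python
import numpy as np

# Hypothetical AR(2) coefficients (illustrative, chosen to be stationary).
phi = np.array([0.5, 0.3])
sigma2_w = 1.0
p = len(phi)

# Recursion from matching powers of B:
# beta_j = sum_{i=1}^{min(j,p)} phi_i * beta_{j-i}, with beta_0 = 1.
J = 500
beta = np.zeros(J)
beta[0] = 1.0
for j in range(1, J):
    for i in range(1, min(j, p) + 1):
        beta[j] += phi[i - 1] * beta[j - i]

# Truncated autocovariance Sigma_Y(k) = sigma2_w * sum_i beta_i * beta_{i+k}.
def autocov(k):
    return sigma2_w * np.sum(beta[: J - k] * beta[k:])

print(autocov(0), autocov(1))
```

As a check on the sketch, the implied lag-1 autocorrelation `autocov(1) / autocov(0)` matches the Yule-Walker value \(\phi_1/(1-\phi_2)\) for an AR(2).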
5.6.3 AR(p) Estimation: Yule-Walker Equations
Let’s go back to the original model statement (but let’s assume \(\delta =0\)),
\[Y_t = \phi_1Y_{t-1}+ \phi_2Y_{t-2}+\cdots + \phi_pY_{t-p} +W_t\]
Multiply that model through by \(Y_{t-k}\),
\[Y_tY_{t-k} = \phi_1Y_{t-1}Y_{t-k}+ \phi_2Y_{t-2}Y_{t-k}+\cdots + \phi_pY_{t-p}Y_{t-k} +W_tY_{t-k}\]
Then take the expectation of both sides and divide by \(Var(Y_t)\) (assuming it is finite),
\[\frac{E(Y_tY_{t-k})}{Var(Y_t)} = \frac{\phi_1E(Y_{t-1}Y_{t-k})}{Var(Y_t)} + \frac{\phi_2E(Y_{t-2}Y_{t-k})}{Var(Y_t)}+ \cdots + \frac{\phi_pE(Y_{t-p}Y_{t-k})}{Var(Y_t)} +\frac{E(W_tY_{t-k})}{Var(Y_t)} \]
Because \(W_t\) is uncorrelated with the past values \(Y_{t-k}\) for \(k \geq 1\), \(E(W_tY_{t-k}) = 0\). Assuming the process is stationary (and mean zero, so each \(E(Y_{t-j}Y_{t-k})/Var(Y_t)\) is an autocorrelation \(\rho_{k-j}\)), this simplifies to
\[\rho_k = \phi_1\rho_{k-1}+ \phi_2\rho_{k-2}+ \cdots + \phi_p\rho_{k-p} \text{ for } k = 1,2,...\]
If you plug in estimates of the autocorrelation function for \(k=1,...,p\), recognize that \(\rho_{-k} = \rho_k\), and solve the resulting system of \(p\) linear equations, you’ll get estimates of \(\phi_1,...,\phi_p\). This is a well-posed problem that can be written and solved in matrix notation.
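As a concrete illustration of that matrix setup, here is a sketch that simulates an AR(2) series and solves the Yule-Walker system from the sample autocorrelations (Python/NumPy stand-in; the coefficients, seed, and sample size are illustrative choices, not from the text):

```python
import numpy as np

# Solve the Yule-Walker equations for a simulated AR(2) series.
# phi_true, the seed, and the sample size are illustrative choices.
rng = np.random.default_rng(42)
phi_true = np.array([0.5, 0.3])
n = 20000
w = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi_true[0] * y[t - 1] + phi_true[1] * y[t - 2] + w[t]

# Sample autocorrelations rho_hat(0), ..., rho_hat(p).
p = 2
yc = y - y.mean()
acov = np.array([np.sum(yc[: n - k] * yc[k:]) / n for k in range(p + 1)])
rho = acov / acov[0]

# Yule-Walker in matrix form: R phi = r, with R_{ij} = rho_{|i-j|}
# (symmetric, using rho_{-k} = rho_k) and r = (rho_1, ..., rho_p)'.
R = np.array([[rho[abs(i - j)] for j in range(p)] for i in range(p)])
phi_hat = np.linalg.solve(R, rho[1 : p + 1])
print(phi_hat)  # should be close to phi_true
```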
In practice, you will have the computer estimate these models for you. Keep reading for R examples.
Sample ACF for AR(p): decays to zero