## Time Series Analysis and Control Examples

### Computational Details

#### Least Squares and Householder Transformation

Consider the univariate AR(p) process

$$ y_t = \alpha_0 + \sum_{i=1}^{p} \alpha_i y_{t-i} + \epsilon_t $$

Define the design matrix $X$:

$$ X = \begin{bmatrix} 1 & y_p & \cdots & y_1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & y_{n-1} & \cdots & y_{n-p} \end{bmatrix} $$

Let $y = (y_{p+1},\ldots,y_n)'$. The least squares estimate, $\hat{a} = (X'X)^{-1}X'y$, is the approximation to the maximum likelihood estimate of $a = (\alpha_0,\alpha_1,\ldots,\alpha_p)'$ if the $\epsilon_t$ are assumed to be Gaussian error disturbances. Combining $X$ and $y$ as

$$ Z = [\,X \;\; y\,] $$

the matrix $Z$ can be decomposed as

$$ Z = QU = Q \begin{bmatrix} R & w_1 \\ 0 & w_2 \end{bmatrix} $$

where $Q$ is an orthogonal matrix, $R$ is an upper triangular matrix, $w_1 = (w_1,\ldots,w_{p+1})'$, and $w_2 = (w_{p+2},0,\ldots,0)'$.

The least squares estimate using the Householder transformation is computed by solving the linear system

$$ R\hat{a} = w_1 $$

The unbiased residual variance estimate is

$$ \hat{\sigma}^2 = \frac{w_{p+2}^2}{n-p} $$

and

$$ \mathrm{AIC} = (n-p)\log(\hat{\sigma}^2) + 2(p+1) $$

In practice, least squares estimation does not require the orthogonal matrix $Q$. The TIMSAC subroutines compute the upper triangular matrix $R$ without computing the matrix $Q$.
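The computation above can be sketched in NumPy, whose `numpy.linalg.qr` routine uses Householder reflections; with `mode='r'` the orthogonal factor $Q$ is never formed, mirroring the TIMSAC approach. The function name `ar_least_squares` and the simulated series are illustrative, not part of TIMSAC.

```python
import numpy as np

def ar_least_squares(y, p):
    """Least squares AR(p) fit (with intercept) via Householder QR.

    A sketch of the computation described above: form Z = [X | y],
    take only the triangular factor, solve R a = w1, and read the
    residual sum of squares off w_{p+2}^2.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Design matrix: intercept column plus p lagged columns.
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - i:n - i] for i in range(1, p + 1)])
    target = y[p:]
    # Householder QR of Z = [X | y]; mode='r' never forms Q.
    Z = np.column_stack([X, target])
    U = np.linalg.qr(Z, mode='r')
    R, w1 = U[:p + 1, :p + 1], U[:p + 1, p + 1]
    a_hat = np.linalg.solve(R, w1)        # R a = w1
    w_p2 = U[p + 1, p + 1]                # residual element w_{p+2}
    sigma2 = w_p2 ** 2 / (n - p)          # residual variance estimate
    aic = (n - p) * np.log(sigma2) + 2 * (p + 1)
    return a_hat, sigma2, aic
```

Because the residual norm is carried in the last column of the triangular factor, the variance estimate and AIC come out of the same decomposition with no extra pass over the data.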

#### Bayesian Constrained Least Squares

Consider the additive time series model

$$ y_t = T_t + S_t + \epsilon_t, \qquad \epsilon_t \sim N(0,\sigma^2) $$

Practically, it is not possible to estimate the parameters $a = (T_1,\ldots,T_T,S_1,\ldots,S_T)'$, since the number of parameters exceeds the number of available observations. Let $\nabla_L^m$ denote the seasonal difference operator with $L$ seasons and degree $m$; that is, $\nabla_L^m = (1-B^L)^m$, where $B$ is the backshift operator. Suppose that $T = L \cdot n$. Some constraints on the trend and seasonal components need to be imposed such that the sums of squares of $\nabla^k T_t$, $\nabla_L^m S_t$, and $\sum_{i=0}^{L-1} S_{t-i}$ are small. The constrained least squares estimates are obtained by minimizing

$$ \sum_{t=1}^{T} \left\{ (y_t - T_t - S_t)^2 + d^2\left[ s^2 (\nabla^k T_t)^2 + (\nabla_L^m S_t)^2 + z^2 \Big( \sum_{i=0}^{L-1} S_{t-i} \Big)^2 \right] \right\} $$

Using matrix notation, the criterion is

$$ (y - Ma)'(y - Ma) + (a - a_0)'D'D(a - a_0) $$

where $M = [\,I_T \;\; I_T\,]$, $y = (y_1,\ldots,y_T)'$, and $a_0$ is the initial guess of $a$. The matrix $D$ is a $3T \times 2T$ control matrix whose structure varies according to the order of differencing in trend and season,

$$ D = d \begin{bmatrix} E_m & 0 \\ zF & 0 \\ 0 & sG_k \end{bmatrix} $$

where $E_m = C_m \otimes I_L$, the matrix $F$ forms the moving sums $\sum_{i=0}^{L-1} S_{t-i}$, and $G_k$ ($k = 1,2,3$) is the $T \times T$ lower triangular matrix whose rows apply the difference operator $(1-B)^k$; for example,

$$ G_1 = \begin{bmatrix} 1 & & & \\ -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{bmatrix} $$

The $n \times n$ matrix $C_m$ has the same structure as the matrix $G_m$, and $I_L$ is the $L \times L$ identity matrix. The solution of the constrained least squares method is equivalent to that of maximizing the function

$$ L(a) = \exp\left\{ -\frac{1}{2\sigma^2}(y - Ma)'(y - Ma) \right\} \exp\left\{ -\frac{1}{2\sigma^2}(a - a_0)'D'D(a - a_0) \right\} $$

Therefore, the PDF of the data $y$ is

$$ f(y \mid \sigma^2, a) = (2\pi)^{-T/2} \sigma^{-T} \exp\left\{ -\frac{1}{2\sigma^2}(y - Ma)'(y - Ma) \right\} $$

The prior PDF of the parameter vector $a$ is

$$ \pi(a \mid D, \sigma^2, a_0) = (2\pi)^{-T} \sigma^{-2T} |D'D|^{1/2} \exp\left\{ -\frac{1}{2\sigma^2}(a - a_0)'D'D(a - a_0) \right\} $$

When the constant $d$ is known, the estimate $\hat{a}$ of $a$ is the mean of the posterior distribution, where the posterior PDF of the parameter $a$ is proportional to the function $L(a)$. It follows that $\hat{a}$ is the minimizer of $\| g(a \mid d) \|^2$, where

$$ g(a \mid d) = \begin{bmatrix} y \\ Da_0 \end{bmatrix} - \begin{bmatrix} M \\ D \end{bmatrix} a $$

The value of $d$ is determined by the minimum ABIC procedure. The ABIC is defined as

$$ \mathrm{ABIC} = T \log\left( \frac{1}{T}\| g(\hat{a} \mid d) \|^2 \right) + 2\left\{ \log[\det(D'D + M'M)] - \log[\det(D'D)] \right\} $$
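As a sketch, the posterior mean can be computed by solving the stacked least squares problem for $g(a \mid d)$ directly. The helper names `constrained_ls` and `abic` below are illustrative (not TIMSAC routines), the smoothness constant $d$ is assumed to be absorbed into `D`, and `D` is assumed to have full column rank so that $\det(D'D) > 0$.

```python
import numpy as np

def constrained_ls(y, M, D, a0):
    """Minimize ||y - M a||^2 + ||D (a - a0)||^2 by solving the
    stacked system g(a|d) = [y; D a0] - [M; D] a in one least
    squares pass (a sketch of the posterior-mean computation)."""
    A = np.vstack([M, D])
    b = np.concatenate([y, D @ a0])
    a_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a_hat

def abic(y, M, D, a0):
    """ABIC = T log((1/T)||g(a_hat|d)||^2)
            + 2{log det(D'D + M'M) - log det(D'D)},
    using slogdet for numerically stable log-determinants."""
    T = len(y)
    a_hat = constrained_ls(y, M, D, a0)
    g = np.concatenate([y - M @ a_hat, D @ (a0 - a_hat)])
    _, logdet_post = np.linalg.slogdet(D.T @ D + M.T @ M)
    _, logdet_prior = np.linalg.slogdet(D.T @ D)
    return T * np.log(g @ g / T) + 2 * (logdet_post - logdet_prior)
```

Minimizing `abic` over a grid of $d$ values (rebuilding `D` for each) reproduces the minimum ABIC procedure described above.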

#### State Space and Kalman Filter Method

In this section, the mathematical formulas for state space modeling are introduced. The Kalman filter algorithms are derived from the state space model. As an example, the state space model of the TSDECOMP subroutine is formulated.

Define the following state space model:

$$ x_t = F x_{t-1} + G w_t $$

$$ y_t = H_t x_t + \epsilon_t $$

where $\epsilon_t \sim N(0, \sigma^2)$ and $w_t \sim N(0, Q)$. If the observations $(y_1,\ldots,y_T)$ and the initial conditions $x_{0|0}$ and $P_{0|0}$ are available, the one-step predictor $x_{t|t-1}$ of the state vector $x_t$ and its mean square error (MSE) matrix $P_{t|t-1}$ are written as

$$ x_{t|t-1} = F x_{t-1|t-1} $$

$$ P_{t|t-1} = F P_{t-1|t-1} F' + G Q G' $$

Using the current observation, the filtered value of $x_t$ and its variance are updated:

$$ x_{t|t} = x_{t|t-1} + K_t e_t $$

$$ P_{t|t} = (I - K_t H_t) P_{t|t-1} $$

where $e_t = y_t - H_t x_{t|t-1}$ and $K_t = P_{t|t-1} H_t' (H_t P_{t|t-1} H_t' + \sigma^2)^{-1}$. The log-likelihood function is computed as

$$ \ell = -\frac{1}{2} \sum_{t=1}^{T} \log(2\pi v_{t|t-1}) - \frac{1}{2} \sum_{t=1}^{T} \frac{e_t^2}{v_{t|t-1}} $$

where $v_{t|t-1}$ is the conditional variance of the one-step prediction error $e_t$.
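The recursions above can be sketched for a scalar observation equation as follows; the function name `kalman_loglik` is illustrative, and the observation row may be time-invariant or supplied as a list of per-step rows $H_t$.

```python
import numpy as np

def kalman_loglik(y, F, G, H, Q, sigma2, x0, P0):
    """One-pass Kalman filter for x_t = F x_{t-1} + G w_t,
    y_t = H_t x_t + eps_t (scalar y_t), returning the
    log-likelihood from the prediction errors e_t and their
    conditional variances v_{t|t-1}. A sketch, not TIMSAC code."""
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    GQG = G @ Q @ G.T
    ll = 0.0
    for t, yt in enumerate(y):
        Ht = H[t] if isinstance(H, list) else H  # time-varying rows allowed
        # Prediction step: x_{t|t-1}, P_{t|t-1}
        x = F @ x
        P = F @ P @ F.T + GQG
        # One-step prediction error and its variance v_{t|t-1}
        e = yt - Ht @ x
        v = Ht @ P @ Ht.T + sigma2
        # Update step with Kalman gain K_t
        K = P @ Ht.T / v
        x = x + K * e
        P = P - np.outer(K, Ht @ P)
        ll += -0.5 * (np.log(2 * np.pi * v) + e * e / v)
    return ll
```

Because $y_t$ is scalar here, the innovation variance $v$ is a number and no matrix inversion is needed in the gain.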

Consider the additive time series decomposition

$$ y_t = T_t + S_t + TD_t + u_t + x_t'\beta_t + \epsilon_t $$

where $x_t$ is a $(K \times 1)$ regressor vector and $\beta_t$ is a $(K \times 1)$ time-varying coefficient vector. Each component has the following constraints:

$$ \nabla^k T_t = w_{1t}, \qquad w_{1t} \sim N(0, \tau_1^2) $$

$$ \nabla_L^m S_t = w_{2t}, \qquad w_{2t} \sim N(0, \tau_2^2) $$

$$ u_t = \sum_{i=1}^{p} \alpha_i u_{t-i} + w_{3t}, \qquad w_{3t} \sim N(0, \tau_3^2) $$

$$ \beta_{jt} = \beta_{j,t-1} + w_{3+j,t}, \qquad w_{3+j,t} \sim N(0, \tau_{3+j}^2), \quad j = 1,\ldots,K $$

where $\nabla^k = (1-B)^k$ and $\nabla_L^m = (1-B^L)^m$. The AR component $u_t$ is assumed to be stationary. The trading day component $TD_t(i)$ represents the number of the $i$th day of the week in time $t$. If $k=3$, $p=3$, $m=1$, and $L=12$ (monthly data),

$$ T_t = 3T_{t-1} - 3T_{t-2} + T_{t-3} + w_{1t} $$

$$ \sum_{i=0}^{11} S_{t-i} = w_{2t} $$

$$ u_t = \sum_{i=1}^{3} \alpha_i u_{t-i} + w_{3t} $$
The state vector is defined as

$$ x_t = (T_t, T_{t-1}, T_{t-2}, S_t, S_{t-1}, \ldots, S_{t-10}, u_t, u_{t-1}, u_{t-2}, \gamma_{1t}, \ldots, \gamma_{6t})' $$

The matrix $F$ is block diagonal,

$$ F = \mathrm{diag}(F_1, F_2, F_3, F_4) $$

where

$$ F_1 = \begin{bmatrix} 3 & -3 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} $$

$$ F_2 = \begin{bmatrix} -\mathbf{1}' & -1 \\ I_{10} & 0 \end{bmatrix} $$

$$ F_3 = \begin{bmatrix} \alpha_1 & \alpha_2 & \alpha_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} $$

$$ F_4 = I_6 $$

$$ \mathbf{1}' = (1, 1, \ldots, 1) $$

The matrix $G$ can be denoted as

$$ G = \begin{bmatrix} g_1 & 0 & 0 \\ 0 & g_2 & 0 \\ 0 & 0 & g_3 \\ 0 & 0 & 0 \end{bmatrix} $$

where $g_1 = g_3 = (1, 0, 0)'$ and $g_2 = (1, 0, \ldots, 0)'$ is an $11 \times 1$ vector. Finally, the matrix $H_t$ is time-varying,

$$ H_t = (1, 0, 0, 1, 0, \ldots, 0, 1, 0, 0, h_t') $$

where

$$ h_t = \big(D_t(1), \ldots, D_t(6)\big)', \qquad D_t(i) = TD_t(i) - TD_t(7), \quad i = 1, \ldots, 6 $$
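For concreteness, the matrices of this example ($k=3$, $p=3$, $m=1$, $L=12$, six trading-day coefficients, state dimension $3+11+3+6=23$) can be assembled as below. The helper names `tsdecomp_matrices` and `H_t` are illustrative, and the AR coefficients are taken as given.

```python
import numpy as np

def block_diag(*blocks):
    """Assemble a block-diagonal matrix (small local helper)."""
    n = sum(b.shape[0] for b in blocks)
    m = sum(b.shape[1] for b in blocks)
    out = np.zeros((n, m))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

def tsdecomp_matrices(alpha):
    """F and G for the k=3, p=3, m=1, L=12 example above;
    alpha = (a1, a2, a3) are the AR coefficients."""
    a1, a2, a3 = alpha
    F1 = np.array([[3.0, -3.0, 1.0], [1, 0, 0], [0, 1, 0]])
    F2 = np.zeros((11, 11))          # seasonal: S_t = -(S_{t-1}+...+S_{t-11})
    F2[0, :] = -1.0
    F2[1:, :-1] = np.eye(10)         # shift the seasonal lags down
    F3 = np.array([[a1, a2, a3], [1, 0, 0], [0, 1, 0]])
    F4 = np.eye(6)                   # trading-day coefficients held fixed
    F = block_diag(F1, F2, F3, F4)
    G = np.zeros((23, 3))            # three disturbance sources w1, w2, w3
    G[0, 0] = G[3, 1] = G[14, 2] = 1.0
    return F, G

def H_t(day_counts):
    """Observation row H_t; day_counts[i] is the count of the
    (i+1)th weekday in month t, so D_t(i) = TD_t(i) - TD_t(7)."""
    h = np.array(day_counts[:6], float) - day_counts[6]
    return np.concatenate([[1.0, 0, 0], [1.0] + [0.0] * 10,
                           [1.0, 0, 0], h])
```

Feeding `F`, `G`, and the per-month rows `H_t(...)` into a Kalman filter recovers the decomposition components from the filtered state.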
