Da Silva Method (Variance-Component Moving Average Model)
Suppose you have a sample of observations at T time
points on each of N cross-sectional units.
The Da Silva method assumes that the observed value of the dependent variable
at the tth time point on the ith cross-sectional unit
can be expressed as

y_{it} = x_{it}'β + a_{i} + b_{t} + e_{it},   i = 1, ... , N; t = 1, ... , T

where

 x_{it}' = (x_{it1}, ... , x_{itp}) is a vector of explanatory variables for the tth time point
and ith cross-sectional unit


 β
is the vector of parameters

 a_{i}
is a time-invariant, cross-sectional unit effect

 b_{t}
is a cross-sectionally invariant time effect

 e_{it}
is a residual effect unaccounted for by the explanatory
variables and the specific time and cross-sectional
unit effects
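The model above can be made concrete with a small simulation sketch. This is not SAS code, and all dimensions and parameter values are illustrative; it simply generates data of the form y_{it} = x_{it}'β + a_{i} + b_{t} + e_{it} with an MA(m) residual as described below.

```python
import numpy as np

# Minimal simulation sketch of y_it = x_it' beta + a_i + b_t + e_it.
# All dimensions and parameter values here are illustrative, not SAS defaults.
rng = np.random.default_rng(42)
N, T, p, m = 4, 10, 2, 1
beta = np.array([1.0, -0.5])
alpha = np.array([1.0, 0.6])          # MA(1) residual coefficients alpha_0, alpha_1

a = rng.normal(0.0, 1.0, size=N)      # time-invariant unit effects a_i
b = rng.normal(0.0, 0.5, size=T)      # cross-sectionally invariant time effects b_t
x = rng.normal(size=(N, T, p))        # explanatory variables x_it

# e_it = sum_{k=0}^{m} alpha_k * eps_{t-k}, with m pre-sample noise values
eps = rng.normal(size=(N, T + m))
e = np.array([[sum(alpha[k] * eps[i, t - k] for k in range(m + 1))
               for t in range(m, T + m)]
              for i in range(N)])

y = x @ beta + a[:, None] + b[None, :] + e    # shape (N, T)
```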
Since the observations are arranged first by cross sections,
then by time periods within cross sections,
these equations can be written in matrix notation as

y = Xβ + (a ⊗ 1_{T}) + (1_{N} ⊗ b) + e

where

y =
(y_{11}, ... ,y_{1T}, y_{21}, ... ,y_{NT})'

X = (x_{11}, ... ,x_{1T},x_{21}, ...
,x_{NT})'

a = (a_{1} ... a_{N})'

b = (b_{1} ... b_{T})'

e =
(e_{11}, ... ,e_{1T}, e_{21}, ... ,e_{NT})'
Here 1_{N} and 1_{T} are N×1 and T×1 vectors with all elements equal to 1,
and ⊗ denotes the Kronecker product.
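To make the stacking concrete, here is an illustrative numpy check (not part of the procedure) that the Kronecker products place each effect on the right observation when the data are ordered by cross section, then by time:

```python
import numpy as np

# Illustrative check: with observations stacked by cross section, then by
# time, a ⊗ 1_T repeats each unit effect a_i over that unit's T periods,
# and 1_N ⊗ b tiles the time effects b_t once per cross section.
N, T = 3, 4
rng = np.random.default_rng(0)
a = rng.normal(size=N)               # unit effects a_1, ..., a_N
b = rng.normal(size=T)               # time effects b_1, ..., b_T

unit_part = np.kron(a, np.ones(T))   # (a ⊗ 1_T), length NT
time_part = np.kron(np.ones(N), b)   # (1_N ⊗ b), length NT

# Observation (i, t) occupies position i*T + t in the stacked vector.
for i in range(N):
    for t in range(T):
        assert unit_part[i * T + t] == a[i]
        assert time_part[i * T + t] == b[t]
```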
It is assumed that:

1.  x_{it} is a sequence of nonstochastic,
known p×1 vectors in ℝ^{p} whose
elements are uniformly bounded in ℝ^{p}.
The matrix X has full column rank p.
2.  β is a p×1 constant
vector of unknown parameters.
3.  a is a vector of uncorrelated random variables such that
E(a_{i}) = 0 and var(a_{i}) = σ_{a}^{2},
σ_{a}^{2} > 0, i = 1, ... , N.
4.  b is a vector of uncorrelated random variables such that
E(b_{t}) = 0 and var(b_{t}) = σ_{b}^{2},
σ_{b}^{2} > 0, t = 1, ... , T.
5.  e_{i} = (e_{i1}, ... , e_{iT})'
is a sample of a realization of a finite moving average time series of
order m < T − 1 for each i; hence,

e_{it} = α_{0}ε_{t} + α_{1}ε_{t-1} + ... + α_{m}ε_{t-m},   t = 1, ... , T; i = 1, ... , N

where α_{0}, α_{1}, ... , α_{m} are
unknown constants such that α_{0} ≠ 0 and
α_{m} ≠ 0, and {ε_{s} : s = 0, ±1, ±2, ... }
is a white noise process, that is,
a sequence of uncorrelated random variables with
E(ε_{s}) = 0 and E(ε_{s}^{2}) = σ_{ε}^{2}.

6.  The sets of random variables
{a_{i}}_{i=1}^{N}, {b_{t}}_{t=1}^{T}, and
{e_{it}}_{t=1}^{T} for i = 1, ... , N are mutually
uncorrelated.
7.  The random terms have normal distributions:
a_{i} ∼ N(0, σ_{a}^{2}), b_{t} ∼ N(0, σ_{b}^{2}), and
ε_{t-k} ∼ N(0, σ_{ε}^{2}), for i = 1, ... , N; t = 1, ... , T; k = 1, ... , m.
If assumptions 1–6 are satisfied, then

E(y) = Xβ

and

var(y) = σ_{a}^{2}(I_{N} ⊗ J_{T}) + σ_{b}^{2}(J_{N} ⊗ I_{T}) + I_{N} ⊗ Ψ_{T}

where Ψ_{T} is a T×T matrix with elements ψ(|t − s|)
as follows:

ψ(k) = σ_{ε}^{2} Σ_{j=0}^{m-k} α_{j}α_{j+k}   if k ≤ m
ψ(k) = 0                                      if k > m

where k = |t − s|. For the definition of I_{N},
I_{T}, J_{N}, and J_{T},
see the "Fuller-Battese Method" section earlier in this chapter.
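As a concrete illustration (a sketch, not SAS code; the MA coefficients and σ_{ε}^{2} below are made-up values), the autocovariances ψ(k) and the matrix Ψ_{T} can be computed as:

```python
import numpy as np

# Sketch of the MA(m) autocovariance psi(k) and the T x T matrix Psi_T with
# elements psi(|t - s|); alpha and sigma_e2 are illustrative, not estimates.
def psi(k, alpha, sigma_e2):
    """psi(k) = sigma_e^2 * sum_{j=0}^{m-k} alpha_j alpha_{j+k} if k <= m, else 0."""
    m = len(alpha) - 1
    if k > m:
        return 0.0
    return sigma_e2 * sum(alpha[j] * alpha[j + k] for j in range(m - k + 1))

def make_psi_T(T, alpha, sigma_e2):
    """Build Psi_T, the T x T matrix with (t, s) element psi(|t - s|)."""
    return np.array([[psi(abs(t - s), alpha, sigma_e2) for s in range(T)]
                     for t in range(T)])

alpha = [1.0, 0.5, 0.25]                   # MA(2): alpha_0, alpha_1, alpha_2
Psi_T = make_psi_T(6, alpha, sigma_e2=2.0)
# psi(0) = 2(1 + 0.25 + 0.0625) = 2.625, and psi(k) = 0 for k > 2,
# so Psi_T is a symmetric band matrix of bandwidth m = 2.
```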
The covariance matrix, denoted by V, can
be written in the form

V = σ_{a}^{2}(I_{N} ⊗ J_{T}) + σ_{b}^{2}(J_{N} ⊗ I_{T}) + Σ_{k=0}^{m} ψ(k)(I_{N} ⊗ Ψ_{T}^{(k)})

where Ψ_{T}^{(0)} = I_{T},
and, for k = 1, ... , m,
Ψ_{T}^{(k)} is a band matrix whose kth
off-diagonal elements are 1's and all other elements are 0's.
Thus, the covariance matrix of the vector of observations
y has the form

var(y) = Σ_{k=1}^{m+3} ν_{k}V_{k}

where

ν_{1} = σ_{a}^{2}, ν_{2} = σ_{b}^{2}, ν_{k} = ψ(k−3), k = 3, ... , m+3
V_{1} = I_{N} ⊗ J_{T}, V_{2} = J_{N} ⊗ I_{T}, V_{k} = I_{N} ⊗ Ψ_{T}^{(k−3)}, k = 3, ... , m+3
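This band-matrix decomposition can be checked numerically. The following is an illustrative sketch with made-up variance values, not part of the procedure: it assembles V from Kronecker products and band matrices and confirms it matches the direct three-term form.

```python
import numpy as np

# Sketch: assemble V = sigma_a^2 (I_N ⊗ J_T) + sigma_b^2 (J_N ⊗ I_T)
#                      + sum_{k=0}^{m} psi(k) (I_N ⊗ Psi_T^(k))
# for N units, T periods, MA order m. All variance values are illustrative.
N, T, m = 3, 5, 2
sigma_a2, sigma_b2 = 1.5, 0.7
psi_vals = [2.0, 0.8, 0.3]                  # psi(0), ..., psi(m)

I_N, I_T = np.eye(N), np.eye(T)
J_N, J_T = np.ones((N, N)), np.ones((T, T))

def band(T, k):
    """T x T band matrix: 1's on the kth off-diagonals, 0 elsewhere (k=0 gives I_T)."""
    return np.diag(np.ones(T - k), k) + (np.diag(np.ones(T - k), -k) if k else 0)

V = sigma_a2 * np.kron(I_N, J_T) + sigma_b2 * np.kron(J_N, I_T)
for k in range(m + 1):
    V += psi_vals[k] * np.kron(I_N, band(T, k))

# Direct construction via Psi_T agrees with the band-matrix decomposition.
Psi_T = sum(psi_vals[k] * band(T, k) for k in range(m + 1))
direct = sigma_a2 * np.kron(I_N, J_T) + sigma_b2 * np.kron(J_N, I_T) + np.kron(I_N, Psi_T)
assert np.allclose(V, direct)
```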
The estimator of β is a two-step
GLS-type estimator; that is, GLS
with the unknown covariance matrix replaced by a suitable
estimator of V. It is obtained by substituting Seely estimates
for the scalar multiples ν_{k}, k = 1, 2, ... , m+3.
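The second (GLS) step can be sketched as follows, assuming some estimate V_hat of V is already in hand; the Seely first step that produces the ν_{k} estimates is not shown, and the identity V_hat used below is purely illustrative.

```python
import numpy as np

# Second-step GLS sketch: beta_hat = (X' V^{-1} X)^{-1} X' V^{-1} y,
# with the unknown V replaced by an estimate V_hat. The first step that
# would produce V_hat is omitted; V_hat = I here is purely illustrative.
def gls(X, y, V_hat):
    Vinv_X = np.linalg.solve(V_hat, X)   # V_hat^{-1} X without forming the inverse
    Vinv_y = np.linalg.solve(V_hat, y)
    return np.linalg.solve(X.T @ Vinv_X, X.T @ Vinv_y)

rng = np.random.default_rng(1)
NT, p = 20, 2
X = rng.normal(size=(NT, p))
beta = np.array([1.0, -2.0])
y = X @ beta                             # noise-free data for a sanity check
beta_hat = gls(X, y, np.eye(NT))         # with V_hat = I, GLS reduces to OLS
assert np.allclose(beta_hat, beta)
```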
Seely (1969) presents a general theory of unbiased
estimation when the choice of estimators is restricted to
finite dimensional vector spaces, with a special emphasis on
quadratic estimation of functions of the form

Σ_{i=1}^{n} λ_{i}ν_{i}
The parameters ν_{i} (i = 1, ... , n)
are associated with a linear model E(y) = Xβ with
covariance matrix

Σ_{i=1}^{n} ν_{i}V_{i}

where V_{i} (i = 1, ... , n)
are real symmetric matrices.
The method is also discussed by Seely
(1970a, 1970b) and Seely and Zyskind (1971).
Seely and Soong (1971) consider the MINQUE principle, using an approach
along the lines of Seely (1969).
Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.