## Score Statistics and Tests

To understand the general form of the score statistics, let *U*(θ)
be the vector of first partial derivatives of the log likelihood
with respect to the parameter vector θ, and let *H*(θ) be the
matrix of second partial derivatives of the log likelihood with
respect to θ. That is, *U*(θ) is the gradient vector, and *H*(θ)
is the Hessian matrix. Let *I*(θ) be either -*H*(θ) or the expected
value of -*H*(θ). Consider a null hypothesis *H*_{0}. Let θ̂_{0}
be the MLE of θ under *H*_{0}.
The chi-square score statistic for testing *H*_{0} is defined by

*U*′(θ̂_{0}) *I*^{-1}(θ̂_{0}) *U*(θ̂_{0})

and it has an asymptotic chi-square distribution with
*r* degrees of freedom under *H*_{0}, where *r* is
the number of restrictions imposed on θ by *H*_{0}.
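As a minimal sketch of this definition, the following computes the score statistic *U*′*I*^{-1}*U* for a simple binary logistic model with one slope, testing *H*_{0}: slope = 0 (so *r* = 1). The data are simulated, and the restricted MLE has a closed form here (the intercept is the logit of the sample mean); this is an illustration, not PROC LOGISTIC's implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Simulated responses from a logistic model with a nonzero slope.
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.5 + 0.8 * x)))).astype(float)

# Design matrix for the full model: intercept plus one slope.
X = np.column_stack([np.ones(n), x])

# Null model H0: slope = 0.  The restricted MLE is closed form:
# intercept = logit of the sample mean, slope = 0.
p_bar = y.mean()
theta0 = np.array([np.log(p_bar / (1 - p_bar)), 0.0])

# Fitted probabilities under H0.
p0 = 1 / (1 + np.exp(-X @ theta0))

# Score vector U(theta0) and Fisher information I(theta0).
U = X.T @ (y - p0)
I = X.T @ (X * (p0 * (1 - p0))[:, None])

# Chi-square score statistic U' I^{-1} U with r = 1 restriction.
score_stat = float(U @ np.linalg.solve(I, U))
p_value = stats.chi2.sf(score_stat, df=1)
print(score_stat, p_value)
```

Because the score is evaluated at the null-restricted MLE, only the reduced model ever needs to be fitted, which is what makes score tests attractive for model selection.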

*Residual Chi-Square*

When you use SELECTION=FORWARD, BACKWARD, or STEPWISE, the procedure
calculates a residual score chi-square score statistic and
reports the statistic, its
degrees of freedom, and the *p*-value. This section describes
how the statistic is calculated.

Suppose there are *s* explanatory effects of
interest. The full model has a parameter vector

θ = (α_{1}, ... , α_{k}, β_{1}, ... , β_{s})′

where α_{1}, ... , α_{k} are
intercept parameters, and β_{1}, ... , β_{s} are
slope parameters for the explanatory
effects. Consider the null hypothesis
*H*_{0}: β_{t+1} = ... = β_{s} = 0, where *t* < *s*.
For the reduced model with *t* explanatory effects, let
α̂_{1}, ... , α̂_{k} be
the MLEs of the unknown intercept
parameters, and let β̂_{1}, ... , β̂_{t} be the
MLEs of the unknown slope parameters.
The residual chi-square is the chi-square
score statistic
testing the null hypothesis *H*_{0}; that is, the residual
chi-square is

*U*′(θ̂_{0}) *I*^{-1}(θ̂_{0}) *U*(θ̂_{0})

where θ̂_{0} = (α̂_{1}, ... , α̂_{k}, β̂_{1}, ... , β̂_{t}, 0, ... , 0)′.

The residual chi-square has an asymptotic chi-square distribution with
*s*-*t* degrees of freedom. A special case is the global
score chi-square, where the reduced model consists of the
*k* intercepts and no explanatory effects. The global score statistic
is displayed in the "Model-Fitting Information and Testing Global
Null Hypothesis BETA=0" table. When the NOFIT option is used, this
table is not produced, but the global score statistic is still displayed.
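The residual chi-square can be sketched in the same simulated setting, assuming *s* = 3 candidate slopes of which *t* = 1 is in the reduced model. The reduced model is fitted by Newton-Raphson (an illustrative fitting loop, not PROC LOGISTIC's), and the score statistic is evaluated at the restricted MLE: the fitted intercept and slope, with zeros for the *s* - *t* excluded slopes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, s, t = 400, 3, 1
Xs = rng.normal(size=(n, s))                # s candidate explanatory variables
eta = -0.2 + 1.0 * Xs[:, 0]                 # only the first one matters here
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-eta))).astype(float)

X_full = np.column_stack([np.ones(n), Xs])  # intercept + all s slopes
X_red = X_full[:, : 1 + t]                  # intercept + first t slopes

# Newton-Raphson fit of the reduced model.
beta = np.zeros(1 + t)
for _ in range(25):
    p = 1 / (1 + np.exp(-X_red @ beta))
    grad = X_red.T @ (y - p)
    info = X_red.T @ (X_red * (p * (1 - p))[:, None])
    beta += np.linalg.solve(info, grad)

# Restricted MLE in the full parameter space: excluded slopes set to 0.
theta0 = np.concatenate([beta, np.zeros(s - t)])
p0 = 1 / (1 + np.exp(-X_full @ theta0))
U = X_full.T @ (y - p0)
I = X_full.T @ (X_full * (p0 * (1 - p0))[:, None])

# Residual chi-square with s - t degrees of freedom.
residual_chisq = float(U @ np.linalg.solve(I, U))
p_value = stats.chi2.sf(residual_chisq, df=s - t)
print(residual_chisq, p_value)
```

Setting `t = 0` and dropping the Newton loop in favor of the closed-form intercept reduces this to the global score chi-square described above.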

*Testing Individual Effects Not in the Model*

These tests are performed in the FORWARD or STEPWISE method.
In the displayed output, the tests are labeled
"Score Chi-Square" in the "Analysis
of Effects Not in the Model" table and in the
"Summary of Stepwise (Forward) Procedure" table.
This section describes how the tests are calculated.
Suppose that *k* intercepts and *t* explanatory variables
(say *v*_{1}, ... , *v*_{t}) have
been fitted to a model and that *v*_{t+1} is another
explanatory variable of interest.
Consider a full model with the *k* intercepts and
*t*+1 explanatory variables
(*v*_{1}, ... ,*v*_{t},*v*_{t+1}) and a reduced
model with *v*_{t+1} excluded.
The significance of
*v*_{t+1} adjusted for
*v*_{1}, ... ,*v*_{t} can be determined by
comparing the corresponding residual chi-square with a
chi-square distribution with one degree of freedom.
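A hypothetical forward-selection step can illustrate this: for each candidate variable not yet in the model, compute the 1-df score chi-square adjusted for the variables already fitted (here, intercept only, so the restricted MLE is closed form), then rank the candidates. The data and variable names below are invented for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 300
V = rng.normal(size=(n, 4))    # four candidate explanatory variables
# Only candidate 2 actually influences the simulated response.
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.3 + 1.2 * V[:, 2])))).astype(float)

# Current model: intercept only (t = 0), so the restricted MLE is closed form.
p_bar = y.mean()
p0 = np.full(n, p_bar)         # fitted probabilities under H0

scores = {}
for j in range(V.shape[1]):
    # Full model for this step: intercept + one candidate variable.
    X = np.column_stack([np.ones(n), V[:, j]])
    U = X.T @ (y - p0)
    I = X.T @ (X * (p0 * (1 - p0))[:, None])
    chisq = float(U @ np.linalg.solve(I, U))
    scores[j] = (chisq, stats.chi2.sf(chisq, df=1))

best = max(scores, key=lambda j: scores[j][0])
print(best, scores[best])
```

In an actual FORWARD or STEPWISE run the "current model" grows at each step, so the restricted MLE must be refitted before the candidates are rescored.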

*Testing the Parallel Lines Assumption*

For an ordinal response, PROC LOGISTIC performs a test of the parallel lines
assumption. In the displayed output,
this test is labeled "Score Test for the Equal Slopes Assumption"
when the LINK= option is NORMIT or CLOGLOG.
When LINK=LOGIT,
the test is labeled "Score Test for the
Proportional Odds Assumption" in the output. This section describes the
methods used to calculate the test.

For this test the number of response levels,
*k*+1, is assumed to be
strictly greater than 2. Let Y be the response variable taking values
1, ... , *k*, *k*+1. Suppose there are *s* explanatory
variables.
Consider the general cumulative model without making
the parallel lines assumption

g(Pr(Y ≤ *i* | **x**)) = (1, **x**′) θ_{i},   1 ≤ *i* ≤ *k*

where g(·) is the link function, and
θ_{i} = (α_{i}, β_{i1}, ... , β_{is})′
is a vector of unknown parameters consisting of an intercept α_{i} and
*s* slope parameters β_{i1}, ... , β_{is}.
The parameter vector for this general cumulative model is

θ = (θ′_{1}, ... , θ′_{k})′
Under the null hypothesis of parallelism
*H*_{0}: β_{1m} = β_{2m} = ... = β_{km}, 1 ≤ *m* ≤ *s*,
there is a single common slope parameter for each of the *s*
explanatory variables. Let β_{1}, ... , β_{s} be the common
slope parameters.
Let α̂_{1}, ... , α̂_{k} and β̂_{1}, ... , β̂_{s} be the
MLEs of the intercept parameters
and the common slope parameters.
Then, under *H*_{0}, the MLE of θ is

θ̂_{0} = (θ̂′_{1}, ... , θ̂′_{k})′  with  θ̂_{i} = (α̂_{i}, β̂_{1}, ... , β̂_{s})′,  1 ≤ *i* ≤ *k*
and the chi-square score statistic
*U*′(θ̂_{0}) *I*^{-1}(θ̂_{0}) *U*(θ̂_{0})
has an asymptotic chi-square distribution with
*s*(*k*-1) degrees of freedom.
This tests the parallel lines assumption by testing the equality of
separate slope parameters simultaneously for all explanatory variables.
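The degrees-of-freedom count follows from a simple parameter tally, checked below for hypothetical values *k* = 4 (five response levels) and *s* = 3: the general model has *k* intercepts and *k*·*s* slopes, the restricted model has *k* intercepts and *s* common slopes, and the difference in free parameters is *s*(*k*-1).

```python
# Hypothetical sizes: 5 response levels (k = 4 cumulative logits),
# 3 explanatory variables.
k, s = 4, 3
general = k * (1 + s)      # k intercepts + k*s unrestricted slopes
restricted = k + s         # k intercepts + s common slopes
df = general - restricted  # restrictions imposed by parallelism
print(df, s * (k - 1))
```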

Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.