 The LOGISTIC Procedure

## Iterative Algorithms for Model-Fitting

Two iterative maximum likelihood algorithms are available in PROC LOGISTIC. The default is the Fisher scoring method, which is equivalent to fitting by iteratively reweighted least squares. The alternative is the Newton-Raphson method. Both algorithms give the same parameter estimates; however, the estimated covariance matrices of the parameter estimators may differ slightly, because the Fisher scoring method is based on the expected information matrix while the Newton-Raphson method is based on the observed information matrix. For a binary logit model, the observed and expected information matrices are identical, so the two algorithms also yield identical estimated covariance matrices. You can use the TECHNIQUE= option to select a fitting algorithm.
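To make the binary-logit equivalence concrete, the following Python sketch (not part of PROC LOGISTIC; the design matrix, responses, and parameter value are invented for illustration) compares the expected information matrix $X' V X$, with $V = \mathrm{diag}(p_j(1-p_j))$, against an observed information matrix obtained by numerically differentiating the gradient:

```python
import numpy as np

# Invented design matrix (intercept + one covariate), binary responses, and an
# arbitrary parameter value; this is an illustration, not PROC LOGISTIC output.
X = np.array([[1.0, 0.2], [1.0, -1.3], [1.0, 0.7], [1.0, 1.9], [1.0, -0.5]])
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
gamma = np.array([0.1, -0.4])

def gradient(g):
    """Gradient of the binary-logit log likelihood: X'(y - p)."""
    p = 1.0 / (1.0 + np.exp(-(X @ g)))
    return X.T @ (y - p)

# Expected information: X' V X with V = diag(p(1 - p)).
p = 1.0 / (1.0 + np.exp(-(X @ gamma)))
expected_info = X.T @ ((p * (1.0 - p))[:, None] * X)

# Observed information: negative Hessian, here via central differences.
eps = 1e-6
observed_info = np.empty((2, 2))
for i in range(2):
    e = np.zeros(2)
    e[i] = eps
    observed_info[:, i] = -(gradient(gamma + e) - gradient(gamma - e)) / (2 * eps)

print(np.allclose(expected_info, observed_info, atol=1e-4))  # prints True
```

For the binary logit the two matrices agree at every parameter value, not just at the maximum likelihood estimate, which is why the two fitting algorithms report the same covariance matrix in this case.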

### Iteratively Reweighted Least-Squares Algorithm

Consider the multinomial variable $Z_j = (Z_{1j}, \ldots, Z_{(k+1)j})'$ such that

$$ Z_{ij} = \begin{cases} 1 & \text{if the $j$th observation has response value } i \\ 0 & \text{otherwise} \end{cases} $$

With $p_{ij}$ denoting the probability that the $j$th observation has response value $i$, the expected value of $Z_j$ is $p_j = (p_{1j}, \ldots, p_{(k+1)j})'$. The covariance matrix of $Z_j$ is $V_j$, which is the covariance matrix of a multinomial random variable for one trial with parameter vector $p_j$. Let $\gamma$ be the vector of regression parameters (the intercept and slope parameters), and let $D_j$ be the matrix of partial derivatives of $p_j$ with respect to $\gamma$. The estimating equation for the regression parameters is

$$ \sum_j D_j' W_j (Z_j - p_j) = 0 $$

where $W_j = w_j f_j V_j^-$, $w_j$ and $f_j$ are the WEIGHT and FREQ values of the $j$th observation, and $V_j^-$ is a generalized inverse of $V_j$. PROC LOGISTIC chooses $V_j^-$ to be the inverse of the diagonal matrix with $p_j$ as the diagonal.

With a starting value $\gamma_0$, the maximum likelihood estimate of $\gamma$ is obtained iteratively as

$$ \gamma_{m+1} = \gamma_m + \Bigl( \sum_j D_j' W_j D_j \Bigr)^{-1} \sum_j D_j' W_j (Z_j - p_j) $$

where $D_j$, $W_j$, and $p_j$ are evaluated at $\gamma_m$. The expression after the plus sign is the step size. If the likelihood evaluated at $\gamma_{m+1}$ is less than that evaluated at $\gamma_m$, then $\gamma_{m+1}$ is recomputed by step-halving or ridging. The iterative scheme continues until convergence is obtained, that is, until $\gamma_{m+1}$ is sufficiently close to $\gamma_m$. The maximum likelihood estimate of $\gamma$ is then $\hat{\gamma} = \gamma_{m+1}$.

The covariance matrix of $\hat{\gamma}$ is estimated by

$$ \widehat{\mathrm{Cov}}(\hat{\gamma}) = \Bigl( \sum_j \hat{D}_j' \hat{W}_j \hat{D}_j \Bigr)^{-1} $$

where $\hat{D}_j$ and $\hat{W}_j$ are, respectively, $D_j$ and $W_j$ evaluated at $\hat{\gamma}$.

By default, starting values are zero for the slope parameters, and for the intercept parameters, starting values are the observed cumulative logits (that is, logits of the observed cumulative proportions of response). Alternatively, the starting values may be specified with the INEST= option.
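As a concrete illustration, here is a minimal Python sketch of the scheme above for the binary special case (one response function, unit WEIGHT and FREQ values), in which $\sum_j D_j' W_j D_j$ reduces to $X' V X$ with $V = \mathrm{diag}(p_j(1-p_j))$ and the estimating function reduces to $X'(y - p)$. The data are invented for illustration:

```python
import numpy as np

# Invented data: intercept plus one covariate, binary responses.
X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.3],
              [1.0, 0.1], [1.0, -0.7], [1.0, 1.4]])
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])

def log_lik(g):
    eta = X @ g
    return float(np.sum(y * eta - np.log1p(np.exp(eta))))

gamma = np.zeros(X.shape[1])  # zero starting values, as for the slope parameters
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-(X @ gamma)))
    info = X.T @ ((p * (1.0 - p))[:, None] * X)   # sum of the D'WD terms
    step = np.linalg.solve(info, X.T @ (y - p))   # the step size
    new = gamma + step
    # Step-halving: shrink the step while the likelihood decreases.
    for _ in range(20):
        if log_lik(new) >= log_lik(gamma):
            break
        step = step / 2.0
        new = gamma + step
    if np.max(np.abs(new - gamma)) < 1e-8:        # convergence check
        gamma = new
        break
    gamma = new

# Estimated covariance matrix: inverse of the summed D'WD at the estimate.
p = 1.0 / (1.0 + np.exp(-(X @ gamma)))
cov = np.linalg.inv(X.T @ ((p * (1.0 - p))[:, None] * X))
```

At convergence the estimating function $X'(y - p)$ is essentially zero, and `cov` corresponds to the covariance estimate given above; ridging and the general multinomial bookkeeping are omitted from this sketch.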

### Newton-Raphson Algorithm

With parameter vector $\gamma$, the gradient vector and the Hessian matrix are given, respectively, by

$$ g = \sum_j w_j f_j \, \frac{\partial l_j}{\partial \gamma} \qquad \text{and} \qquad H = - \sum_j w_j f_j \, \frac{\partial^2 l_j}{\partial \gamma \, \partial \gamma'} $$

where $l_j$ is the log likelihood of the $j$th observation, and $w_j$ and $f_j$ are its WEIGHT and FREQ values. With a starting value $\gamma_0$, the maximum likelihood estimate $\hat{\gamma}$ of $\gamma$ is obtained iteratively until convergence:

$$ \gamma_{m+1} = \gamma_m + H^{-1} g $$

where $H$ and $g$ are evaluated at $\gamma_m$. If the likelihood evaluated at $\gamma_{m+1}$ is less than that evaluated at $\gamma_m$, then $\gamma_{m+1}$ is recomputed by step-halving or ridging.

The covariance matrix of $\hat{\gamma}$ is estimated by

$$ \widehat{\mathrm{Cov}}(\hat{\gamma}) = \hat{H}^{-1} $$

where $\hat{H}$ is the matrix $H$ evaluated at $\hat{\gamma}$.
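The Newton-Raphson update and the final covariance estimate can be sketched in Python for the binary logit case, where the gradient and (negated) Hessian have the closed forms $X'\,\mathrm{diag}(w_j f_j)(y - p)$ and $X'\,\mathrm{diag}(w_j f_j\, p_j(1-p_j))\,X$. The data are invented, `w` and `f` stand in for the WEIGHT and FREQ values, and step-halving/ridging are omitted for brevity:

```python
import numpy as np

# Invented data for a binary logit model.
X = np.array([[1.0, -0.8], [1.0, 0.4], [1.0, 1.6], [1.0, -1.1], [1.0, 0.9]])
y = np.array([0.0, 1.0, 0.0, 0.0, 1.0])
w = np.ones(len(y))   # stand-in WEIGHT values
f = np.ones(len(y))   # stand-in FREQ values

def grad_and_hess(g):
    """Gradient g and matrix H (Hessian with the minus sign absorbed)."""
    p = 1.0 / (1.0 + np.exp(-(X @ g)))
    wf = w * f
    grad = X.T @ (wf * (y - p))
    hess = X.T @ ((wf * p * (1.0 - p))[:, None] * X)
    return grad, hess

gamma = np.zeros(X.shape[1])
for _ in range(50):
    grad, hess = grad_and_hess(gamma)
    new = gamma + np.linalg.solve(hess, grad)  # gamma_{m+1} = gamma_m + H^{-1} g
    if np.max(np.abs(new - gamma)) < 1e-8:
        gamma = new
        break
    gamma = new

# Covariance estimate: inverse of H evaluated at the final estimate.
_, hess = grad_and_hess(gamma)
cov = np.linalg.inv(hess)
```

Because the binary logit's observed and expected information matrices coincide, this iteration produces the same estimates and the same `cov` as the iteratively reweighted least-squares sketch of the previous section.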
