 The NLIN Procedure

## Computational Methods

For the system of equations represented by the nonlinear model

$$Y = F(Z, \beta^{*}) + \epsilon$$

where $Z$ is a matrix of the independent variables, $\beta^{*}$ is a vector of the parameters, $\epsilon$ is the error vector, and $F$ is a function of the independent variables and the parameters, there are two approaches to solving for the minimum. The first method is to minimize

$$\mathrm{SSE}(\beta) = e'e = \left( Y - F(\beta) \right)' \left( Y - F(\beta) \right)$$

where $e = Y - F(\beta)$ and $\beta$ is an estimate of $\beta^{*}$.

The second method is to solve the nonlinear "normal" equations

$$X' F(\beta) = X' Y$$

where

$$X = \frac{\partial F(\beta)}{\partial \beta}$$

In the nonlinear situation, both $X$ and $F(\beta)$ are functions of $\beta$, and a closed-form solution generally does not exist. Thus, PROC NLIN uses an iterative process: a starting value for $\beta$ is chosen and continually improved until the error sum of squares $e'e$ is minimized.

The iterative techniques that PROC NLIN uses are similar to a series of linear regressions involving the matrix $X$ evaluated for the current values of $\beta$ and $e = Y - F(\beta)$, the residuals evaluated for the current values of $\beta$.

The iterative process begins at some point $\beta_{0}$. Then $X$ and $Y$ are used to compute a $\Delta$ such that

$$\mathrm{SSE}(\beta_{0} + k\Delta) < \mathrm{SSE}(\beta_{0})$$

The four methods differ in how $\Delta$ is computed to change the vector of parameters.

The default method used to compute $(X'X)^{-}$ is the sweep operator producing a reflexive generalized ($g_{2}$) inverse. In some cases it would be preferable to use a Moore-Penrose ($g_{4}$) inverse. If the G4 option is specified in the PROC NLIN statement, a $g_{4}$ inverse is used to calculate $\Delta$ on each iteration.
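As a hedged illustration of why a generalized inverse matters (this is NumPy's Moore-Penrose `pinv`, not the sweep operator PROC NLIN actually uses): even when $X'X$ is singular, the $g_{4}$ inverse still yields a solution of the normal equations.

```python
import numpy as np

# Rank-deficient design: the second column is 2x the first, so X'X is singular.
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
e = np.array([0.5, -1.0, 2.0])      # residual vector at the current iterate
XtX = X.T @ X

# Moore-Penrose (g4) inverse, analogous to the G4 option; an ordinary
# inverse would fail here because XtX is singular.
delta = np.linalg.pinv(XtX) @ (X.T @ e)
# delta is the minimum-norm solution of X'X delta = X'e
```

The $g_{4}$ inverse returns the minimum-norm solution, which is why it can be preferable when the crossproducts matrix loses rank during the iterations.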

The Gauss-Newton and Marquardt iterative methods regress the residuals onto the partial derivatives of the model with respect to the parameters until the estimates converge. The Newton iterative method regresses the residuals onto a function of the first and second derivatives of the model with respect to the parameters until the estimates converge. Analytical first- and second-order derivatives are automatically computed.

### Steepest Descent (Gradient)

The steepest descent method is based on the gradient of $e'e$:

$$\frac{\partial e'e}{\partial \beta} = -2 X' e$$

The quantity $-X'e$ is the gradient along which $e'e$ increases. Thus $\Delta = X'e$ is the direction of steepest descent.

If the automatic variables _WEIGHT_ and _RESID_ are used, then

$$\Delta = X' W^{\mathrm{SSE}} r$$

is the direction, where

$W^{\mathrm{SSE}}$
is an $n \times n$ diagonal matrix with elements $w_{i}^{\mathrm{SSE}}$ of weights from the _WEIGHT_ variable. Each element $w_{i}^{\mathrm{SSE}}$ contains the value of _WEIGHT_ for the $i$th observation.

$r$
is a vector with elements $r_{i}$ from _RESID_. Each element $r_{i}$ contains the value of _RESID_ evaluated for the $i$th observation.

Using the method of steepest descent, let

$$\beta_{i+1} = \beta_{i} + \alpha \Delta$$

where the scalar $\alpha$ is chosen such that

$$\mathrm{SSE}(\beta_{i+1}) < \mathrm{SSE}(\beta_{i})$$

Note: The steepest descent method may converge very slowly and is therefore not generally recommended. It is sometimes useful when the initial values are poor.
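A minimal sketch of the iteration above, assuming the illustrative model $F(\beta) = b_0 e^{b_1 z}$ (the model and function names are hypothetical, not part of PROC NLIN): $\Delta = X'e$, with the scalar $\alpha$ halved until the SSE decreases.

```python
import numpy as np

def sse(beta, z, y):
    """Error sum of squares for the assumed model F(beta) = b0 * exp(b1 * z)."""
    return float(np.sum((y - beta[0] * np.exp(beta[1] * z)) ** 2))

def steepest_descent(z, y, beta, n_iter=200):
    beta = np.asarray(beta, dtype=float)
    for _ in range(n_iter):
        g = np.exp(beta[1] * z)
        e = y - beta[0] * g
        X = np.column_stack([g, beta[0] * z * g])   # X = dF/dbeta
        delta = X.T @ e                             # steepest descent direction
        alpha = 1.0
        # shrink the step until SSE(beta + alpha*delta) < SSE(beta)
        while sse(beta + alpha * delta, z, y) >= sse(beta, z, y) and alpha > 1e-12:
            alpha /= 2.0
        beta = beta + alpha * delta
    return beta

z = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(-1.5 * z)          # noise-free data, true beta = (2, -1.5)
start = np.array([1.5, -1.0])
fit = steepest_descent(z, y, start)
```

Even on this easy noise-free problem, hundreds of gradient steps only creep toward the minimum, which is why the method is recommended mainly for recovering from poor starting values.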

### Newton

The Newton method uses the second derivatives and solves the equation

$$\Delta = G^{-} X' e$$

where

$$G = X'X + \sum_{i=1}^{n} e_{i} H_{i}(\beta)$$

and $H_{i}(\beta)$ is the Hessian of $e_{i}$:

$$\left[ H_{i} \right]_{jk} = \frac{\partial^{2} e_{i}}{\partial \beta_{j}\, \partial \beta_{k}}$$
If the automatic variables _WEIGHT_, _WGTJPJ_, and _RESID_ are used, then

$$\Delta = G^{-} X' W^{\mathrm{SSE}} r$$

is the direction, where

$$G = X' W^{\mathrm{XPX}} X + \sum_{i=1}^{n} w_{i}^{\mathrm{XPX}} r_{i} H_{i}(\beta)$$

and

$W^{\mathrm{SSE}}$
is an $n \times n$ diagonal matrix with elements $w_{i}^{\mathrm{SSE}}$ of weights from the _WEIGHT_ variable. Each element $w_{i}^{\mathrm{SSE}}$ contains the value of _WEIGHT_ for the $i$th observation.

$W^{\mathrm{XPX}}$
is an $n \times n$ diagonal matrix with elements $w_{i}^{\mathrm{XPX}}$ of weights from the _WGTJPJ_ variable. Each element $w_{i}^{\mathrm{XPX}}$ contains the value of _WGTJPJ_ for the $i$th observation.

$r$
is a vector with elements $r_{i}$ from the _RESID_ variable. Each element $r_{i}$ contains the value of _RESID_ evaluated for the $i$th observation.
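A hedged sketch of the unweighted Newton direction for the illustrative model $F(\beta) = b_0 e^{b_1 z}$ (the model and function name are assumptions): $G$ is the Gauss-Newton crossproducts matrix plus the residual-weighted Hessian terms.

```python
import numpy as np

def newton_direction(z, y, beta):
    """Newton direction delta = G^- X'e, with G = X'X + sum_i e_i * H_i(beta)."""
    b0, b1 = beta
    g = np.exp(b1 * z)
    e = y - b0 * g                               # residuals e_i = y_i - F_i
    X = np.column_stack([g, b0 * z * g])         # first derivatives of F
    G = X.T @ X
    for i in range(len(z)):
        # Hessian of e_i, i.e. the negated second derivatives of F_i:
        # d2F/db0^2 = 0, d2F/db0db1 = z*exp(b1*z), d2F/db1^2 = b0*z^2*exp(b1*z)
        Hi = -np.array([[0.0,          z[i] * g[i]],
                        [z[i] * g[i],  b0 * z[i] ** 2 * g[i]]])
        G = G + e[i] * Hi
    return np.linalg.solve(G, X.T @ e)

z = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(-1.5 * z)                       # noise-free, true beta = (2, -1.5)
step = newton_direction(z, y, np.array([2.0, -1.5]))
```

At the true parameters the residuals vanish, so the Hessian correction drops out, $X'e = 0$, and the Newton step is zero, confirming a stationary point.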

### Gauss-Newton

The Gauss-Newton method uses the Taylor series

$$F(\beta) = F(\beta_{0}) + X (\beta - \beta_{0}) + \cdots$$

where $X = \partial F / \partial \beta$ is evaluated at $\beta = \beta_{0}$.

Substituting the first two terms of this series into the normal equations gives

$$X' \left( F(\beta_{0}) + X \Delta \right) = X' Y$$

$$(X'X)\, \Delta = X' \left( Y - F(\beta_{0}) \right) = X' e$$

and therefore

$$\Delta = (X'X)^{-} X' e$$

Caution: If $X'X$ is singular or becomes singular, PROC NLIN computes $\Delta$ using a generalized inverse for the iterations after singularity occurs. If $X'X$ is still singular for the last iteration, the solution should be examined.
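The update above can be sketched as follows for the illustrative model $F(\beta) = b_0 e^{b_1 z}$ (the model, data, and function names are assumptions, not PROC NLIN internals): each iteration regresses the residuals on the Jacobian and applies $\Delta = (X'X)^{-} X'e$.

```python
import numpy as np

def gauss_newton(z, y, beta, n_iter=25):
    """Plain Gauss-Newton iteration for F(beta) = b0 * exp(b1 * z)."""
    beta = np.asarray(beta, dtype=float)
    for _ in range(n_iter):
        g = np.exp(beta[1] * z)
        e = y - beta[0] * g                          # residuals at current beta
        X = np.column_stack([g, beta[0] * z * g])    # Jacobian dF/dbeta
        # generalized inverse, analogous to the singular-X'X fallback
        delta = np.linalg.pinv(X.T @ X) @ (X.T @ e)
        beta = beta + delta
    return beta

z = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(-1.5 * z)       # noise-free data, true beta = (2, -1.5)
fit = gauss_newton(z, y, [1.8, -1.3])
```

With noise-free data and a reasonable start, the linearization is accurate and the iteration converges to the true parameters in a handful of steps.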

### Marquardt

The Marquardt updating formula is as follows:

$$\Delta = \left( X'X + \lambda\, \mathrm{diag}(X'X) \right)^{-} X' e$$

The Marquardt method is a compromise between the Gauss-Newton and steepest descent methods (Marquardt 1963). As $\lambda \rightarrow 0$, the direction approaches Gauss-Newton. As $\lambda \rightarrow \infty$, the direction approaches steepest descent.

Marquardt's studies indicate that the average angle between the Gauss-Newton and steepest descent directions is about $90^{\circ}$. A choice of $\lambda$ between 0 and infinity produces a compromise direction.

By default, PROC NLIN chooses $\lambda = 10^{-7}$ to start and computes a $\Delta$. If $\mathrm{SSE}(\beta_{0} + \Delta) < \mathrm{SSE}(\beta_{0})$, then $\lambda = \lambda / 10$ for the next iteration. Each time $\mathrm{SSE}(\beta_{0} + \Delta) > \mathrm{SSE}(\beta_{0})$, then $\lambda = 10 \lambda$.

Note: If the SSE decreases on each iteration, then $\lambda \rightarrow 0$, and you are essentially using the Gauss-Newton method. If SSE does not improve, then $\lambda$ is increased until you are moving in the steepest descent direction.

Marquardt's method is equivalent to performing a series of ridge regressions and is useful when the parameter estimates are highly correlated or the objective function is not well approximated by a quadratic.
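The updating scheme can be sketched as follows, again for the illustrative model $F(\beta) = b_0 e^{b_1 z}$ (the model, starting $\lambda$, and function names are assumptions): $\lambda$ shrinks by a factor of 10 after a successful step and grows by a factor of 10 after a failed one.

```python
import numpy as np

def sse(beta, z, y):
    return float(np.sum((y - beta[0] * np.exp(beta[1] * z)) ** 2))

def marquardt(z, y, beta, lam=1e-7, n_iter=60):
    """Marquardt iteration: delta = (X'X + lam*diag(X'X))^- X'e."""
    beta = np.asarray(beta, dtype=float)
    for _ in range(n_iter):
        g = np.exp(beta[1] * z)
        e = y - beta[0] * g
        X = np.column_stack([g, beta[0] * z * g])
        A = X.T @ X
        delta = np.linalg.solve(A + lam * np.diag(np.diag(A)), X.T @ e)
        if sse(beta + delta, z, y) < sse(beta, z, y):
            beta = beta + delta
            lam /= 10.0        # success: drift toward Gauss-Newton
        else:
            lam *= 10.0        # failure: drift toward steepest descent
    return beta

z = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(-1.5 * z)     # noise-free data, true beta = (2, -1.5)
fit = marquardt(z, y, [1.0, -0.5])
```

Because each accepted step must reduce the SSE, the iteration is monotone: it behaves like Gauss-Newton when that works and falls back toward steepest descent when it does not, which is the ridge-regression compromise described above.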

### Secant Method (DUD)

The multivariate secant method is like the Gauss-Newton method, except that the derivatives are estimated from the history of iterations rather than supplied analytically. The method is also called the method of false position or the Doesn't Use Derivatives (DUD) method (Ralston and Jennrich 1978). If only one parameter is being estimated, the derivative for iteration $i+1$ can be estimated from the previous two iterations:

$$\frac{\partial F}{\partial \beta} \approx \frac{F(\beta_{i}) - F(\beta_{i-1})}{\beta_{i} - \beta_{i-1}}$$

When $k$ parameters are to be estimated, the method uses the last $k+1$ iterations to estimate the derivatives.

Now that automatic analytic derivatives are available, DUD is not a recommended method but is retained for backward compatibility.
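A one-parameter sketch of the idea, assuming the illustrative model $F(b) = 2 e^{b z}$ with only the rate $b$ free (model and function names are hypothetical): the derivative column is estimated from the two most recent iterates and plugged into a Gauss-Newton-style step.

```python
import numpy as np

def dud_fit(z, y, b_prev, b_curr, n_iter=30):
    """Secant (DUD) iteration: derivative estimated from the iteration history."""
    F = lambda b: 2.0 * np.exp(b * z)
    for _ in range(n_iter):
        # secant estimate of dF/db from the last two iterates
        x = (F(b_curr) - F(b_prev)) / (b_curr - b_prev)
        e = y - F(b_curr)
        delta = float(x @ e) / float(x @ x)      # Gauss-Newton step with estimated X
        if abs(delta) < 1e-12:
            break                                # converged; avoid a 0/0 secant
        b_prev, b_curr = b_curr, b_curr + delta
    return b_curr

z = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(-1.5 * z)                       # true rate parameter is -1.5
fit = dud_fit(z, y, b_prev=-1.0, b_curr=-1.1)
```

No analytic derivative of $F$ appears anywhere in the loop, which is the whole point of DUD; the cost is slower, less reliable convergence than the derivative-based methods.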

### Step-Size Search

The default method of finding the step size $k$ is step halving using SMETHOD=HALVE. If $\mathrm{SSE}(\beta_{0} + \Delta) > \mathrm{SSE}(\beta_{0})$, compute $\mathrm{SSE}(\beta_{0} + 0.5\Delta)$, $\mathrm{SSE}(\beta_{0} + 0.25\Delta), \ldots$, until a smaller SSE is found.
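Step halving can be sketched in a few lines (the function names and the quadratic objective are illustrative assumptions):

```python
def halve_step(sse, beta, delta, max_halvings=30):
    """Return the first step fraction k in 1, 1/2, 1/4, ... that reduces SSE."""
    base = sse(beta)
    k = 1.0
    for _ in range(max_halvings):
        if sse(beta + k * delta) < base:
            return k            # this fraction of the step improves SSE
        k /= 2.0
    return 0.0                  # no improving fraction found

# illustrative one-parameter objective with its minimum at beta = 1
q = lambda b: (b - 1.0) ** 2
k = halve_step(q, 0.0, 4.0)     # the full step overshoots; halving recovers
```

Here the full step and the half step both overshoot the minimum, and the first improving fraction is $k = 0.25$, which lands exactly on it.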

If you specify SMETHOD=GOLDEN, the step size $k$ is determined by a golden section search. The parameter TAU determines the length of the initial interval to be searched, with the interval having length TAU or 2 × TAU, depending on $\mathrm{SSE}(\beta_{0} + \Delta)$. The RHO parameter specifies how fine the search is to be. The SSE at each endpoint of the interval is evaluated, and a new subinterval is chosen. The size of the interval is reduced until its length is less than RHO. One pass through the data is required each time the interval is reduced. Hence, if RHO is very small relative to TAU, a large amount of time can be spent determining a step size. For more information on the GOLDEN search, refer to Kennedy and Gentle (1980).
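A generic golden section search over candidate step sizes looks like this (a textbook sketch, not the PROC NLIN implementation; `rho` plays the role of RHO and `[a, b]` the initial TAU interval):

```python
import math

def golden_search(f, a, b, rho=1e-4):
    """Shrink [a, b] by the golden ratio until shorter than rho; return the midpoint."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)                # interior evaluation points
    d = a + inv_phi * (b - a)
    while (b - a) > rho:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

# minimize a one-dimensional SSE profile with its minimum at k = 0.3
best_k = golden_search(lambda k: (k - 0.3) ** 2, 0.0, 1.0)
```

Each reduction of the interval costs an objective evaluation, which mirrors the documentation's warning: with RHO very small relative to TAU, many passes through the data are needed before the step size is settled.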

If you specify SMETHOD=CUBIC, the NLIN procedure performs a cubic interpolation to estimate the step size. If the estimated step size does not result in a decrease in SSE, step halving is used.
