The NLP Procedure

If floating-point overflows occur during function evaluation, you can

- scale the parameters
- provide better initial values
- use boundary constraints to avoid the region where overflows may happen
- change the algorithm (specified in program statements) that computes the objective function
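A minimal PROC NLP sketch of two of these remedies, better initial values and boundary constraints; the objective function and the numeric values are hypothetical illustrations, not part of the original text:

```sas
proc nlp;
   min y;
   parms x1 = 0.5, x2 = 0.5;          /* better initial values        */
   bounds 1e-6 < x1 < 10,             /* keep the iterates away from  */
          1e-6 < x2 < 10;             /* the region where log() would */
                                      /* overflow or be undefined     */
   y = -log(x1) - log(x2) + x1*x2;    /* hypothetical objective       */
run;
```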

- By default, the Lagrange vector is evaluated in the same way as described in Powell (1982). This corresponds to VERSION=2. By specifying VERSION=1, a modification of this algorithm replaces the update of the Lagrange vector with the original update of Powell (1978), which is used in VF02AD.
- You can use the INSTEP= option to impose an upper bound for the step size during the first five iterations.
- You can use the INHESSIAN[=r] option to specify a different starting approximation for the Hessian. If you specify the INHESSIAN option without a value, the Cholesky factor of a (possibly ridged) finite-difference approximation of the Hessian is used to initialize the quasi-Newton update process.
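A sketch combining the INSTEP= and INHESSIAN options on a quasi-Newton run; the objective and the step bound 0.5 are illustrative only:

```sas
proc nlp tech=quanew instep=0.5 inhessian;
   min y;
   parms x1 = 1, x2 = 1;
   /* hypothetical objective with a narrow curved valley */
   y = (x1 - 2)**4 + (x1 - 2*x2)**2;
run;
```

Here INSTEP=0.5 bounds the step size during the first iterations, and INHESSIAN (without a value) initializes the quasi-Newton update from a finite-difference Hessian approximation instead of the identity.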

- Check the derivative specification:

  If derivatives are specified by using the GRADIENT, HESSIAN, JACOBIAN, CRPJAC, or JACNLC statement, you can compare the specified derivatives with those computed by finite-difference approximations (by specifying the FD and FDHESSIAN options). Use the GRADCHECK option to check whether the gradient g is correct. For more information, refer to the section "Testing the Gradient Specification".
- Forward-difference derivatives specified with the FD[=] or FDHESSIAN[=] option may not be precise enough to satisfy strong gradient termination criteria. You may need to specify the more expensive central-difference formulas or use analytical derivatives. The finite-difference intervals may be too small or too large, making the finite-difference derivatives erroneous. You can specify the FDINT= option to compute better finite-difference intervals.
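A sketch of checking an analytic gradient with the GRADCHECK option; the objective and its derivatives here are a hypothetical example:

```sas
proc nlp gradcheck=detail;
   min y;
   parms x1 = 1, x2 = 1;
   y  = (x1 - 2)**2 + (x2 + 1)**2;   /* hypothetical objective  */
   g1 = 2*(x1 - 2);                  /* analytic derivative wrt x1 */
   g2 = 2*(x2 + 1);                  /* analytic derivative wrt x2 */
   gradient g1 g2;                   /* declare the gradient, in   */
                                     /* the order of the PARMS list */
run;
```

GRADCHECK=DETAIL compares the specified gradient with a finite-difference approximation at the starting point and reports discrepancies before the optimization begins.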
- Change the optimization technique:

  For example, if you use the default TECH=LEVMAR, you can

  - change to TECH=QUANEW or to TECH=NRRIDG
  - run some iterations with TECH=CONGRA, write the results to an OUTEST= or OUTVAR= data set, and use them as initial values specified by an INEST= or INVAR= data set in a second run with a different TECH= technique
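The two-run strategy might look like the following sketch, with a hypothetical objective (the Rosenbrock function) and an arbitrary iteration limit for the first run:

```sas
/* Run 1: a few cheap conjugate-gradient iterations, saved to WORK.WARM */
proc nlp tech=congra maxiter=25 outest=warm;
   min y;
   parms x1 = -1.2, x2 = 1;
   y = 100*(x2 - x1*x1)**2 + (1 - x1)**2;
run;

/* Run 2: restart a second-order technique from the saved point */
proc nlp tech=nrridg inest=warm;
   min y;
   parms x1 x2;      /* initial values are read from the INEST= data set */
   y = 100*(x2 - x1*x1)**2 + (1 - x1)**2;
run;
```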

- Change or modify the update technique and the line-search algorithm:

  This method applies only to TECH=QUANEW, TECH=HYQUAN, or TECH=CONGRA. For example, if you use the default update formula and the default line-search algorithm, you can

  - change the update formula with the UPDATE= option
  - change the line-search algorithm with the LIS= option
  - specify a more precise line search with the LSPRECISION= option, if you use LIS=2 or LIS=3
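Combining all three choices on one quasi-Newton run might look like this sketch; the update formula, line-search settings, and objective are illustrative, not recommendations:

```sas
proc nlp tech=quanew update=dbfgs lis=2 lsprecision=0.4;
   min y;
   parms x1 = -1, x2 = 2;
   /* hypothetical objective (Rosenbrock function) */
   y = 100*(x2 - x1*x1)**2 + (1 - x1)**2;
run;
```

A smaller LSPRECISION= value requests a more accurate (and more expensive) line search, which can help when the default settings stall.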

- Change the initial values by using a grid search specification to obtain a set of good feasible starting values.

There are two ways to avoid terminating at a stationary point that is not the desired optimum:

- Use the PARMS statement to specify a grid of feasible starting points.
- Use the OPTCHECK[=r] option to avoid terminating at the stationary point.
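A grid-search sketch using a PARMS value list; the objective (Himmelblau's function) and the grid spacing are hypothetical choices:

```sas
proc nlp best=5;                      /* report the five best grid points   */
   min y;
   parms x1 = -2 to 2 by 0.5,         /* value lists generate a 9 x 9 grid  */
         x2 = -2 to 2 by 0.5;         /* of feasible starting points        */
   y = (x1*x1 + x2 - 11)**2 + (x1 + x2*x2 - 7)**2;
run;
```

The procedure evaluates the objective over the grid and starts the optimization from the best feasible grid point, which reduces the chance of converging to an unwanted stationary point.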

The signs of the eigenvalues of the (reduced) Hessian matrix contain information regarding a stationary point.

- If all eigenvalues are positive, the Hessian matrix is positive definite and the point is a minimum point.
- If some of the eigenvalues are positive and all remaining eigenvalues are zero, the Hessian matrix is positive semidefinite and the point is a minimum or saddle point.
- If all eigenvalues are negative, the Hessian matrix is negative definite and the point is a maximum point.
- If some of the eigenvalues are negative and all remaining eigenvalues are zero, the Hessian matrix is negative semidefinite and the point is a maximum or saddle point.
- If all eigenvalues are zero, the point can be a minimum, maximum, or saddle point.

If the termination region is too small, the optimization process may take longer to find a point inside such a region, or it may not find such a point at all because of rounding errors in function values and derivatives. This can easily happen in applications where finite-difference approximations of derivatives are used and the GCONV and ABSGCONV termination criteria are too small relative to the rounding errors in the gradient values.
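When only forward-difference gradients are available, relaxing the gradient criteria to match their limited accuracy might look like the following sketch; the tolerance values and objective are illustrative:

```sas
proc nlp tech=quanew fd gconv=1e-6 absgconv=1e-4;
   min y;
   parms x1 = 1, x2 = 1;
   y = (x1 - 3)**2 + (x2 - 1)**4;   /* hypothetical objective */
run;
```

With FD, the gradient carries finite-difference rounding error, so gradient tolerances much tighter than that error level can never be satisfied.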


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.