### 39.10 Minimization Algorithms using Derivatives

The minimization algorithms described in this section make use of both the function and its derivative. They require an initial guess for the location of the minimum. There is no absolute guarantee of convergence—the function must be suitable for this technique and the initial guess must be sufficiently close to the minimum for it to work.

**Derivative Solver:** `gsl_multifit_fdfsolver_lmsder`

This is a robust and efficient version of the Levenberg-Marquardt algorithm, as implemented in the scaled LMDER routine in MINPACK, which was written by Jorge J. Moré, Burton S. Garbow and Kenneth E. Hillstrom.

The algorithm uses a generalized trust region to keep each step under control. In order to be accepted a proposed new position x' must satisfy the condition |D (x' - x)| < \Delta, where D is a diagonal scaling matrix and \Delta is the size of the trust region. The components of D are computed internally, using the column norms of the Jacobian to estimate the sensitivity of the residual to each component of x. This improves the behavior of the algorithm for badly scaled functions.

On each iteration the algorithm attempts to minimize the linearized residual norm |f + J \delta| subject to the constraint |D \delta| < \Delta. The solution to this constrained problem is found by solving the linear least squares system

[J; sqrt(mu) D] \delta = - [f; 0]


where \mu is the Levenberg-Marquardt parameter. The above system is solved using a QR decomposition of J.
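The augmented system above has the same solution as the damped normal equations (J^T J + \mu D^T D) \delta = -J^T f, which can be illustrated on a toy problem. The following is a self-contained sketch for n = 3 residuals and p = 2 parameters, solving the 2-by-2 system directly by Cramer's rule; it is not the library's implementation, which as noted above uses a QR decomposition of J for numerical stability.

```c
#include <assert.h>
#include <math.h>

/* Sketch: solve (J^T J + mu * D^T D) delta = -J^T f for p = 2,
 * i.e. the normal equations of [J; sqrt(mu) D] delta = -[f; 0].
 * J is n-by-2, row-major; d holds the diagonal of D. */
static void lm_step(const double *J, const double *f, const double *d,
                    int n, double mu, double delta[2])
{
    double A[2][2] = {{0, 0}, {0, 0}}, b[2] = {0, 0};
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < 2; j++) {
            b[j] -= J[i*2 + j] * f[i];            /* b = -J^T f   */
            for (int k = 0; k < 2; k++)
                A[j][k] += J[i*2 + j] * J[i*2 + k]; /* A = J^T J  */
        }
    }
    A[0][0] += mu * d[0] * d[0];                  /* add mu D^T D */
    A[1][1] += mu * d[1] * d[1];
    double det = A[0][0]*A[1][1] - A[0][1]*A[1][0];
    delta[0] = ( A[1][1]*b[0] - A[0][1]*b[1]) / det;  /* Cramer   */
    delta[1] = (-A[1][0]*b[0] + A[0][0]*b[1]) / det;
}
```

With \mu = 0 this reduces to the Gauss-Newton step; increasing \mu shrinks |D \delta|, which is how the Levenberg-Marquardt parameter enforces the trust-region constraint.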

The proposed step \delta is now tested by evaluating the function at the resulting point, x'. If the step reduces the norm of the function sufficiently, and follows the predicted behavior of the function within the trust region, then it is accepted and the size of the trust region is increased. If the proposed step fails to improve the solution, or differs significantly from the expected behavior within the trust region, then the size of the trust region is decreased and another trial step is computed.
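The accept/reject logic above can be sketched in terms of the gain ratio \rho, the actual reduction in |f|^2 divided by the reduction predicted by the linear model. The thresholds below are typical textbook trust-region values, assumed for illustration rather than taken from the MINPACK source.

```c
#include <assert.h>

/* Sketch of the trust-region update: accept the step when the gain
 * ratio rho shows the linear model predicted the function well,
 * growing *Delta on good agreement and shrinking it otherwise.
 * Returns 1 if the proposed step is accepted, 0 if it is rejected. */
static int trust_region_update(double rho, double *Delta)
{
    if (rho > 0.75) {        /* good agreement: enlarge the region   */
        *Delta *= 2.0;
        return 1;
    } else if (rho > 0.25) { /* acceptable: keep the region as is    */
        return 1;
    } else {                 /* poor or negative gain: shrink, retry */
        *Delta *= 0.5;
        return 0;
    }
}
```

A rejected step does not move x; the solver shrinks \Delta and computes another trial step, which is why repeated rejections eventually trigger GSL_ENOPROG.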

The algorithm also monitors the progress of the solution and returns an error if the changes in the solution are smaller than the machine precision. The possible error codes are,

`GSL_ETOLF`

the decrease in the function falls below machine precision

`GSL_ETOLX`

the change in the position vector falls below machine precision

`GSL_ETOLG`

the norm of the gradient, relative to the norm of the function, falls below machine precision

`GSL_ENOPROG`

the routine has made 10 or more attempts to find a suitable trial step without success (but subsequent calls can be made to continue the search). [15]

These error codes indicate that further iterations will be unlikely to change the solution from its current value.

**Derivative Solver:** `gsl_multifit_fdfsolver_lmder`

This is an unscaled version of the LMDER algorithm. The elements of the diagonal scaling matrix D are set to 1. This algorithm may be useful in circumstances where the scaled version of LMDER converges too slowly, or the function is already scaled appropriately.

**Derivative Solver:** `gsl_multifit_fdfsolver_lmniel`

This is a Levenberg-Marquardt solver based on a smoother updating procedure for the damping parameter \mu, proposed by Nielsen (1999). It does not use a trust region approach and performs only rudimentary scaling, and is therefore not as robust as lmsder. However, on each iteration it solves the normal equation system to compute the next step:

(J^T J + \mu I) \delta = -J^T f


which makes it a much more practical method for problems with a large number of residuals (n >> p), since only the p-by-p matrix J^T J is decomposed rather than the full n-by-p Jacobian. This makes a significant difference in efficiency when solving systems with large amounts of data. While not as robust as lmsder, this algorithm has proven effective on a wide class of problems.
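The "smoother updating procedure" for \mu can be sketched as follows. The constants are those given in Nielsen's 1999 report (an assumption here; the exact values used inside GSL are not reproduced from its source). On an accepted step \mu decreases smoothly with the gain ratio \rho; each consecutive rejection doubles the growth factor \nu, so \mu rises quickly until a step succeeds.

```c
#include <assert.h>

/* Sketch of Nielsen's damping update.  rho is the gain ratio of the
 * proposed step; mu is the damping parameter in (J^T J + mu I) and
 * nu is the current growth factor (initially 2). */
static void nielsen_update(double rho, double *mu, double *nu)
{
    if (rho > 0.0) {                 /* step accepted */
        double t = 2.0 * rho - 1.0;
        double factor = 1.0 - t * t * t;
        *mu *= (factor > 1.0 / 3.0) ? factor : 1.0 / 3.0;
        *nu = 2.0;
    } else {                         /* step rejected */
        *mu *= *nu;
        *nu *= 2.0;
    }
}
```

Compared with the discrete grow/shrink rules of a trust-region method, this update varies \mu continuously with \rho, which is what makes the damping trajectory "smoother".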

[15] The return code GSL_CONTINUE was used for this case in versions prior to 1.14.