
A minimization procedure should stop when one of the following conditions is true:

- A minimum has been found to within the user-specified precision.
- A user-specified maximum number of iterations has been reached.
- An error has occurred.

The handling of these conditions is under user control. The functions below allow the user to test the current estimate of the best-fit parameters in several standard ways.

- Function: int **gsl_multifit_fdfsolver_test** (const gsl_multifit_fdfsolver * `s`, const double `xtol`, const double `gtol`, const double `ftol`, int * `info`)

  This function tests for convergence of the minimization method using the following criteria:

  - Testing for a small step size relative to the current parameter vector:

    |\delta_i| <= xtol (|x_i| + xtol)

    for each *0 <= i < p*. Each element of the step vector *\delta* is tested individually in case the different parameters have widely different scales. Adding `xtol` to *|x_i|* helps the test avoid breaking down in situations where the true solution value *x_i = 0*. If this test succeeds, `info` is set to 1 and the function returns `GSL_SUCCESS`.

    A general guideline for selecting the step tolerance is to choose *xtol = 10^{-d}*, where *d* is the number of accurate decimal digits desired in the solution *x*. See Dennis and Schnabel for more information.

  - Testing for a small gradient (*g = \nabla \Phi(x) = J^T f*) indicating a local function minimum:

    max_i |g_i max(x_i, 1)| <= gtol max(\Phi(x), 1)

    This expression tests whether the ratio *(\nabla \Phi)_i x_i / \Phi* is small. Testing this scaled gradient is better than testing *\nabla \Phi* alone, since it is a dimensionless quantity and so independent of the scale of the problem. The `max` arguments help ensure the test does not break down in regions where *x_i* or *\Phi(x)* are close to 0. If this test succeeds, `info` is set to 2 and the function returns `GSL_SUCCESS`.

    A general guideline for choosing the gradient tolerance is to set `gtol = GSL_DBL_EPSILON^(1/3)`. See Dennis and Schnabel for more information.

  If none of the tests succeed, `info` is set to 0 and the function returns `GSL_CONTINUE`, indicating further iterations are required.

- Function: int **gsl_multifit_test_delta** (const gsl_vector * `dx`, const gsl_vector * `x`, double `epsabs`, double `epsrel`)

  This function tests for convergence of the sequence by comparing the last step `dx` with the absolute error `epsabs` and relative error `epsrel` to the current position `x`. The test returns `GSL_SUCCESS` if the following condition is achieved,

  |dx_i| < epsabs + epsrel |x_i|

  for each component of `x`, and returns `GSL_CONTINUE` otherwise.

- Function: int **gsl_multifit_test_gradient** (const gsl_vector * `g`, double `epsabs`)

  This function tests the residual gradient `g` against the absolute error bound `epsabs`. Mathematically, the gradient should be exactly zero at the minimum. The test returns `GSL_SUCCESS` if the following condition is achieved,

  \sum_i |g_i| < epsabs

  and returns `GSL_CONTINUE` otherwise. This criterion is suitable for situations where the precise location of the minimum, *x*, is unimportant provided a value can be found where the gradient is small enough.

- Function: int **gsl_multifit_gradient** (const gsl_matrix * `J`, const gsl_vector * `f`, gsl_vector * `g`)

  This function computes the gradient `g` of *\Phi(x) = (1/2) ||f(x)||^2* from the Jacobian matrix *J* and the function values `f`, using the formula *g = J^T f*.