dfpMin
Davidon-Fletcher-Powell minimization
Description
Find the minimum point of a function by the Davidon-Fletcher-Powell algorithm.
Usage
dfpMin(func, para, dfunc=richardson.grad, max.iter=500, ftol=1e-10, gtol=1e-3, line.search='brent')
Required Arguments
- func
The name of the function to be minimized. The function func must take a single vector argument containing the parameters for the minimization.
- para
The argument to func indicating the starting point for iterations. Alternatively, para can be a list as returned by dfpMin, which can be used for continuation (see Examples).
Optional Arguments
- dfunc
The name of a function which returns the gradient of func. It takes two arguments: the first is the function name (func) and the second is the vector of parameters for the minimization. While the first argument may seem redundant, it is convenient for numerical approximation functions. Two functions, richardson.grad and numerical.grad, are available for numerical approximation. numerical.grad does a simple difference calculation to get a numerical approximation of the gradient. richardson.grad calculates a much better numerical approximation of the gradient, but requires many more function evaluations. (A sketch of a user-supplied gradient follows this argument list.)
- max.iter
An integer indicating the maximum number of iterations.
- ftol
The function tolerance for convergence.
- gtol
The gradient tolerance for convergence.
- line.search
The line search method to be used; either 'brent' or 'nlmin'.
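To illustrate the dfunc interface, a user-supplied analytic gradient takes the objective function as its first argument and the parameter vector as its second. A minimal sketch (quad and quad.grad are hypothetical names used only for illustration):

   quad <- function(p) sum((p - 1)^2)               # objective with minimum at p = (1, 1)
   quad.grad <- function(func, para) 2 * (para - 1) # analytic gradient; func is unused here
   dfpMin(quad, c(5, 5), dfunc=quad.grad)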
Value
The value returned is a list (point.info) of:
$value=the function value,
$parms=the parameter values at the minimum (or at the last iteration if a minimum is not reached),
$gradient=the gradient at the returned point,
$hessian=the estimated Hessian matrix (see Bugs),
$converged=a logical indication of convergence,
$search.direction=the search direction for the next step.
Details
The algorithm is modified from the Davidon-Fletcher-Powell minimization (as described in the reference below) so that the gradient and Hessian are updated after convergence. This is a good idea if the Hessian is used as the information matrix (but see Bugs). If numerical.grad is used in place of richardson.grad, the gradient tolerance may have to be relaxed (e.g. gtol=0.05) in order to detect convergence, as in the sketch below.
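For example (a sketch, with func and para standing for an objective function and starting vector as in Usage):

   result <- dfpMin(func, para, dfunc=numerical.grad, gtol=0.05)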
References
Press, W.H., Flannery, B.P., Teukolsky, S.A. and Vetterling, W.T., Numerical Recipes: The Art of Scientific Computing.
Bugs
The estimate of the Hessian matrix is not correct.
See Also
Examples
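A minimal sketch of typical use, assuming dfpMin and richardson.grad are available as described above (the Rosenbrock test function is used only for illustration):

   rosenbrock <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2
   result <- dfpMin(rosenbrock, c(-1.2, 1))
   result$parms       # parameter values at the minimum
   result$value       # function value at the minimum
   result$converged   # logical indication of convergence

   # Continuation: the returned list can be passed back as para
   # to resume the iterations if convergence was not reached.
   if (!result$converged) result <- dfpMin(rosenbrock, result)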