This commit is contained in:
Gael Guennebaud 2018-07-12 11:56:18 +02:00
parent 6d451cf2b6
commit b347eb0b1c


@@ -30,8 +30,8 @@
  * actually linear. But if this is so, you should probably rather use other
  * methods better suited to this special case.
  *
- * One algorithm allows to find an extremum of such a system (Levenberg
- * Marquardt algorithm) and the second one is used to find
+ * One algorithm finds a least-squares solution of such a system
+ * (Levenberg-Marquardt algorithm) and the second one finds
  * a zero of the system (Powell hybrid "dogleg" method).
  *
  * This code is a port of minpack (http://en.wikipedia.org/wiki/MINPACK).
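To make the split described in the hunk above concrete, here is a small, self-contained sketch of the zero-finding case using HybridNonLinearSolver with a hand-written Jacobian (the least-squares case is sketched after the API hunk further down). The functor layout (the typedefs and the inputs()/values()/operator()/df() members) is assumed from the conventions of this module's test suite, not from documented requirements; MySystem, its equations, and the starting point are made up for illustration.

```cpp
// Sketch only: a made-up 2x2 system f(x) = 0 solved with the Powell hybrid
// ("dogleg") method. HybridNonLinearSolver and solve() are named in this
// header; the functor conventions below are an assumption.
#include <Eigen/Dense>
#include <unsupported/Eigen/NonLinearOptimization>

struct MySystem
{
  typedef double Scalar;
  typedef Eigen::VectorXd InputType;
  typedef Eigen::VectorXd ValueType;
  typedef Eigen::MatrixXd JacobianType;

  int inputs() const { return 2; }   // number of unknowns
  int values() const { return 2; }   // number of equations (square system)

  // f0 = x0^2 + x1 - 11,  f1 = x0 + x1^2 - 7  (illustrative system)
  int operator()(const Eigen::VectorXd &x, Eigen::VectorXd &fvec) const
  {
    fvec(0) = x(0) * x(0) + x(1) - 11.0;
    fvec(1) = x(0) + x(1) * x(1) - 7.0;
    return 0;                        // a negative return value aborts the solver
  }

  // Hand-written Jacobian, used by solve(); the solveNumericalDiff() variant
  // mentioned later in this header avoids it by using forward differences.
  int df(const Eigen::VectorXd &x, Eigen::MatrixXd &fjac) const
  {
    fjac(0, 0) = 2.0 * x(0);  fjac(0, 1) = 1.0;
    fjac(1, 0) = 1.0;         fjac(1, 1) = 2.0 * x(1);
    return 0;
  }
};

int main()
{
  MySystem sys;
  Eigen::HybridNonLinearSolver<MySystem> solver(sys);
  Eigen::VectorXd x(2);
  x << 1.0, 1.0;                     // initial guess
  solver.solve(x);                   // x is updated in place toward a zero of f
  return 0;
}
```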
@@ -58,35 +58,41 @@
  * There are two kinds of tests: those that come from examples bundled with cminpack.
  * They guarantee we get the same results as the original algorithms (value for 'x',
  * for the number of evaluations of the function, and for the number of evaluations
- * of the jacobian if ever).
+ * of the Jacobian, if any).
  *
  * Other tests were added by me at the very beginning of the
- * process and check the results for levenberg-marquardt using the reference data
+ * process and check the results for Levenberg-Marquardt using the reference data
  * on http://www.itl.nist.gov/div898/strd/nls/nls_main.shtml. Since then I've
  * carefully checked that the same results were obtained when modifying the
  * code. Please note that we do not always get the exact same decimals as they do,
  * but this is OK: they use 128-bit floats, and we do the tests using the C type 'double',
  * which is 64 bits on most platforms (x86 and amd64, at least).
- * I've performed those tests on several other implementations of levenberg-marquardt, and
+ * I've performed those tests on several other implementations of Levenberg-Marquardt, and
  * (c)minpack performs VERY well compared to those, both in accuracy and speed.
  *
  * The documentation for running the tests is on the wiki:
  * http://eigen.tuxfamily.org/index.php?title=Tests
  *
- * \section API API : overview of methods
+ * \section API API: overview of methods
  *
- * Both algorithms can use either the jacobian (provided by the user) or compute
- * an approximation by themselves (actually using Eigen \ref NumericalDiff_Module).
- * The part of API referring to the latter use 'NumericalDiff' in the method names
- * (exemple: LevenbergMarquardt.minimizeNumericalDiff() )
+ * Both algorithms need a functor computing the Jacobian. It can be computed by
+ * hand, using auto-differentiation (see \ref AutoDiff_Module), or using numerical
+ * differences (see \ref NumericalDiff_Module). For instance:
+ * \code
+ * MyFunc func;
+ * NumericalDiff<MyFunc> func_with_num_diff(func);
+ * LevenbergMarquardt<NumericalDiff<MyFunc> > lm(func_with_num_diff);
+ * \endcode
+ * For HybridNonLinearSolver, the method solveNumericalDiff() does the above wrapping
+ * for you.
  *
  * The methods LevenbergMarquardt.lmder1()/lmdif1()/lmstr1() and
  * HybridNonLinearSolver.hybrj1()/hybrd1() are specific methods from the original
  * minpack package that you probably should NOT use unless you are porting code that
  * previously used minpack. They just define a 'simple' API with default values
  * for some parameters.
  *
- * All algorithms are provided using Two APIs :
+ * All algorithms are provided using two APIs:
  * - one where the user inits the algorithm and calls '*OneStep()' as many times as desired:
  *   this way the caller has control over the steps
  * - one where the user just calls a method (minimize() or solve()) which will
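Tying the pieces of this hunk together, here is a rough, self-contained sketch of the least-squares workflow it describes: a hand-written functor wrapped in NumericalDiff so the Jacobian comes from numerical differences, then driven either through the one-shot minimize() call or, shown commented out, the step-by-step '*OneStep()' API. MyFunc, its data, and the starting point are made up, and the functor layout again follows the assumed test-suite convention rather than documented requirements.

```cpp
// Sketch only: least-squares fit of the made-up model y ~ x0 + x1*t with
// Levenberg-Marquardt, using NumericalDiff to supply df().
#include <Eigen/Dense>
#include <unsupported/Eigen/NonLinearOptimization>
#include <unsupported/Eigen/NumericalDiff>

struct MyFunc
{
  typedef double Scalar;
  typedef Eigen::VectorXd InputType;
  typedef Eigen::VectorXd ValueType;
  typedef Eigen::MatrixXd JacobianType;

  int inputs() const { return 2; }   // unknowns: x0, x1
  int values() const { return 3; }   // residuals, one per data point

  // Residuals r_i(x) = x0 + x1*t_i - y_i  (illustrative data)
  int operator()(const Eigen::VectorXd &x, Eigen::VectorXd &fvec) const
  {
    const double t[3] = {1.0, 2.0, 3.0};
    const double y[3] = {2.1, 3.9, 6.2};
    for (int i = 0; i < 3; ++i)
      fvec(i) = x(0) + x(1) * t[i] - y[i];
    return 0;
  }
};

int main()
{
  MyFunc func;
  Eigen::NumericalDiff<MyFunc> func_with_num_diff(func);   // adds df() via forward differences
  Eigen::LevenbergMarquardt<Eigen::NumericalDiff<MyFunc> > lm(func_with_num_diff);

  Eigen::VectorXd x(2);
  x << 0.0, 0.0;                                           // initial guess

  // Convenience API: run the whole minimization in one call.
  lm.minimize(x);

  // Step-by-step alternative, if the caller wants control over each iteration:
  // lm.minimizeInit(x);
  // while (lm.minimizeOneStep(x) == Eigen::LevenbergMarquardtSpace::Running)
  //   ; // inspect x, iteration counters, etc. here
  return 0;
}
```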
@@ -94,7 +100,7 @@
  * convenience.
  *
  * As an example, the method LevenbergMarquardt::minimize() is
- * implemented as follow :
+ * implemented as follows:
  * \code
  * Status LevenbergMarquardt<FunctorType,Scalar>::minimize(FVectorType &x, const int mode)
  * {