Which method besides Levenberg-Marquardt to use for nonlinear fit?


lei
01-23-2003, 05:01 PM
The Levenberg-Marquardt method for nonlinear
fit requires the use of derivatives of the function
with respect to fitting parameters. This is
sometimes not available, (unless obtained by
numerical difference). Which method would
you recommend in this case?

boring7fordy7
07-15-2003, 05:29 AM
No other method needed, but as you mentioned: calculate the derivatives from finite differences, e.g. with a central difference
df/dx ≈ ( f(x+eps) - f(x-eps) ) / (2*eps)
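A minimal sketch of the central-difference formula above, generalized to a partial derivative with respect to one fitting parameter (the function `numerical_partial` and the example model are illustrative, not from any particular library):

```python
def numerical_partial(f, params, i, eps=1e-6):
    """Central-difference estimate of df/dparams[i]:
    (f(p + eps*e_i) - f(p - eps*e_i)) / (2*eps)."""
    p_plus = list(params)
    p_plus[i] += eps
    p_minus = list(params)
    p_minus[i] -= eps
    return (f(p_plus) - f(p_minus)) / (2.0 * eps)

# Example: f(p) = p0^2 * p1, so df/dp0 at (3, 2) is 2*3*2 = 12
f = lambda p: p[0] ** 2 * p[1]
d = numerical_partial(f, [3.0, 2.0], 0)
```

Building one such column per parameter gives a numerical Jacobian that can be handed to a Levenberg-Marquardt routine when analytic derivatives are unavailable. The trade-offs are the extra function evaluations (two per parameter per point) and sensitivity to the choice of `eps`.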

Thanasis Kakali
02-15-2005, 08:22 AM
Algorithms that optimize/minimize real functions, like the downhill simplex, for instance. The basic concept is this: the main task of a minimization algorithm is to find the minimum value of a function by adjusting one, two (or more) of the function's parameters. Now let's say that one has a nonlinear mathematical model Y(I) = f(X1, X2, X(I)), where X1 and X2 are the parameters to be adjusted and Y(I) and X(I) are the experimental data points. The LM method minimizes the quantity G = Σ(Y_calculated - Y_experimental)^2, (I = 1, N), adjusting at the same time the values of the unknown parameters. A minimization algorithm can do the same: instead of minimizing the function's value directly, the algorithm is set to read the Y(I) and X(I) data points from an external file and minimize the G value by adjusting the X1 and X2 parameters of the model.
I accomplished the above with the downhill simplex from the IMSL libraries (ch. 8, optimization) (the subroutine from Numerical Recipes should work as well!) for my diploma thesis. I had a nonlinear model that was so huge that derivatives couldn't be derived, and I was not in a position to use LM even with numerical calculation of the derivatives. Try it and let me know!
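The approach described above can be sketched with SciPy's Nelder-Mead (downhill simplex) implementation instead of the IMSL routine; the exponential model and the synthetic data here are assumptions chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "experimental" data points from a known model y = a*exp(b*x),
# standing in for the Y(I), X(I) values read from an external file.
x = np.linspace(0.0, 2.0, 20)
y_exp = 2.5 * np.exp(1.3 * x)

def G(params):
    """Sum of squared residuals G = sum((Y_calc - Y_exp)^2) over all points."""
    a, b = params
    y_calc = a * np.exp(b * x)
    return np.sum((y_calc - y_exp) ** 2)

# Downhill simplex needs only function values, no derivatives at all.
result = minimize(G, x0=[1.0, 1.0], method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 5000})
a_fit, b_fit = result.x
```

The key point is the same as in the post: the simplex method treats the sum of squared residuals G as just another scalar function of (X1, X2), so any general derivative-free minimizer can play the role of a least-squares fitter.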
