Levenberg-Marquardt convergence


nikon
05-03-2005, 04:38 AM
Dear all,

I am trying to implement the LM method to unfold a curve that is a sum of many Gaussians with different positions and heights. (This is the basic problem; then there are some complications.)

To make the problem smaller I could also fix the position and width parameters and fit only the heights.
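To be concrete, the model function I mean, in the form the fitting routine wants it (value plus partial derivatives), would look something like this; the routine name gsum and the (height, centre, width) packing are just for illustration, not my actual code:

      subroutine gsum(x,a,y,dyda,na)
C     Sum of na/3 Gaussians:
C       y(x) = sum_k a(i)*exp(-((x-a(i+1))/a(i+2))**2)
C     with a(i)=height, a(i+1)=centre, a(i+2)=width of the k-th
C     Gaussian.  Also returns the partial derivatives dyda that
C     the Levenberg-Marquardt routine needs.
      integer na
      double precision x, y, a(na), dyda(na)
      integer i
      double precision arg, ex, fac
      y=0.d0
      do i=1,na-1,3
        arg=(x-a(i+1))/a(i+2)
        ex=exp(-arg**2)
        fac=a(i)*ex*2.d0*arg
        y=y+a(i)*ex
        dyda(i)=ex
        dyda(i+1)=fac/a(i+2)
        dyda(i+2)=fac*arg/a(i+2)
      enddo
      return
      end

Fitting only the heights would then just mean holding the centre and width parameters fixed.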

I am facing a problem with convergence. I derived the main algorithm from the C++ implementation of the software package.

During the algorithm's execution two things can happen:
1) The fitting is improving.
2) The fitting is NOT improving.

For the first case there is a convergence criterion that is reasonable and easy to understand.

In the second case the algorithm tries another 4 iterations and then stops. I don't really understand this.

I also cannot understand why, if I run the algorithm a second time using the results of the first run as the starting point, the results change. They usually improve, and after three or four runs they settle on the same values.

Best Regards

Davide

JacksonRJones
07-18-2006, 02:35 PM
Hello. I have been working with non-linear fitting problems, and I think a major shortcoming of the mrqmin algorithm is that the matrix inversion is not done with a singular value decomposition; gaussj is used instead. gaussj will throw an error for an exactly singular matrix, but not for matrices that are close to singular. I encountered problems similar to the ones you describe, and when I checked the matrix inversion, the matrix was close to singular. The svd algorithm in Numerical Recipes should do the trick if this is your problem, which you can find out by checking the matrix being inverted when your solutions hang. Hope this helps.
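To make the idea concrete, here is a rough sketch of the kind of routine that could replace the gaussj call, assuming double-precision builds of the Numerical Recipes Fortran routines svdcmp and svbksb; the NMAX dimension and the TOL threshold are just illustrative choices:

      subroutine svsolve(a,b,x,n,np)
C     Solve the n-by-n system a.x = b by SVD, zeroing singular
C     values that are negligible relative to the largest one, so
C     a near-singular system gets a least-squares solution
C     instead of garbage.
      integer n,np,NMAX
      double precision TOL
      parameter (NMAX=50,TOL=1.d-12)
      double precision a(np,np),b(np),x(np)
      double precision u(NMAX,NMAX),w(NMAX),v(NMAX,NMAX),wmax
      integer i,j
C     Copy a into u because svdcmp overwrites its input.
      do i=1,n
        do j=1,n
          u(i,j)=a(i,j)
        enddo
      enddo
      call svdcmp(u,n,n,NMAX,NMAX,w,v)
C     Find the largest singular value ...
      wmax=0.d0
      do j=1,n
        if(w(j).gt.wmax) wmax=w(j)
      enddo
C     ... and zero the ones below the threshold.
      do j=1,n
        if(w(j).lt.TOL*wmax) w(j)=0.d0
      enddo
C     Back-substitute; svbksb skips the zeroed values.
      call svbksb(u,w,v,n,n,NMAX,NMAX,b,x)
      return
      end

The point is that zeroing the negligible w(j) keeps the solution from blowing up along the nearly degenerate directions.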

-Jackson

David Buchan
09-20-2006, 10:16 AM
Hi Jackson,

You may not see this as I'm replying way too late, but this idea of using svd instead of gaussj interests me very much.

I recently had mrqmin bomb in gaussj because of a "singular matrix in gaussj."

I replaced gaussj with svdcmp and svbksb and made sure that, for cases where the original version of mrqmin worked, all the variables going into and out of the svd routines were the same as those going into and out of gaussj. It worked fine; I got the identical answer. But... and here come the buts...

I next tried it on my nearly singular (or so I'm told by gaussj) dataset and it gave wacky results.

I've simplified the problem for diagnosis. My data set is composed of 10 points (x,y):

.10000000000000000E-01 -.19084715075281107E+02
.10000000000000001E+00 -.19049377422018932E+02
.14999999999999999E+00 -.19046598056032021E+02
.20000000000000001E+00 -.19046201003748177E+02
.80000000000000004E+00 -.19055730258560441E+02
.13999999999999999E+01 -.19032701226097448E+02
.22000000000000002E+01 -.18971555174385362E+02
.30000000000000000E+01 -.18921129534337080E+02
.45000000000000000E+01 -.18823851724795119E+02
.59000000000000004E+01 -.18754367575122288E+02

and my function for test purposes is just a straight line:

      subroutine func(x,a,y,dyda,ma)
C     Straight-line test model: y = a(1) + a(2)*x,
C     plus the partial derivatives mrqmin needs.
      integer ma
      double precision x, y, a(ma), dyda(ma)

      y=a(1)+(a(2)*x)

C     N.B. the analytic derivative with respect to a(1) is 1.d0;
C     with 0.d0 here the row and column of alpha belonging to
C     a(1) are identically zero, which by itself makes that
C     matrix exactly singular.
      dyda(1)=0.d0
      dyda(2)=x

      return
      end

The result is clearly not a "best" fit, as the slope is of the wrong sign.

For my implementation of the svd in mrqmin, I zero singular values that are less than 1D-12 (I'm using double-precision FORTRAN) prior to back-substituting, and when composing my inverse, I zero any 1/w(j) for which the singular value w(j) has been zeroed. I believe that's standard procedure.
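Concretely, the composing step I mean looks roughly like this, where u, w and v are the factors svdcmp returned for the matrix being inverted (the routine name pinv is just for illustration):

      subroutine pinv(u,w,v,ainv,n,np)
C     Pseudo-inverse from an SVD: a = u.diag(w).v^T, so
C     ainv = v.diag(1/w).u^T, with 1/w(k) replaced by zero
C     wherever w(k) was zeroed before back-substitution.
      integer n,np
      double precision u(np,np),w(np),v(np,np),ainv(np,np)
      integer i,j,k
      do i=1,n
        do j=1,n
          ainv(i,j)=0.d0
          do k=1,n
            if(w(k).ne.0.d0) ainv(i,j)=ainv(i,j)+v(i,k)*u(j,k)/w(k)
          enddo
        enddo
      enddo
      return
      end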

I'm still investigating, but these results seem to imply that mrqmin doesn't like working with a pseudo-inverse.

David Buchan
Toronto

asadullahbeg
01-31-2014, 02:36 PM
Hi David,
I was hoping you could kindly share your code. I just need to see mrqmin in action, as I am not able to understand the input variables of the function. Please, I require help.
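For anyone else looking for the same thing, here is a minimal sketch of a driver for the Numerical Recipes Fortran 77 mrqmin (assuming a double-precision build), using the ten data points from David's post (values rounded) and a straight-line model with the derivative with respect to a(1) set to 1.d0; the routine name line2, the starting values, and the fixed iteration count are illustrative assumptions, not David's actual code:

      program fitdemo
C     Minimal mrqmin driver.  alamda < 0 on the first call tells
C     mrqmin to initialize; alamda = 0 on the final call makes it
C     return the covariance matrix in covar.
      integer NDATA,MA
      parameter (NDATA=10,MA=2)
      integer ia(MA),i,it
      double precision x(NDATA),y(NDATA),sig(NDATA)
      double precision a(MA),covar(MA,MA),alpha(MA,MA)
      double precision chisq,alamda
      external line2
C     The ten (x,y) points from the post, rounded.
      data x /0.01d0,0.1d0,0.15d0,0.2d0,0.8d0,1.4d0,2.2d0,3.0d0,
     &        4.5d0,5.9d0/
      data y /-19.084715d0,-19.049377d0,-19.046598d0,-19.046201d0,
     &        -19.055730d0,-19.032701d0,-18.971555d0,-18.921130d0,
     &        -18.823852d0,-18.754368d0/
C     No error estimates in the post, so take unit errors.
      do i=1,NDATA
        sig(i)=1.d0
      enddo
C     Starting guess for intercept and slope.
      a(1)=-19.d0
      a(2)=0.d0
C     ia(j) nonzero means parameter j is fitted, 0 means held fixed.
      ia(1)=1
      ia(2)=1
      alamda=-1.d0
C     A fixed number of iterations keeps the sketch short; a real
C     driver would stop when chisq decreases only negligibly.
      do it=1,20
        call mrqmin(x,y,sig,NDATA,a,ia,MA,covar,alpha,MA,
     &              chisq,line2,alamda)
      enddo
      alamda=0.d0
      call mrqmin(x,y,sig,NDATA,a,ia,MA,covar,alpha,MA,
     &            chisq,line2,alamda)
      write(*,*) 'a(1) =',a(1),'  a(2) =',a(2),'  chisq =',chisq
      end

      subroutine line2(x,a,y,dyda,ma)
C     Straight line y = a(1) + a(2)*x with the correct derivatives.
      integer ma
      double precision x,y,a(ma),dyda(ma)
      y=a(1)+a(2)*x
      dyda(1)=1.d0
      dyda(2)=x
      return
      end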