Downhill Simplex Algorithm, section 10.4
Hi,
I hope that somebody can give me some help with the Downhill Simplex Algorithm for multi-dimensional optimisation. Has anyone implemented this algorithm?
Does anyone have an easy-to-understand explanation of the Downhill Simplex method, perhaps a paper or a short lecture note? I cannot see the logic of the approach. How does it work? Does it use a steepest-descent approach?
Many Thanks,
Sam
Integral
09-19-2003, 02:19 AM
I coded the Nelder-Mead simplex method in BASIC on my Apple II years ago; it is a cool algorithm.
The idea is to evaluate your function at 3 points; these form your simplex, and one of the three points will have the largest function value (of those 3). From that worst point you draw a line through the midpoint of the opposite side of the triangle; your next point to sample lies on this line. Start by simply mirroring the worst point across the opposite side of the triangle. If that point yields an improvement (a smaller value than the old maximum point), you can extend the line further and sample again; if that gives a further improvement, take it as your new simplex point and repeat.
If you do not see an improvement with your initial sample, you can retract to a point on the line inside the simplex and sample there.
Note:
I have not examined the code contained in this book; the above is from memory of coding this method 20 years ago. I am confident that I have the basic method correct, but I am not sure how it is implemented in the Numerical Recipes book.
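For reference, here is a minimal sketch in Python of the reflect/extend/retract step described above. This is not the Numerical Recipes amoeba code; the coefficients 1, 2, and 0.5 are the conventional choices, and the final shrink step of the full Nelder-Mead algorithm is omitted for brevity.

    import numpy as np

    def nelder_mead_step(f, simplex, fvals):
        # Sort the vertices so the best point comes first and the worst last.
        order = np.argsort(fvals)
        simplex, fvals = simplex[order], fvals[order]
        worst = simplex[-1]
        # Centroid of the face opposite the worst vertex (the midpoint of the
        # opposite side when the simplex is a triangle).
        centroid = simplex[:-1].mean(axis=0)

        # Reflection: mirror the worst vertex through that centroid.
        reflected = centroid + (centroid - worst)
        fr = f(reflected)
        if fr < fvals[0]:
            # Expansion: the reflected point beat the best vertex, so try
            # extending further along the same line.
            expanded = centroid + 2.0 * (centroid - worst)
            fe = f(expanded)
            if fe < fr:
                simplex[-1], fvals[-1] = expanded, fe
            else:
                simplex[-1], fvals[-1] = reflected, fr
        elif fr < fvals[-2]:
            # The plain reflection is still an improvement; accept it.
            simplex[-1], fvals[-1] = reflected, fr
        else:
            # Contraction: retract to a point on the line inside the simplex.
            contracted = centroid + 0.5 * (worst - centroid)
            fc = f(contracted)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = contracted, fc
            # (A full implementation would shrink the whole simplex here.)
        return simplex, fvals

    # Example usage on a simple quadratic bowl:
    f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
    simplex = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    fvals = np.array([f(v) for v in simplex])
    for _ in range(200):
        simplex, fvals = nelder_mead_step(f, simplex, fvals)
    print(simplex[np.argmin(fvals)])   # should be close to (1, -2)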
Thanasis Kakali
10-30-2003, 05:03 AM
Hi,
I have a function written in the form y = f(x, G1, G2). The function is non-linear, and G1 is always greater than G2. From my experiment I obtain 300 points of y and x. My question is: how should one use the downhill simplex algorithm to minimize the sum of squares of the difference
y_experimental - y_calculated
while optimizing the values of G1 and G2 at the same time? Has anyone worked with the amoeba subroutine in Fortran 77 for similar problems?
Thanks anyway
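A minimal sketch of how such a least-squares fit could be set up with a downhill simplex routine is given below. It uses Python and SciPy's Nelder-Mead minimizer rather than the Fortran 77 amoeba subroutine, the model function is only a placeholder (not the actual f(x, G1, G2)), the synthetic data and starting guess are made up for illustration, and the G1 > G2 constraint is handled with a simple penalty.

    import numpy as np
    from scipy.optimize import minimize

    # Placeholder model: the real f(x, G1, G2) would come from the experiment.
    def model(x, G1, G2):
        return G1 * np.exp(-G2 * x)

    def sum_of_squares(params, x_data, y_data):
        G1, G2 = params
        if G1 <= G2:
            # Crude way to enforce G1 > G2: return a huge value so the
            # simplex moves away from infeasible points.
            return 1e30
        residuals = y_data - model(x_data, G1, G2)
        return np.sum(residuals ** 2)

    # Stand-ins for the ~300 measured (x, y) points.
    x_data = np.linspace(0.0, 10.0, 300)
    y_data = model(x_data, 2.0, 0.5) + 0.05 * np.random.randn(300)

    result = minimize(sum_of_squares, x0=[1.0, 0.1],      # arbitrary starting guess
                      args=(x_data, y_data), method='Nelder-Mead')
    G1_fit, G2_fit = result.x
    print(G1_fit, G2_fit)

The essential point is that the simplex routine only needs a single scalar function of the parameters, so the sum of squared residuals over all 300 points is computed inside the objective and the minimizer varies G1 and G2.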