Powell minimization
nekkceb
09-03-2009, 08:14 AM
I have just converted code from ver 2 of NR to ver 3, using Powell to find a solution to a fairly complex modeling problem. The ver 3 code calls the same function as the old code. My problem is that version 3 calls my function 2-3 times more often than version 2 did. I have the same tolerance limit set, and I have tried single-stepping through both, and do not see an obvious difference. Has anyone else had the same experience?
davekw7x
09-04-2009, 03:30 PM
I have just converted code...
Depending on the function and the initial estimate and the way that different compilers handle floating point precision internally, slight differences here and there can make fundamental differences in the output.
So... here are some questions that I have:
Are you coming from NR2 C or NR2 C++?
What is the pedigree/provenance of the old and new NR functions code? CD? Download? What?
What compiler(s) do you use (vendor and version number)? Same compiler for both or different?
Did both versions end up at the same result with the only apparent difference being the number of function evaluations?
Can you post the function that you are using along with your initial estimate and expected output? Show how you call the NR function(s).
Regards,
Dave
nekkceb
09-06-2009, 04:41 PM
Thanks for the reply, Dave. I have been fiddling more with this, so I have some more info and replies to your queries:
1. Are you coming from NR2 C or NR2 C++?
Coming all the way from NR2, around which I put my own C++ wrapper. I am now using straight NR3, no custom wrapper.
2. What is the pedigree/provenance of the old and new NR functions code? CD? Download? What?
It has been so long, I am not sure. I believe it was download. I have been a NR user for ages, dating back to Pascal days, though have not kept up to date.
3. What compiler(s) do you use (vendor and version number)? Same compiler for both or different?
Same compiler, Visual C++; the About page says 7.1.6030.
4. Did both versions end up at the same result with the only apparent difference being the number of function evaluations?
Yes, essentially the same result, just a lot more calls to my function.
5. Can you post the function that you are using along with your initial estimate and expected output? Show how you call the NR function(s).
Unfortunately, this is a modeling effort; the function is pages of code and would require posting some fairly substantial data files to run it. With the additional info below, hopefully we can make progress before resorting to sending all that.
Here's my additional info. I tried changing the "ftol" parameter to Powell from the 0.01 that I was passing. The number of function calls does not change substantially until I set ftol to 1.0!! When I poke into the code, I see that ftol is used in the following line, in mins_ndim.h:
if (2.0*(fp-fret) <= ftol*(abs(fp)+abs(fret))+TINY)
return p;
This line is the same in the old C-code.
When I set a breakpoint there, the first time that line is reached is after about 32 calls to my function. I believe most of those calls happen in the call to linmin.
So, any clues as to how to better control the tolerance, possibly in linmin?
nekkceb
09-07-2009, 08:38 AM
More information. In the new code (NR3), in the Powell::minimize method, a call is made to Linemethod::linmin() without passing the "tol" value. Within Linemethod::linmin(), Brent is used with its NOMINAL value for tol, 3.0e-8. In my old code (and I don't know if this is my modification or the way NR2 was originally...), I pass along the tol to the different method calls. This could easily explain why my function gets called many more times in the new code. I will try modifying the NR3 code to pass along the tol value, but in general I do not like modifying source like this (in spite of what I may have done for NR2!).
I would like control over the tolerance. I actually call Powell 4 times. The entire routine solves for a quantity Q. In the first pass at solving for Q, we make a rough estimate for it using an assumed value for a quantity V/Q (=1.0). After this first solution, we get a better estimate for V/Q and take another crack at the solution. With each run I re-estimate V/Q and tighten up tol a bit, though because each run starts closer to the solution, it takes fewer iterations anyway. I have thought about wrapping this whole loop in a more elegant iterative method, but this seems to work well.
I hope this motivates what I am doing.
nekkceb
09-22-2009, 11:41 AM
Just curious if you saw my last post and could make sense of it. Sorry I cannot include much code; the routine that I am minimizing involves thousands of lines and huge data files, and I don't have time to dream up a non-trivial example to pass along.
But here is the issue: should the tol value passed into the Powell routine be passed on to its subroutines? As I pointed out earlier, in the released NR3 code it currently is NOT passed along, so the line minimization gets its nominal, very small, value for tol. Should I report this as a bug or a to-be-fixed issue?
I did change this behavior in my copy of the code, and it works well for my purposes, but that leaves me wondering if there is some basic principle that I violated by doing that.
Thanks for your help!