mcmc
04-23-2011, 07:44 PM
Hello,
I'm currently implementing different minimisation techniques to see how they cope with my 7-dimensional function. There is no way of knowing its analytic form, so none of the techniques can compute gradients analytically.
So far I have implemented downhill simplex (Nelder-Mead), Powell's method, and a quasi-Newton method (with an additional Funcd that approximates the gradient by finite differences). First question: are there any other popular/effective minimisation techniques I could program?
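For context, the finite-difference part looks roughly like this (a minimal sketch rather than my exact code; the wrapper name FuncdFD and the step-size choice are just illustrative):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Rough sketch of a finite-difference "Funcd"-style wrapper around a
// gradient-free objective (names and layout are illustrative only).
template <class F>
struct FuncdFD {
    F &func;
    explicit FuncdFD(F &f) : func(f) {}

    // Function value.
    double operator()(const std::vector<double> &x) { return func(x); }

    // Forward-difference approximation to the gradient, with the step for
    // each coordinate scaled to that coordinate's magnitude.
    void df(const std::vector<double> &x, std::vector<double> &grad) {
        const double eps = std::sqrt(std::numeric_limits<double>::epsilon());
        const double fx = func(x);
        std::vector<double> xh = x;
        grad.assign(x.size(), 0.0);
        for (std::size_t i = 0; i < x.size(); ++i) {
            double h = eps * std::max(std::fabs(x[i]), 1.0);
            xh[i] = x[i] + h;
            h = xh[i] - x[i];               // reduce rounding error in the step
            grad[i] = (func(xh) - fx) / h;  // one extra function call per dimension
            xh[i] = x[i];
        }
    }
};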
And secondly, I am having trouble choosing the control parameters for each of these methods (by parameters I mean the ones used by the minimisation itself, not the ones being optimised to find the minimum). In downhill simplex I am passing a vector of dels (step sizes) to the routine so that I can control the initial displacement of each variable. How can I choose the most appropriate step sizes? I am using "numeric_limits<double>::epsilon()" as the value of ftol (the fractional convergence tolerance), but I am not sure what the most sensible value to choose is. The initial value of my function is of order 10^7, and the minimum should be around 10^4.
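To make that question concrete, this is the kind of set-up I mean for the simplex (the numbers below are placeholders, not my real starting point or scales):

#include <cstddef>
#include <vector>

int main() {
    // Placeholder starting point and characteristic scale of each of the
    // 7 parameters (dummy values, just to show the shape of the set-up).
    std::vector<double> p0    = {1.0e3, 2.0e3, 5.0,  0.1, 40.0, 7.0e2, 3.0};
    std::vector<double> scale = {1.0e3, 1.0e3, 10.0, 1.0, 50.0, 1.0e3, 10.0};

    // Is something like this sensible: each del a fixed fraction (say 10%)
    // of that parameter's characteristic scale, rather than one global step?
    std::vector<double> dels(p0.size());
    for (std::size_t i = 0; i < dels.size(); ++i)
        dels[i] = 0.1 * scale[i];

    // And for ftol, something well above machine epsilon, e.g. 1e-8,
    // instead of numeric_limits<double>::epsilon()?
    const double ftol = 1.0e-8;

    // ... then hand p0, dels and ftol to the downhill simplex routine.
    (void)ftol;
    return 0;
}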
In Powell's method I am not giving the minimisation a value for ftol, since the solution sometimes gets a bit stuck on the same value. Is this a good idea?
I am also completely unsure what values to use for gtol, TOLX, and STPMAX (or whether I should just leave the defaults!). Currently the minimisation is not working correctly: the algorithm terminates very prematurely (it reports 0 iterations) after evaluating the function about 80 times, and the function value at the returned "minimum" is actually about 1000 higher than where it started. I can't see why this is happening; has anyone come across this before, and is it clear what is wrong?
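For completeness, these are the quantities I mean (placeholder values only, to show what I'm asking about, not my actual settings):

// Quasi-Newton control values I am unsure about (placeholder numbers,
// not a recommendation).
const double gtol   = 1.0e-8;   // convergence test on the (scaled) gradient
const double TOLX   = 1.0e-11;  // convergence test on the change in x per iteration
const double STPMAX = 100.0;    // cap on the length of any single line-search step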
Any help would be very much appreciated, sorry for the length of this post!!