17.3.4 Bulirsch-Stoer with dense output
softcore
09-20-2008, 11:12 AM
Hi,
running (exactly) the example from 17.0.4 using StepperBS gives me a floating-point overflow exception on MSVC 2005 when using dense output (out(20);), whereas running it with output only at the natural step points (out(-1);) is fine.
Is there a problem in the interpolation routine or am I confusing something completely?
Thx,
Alex
davekw7x
09-21-2008, 02:16 PM
...Is there a problem...
Different algorithms may be more sensitive to things like the tolerances that you specify (atol, rtol) and the initial guess for the stepsize (h1).
If you try to tell the algorithm what steps to use (by using out(20)) instead of letting it choose them on its own (by using out(-1)), it might miss important features like big changes in the solution at particular values, or it might go off to never-never land (floating-point overflows, etc.) trying to get from one point to a not-very-fortunately-chosen next one.
Some algorithms give very precise approximations with fairly small numbers of steps for systems whose solutions are fairly smooth, but may not be so good for "stiff" systems.
If I run the example exactly as given, StepperBS gives floating point overflows (printed as "nan" with some compilers) between x = .8 and x = .9.
If I look at something approximating the actual solution (see attachment), I can see huge values of y-prime at values of x about .83 and 1.67. In the attachment, the upper graph is y and the lower is y-prime.
Several things to try so that you can get a "feel" for a particular class of problem (insight):
Run the program with out(-1) and see how many steps the algorithm needs. If you require printed output at more points, then try increasing the density of output points.
If you require printed output values at fewer points, then you can make a loop that applies the algorithm going from each step to the next step. The final value from one of your output points is the starting value of the next time through the loop. For each iteration, let the algorithm decide how many steps to use in going from one of your output points to the next.
You can also experiment with one single application of the algorithm but using a tighter tolerance (atol = 1.0e-5 instead of 1.0e-3 for example). Even specifying a smaller initial step size (h1) could make a difference.
Note that if you give very tight tolerance values you may actually get worse results, since increasing the number of steps may introduce too much roundoff error.
Bottom line: Try different things. Try to understand the nature of your specific problem. Maybe that will help you to make a decision on which stepper to use. I mean, after all, if there were a single "best" way to solve all such problems, we wouldn't have so many choices would we?
Regards,
Dave
softcore
09-22-2008, 08:12 AM
Thx for your quick help.
What confused me is that I thought that, in order to get dense output, the values calculated during "normal" stepping are interpolated to find values at the "dense" positions (maybe using additional evaluations of dfdx). This shouldn't affect the actual integration unless the integration was using values acquired during the interpolation (which I thought it doesn't, but apparently it does).
Alex
Saul Teukolsky
09-22-2008, 01:50 PM
Dear Alex,
This example shows some interesting features of the BS method. First, you are correct that dense output means interpolating the solution to give output at the desired points (20 in this case) without changing the integration steps. So you are right to be mystified why changing
Output out(-1);
to
Output out(20);
can produce NaNs.
The reason is that (as pointed out in the text at the end of Section 17.3.4) for the BS dense-output algorithm, the error of the interpolation is also monitored. If it is too big, the stepsize gets reduced. So a slightly different set of stepsizes gets used. (This is not true of the dense output for the other algorithms in NR - there is no feedback.)
Now the example problem has a sharp feature around x=0.83. BS likes to take big stepsizes, but with the crude error tolerance of 1.e-3, the routine can easily completely miss the sharp feature at x=0.83. This is exactly what happens with the dense output turned on. Without dense output, it so happens that a step lands close enough to the sharp feature that the error can be detected and the stepsize reduced to navigate safely over the sharp feature. Changing the error tolerance to 1.e-4 (or smaller) completely fixes the problem with dense output.
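For concreteness, the fix described above amounts to changing one constant in the driver code from the Section 17.0.4 example. The fragment below is a sketch, not a complete program: it assumes the NR3 interface (Odeint, StepperBS, Output) and the example's setup with the rhs_van right-hand side; check your own copy of the example for the exact values and declarations.

```cpp
// Driver as in the Sec. 17.0.4 example, but with atol (and rtol)
// tightened from 1.0e-3 to 1.0e-4 so the stepsize control can
// detect the sharp feature near x = 0.83 even with dense output.
const Doub atol = 1.0e-4, rtol = atol, h1 = 0.01, hmin = 0.0;
const Doub x1 = 0.0, x2 = 2.0;
VecDoub ystart(2);
ystart[0] = 2.0;              // initial conditions from the example
ystart[1] = 0.0;
Output out(20);               // dense output at 20 points, as before
rhs_van d(1.0e-3);            // right-hand side from the example
Odeint<StepperBS<rhs_van> > ode(ystart, x1, x2, atol, rtol,
                                h1, hmin, out, d);
ode.integrate();
```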
The moral of the story is that a super-efficient algorithm like BS should be used with care when the error tolerance is very crude and the solution might have sharp features.
Saul Teukolsky
softcore
09-23-2008, 10:49 AM
That makes sense.
Thx for your help and this great book!
Alex