rboesch
03-19-2015, 02:20 PM
Hello all,
In Section 9.7 of NR in C (2nd ed.), it's noted that the components of x and F should be rescaled to be of order unity to avoid root-convergence problems. Assume this rescaling has been done.
In newt(), a value is assigned to the scaling parameter stpmax, which lnsrch() subsequently uses to limit the length of the x-increment vector p: p is rescaled only when its raw length exceeds the cap, so that |p| <= stpmax, where
stpmax = STPMX * max(|x|, n),
STPMX = 100 and n is the dimension of x. So stpmax is of order 100, or larger still when n exceeds |x|.
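For reference, here is the relevant logic, paraphrased from memory of newt() and lnsrch() in NR in C (2nd ed.), so the exact lines may differ slightly; SQR and FMAX are the book's utility macros:

    /* in newt(): compute the step-length cap */
    for (sum=0.0,i=1;i<=n;i++) sum += SQR(x[i]);
    stpmax=STPMX*FMAX(sqrt(sum),(float)n);    /* STPMX = 100.0 */

    /* in lnsrch(): shrink p only if it is longer than stpmax */
    for (sum=0.0,i=1;i<=n;i++) sum += p[i]*p[i];
    sum=sqrt(sum);
    if (sum > stpmax)
        for (i=1;i<=n;i++) p[i] *= stpmax/sum;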
Since the full Newton step is attempted first and is only cut back when |p| exceeds stpmax, a first step as large as order 100 is permitted. An initial guess of order unity can therefore be updated to a new guess of order 100.
Does this choice of value for STPMX defeat the purpose of the original rescaling? Is there a reason it was chosen to be this large? Similarly, is the dependence of stpmax on n also at odds with the rescaling, say for n >= 10?
Of course, different choices of STPMX will locate different roots from the same initial guess. With several roots available, it's easy to demonstrate that an initial guess near one root can converge to a root farther away when STPMX = 100, whereas with STPMX = 1 convergence to the nearer root takes place; a toy demonstration is sketched below. But my question is more about algorithm stability: what is the rationale behind choosing STPMX = 100, and behind the dependence of stpmax on n?
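To make that concrete, here is a minimal, self-contained 1-D sketch (my own toy code, not NR's; the names newton1d, f, fp, and merit are mine). It mimics the step cap and a crude backtracking line search on f(x) = sin(x), whose roots are the multiples of pi, starting near x = pi/2 where f' is nearly zero and the raw Newton step is huge:

    /* Toy 1-D illustration: damped Newton with a step cap.
       Compile with: cc demo.c -lm */
    #include <stdio.h>
    #include <math.h>

    static double f(double x)  { return sin(x); }
    static double fp(double x) { return cos(x); }

    /* merit function 0.5*f^2, as used by the line search */
    static double merit(double x) { double v = f(x); return 0.5*v*v; }

    static double newton1d(double x, double stpmax)
    {
        for (int it = 0; it < 100; it++) {
            double fx = f(x);
            if (fabs(fx) < 1e-10) break;               /* converged */
            double p = -fx/fp(x);                      /* raw Newton step */
            if (fabs(p) > stpmax) p *= stpmax/fabs(p); /* cap the step */
            double lam = 1.0;                          /* crude backtracking */
            while (lam > 1e-8 && merit(x + lam*p) >= merit(x))
                lam *= 0.5;
            x += lam*p;
        }
        return x;
    }

    int main(void)
    {
        double x0 = 1.5;   /* near pi/2, where f' is nearly zero */
        int n = 1;
        /* stpmax = STPMX * max(|x0|, n), as in newt() */
        double big   = 100.0 * fmax(fabs(x0), (double)n);
        double small =   1.0 * fmax(fabs(x0), (double)n);
        printf("STPMX=100: root %+.6f\n", newton1d(x0, big));   /* ~ -4*pi */
        printf("STPMX=1:   root %+.6f\n", newton1d(x0, small)); /* 0 */
        return 0;
    }

With STPMX = 100 the raw step of length ~14 passes under the cap, is accepted by the line search, and the iteration converges to -4*pi; with STPMX = 1 the step is cut back to length 1.5 and the iteration converges to the nearer root at 0.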
Many thanks,
Rick