DavidB
06-19-2013, 02:22 PM
Hello.
(I have the First Edition; perhaps this routine is in the latest edition. If so, please let me know).
I am looking around for an extremely robust vector normalization routine (floating-point variables) for the Euclidean (L2) norm:
||v|| = sqrt(x1^2 + x2^2 + x3^2 + ... + xn^2), and the normalized vector is v/||v||
In case this notation is not clear:
First, it sums the squares of all the components of the vector;
then it takes the square root of this sum to get the norm;
then it divides each component of the original vector by this norm to produce the normalized vector.
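To make the three steps concrete, here is a minimal sketch in C of that naive routine (the function name is mine, and there is deliberately no guard against a zero vector or against overflow, so the hazards are visible):

```c
#include <math.h>
#include <stddef.h>

/* Naive normalization: sum the squares, take the square root,
   divide every component by the result.  x[i]*x[i] overflows once
   |x[i]| exceeds roughly sqrt(DBL_MAX), very small components
   underflow to zero, and a zero vector produces a division by zero. */
void normalize_naive(double *x, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += x[i] * x[i];

    double norm = sqrt(total);
    for (size_t i = 0; i < n; i++)
        x[i] /= norm;
}
```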
I have come across a couple of routines that check for underflow or overflow conditions at every step of the process.
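One standard way to avoid those step-by-step checks is to rescale before squaring, in the spirit of the reference BLAS dnrm2: factor out the largest magnitude so every squared term lies in [0, 1]. A sketch of that idea (the function name is again mine):

```c
#include <math.h>
#include <stddef.h>

/* Scaled normalization: dividing by the largest |x[i]| keeps every
   squared term in [0, 1], so the sum cannot overflow and far less
   is lost to underflow.  total ends up in [1, n], so sqrt(total)
   is always well behaved. */
void normalize_scaled(double *x, size_t n)
{
    double xmax = 0.0;
    for (size_t i = 0; i < n; i++) {
        double a = fabs(x[i]);
        if (a > xmax) xmax = a;
    }
    if (xmax == 0.0) return;          /* zero vector: leave it alone */

    double total = 0.0;
    for (size_t i = 0; i < n; i++) {
        double t = x[i] / xmax;       /* |t| <= 1 by construction */
        total += t * t;
    }

    double s = sqrt(total);           /* in [1, sqrt(n)] */
    for (size_t i = 0; i < n; i++)
        x[i] = (x[i] / xmax) / s;     /* two divisions, both safe */
}
```

Dividing in two steps at the end avoids forming xmax*sqrt(total), which could itself overflow when xmax is near the top of the double range.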
And I have browsed some forums in which members have suggested sorting the vector's elements by magnitude, from smallest to largest, and then accumulating the sum of squares starting with the smallest terms.
That way, if some elements are extremely small and some are very large, the small elements don't get lost to rounding: summed first, they can build up a value significant enough to survive being added to the first large term.
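The smallest-first summation is easy to bolt on: square into a scratch array, sort it ascending, and accumulate in that order. A sketch (helper names are mine; in practice you would combine this with the scaling above, since squaring unscaled elements can still overflow or underflow):

```c
#include <math.h>
#include <stdlib.h>

/* Ascending comparator for qsort on doubles. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Accumulate the squares smallest-first, so the tiny terms are
   summed before they can be swamped by the large ones. */
double norm_sorted(const double *x, size_t n)
{
    double *sq = malloc(n * sizeof *sq);
    if (sq == NULL)
        return -1.0;                  /* allocation-failure sentinel */

    for (size_t i = 0; i < n; i++)
        sq[i] = x[i] * x[i];
    qsort(sq, n, sizeof *sq, cmp_double);

    double total = 0.0;
    for (size_t i = 0; i < n; i++)    /* ascending order */
        total += sq[i];

    free(sq);
    return sqrt(total);
}
```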
In any case, I thought I would ask this question in this forum too.
It does not need to be fast; I just want it to be as accurate as possible. Are there any other considerations I should be aware of? Are there any further developments, routines, or articles worth reading on the topic?
Any recommendations and suggestions are welcome.