Singular Value Decomposition (SVD) Algorithm problem


TheDestroyer
03-23-2012, 01:05 PM
Hello guys,

I think there's a problem in the singular value decomposition algorithm in numerical recipes 3.

Singular value decomposition is supposed to decompose a matrix A as:

A = U*S*V^T

where U and V are square unitary matrices and S is (in general) a rectangular matrix.

When I do the SVD of this matrix

{{2., 2., 5.}, {4., 5., 1.}, {7., 8., 9.}, {13., 11., 12.}}

(which has 4 rows and 3 columns), the U matrix is returned as a rectangular matrix with the dimensions of A, which is totally wrong!

This happens because the SVD class is initialized with the matrix SVD::u set equal to the input matrix A.

Any ideas? I confirmed this by comparing the results for several matrices against Mathematica's.

Please suggest what I can do about this.

Thank you for any help.

davekw7x
03-23-2012, 05:01 PM
Hello guys,

I think there's a problem in the singular value decomposition algorithm in numerical recipes 3.

Singular value decomposition is supposed to decompose a matrix A as:

A = U*S*V^T

where U and V are square unitary matrices;
Well, as my dear old granny used to say, "There's more than one way to skin a cat!" (See Footnote.)

You may want to refer to Moler's chapter on Eigenvalues and SVD (http://www.mathworks.com/moler/eigs.pdf) or some such thing.

The matrix on the left is, indeed, a square matrix whose size is mxm, where m is the number of rows in the original matrix. For the case of interest, where n is the number of columns and m > n, it "turns out" that the last (m-n) columns of the matrix on the left of the decomposition are "extra." They do not contribute to any of the calculations needed to reconstruct the original matrix, and they are not needed for eigenvalue (singular value) analysis of the matrix. In numerical analysis, if they don't need things, they don't carry them around just for the heck of it, right? I mean, the "real thing" in the center of the decomposition is an nxn diagonal matrix, but they don't bother to carry around the zeros; they just define a vector that contains the values of the diagonal elements of the matrix.
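Spelling out the bookkeeping (my notation here, not the book's): for an mxn matrix A with m > n, the "full" decomposition is

A (mxn) = U (mxm) * S (mxn) * V^T (nxn)

while the "reduced" (or "thin") form that NR3 actually stores is

A (mxn) = U (mxn) * W (nxn, diagonal) * V^T (nxn)

The last (m-n) columns of the full U only ever multiply rows of zeros in S, so dropping them (and storing W as a length-n vector of diagonal entries) loses nothing in the product.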

Anyhow...

Here's the bottom line for me: Apply the Numerical Recipes svd decomposition to your system and tell us what the singular values are and what the condition number of the matrix is. (That is: what is the ratio of the largest to the smallest singular value?)

Here's my take on it:

///
// Driver for Singular Value Decomposition routine
// for Numerical Recipes Version 3
//
// Data file format:
// First line is a comment line (a throwaway)
// Second line has number of rows and number of columns
// Successive lines have elements in row-major order
//
// davekw7x
//

//
// Put whatever path you need for the NR3 source files
#include "../code/nr3.h"
#include "../code/svd.h"

int main(int argc, char **argv)
{
    const char *inname = "matrx.dat";
    int numRows, numCols;
    string inLine;

    if (argc > 1) {
        inname = argv[1];
    }

    ifstream inFile(inname);
    if (!inFile) {
        cout << "Can't open file " << inname << " for reading." << endl;
        exit(EXIT_FAILURE);
    }
    else {
        cout << "Opened file " << inname << " for reading." << endl;
    }

    // Read and discard first line
    getline(inFile, inLine);

    // Second line has number of rows and columns
    getline(inFile, inLine); // Will discard rest of line

    stringstream ss(inLine);
    ss >> numRows >> numCols;

    if (!inFile || !ss) {
        cout << "Invalid input: Can't read number of rows and columns." << endl;
        exit(EXIT_FAILURE);
    }
    cout << "Reading matrix with " << numRows
         << " rows and " << numCols << " columns."
         << endl;

    MatDoub a(numRows, numCols);

    // Read the matrix elements
    for (int i = 0; i < a.nrows(); i++) {
        for (int j = 0; j < a.ncols(); j++) {
            inFile >> a[i][j];
            if (!inFile) {
                cout << "Problem reading a[" << i << "][" << j << "]" << endl;
                exit(EXIT_FAILURE);
            }
        }
    }

    //
    // Instantiation performs the svd decomposition
    SVD svd(a);

    // All we are really interested in are the svd values,
    // i.e. the vector that represents the diagonal of
    // the matrix "w," but I'll print out all of the
    // decomposition matrix elements for check purposes.
    //
    cout << "*********After decomposition***********" << endl;
    cout << "Matrix svd.u" << endl;
    cout << fixed << setprecision(6);

    for (int i = 0; i < svd.u.nrows(); i++) {
        for (int j = 0; j < svd.u.ncols(); j++) {
            cout << setw(12) << svd.u[i][j];
        }
        cout << endl;
    }

    // These are what we are really interested in:
    cout << "Diagonal of matrix w (svd.w)" << endl;
    for (int ii = 0; ii < svd.w.size(); ii++) {
        cout << setw(12) << svd.w[ii];
    }

    cout << endl << "Matrix v-transpose (svd.v)" << endl;
    for (int i = 0; i < svd.v.nrows(); i++) {
        for (int j = 0; j < svd.v.ncols(); j++) {
            cout << setw(12) << svd.v[j][i];
        }
        cout << endl;
    }

    cout << endl << "Check the product against the original matrix:" << endl;
    cout << "Original matrix:" << endl;
    for (int i = 0; i < a.nrows(); i++) {
        for (int j = 0; j < a.ncols(); j++) {
            cout << setw(12) << a[i][j];
        }
        cout << endl;
    }

    //
    // A short-hand calculation, depending on our intimate
    // knowledge of the nature of the matrices and our
    // familiarity with matrix operations, namely the fact
    // that w is diagonal.
    //
    cout << "Product u*w*(v-transpose):" << endl;
    for (int i = 0; i < numRows; i++) {
        for (int j = 0; j < numCols; j++) {
            a[i][j] = 0.0;
            for (int k = 0; k < numCols; k++) {
                a[i][j] += svd.u[i][k] * svd.w[k] * svd.v[j][k];
            }
        }
    }

    for (int i = 0; i < a.nrows(); i++) {
        for (int j = 0; j < numCols; j++) {
            cout << setw(12) << a[i][j];
        }
        cout << endl;
    }
    inFile.close();

    return 0;
}
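By the way, if you want the condition number I asked about, here's a minimal sketch you could drop in right after the SVD instantiation. It only uses the svd.w vector from the decomposition; nothing else is assumed:

    // Condition number: ratio of the largest to the smallest singular value
    Doub wmax = svd.w[0], wmin = svd.w[0];
    for (int k = 1; k < svd.w.size(); k++) {
        if (svd.w[k] > wmax) wmax = svd.w[k];
        if (svd.w[k] < wmin) wmin = svd.w[k];
    }
    if (wmin > 0.0) {
        cout << "Condition number: " << wmax / wmin << endl;
    } else {
        cout << "Smallest singular value is zero: matrix is singular." << endl;
    }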


Here's the file:

matrx.dat: Matrix for davekw7x's svd analysis with NR3
4 3 Four rows and three columns
2.0 2.0 5.0
4.0 5.0 1.0
7.0 8.0 9.0
13.0 11.0 12.0


Output (Program was compiled with GNU g++ version 4.1.2 on my Centos 5.8 Linux system):

Opened file matrx.dat for reading.
Reading matrix with 4 rows and 3 columns.
*********After decomposition***********
Matrix svd.u
0.200301 -0.593391 0.170310
0.217730 0.768024 0.397079
0.529515 -0.224076 0.673247
0.795039 0.088406 -0.600051
Diagonal of matrix w (svd.w)
26.171503 3.932220 1.609364
Matrix v-transpose (svd.v)
0.585126 0.552922 0.593215
0.372832 0.466199 -0.802281
-0.720155 0.690606 0.066638

Check product against original matrix:
Original matrix:
2.000000 2.000000 5.000000
4.000000 5.000000 1.000000
7.000000 8.000000 9.000000
13.000000 11.000000 12.000000
Product u*w*(v-transpose):
2.000000 2.000000 5.000000
4.000000 5.000000 1.000000
7.000000 8.000000 9.000000
13.000000 11.000000 12.000000




Regards,

Dave

Footnote:
I never really understood that thing about skinning a cat, since my grandmum loved cats. I mean she always had some around the farm and...

Oh, crap.

TheDestroyer
03-23-2012, 06:30 PM
Thank you for your reply.

But then U isn't unitary! I want to calculate the pseudo-inverse from the product U*S*V^T. I think it wouldn't be possible to do that after dropping columns of U, right?

davekw7x
03-23-2012, 10:26 PM
...I want to calculate the pseudo inverse from the product U*S*V^T...
There is an illustration of results here: http://www.nr.com/forum/showthread.php?t=993

Bottom line: if NR3 SVD gives [U], [W] and [V], where A = [U] * [W] * [V]^T, then the pseudo-inverse can be calculated as [V] * [W+] * [U]^T (where W+ is obtained by taking the reciprocals of the non-zero members of W and leaving the zero ones as zero). Of course these examples are for real-valued matrices, so the transpose is equal to the conjugate transpose. (See Footnote.)
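If it helps, here's a minimal sketch of that calculation using the NR3 members directly. This is my own throwaway code, not from the book; it assumes 'a' is the MatDoub holding your m x n matrix (m >= n), and the tolerance is a made-up cutoff for deciding which singular values to treat as zero (svd.w[0] is the largest value, since NR3 sorts them in descending order):

    // Pseudo-inverse pinv = V * W+ * U^T for an m x n matrix a (m >= n)
    SVD svd(a);
    Doub tol = 1.0e-12 * svd.w[0];        // arbitrary cutoff for "zero" singular values
    MatDoub pinv(a.ncols(), a.nrows());   // result is n x m
    for (int i = 0; i < a.ncols(); i++) {
        for (int j = 0; j < a.nrows(); j++) {
            pinv[i][j] = 0.0;
            for (int k = 0; k < a.ncols(); k++) {
                if (svd.w[k] > tol) {
                    pinv[i][j] += svd.v[i][k] * (1.0 / svd.w[k]) * svd.u[j][k];
                }
            }
        }
    }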


Regards,

Dave

Footnote:
In that example, I actually filled out the matrix W from the diagonals given by the NR3 SVD and printed everything out. Then I performed full matrix multiplication in a program built from the SVD example that I showed in this thread. Results are seen to be the same as obtained from other methods (i.e. with other tools) even though some of the intermediate matrix properties might have been different.


"The purpose of computing is insight, not numbers."
---Richard W. Hamming

"Numbers are still the bottom line, and the bottom line is the bottom line."
---davekw7x

TheDestroyer
03-24-2012, 06:33 AM
Thanks for the help :-)

DimpleB
12-04-2012, 12:18 PM
@Dave - Hi. I tried out your matrix and the following is the answer I get for U, W, and V. The only things I changed were float to double, and the matrix indexing starting from 0 (as opposed to 1) in the svd example suggested in the book. Could you please help me with it? Attached are the code and the answer I get when running it through the Dev-C++ compiler as a .c file.

davekw7x
12-04-2012, 11:01 PM
@Dave - Hi. I tried out your matrix and the following is the answer I get for U, W, and V...

SVD decomposition is not unique. The answers that I showed previously were from the NR3 program, which organizes things a little differently. (It rearranges things to give the singular values in sorted order.)

Bottom line: Check your solution by multiplying U times W times V-transpose.

The product should be (approximately) equal to your original matrix.
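If you want a single number instead of comparing printouts by eye, here's a minimal sketch along the lines of the product loop in my earlier driver. It assumes 'orig' is an untouched copy of your input matrix and 'svd' is the NR3-style decomposition object from that driver:

    // Largest absolute difference between U*W*V^T and the original matrix
    Doub maxErr = 0.0;
    for (int i = 0; i < orig.nrows(); i++) {
        for (int j = 0; j < orig.ncols(); j++) {
            Doub sum = 0.0;
            for (int k = 0; k < orig.ncols(); k++) {
                sum += svd.u[i][k] * svd.w[k] * svd.v[j][k];
            }
            Doub err = fabs(sum - orig[i][j]);
            if (err > maxErr) maxErr = err;
        }
    }
    cout << "Max reconstruction error: " << maxErr << endl;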

Here's what I got using the NR2 svdcmp.cpp. (Distribution files used unchanged.)


Decomposition matrices:
Matrix u
-0.200301 -0.170310 -0.593391
-0.217730 -0.397079 0.768024
-0.529515 -0.673247 -0.224076
-0.795039 0.600051 0.088406
Diagonal of matrix w
26.171503 1.609364 3.932220
Matrix v-transpose
-0.585126 -0.552922 -0.593215
0.720155 -0.690606 -0.066638
0.372832 0.466199 -0.802281

Check product against original matrix:
Original matrix:
2.000000 2.000000 5.000000
4.000000 5.000000 1.000000
7.000000 8.000000 9.000000
13.000000 11.000000 12.000000
Product u*w*(v-transpose):
2.000000 2.000000 5.000000
4.000000 5.000000 1.000000
7.000000 8.000000 9.000000
13.000000 11.000000 12.000000



Regards,

Dave