r,covariance,pca,eigenvalue,princomp

You explicitly told princomp to use the correlation matrix in this line: test <- princomp(data, cor=T). If you omit the parameter and just use test <- princomp(data), it will use the covariance matrix and you'll get (roughly) the results you're expecting....
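
Although the question is about R, the covariance-vs-correlation difference is easy to see with a small numpy sketch (made-up data; the column scales are chosen to differ wildly):

```python
import numpy as np

rng = np.random.default_rng(0)
# Three columns with wildly different scales (illustrative data)
data = rng.normal(size=(100, 3)) * np.array([1.0, 10.0, 100.0])

cov = np.cov(data, rowvar=False)          # what princomp(data) analyzes
corr = np.corrcoef(data, rowvar=False)    # what princomp(data, cor=T) analyzes

cov_vals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues, descending
corr_vals = np.linalg.eigvalsh(corr)[::-1]

# With the covariance matrix, the large-scale column dominates the first
# component; with the correlation matrix, every column is standardized first.
print(cov_vals / cov_vals.sum())
print(corr_vals / corr_vals.sum())
```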

What you want is to find the upper Hessenberg form of your matrix. For a symmetric matrix, this is tridiagonal. Use the command hess to do this: B=hess(A); ...
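
If you ever need the same reduction outside MATLAB, SciPy exposes it as scipy.linalg.hessenberg; a sketch on a random symmetric matrix:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A = A + A.T                      # make A symmetric

H = hessenberg(A)                # upper Hessenberg form of A

# For a symmetric matrix the Hessenberg form is tridiagonal: every entry
# more than one position away from the diagonal is (numerically) zero.
off = np.abs(np.subtract.outer(np.arange(5), np.arange(5))) > 1
print(np.allclose(H[off], 0))
```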

java,matrix,eigenvector,eigenvalue,jama

Note the return type of PopulateMatrix(): public void PopulateMatrix() { ... } By declaring it void you say it returns nothing, so when you try to return a Matrix you get an error message saying this is an unexpected return type. If you want to return a Matrix from PopulateMatrix(),...

python,sparse-matrix,eigenvalue

You are applying the methods correctly, and they will give you the same results if the absolute value of the largest eigenvalue is significantly larger than 0. The outcome you observe comes from the iterative nature of the algorithm used to determine the eigenvalues. From...
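
A minimal power-iteration sketch illustrates the principle (this is not the exact algorithm ARPACK/SciPy uses, but convergence is likewise driven by how well separated the dominant eigenvalue is):

```python
import numpy as np

def power_iteration(A, iters=200, seed=0):
    """Estimate the dominant eigenvalue of A by repeated multiplication."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v               # Rayleigh quotient

# Dominant eigenvalue 5.0, well separated from the rest -> fast convergence
A = np.diag([5.0, 1.0, 0.5])
est = power_iteration(A)
print(est)
```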

scipy,linear-algebra,sparse,eigenvalue

Both eigs and eigsh require that M be positive definite (see the descriptions of M in the docstrings for more details). Your matrix M is not positive definite. Note the negative eigenvalues: In [212]: M Out[212]: array([[ 25.1, 0. , 0. , 17.3, 0. , 0. ], [ 0. ,...
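
A quick way to check positive definiteness yourself is to look at the eigenvalues of M; a numpy sketch with made-up values:

```python
import numpy as np

# A matrix is positive definite iff it is symmetric and all of its
# eigenvalues are strictly positive -- worth checking before passing
# it as M (made-up values for illustration):
M = np.array([[ 2.0, -1.0],
              [-1.0, -3.0]])

eigenvalues = np.linalg.eigvalsh(M)
print(eigenvalues)                 # one eigenvalue is negative
print(np.all(eigenvalues > 0))     # False -> not positive definite
```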

multithreading,optimization,lapack,blas,eigenvalue

You are correct to expect multi-threaded behavior mainly from BLAS and not from LAPACK routines. The matrices are big enough to make use of a multi-threaded environment. I am not sure about the extent of BLAS usage in the ZGGEV routine, but it should be more than a spike. Regarding your specific questions....

machine-learning,cluster-analysis,pca,eigenvalue,eigenvector

As far as I can tell, you have mixed and shuffled a number of approaches. No wonder it doesn't work... You could simply use Jaccard distance (a simple inversion of Jaccard similarity) + hierarchical clustering, or you could do MDS to project your data, then k-means (probably what you are trying...
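
The first suggestion (Jaccard distance + hierarchical clustering) can be sketched in SciPy roughly like this (toy binary data, purely illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import fcluster, linkage

# Toy binary feature matrix: 4 items x 5 binary features
X = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 1, 1, 1]], dtype=bool)

# Jaccard distance = 1 - Jaccard similarity, computed pairwise
D = pdist(X, metric='jaccard')

# Agglomerative (hierarchical) clustering on the condensed distance matrix
Z = linkage(D, method='average')
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)   # items 0,1 cluster together, items 2,3 together
```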

This behaviour is correct. To understand why, we need to look at the definition of eigenvectors (source: Wikipedia): An eigenvector or characteristic vector of a square matrix A is a non-zero vector v that, when multiplied with A, yields a scalar multiple of itself. [...] That is: Av =...
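
That definition, and the scale ambiguity it implies (any non-zero multiple of v is also an eigenvector), is easy to check numerically; a numpy sketch:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column v satisfies A v = lambda v ...
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

# ... and so does any non-zero scalar multiple of v, which is why two
# libraries can return "different" eigenvectors that are both correct.
v = eigenvectors[:, 0]
print(np.allclose(A @ (-2.5 * v), eigenvalues[0] * (-2.5 * v)))
```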

c++,cuda,eigenvalue,eigenvector,cusolver

It appears (to me) that the cuSolver documentation may be incorrect with respect to the mu parameter. The documentation appears to indicate that this is in the host memory space, i.e. the 2nd to last parameter should be a host pointer. If I change it to be a device pointer:...

The sparse SVD can be computed indirectly. For example, first calculate X'*X or X*X', and then pass the resultant matrix to eigs_sym(). Another way is to first construct a sparse matrix like [zeros(m,m) X; X' zeros(n,n)], where m and n indicate the number of rows and columns in X. You...
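
Translated to SciPy (eigsh being the counterpart of Armadillo's eigs_sym), the indirect approach might look like this; a sketch on a random sparse matrix:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh, svds

X = sparse_random(50, 30, density=0.2, random_state=42)

# Singular values of X are the square roots of the eigenvalues of X'X
XtX = (X.T @ X).tocsc()
eigvals = eigsh(XtX, k=3, which='LM', return_eigenvectors=False)
indirect = np.sqrt(np.sort(eigvals))[::-1]

# Compare against the direct sparse SVD of X
direct = np.sort(svds(X, k=3, return_singular_vectors=False))[::-1]
print(np.allclose(indirect, direct, rtol=1e-6))
```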

python,numpy,scipy,sparse-matrix,eigenvalue

tl;dr: You can use the which='LA' flag as described in the documentation. I quote: scipy.sparse.linalg.eigsh(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None, maxiter=None, tol=0, return_eigenvectors=True, Minv=None, OPinv=None, mode='normal') Emphasis mine. which : str [‘LM’ | ‘SM’ | ‘LA’ | ‘SA’ | ‘BE’] If A is a complex hermitian matrix, ‘BE’ is...
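
A small sketch showing the difference between which='LM' (largest magnitude) and which='LA' (largest algebraic) on a matrix with a large negative eigenvalue:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Symmetric matrix whose most negative eigenvalue has the largest magnitude
A = np.diag([-10.0, -5.0, 1.0, 2.0, 3.0, 4.0, 8.0])

# which='LM' ranks by magnitude, so the -10 eigenvalue is included;
# which='LA' asks for the largest *algebraic* eigenvalues instead.
vals_lm = np.sort(eigsh(A, k=2, which='LM', return_eigenvectors=False))
vals_la = np.sort(eigsh(A, k=2, which='LA', return_eigenvectors=False))
print(vals_lm)   # -10 and 8
print(vals_la)   # 4 and 8
```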

c++,opencv,complex-numbers,eigenvector,eigenvalue

So I solved the problem using the 'ComplexEigenSolver' from the Eigen library. //create a multichannel matrix Mat a_com = Mat::zeros(4,4,CV_32FC2); for(int i = 0; i<4; i++) { for(int j = 0; j<4; j++) { a_com.at<Vec2f>(i,j)[0] = a.at<double>(i,j); a_com.at<Vec2f>(i,j)[1] = 0; } } MatrixXcf eigenA; cv2eigen(a_com,eigenA); //convert OpenCV to Eigen ComplexEigenSolver<MatrixXcf>...

c,lapack,eigenvalue,eigenvector

You cannot port directly from zgeev to dgeev. zgeev takes a complex matrix and computes complex eigenvalues, while dgeev takes a real matrix and computes complex eigenvalues. To stay consistent, LAPACK uses WR and WI for the real and imaginary part of each...

matlab,sorting,complex-numbers,eigenvalue

e1 = e(imag(e) >= 0); e2 = e(imag(e) < 0); newe = cat(1,sort(e1),sort(e2)) newe = -0.0156 + 0.5645i -0.4094 + 3.9387i -0.0156 - 0.5645i -0.4094 - 3.9387i ...
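
For reference, the same split-sort-concatenate idea in numpy, using the example values above (note that MATLAB's sort() orders complex values by magnitude, so we sort by np.abs explicitly):

```python
import numpy as np

e = np.array([-0.0156 + 0.5645j, -0.4094 + 3.9387j,
              -0.0156 - 0.5645j, -0.4094 - 3.9387j])

# Split by the sign of the imaginary part, sort each group by magnitude
# (matching MATLAB's complex sort order), then concatenate.
e1 = e[e.imag >= 0]
e2 = e[e.imag < 0]
newe = np.concatenate([e1[np.argsort(np.abs(e1))],
                       e2[np.argsort(np.abs(e2))]])
print(newe)
```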

Armadillo will do this using eigs_sym. Note that computing all the eigenvalues is a very expensive operation whatever you do; usually one finds only the k largest or smallest eigenvalues (which is what this will do)....
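
The SciPy counterpart is scipy.sparse.linalg.eigsh, which likewise computes only the k requested extreme eigenvalues; a sketch on a made-up tridiagonal (1-D Laplacian) matrix:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# A sparse symmetric tridiagonal matrix (1-D Laplacian, made-up example)
n = 100
L = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n))

# Only the k extreme eigenvalues are computed, never all n of them
largest = eigsh(L, k=3, which='LA', return_eigenvectors=False)

# Smallest eigenvalues via shift-invert around sigma=0 (needs CSC format)
smallest = eigsh(L.tocsc(), k=3, sigma=0, return_eigenvectors=False)

print(np.sort(largest))    # close to 4
print(np.sort(smallest))   # close to 0
```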

sparse-matrix,armadillo,eigenvalue

I believe that the issue here is that you are running eigs_gen() (which calls DNAUPD) on a symmetric matrix. ARPACK notes that DNAUPD is not meant for symmetric matrices, but does not specify what will happen if you use symmetric matrices anyway: NOTE: If the linear operator "OP" is real...
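
SciPy draws the same line: eigs() wraps the general (DNAUPD-based) driver while eigsh() wraps the symmetric one, and for a symmetric matrix the symmetric driver is the appropriate choice; a sketch:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigs, eigsh

A = sparse_random(40, 40, density=0.1, random_state=7)
A = (A + A.T) * 0.5                      # symmetrize

# eigs() drives the general (non-symmetric) ARPACK routines, eigsh() the
# symmetric ones; for a symmetric matrix, eigsh() is the right call.
vals_gen = np.sort(eigs(A, k=3, which='LM', return_eigenvectors=False).real)
vals_sym = np.sort(eigsh(A, k=3, which='LM', return_eigenvectors=False))
print(np.allclose(vals_gen, vals_sym, atol=1e-8))
```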