r,parallel-processing,sparse-matrix

The fundamental problem is that the workers haven't loaded the Matrix package, so they don't know how to subset the Matrix object "mat". You can fix that with the foreach .packages option: temp_vec = foreach(i=iter(df, by='row'), .packages='Matrix', .combine=rbind) %dopar% { # snip } Note that your example fails on all...

python,sparse-matrix,eigenvalue

You apply the methods correctly and they will give you the same results if the absolute value of the largest eigenvalue is significantly larger than 0. The reason for the outcome you observe is based on the iterative nature of the algorithm that is used to determine the eigenvalues. From...

numpy,sparse-matrix,sparse,sparse-array

A restriction of scipy.sparse matrices is that they represent linear operators and are thus kept two dimensional, which leads to the question: Which operation are you seeking to do? All einsum operations on a pair of 2D matrices are very easy to write without einsum using dot, transpose and pointwise...
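As a minimal sketch of that equivalence (the operands here are made-up examples), the two common pairwise 2D einsum patterns reduce to dot and transpose:

```python
import numpy as np

# Small made-up 2D operands.
a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)
c = np.arange(12).reshape(4, 3)

# Plain matrix product, with and without einsum.
prod_einsum = np.einsum('ij,jk->ik', a, b)
prod_dot = a.dot(b)

# Contraction over the second axis of both operands: a transpose folded in.
t_einsum = np.einsum('ij,kj->ik', a, c)
t_dot = a.dot(c.T)
```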

r,sparse-matrix,text-mining,tm

The documentation is admittedly a little tricky here. You can use the coercing function as.DocumentTermMatrix but not the direct constructor DocumentTermMatrix on a simple_triplet_matrix. library(slam) library(Matrix) mat2 = simple_triplet_matrix(c(1,2,4,5,3), j = c(2,3,4,1,5), v = c(3,2,3,4,1), dimnames = list(paste0("doc", 1:5), paste0("word", 1:5))) mat2 = as.DocumentTermMatrix(mat2, weighting = weightTfIdf) You can check:...

sparse-matrix,armadillo,eigenvalue

I believe that the issue here is that you are running eigs_gen() (which calls DNAUPD) on a symmetric matrix. ARPACK notes that DNAUPD is not meant for symmetric matrices, but does not specify what will happen if you use symmetric matrices anyway: NOTE: If the linear operator "OP" is real...

parameters,null,sparse-matrix,missing-data,ampl

Instead of specifying a default value, you can work with a sparse matrix. For example: param m integer > 0; set C within {1..m,1..m}; param A{C}; data; param m := 4; param: C: A: 1 2 3 4 := 1 36 . . -2 2 . 7 3 . 3...

python,arrays,numpy,scipy,sparse-matrix

You are using an older version of SciPy. In the original implementation of sparse matrices, indices were stored in an int32 variable, even on 64-bit systems. Even if you define them to be uint32, as you did, they get cast. So whenever your matrix has more than 2^31 -...
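As a quick sanity check (a sketch; which dtype you see depends on your SciPy build), you can inspect the index arrays of a freshly constructed matrix:

```python
import numpy as np
from scipy import sparse

# An empty CSR matrix; its index arrays reveal the integer type in use.
m = sparse.csr_matrix((3, 4), dtype=np.float64)
index_dtype = m.indices.dtype

# Older SciPy always used int32 here, so matrices with more than
# 2**31 - 1 nonzeros could not be indexed correctly.
max_int32_nnz = 2**31 - 1
```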

I assume you want to have a 5 by 5 matrix at the end. Also, indices start from 0. In [18]:import scipy.sparse as sp In [20]: a = [[0,1,2],[0],[0,3,4]] In [31]: m = sp.lil_matrix((5,5), dtype=int) In [32]: for row_index, col_indices in enumerate(a): m[row_index, col_indices] = 1 ....: In [33]: m.toarray()...

python,performance,optimization,scipy,sparse-matrix

First of all, don't name your array dict: it is confusing and hides the built-in type dict. The problem here is that you're doing everything in quadratic time, so first convert your arrays dict and id to a dictionary where each word or id points to its index....

c++,cuda,sparse-matrix,matrix-factorization,cusolver

I'm currently working on something similar myself. I decided to basically wrap the conjugate-gradient and level-0 incomplete-Cholesky preconditioned conjugate-gradient solver utility samples that come with the CUDA SDK into a small class. You can find them in your CUDA_HOME directory under the path: samples/7_CUDALibraries/conjugateGradient and /Developer/NVIDIA/CUDA-samples/7_CUDALibraries/conjugateGradientPrecond Basically,...

Intel MKL documentation (for mkl_csrcoo) states: Converts a sparse matrix in the CSR format to the coordinate format and vice versa. And according to the above link you should set job: if job(1)=1, the matrix in the coordinate format is converted to the CSR format....

r,sparse-matrix,dummy-data,glmnet,lasso

I don't think you need a sparse.model.matrix, as all that it really gives you above a regular matrix is expansion of factor terms, and if you're binary already that won't give you anything. You certainly don't need to code as factors, I frequently use glmnet on a regular (non-model) sparse...

python,numpy,scipy,linear-algebra,sparse-matrix

** has been implemented for csr_matrix. There is a __pow__ method. After handling some special cases, this __pow__ does: tmp = self.__pow__(other//2) if (other % 2): return self * tmp * tmp else: return tmp * tmp For sparse matrices, * is the matrix product (dot for ndarray). So it...
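A small check of that behaviour (the matrix values are made up): the matrix power computed by ** matches the explicit repeated product.

```python
import numpy as np
from scipy import sparse

M = sparse.csr_matrix(np.array([[1, 0, 2],
                                [0, 3, 0],
                                [4, 0, 5]]))

# ** on a csr_matrix is the matrix power, computed by repeated squaring.
p3 = (M ** 3).toarray()

# The same thing written out: * is the matrix product for sparse matrices.
explicit = (M * M * M).toarray()
```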

r,machine-learning,sparse-matrix

You can convert your indices to indices starting at one with as.integer(as.factor(.)). UIMatrix <- sparseMatrix(i = as.integer(as.factor(trainingData$UserID)), j = as.integer(as.factor(trainingData$MovieID)), x = trainingData$Rating) dim(UIMatrix) # [1] 2 4 dimnames(UIMatrix) <- list(sort(unique(trainingData$UserID)), sort(unique(trainingData$MovieID))) UIMatrix # 2 x 4 sparse Matrix of class "dgCMatrix" # 63155 63530 63544 63545 # 10090 2...

I don't know the answer for sure, but I will give you my guess about what is happening. I don't know Fortran, but from a C++ perspective, what you are showing makes sense; we just need to deconstruct that statement. The pseudo-code translation of a = b + c where...

matlab,linear-algebra,sparse-matrix

You're doing it wrong; you need to use: x = A\b; for the equation Ax = b...

python,numpy,scipy,linear-algebra,sparse-matrix

Is the original array already sparse (plenty of zeros), or are those just a product of tril? If the latter, you might not be saving space or time by using sparse code. For example def gen1(W): tmp = np.tril(W) d = tmp.sum(0)+tmp.sum(1)-tmp.diagonal() return np.diag(d) - tmp is 8x faster than...

You could try doing a simple ifelse function for the divisor: variable <- variable/ifelse(rowSums(variable)!=0,rowSums(variable),1) Unless there's some reason you need to be dividing by the 0 there, that seems like the simplest way to avoid NaNs....

python,r,sparse-matrix,text-analysis

You could write out the matrix in MatrixMarket format using scipy's mmwrite and then read it into R using readMM from the Matrix package.

It turned out to be a bug in my code that is exposed by how Shiny works. Outside Shiny, the function where the code resides worked seamlessly, being fed input from another function in the right format. In Shiny, I expected the function in server.R to receive the input after...

matlab,random,sparse-matrix,sparse-array

I assume you want random sampling (without replacement); that is, you want to pick n elements out of matrix A randomly. For that you can apply randsample on the linearized, full version of A: result = randsample(full(A(:)), n); If you want to avoid converting A into full (for example, because...

python,numpy,scipy,sparse-matrix,eigenvalue

tl;dr: You can use the which='LA' flag as described in the documentation. I quote: scipy.sparse.linalg.eigsh(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None, maxiter=None, tol=0, return_eigenvectors=True, Minv=None, OPinv=None, mode='normal') Emphasis mine. which : str [‘LM’ | ‘SM’ | ‘LA’ | ‘SA’ | ‘BE’] If A is a complex hermitian matrix, ‘BE’ is...
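A sketch of the difference (the test matrix is constructed purely for illustration): with one large negative eigenvalue, 'LM' and 'LA' pick different answers.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Build a symmetric matrix with known eigenvalues -10, 1, 2, 3, 4, 5.
rng = np.random.RandomState(0)
Q, _ = np.linalg.qr(rng.rand(6, 6))
A = Q @ np.diag([-10., 1., 2., 3., 4., 5.]) @ Q.T

# 'LM' ranks by magnitude, so it returns -10 ...
lm = eigsh(A, k=1, which='LM', return_eigenvectors=False)

# ... while 'LA' returns the algebraically largest eigenvalue, 5.
la = eigsh(A, k=1, which='LA', return_eigenvectors=False)
```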

arrays,matlab,matrix,sparse-matrix,sparse-array

You want spdiags: m = 6; %// number of rows n = 10; %// number of columns diags = [-4 -1 0 1 4]; %// diagonals to be filled A = spdiags(ones(min(m,n), numel(diags)), diags, m, n); This gives: >> full(A) ans = 1 1 0 0 1 0 0 0...

For every (i, j) pair (there are n^2 in total), you reach the inner part of the loop where you find the similarity and then conditionally add elements to your sparse matrix. Finding the similarity takes "d" operations (because you need to loop through each of your dimensions) and conditionally...

matlab,matrix,linear-algebra,sparse-matrix,pde

Matlab doesn't store sparse matrices as JxJ arrays but as lists of size O(J). See http://au.mathworks.com/help/matlab/math/constructing-sparse-matrices.html Since you are using the spdiags function to construct A, Matlab should already recognize A as sparse and you should indeed see such a list if you display A in console view. For a...

You should use the summary function. See subset(summary(GminiMat), j > i) and take it from there. Maybe: getUpper <- function(mat) subset(summary(mat), j > i)$x ...

numpy,matrix,scipy,linear-algebra,sparse-matrix

I get close to a 4x speed-up in computing the product M.T * D * M out of the three diagonal arrays. If d0 and d1 are the main and upper diagonal of M, and d is the main diagonal of D, then the following code creates M.T * D...

You can convert from sparse matrix to indices and values of nonzeros entries, and then use sparse to automatically obtain the sum in sparse form: myCell = {sparse([0 1; 2 0]), sparse([3 0; 4 0])}; %// example C = numel(myCell); M = cell(1,C); %// preallocate N = cell(1,C); V =...

python-3.x,dictionary,sparse-matrix

Since you didn't post the rest of your class, I don't know what the internal implementation details look like, but how about this: class SparseMatrix: def __init__(self, rows: int, cols: int, *entries): self.rows = rows self.cols = cols self.matrix = dict() for entry in entries: self.matrix[(entry[0], entry[1])] = entry[2] def...

optimization,sparse-matrix,julia-lang

What you are looking for is the sparse nonzeros function. nonzeros(A) Return a vector of the structural nonzero values in sparse matrix A. This includes zeros that are explicitly stored in the sparse matrix. The returned vector points directly to the internal nonzero storage of A, and any modifications to...

python,numpy,scipy,sparse-matrix

As I said in the comments, the problem appears in multiply, which should produce a sparse matrix for sparse+dense inputs but doesn't. Convert to a sparse format to avoid that problem: >>> s = (W != 0).multiply(sp.csr_matrix(v)) >>> W + s <2x3 sparse matrix of type '<type 'numpy.int64'>' with 2...

python,numpy,scipy,sparse-matrix,rotational-matrices

If you already have the rotation matrix as a dense array you can simply do m = csr_matrix(dense_rot_matrix) One of the two types csr_matrix or csc_matrix should be used. A better option would be to populate the sparse matrix directly, which is easily accomplished using the coo_matrix type, which...
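A minimal sketch of the coo_matrix route (the triplet data here is made up):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Made-up triplet data: row indices, column indices, and the values.
rows = np.array([0, 1, 2, 2])
cols = np.array([1, 0, 2, 0])
vals = np.array([1., 2., 3., 4.])

m = coo_matrix((vals, (rows, cols)), shape=(3, 3))

# Convert once construction is done; CSR supports fast arithmetic and slicing.
m_csr = m.tocsr()
```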

python,numpy,matrix,scipy,sparse-matrix

Without getting to the bottom of the issue, you should make sure that you are using a 64 bit build on a 64 bit architecture, on a Linux platform. There, the native "long" data type is of 64 bit size (as opposed to Windows, I believe). For reference, see these...

python,numpy,scipy,sparse-matrix

This worked for me import numpy as np import numpy.random A = numpy.random.randint(5,size=(10,15)) s = A.shape print A ind = np.argsort(A)[:,:s[1]-6] rows = np.arange(s[0])[:,None] A[rows,ind] = 0 print A Output: [[4 3 4 1 1 1 1 2 2 1 1 3 0 4 3] [2 2 4 1 3...

Some observations that may get you started: You create the lists of heads just the way you create your dynamic arrays Row_A and so on. These lists of heads should not be standalone data structures. They should be part of a Matrix struct, just as head is part of your List...

matlab,sparse-matrix,sparse,adjacency-matrix

Use the spdiags function to convert the degree vector to a sparse diagonal matrix, then subtract the adjacency matrix from the diagonal matrix to get the Laplacian. Example using your code: adj = spconvert(adj); for i=1:size(adj, 1) degree(i) = CalcDegree(adj, i) end D = spdiags(degree, 0, size(adj, 1), size(adj, 2)); L...

algorithm,matlab,linear-algebra,sparse-matrix

I think that you should use spparms. From the MATLAB help: spparms - Set parameters for sparse matrix routines This MATLAB function sets one or more of the tunable parameters used in the sparse routines. spparms('key',value) spparms values = spparms [keys,values] = spparms spparms(values) value = spparms('key') spparms('default')...

sparse-matrix,sparse,outliers,elki

ArrayAdapterDatabaseConnection is designed for dense data. For sparse data, it does not make much sense to first encode it into a dense array, then re-encode it into sparse vectors. Consider reading the data as sparse vectors directly to avoid overhead. The error you are seeing has a different reason, though:...

What could be happening is that there is contention for the centralized ConcurrentDictionary. If this is the case, you could try the localInit overload of Parallel.ForEach, to give each Task batch its own local (and uncontended) Dictionary, which is then aggregated into the central dictionary at the end: var columnTotals...

python,scipy,sparse-matrix,anaconda

Does this work for you? from scipy import sparse import scipy.sparse.linalg as sp_linalg B = np.random.rand(10,10) A_dense = np.dot(B.T, B) A_sparse = sparse.lil_matrix(A_dense) sp_linalg.eigs(A_sparse, 3) It seems that you have to explicitly import the submodules. scipy does not load those per default....

thread-safety,scipy,multiprocessing,sparse-matrix,python-multiprocessing

As explained here it seems your first option creates one copy of the sparse matrix per process. This is safe, but isn't ideal from a performance point of view. However, depending on the computation you perform on the sparse matrix, the overhead may not be significant. I suspect a cleaner...

python,scikit-learn,sparse-matrix,knn

The short answer is that the format you're using is going to cause you a decent amount of grief. The long answer is that it's still absolutely possible to do this conversion; there's just a decent amount of glue code that you're going to need. The first thing you're going to need to...

When converting from a full representation of a symmetric matrix to a sparse representation, you will need to scan all the elements on the main diagonal and above (or, symmetrically, on the main diagonal and below). For an (n x n) matrix this is (n^2+n)/2 entries that need to...
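That (n^2+n)/2 count can be cross-checked against numpy's upper-triangle index helper (a sketch; the helper name is made up):

```python
import numpy as np

def upper_triangle_count(n: int) -> int:
    # Entries on and above the main diagonal of an n x n matrix.
    return (n * n + n) // 2

# Cross-check against the index set numpy actually generates.
n = 5
n_upper = len(np.triu_indices(n)[0])
```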

matlab,matrix,sparse-matrix,sparse

As @beaker mentioned above, and as I explain in my Accelerating Matlab Performance book, using the sparse or spdiags functions is much faster than using indexed assignments as in your code. The reason is that any such assignment results in a rebuilding of the sparse data's internal representation, which is...

r,performance,matrix,cluster-analysis,sparse-matrix

I've written some Rcpp code and R code which works out the binary/Jaccard distance of a binary matrix approx. 80x faster than dist(x, method = "binary"). It converts the input matrix into a raw matrix which is the transpose of the input (so that the bit patterns are in the...

python,r,bigdata,sparse-matrix,large-data

First, you want efficient access to rows, and COO format doesn't do that. Convert your voxMapBeamlet to Compressed Sparse Row format, and getrow becomes much more efficient: voxMapBeamlet = sparse.coo_matrix((voxMap_beamlet_val,(voxMap_beamlet_iInd,voxMap_beamlet_jInd)),shape=(1055736,8500)) voxMapBeamlet = voxMapBeamlet.tocsr() Second, getDose is a lot more complicated than it needs to be: def getDose(voxInd): return voxMapBeamlet.getrow(voxInd).sum() I...

Here is a way I found: Using Matrix package. First, the table from the example: > coo_mat <- rbind(c(1,1,1), c(1,2,1), c(2,1,0), c(2,2,1)) > coo_mat [,1] [,2] [,3] [1,] 1 1 1 [2,] 1 2 1 [3,] 2 1 0 [4,] 2 2 1 Now, make it regular format matrix: >...

python,pandas,dataframes,sparse-matrix

All-NaN columns are buggy at the moment in this kind of conversion. If you already had a SparseFrame, however, adding a NaN column would work. If you did this: df.iloc[0] = 0 df.to_sparse() works....

c++,multithreading,openmp,sparse-matrix,slowdown

Well I found the solution to the problem myself: I post the explanation for the community. In the predict_rating function I used try/catch for handling out_of_range errors thrown by my dictionary structure when a key that is not contained in the dictionary is searched. I read on Are exceptions in...

python,numpy,scipy,sparse-matrix,dot-product

Use *: p * q Note that * uses matrix-like semantics rather than array-like semantics for sparse matrices, so it computes a matrix product rather than a broadcasted product....
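A quick illustration of the two semantics (the matrix values are made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

p = csr_matrix(np.array([[1, 2], [3, 4]]))
q = csr_matrix(np.array([[5, 6], [7, 8]]))

# * is the matrix product for sparse matrices ...
mat_prod = (p * q).toarray()

# ... use .multiply() if you want the elementwise product instead.
elem_prod = p.multiply(q).toarray()
```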

r,io,probability,sparse-matrix,entropy

Sample data Creating a data frame with only non-zero z values (suppose we can remove all of the zero lines from the flat file before importing data). N <- 50000 S <- N * 0.8 df_input <- data.frame( x = sample(1:N, S), y = sample(1:N, S), z = runif(S)) #...

java,matrix,sparse-matrix,colt

You cannot. From the documentation: Throws: IllegalArgumentException - if rows<0 || columns<0 || (double)columns*rows > Integer.MAX_VALUE. Instead of creating a matrix addressed with x and y coordinates, returning a Value, create a HashMap<Coordinates, Value>, where Coordinates is a simple class holding x and y....

python,performance,sparse-matrix

In the end, I managed to get this done with index juggling. def set_row_csr(A, row_idx, new_row): ''' Replace a row in a CSR sparse matrix A. Parameters ---------- A: csr_matrix Matrix to change row_idx: int index of the row to be changed new_row: np.array list of new values for the...

c++,time-complexity,sparse-matrix

nnz: the number of nonzeros in the sparse matrix; row_size: the number of matrix rows; column_size: the number of matrix columns. There are many formats; their space complexity: Compressed Sparse Row (CSR): 2*nnz + row_size entries of memory. Compressed Sparse Column (CSC): 2*nnz + column_size entries of memory. Coordinate Format (COO)...
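These counts can be checked against SciPy's CSR arrays (a sketch; note that CSR's row pointer actually holds row_size + 1 entries, one more than the 2*nnz + row_size estimate):

```python
from scipy import sparse

# A made-up random sparse matrix to inspect.
m = sparse.random(100, 50, density=0.05, format='csr', random_state=0)

# CSR stores nnz values, nnz column indices, and row_size + 1 row pointers.
csr_entries = len(m.data) + len(m.indices) + len(m.indptr)
```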

A good starting point may be http://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex5.c.html : Opens a separate file for each process and reads in ITS portion of a large parallel matrix. Only requires enough memory to store the process's portion of the matrix ONCE. It makes use of MatSetValues() to set values. Carefully read the documentation...

python,random,scipy,sparse-matrix

Looks like the feature that you want was added about two months ago and will be available in scipy 0.16: https://github.com/scipy/scipy/blob/77af8f44bef43a67cb14c247bc230282022ed0c2/scipy/sparse/construct.py#L671 You will be able to call sparse.random(10, 10, 0.1, random_state=RandomState(1), data_rvs=func) where func "should take a single argument specifying the length of the ndarray that it will return. The...
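A sketch of that call, assuming SciPy 0.16 or later (where the keyword is spelled data_rvs):

```python
from numpy.random import RandomState
from scipy import sparse

rng = RandomState(1)

# data_rvs receives the number of values to generate and must return
# that many samples; here it draws integers in [1, 5).
m = sparse.random(10, 10, density=0.1,
                  random_state=rng,
                  data_rvs=lambda n: rng.randint(1, 5, size=n))
```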

r,for-loop,matrix,sparse-matrix

You could vectorize this and that should help you. Also, if your data is indeed sparse and you can conduct your analysis on a sparse matrix it definitely is something to consider. library(Matrix) # set up all pairs pairs <- expand.grid(x,x) # get matrix indices idx <- which(pairs[,1] == pairs[,2]...

I've got this answer from someone: MSparse = sparse(M); % replicate: please don't test with n=220512 directly Mblk = repmat({MSparse}, 1, n); % convert do block diagonal Q = blkdiag(Mblk{:}); The matrix is created and stored after about 15 minutes. However, the matrix takes ~9GB of memory and thus slows...

scala,sparse-matrix,scala-breeze

Slicing and broadcasting haven't been fleshed out for CSCMatrix yet, sorry.

optimization,sparse-matrix,julia-lang,sparse,sparse-array

First, you're making term_doc a global variable, which is a big problem for performance. Pass it as an argument, doSparseWay(term_doc::SparseMatrixCSC). (The type annotation at the beginning of your function does not do anything useful.) You want to use an approach similar to the answer by walnuss: function doSparseWay(term_doc::SparseMatrixCSC) I, J,...

matlab,memory,sparse-matrix,allocation

Thanks to @horchler, I've found a solution. Even if MATLAB pre-allocates all the memory it needs before the execution, spparms('spumoni', 3) shows the peaks inside the allocation. Also by doing [L,U,P,Q,R] = lu(A) and then calculating the difference between the number of nonzero elements in L and the number of...

Are you sure you want to use the matrix.csr class? It is from the SparseM package and as far as I can tell, at least from the package documentation, there are no is.na<- or is.na[ methods. The Matrix-package does document is.na-methods: > library(Matrix);M <- Matrix(1:6, nrow=4, ncol=3, + dimnames =...

numpy,computational-geometry,sparse-matrix,kdtree,sparse-array

Following up on Yves' suggestion, here's an answer, which uses scipy's KDTree: from scipy.spatial.kdtree import KDTree import numpy as np def locally_extreme_points(coords, data, neighbourhood, lookfor = 'max', p_norm = 2.): ''' Find local maxima of points in a pointcloud. Ties result in both points passing through the filter. Not to...

It does not matter whether the graph is sparse or not because igraph will still create a dense matrix, so it's an O(n^2) operation. (Technically, the matrix itself is created in the C layer, where the initialization of the matrix to all zeroes takes O(n^2) and then it is...

r,machine-learning,sparse-matrix,reshape2

A base R option would be (!!table(cbind(df1[1],stack(df1[-1])[-2])))*1L # values #ID 19 23 42 61 anxiety asthma copd diabetes female male # 1 0 0 1 0 1 1 0 0 0 1 # 2 1 0 0 0 0 1 0 0 0 1 # 3 0 1 0 0...

r,matrix,sparse-matrix,lme4,lmer

If mmList is there, then it's not going away (however poorly documented it may be -- feel free to suggest documentation improvements ...). How about do.call(cbind,getME(m2,"mmList")) (which would seem to generalize correctly for multi-term models)? I agree that it's a bit of a pain that Zt doesn't distinguish correctly between...

python,matrix,scipy,sparse-matrix,sparse

It's not entirely clear what you are asking for, but here's my guess. Let's just experiment with a simple array: Start with 3 arrays (I took these from another sparse matrix, but that isn't important): In [165]: data Out[165]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,...
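For concreteness, here is how three such CSR arrays map to a dense matrix (the array contents are made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

# The three CSR arrays: values, their column indices, and the row pointer.
data = np.array([1, 2, 3, 4, 5, 6])
indices = np.array([0, 2, 2, 0, 1, 2])
indptr = np.array([0, 2, 3, 6])  # row i occupies data[indptr[i]:indptr[i+1]]

m = csr_matrix((data, indices, indptr), shape=(3, 3))
dense = m.toarray()
```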

Basically, the documentation is slightly confusing, for me at least. The way to do it is simply: SpMat mat_3 = mat_1 * mat_2 No dense matrix is created along the way. Eigen rocks!...

matlab,extract,sparse-matrix,extraction,sparse

[ii jj] = find(A); answer = unique([ii(:); jj(:)]); should do it. Note that the find command with two outputs gives you the row and column index of all nonzero elements. Since you have a minimum spanning tree, each number you care about needs to occur at least once in the...

python,numpy,scipy,linear-algebra,sparse-matrix

I was able to solve this problem after sleeping on it. def seq_movement(a,b): import numpy as np def counter(items): counts = dict() for i in items: counts[i] = counts.get(i, 0) + 1 return counts def find_2(dic): return [k for k, v in dic.items() if v ==2] sort = sorted(a) sparse...

You may also try library(dplyr) library(tidyr) library(Matrix) d1 <- unnest(dat,col) %>% separate(x, into=c('row', 'val'), ':', convert=TRUE) %>% extract(col, into='col', '\\D+(\\d+)', convert=TRUE) as.matrix(with(d1, sparseMatrix(row, col, x=val))) # [,1] [,2] #[1,] 23 3 #[2,] 0 12 #[3,] 0 0 #[4,] 12 0 #[5,] 0 0 #[6,] 0 3 ...

python,python-2.7,scipy,sparse-matrix

When the first argument of dia_matrix has the form (data, offsets), data is expected to be a 2-d array, with each row of data holding a diagonal of the matrix. Since data is a rectangular matrix, some of the elements in data are ignored. Subdiagonals are "left aligned", and superdiagonals...
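A small illustration of that alignment rule (the diagonal values are made up):

```python
import numpy as np
from scipy.sparse import dia_matrix

# Each row of `data` holds one diagonal; `offsets` says which one.
data = np.array([[1, 2, 3, 4],   # main diagonal (offset 0)
                 [5, 6, 7, 8]])  # first superdiagonal (offset 1)
offsets = np.array([0, 1])

m = dia_matrix((data, offsets), shape=(4, 4)).toarray()
# Superdiagonals are right-aligned: data[1, 0] (the 5) is ignored,
# and the superdiagonal holds 6, 7, 8.
```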

python,numpy,vectorization,sparse-matrix

You can get what you want with np.bincount. If your particles were sequentially numbered from 0 up, you could simply do: ci = np.bincount(i, weights=cij) To see what this does: >>> i = [3, 3, 5, 10] >>> j = [5, 10, 3, 3] >>> cij = [0.1, 0.2,...
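Completing that example as a runnable sketch (the index and weight values are made up):

```python
import numpy as np

i = np.array([3, 3, 5, 10])
cij = np.array([0.1, 0.2, 0.3, 0.4])

# bincount sums the weights that share an index: ci[k] = sum of cij where i == k.
# The result has length max(i) + 1; indices that never occur get 0.
ci = np.bincount(i, weights=cij)
```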

I have come up with a faster coo diagonal: msk = M.row==M.col D1 = sparse.coo_matrix((M.data[msk],(M.row[msk],M.col[msk])),shape=M.shape) sparse.tril uses this method with mask = A.row + k >= A.col (sparse/extract.py) Some times for a (100,100) M (and M1 = M.tocsr()) In [303]: timeit msk=M.row==M.col; D1=sparse.coo_matrix((M.data[msk],(M.row[msk],M.col[msk])),shape=M.shape) 10000 loops, best of 3: 115 µs...