Since I assume that the OP wants to know the proper mathematical expression for his R code, I think this is not really an R question. I am giving you some LaTeX code. Let's say c_{ij} denotes the elements of your compare matrix. Then you could use the indicator function...

matlab,matrix,matrix-multiplication

In case you want to sum after multiplication, the answer of knedlsepp using the distributive property of multiplication is the logical choice. If you want to use operations other than sums or differences, then the following answer can be applied more generically. Here we go: %// columnvector m x 1...

python,numpy,matrix,matrix-multiplication

Did you actually try this? It works fine here:

```python
import numpy as np

def main():
    A = np.matrix([[2,3,15],[5,8,12],[1,13,4]], dtype=np.object)
    B = np.matrix([[2,15,6,15,8,14],[17,19,17,7,18,14],[24,14,0,24,2,11]], dtype=np.object)
    C = (A*B) % 26  # = [[25,11,11,21,22,1],[18,5,10,3,0,2],[7,6,19,20,16,6]]
    print(C)
    return 0

if __name__ == '__main__':
    main()
```

Prints:

```
[[25 11 11 21 22 1]
 [18...
```

c,arrays,memory-management,matrix-multiplication,triangular

```c
mat1 = calloc(dim, sizeof(int*));
```

mat1 is a double pointer. You need to allocate memory for your array of pointers, and later you need to allocate memory for each of the pointers individually. There is no need to cast the result of calloc()...

c,multithreading,performance,matrix-multiplication,simd

Here are the times I get building on your algorithm on my four-core i7 IVB processor:

```
sequential:      3.42 s
4 threads:       0.97 s
4 threads + SSE: 0.86 s
```

Here are the times on a 2-core P9600 @ 2.53 GHz, which is similar to the OP's E2200 @ 2.2 GHz...

Building off of Dirk's comment, here are a few cases that demonstrate the Armadillo library's matrix multiplication via the overloaded * operator:

```cpp
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export(".mm")]]
arma::mat mm_mult(const arma::mat& lhs, const arma::mat& rhs) {
    return lhs * rhs;
}

// [[Rcpp::export(".vm")]]
arma::mat vm_mult(const arma::vec& lhs, const arma::mat& rhs)...
```

matrix,matrix-multiplication,apl,dyalog

APL programmers should generally use Inner Product, as well as Outer Product, as much as possible. Is that correct? It is really up to the APL programmer and the task at hand, but if something makes APL code more concise and efficient, I don't see why a programmer wouldn't...

The BLAS routine is used correctly. The only difference is that BLAS is performing C = 0.0*C + 1.0*A*B, while your loop performs C = C + A*B. In your loop you are trying to improve usage of the CPU's cache memory. There are variants of BLAS that perform similar actions. I...

arrays,matlab,matrix,matrix-multiplication,multiplication

You can simply do mt = [1:n].'*[1:m] to achieve the matrix you desire. Otherwise, you have some syntax issues in the example code you posted....

performance,matlab,matrix,matrix-multiplication,bsxfun

Send x to the third dimension, so that singleton expansion comes into effect when bsxfun is used for multiplication with A, extending the product result to the third dimension. Then, perform the bsxfun multiplication:

```matlab
val = bsxfun(@times, A, permute(x, [3 1 2]))
```

Now, val is a 3D matrix and the desired...

python,scipy,matrix-multiplication,sparse

I suspect that your sparse matrices are becoming non-sparse when you perform the operation. Have you tried just: A.multiply(B)? I suspect that it will be better optimised than anything you can easily do yourself. If A is not already the correct type of sparse matrix you might need:...
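For concreteness, here is a minimal sketch of the elementwise path (the matrix values are made up); `multiply` keeps the result sparse:

```python
import numpy as np
from scipy import sparse

# Illustrative CSR matrices; values are made up for the example.
A = sparse.csr_matrix([[1, 0, 2], [0, 3, 0]])
B = sparse.csr_matrix([[4, 5, 0], [0, 6, 7]])

# Elementwise (Hadamard) product; the result stays sparse,
# with nonzeros only where BOTH operands are nonzero.
C = A.multiply(B)
print(C.toarray())   # entries: 4 and 18, everything else zero
```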

c++,optimization,matrix,matrix-multiplication

One problem you have is that at any assignment matrix[i][j] = ..., the compiler doesn't know that a and b are not pointing into this->matrix, so it must assume that elements of a and b may be overwritten and needs to read them again. You should get some improvement...

python,python-3.x,operators,matrix-multiplication,python-3.5

@ is the matrix multiplication operator introduced in Python 3.5. @= is matrix multiplication followed by assignment. They map to __matmul__, __rmatmul__ or __imatmul__ similar to how + and += map to __add__, __radd__ or __iadd__. From the documentation: The @ (at) operator is intended to be used for matrix...
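As a quick, minimal sketch (the class and its names are mine, not from the question), any class can opt into @ and @= by defining these methods:

```python
class Mat2:
    """Tiny 2x2 matrix that supports @ and @= (illustrative only)."""

    def __init__(self, a, b, c, d):
        self.m = (a, b, c, d)          # row-major: [[a, b], [c, d]]

    def __matmul__(self, other):       # invoked by: self @ other
        a, b, c, d = self.m
        e, f, g, h = other.m
        return Mat2(a*e + b*g, a*f + b*h,
                    c*e + d*g, c*f + d*h)

    def __imatmul__(self, other):      # invoked by: self @= other
        self.m = (self @ other).m      # update in place
        return self

A = Mat2(1, 2, 3, 4)
I = Mat2(1, 0, 0, 1)
print((A @ I).m)   # (1, 2, 3, 4) -- multiplying by the identity
```

If `__imatmul__` is not defined, `@=` falls back to `__matmul__` plus rebinding, just as `+=` falls back to `__add__`.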

python,numpy,matrix-multiplication

I found the problem: cm.pow was modifying the matrices received as parameters. To solve it, I changed:

```python
gpu_X2 = cm.pow(gpu_X, 2).sum(axis=1)
gpu_W2 = cm.pow(gpu_W, 2).sum(axis=1)
```

to:

```python
gpu_X2 = cm.empty(X.shape)
gpu_W2 = cm.empty(W.shape)
cm.pow(gpu_X, 2, target=gpu_X2)
gpu_X2 = gpu_X2.sum(axis=1)
cm.pow(gpu_W, 2, target=gpu_W2)
gpu_W2 = gpu_W2.sum(axis=1)
```

java,processing,matrix-multiplication,quaternions,oculus

If you're working with the Oculus SDK Matrix4 class, you can easily use the Matrix4::ToEulerAngles() method. If you're working with another class then you should look for a method in that class that does something similar.

Yes, normal multiplication with b_ as a vector:

```r
a_ * as.vector(b_)
     [,1] [,2]
[1,]    2    8
[2,]   -2   -3
[3,]    3    2
```

matlab,matrix,vectorization,matrix-multiplication

Vectorized Approach: You can do matrix multiplication between A and the transpose of B, then sum along dim-2 and finally perform elementwise division with L:

```matlab
result = sum(A.*B.', 2)./L
```

Benchmarking: This section covers runtime and speedup tests for the proposed approach against the loop-based approach as listed in the early part...

matlab,matrix,matrix-multiplication

If you recall the definition of the svd, it is essentially solving an eigenvalue problem such that: Av = su v is a right-eigenvector from the matrix V and u is a left-eigenvector from the matrix U. s is a singular value from the matrix S. You know that S...

python,matrix-multiplication,xor

As I commented, you can use z.dot(b) % 2 to get the values you want. This is because chained xors are equivalent to addition mod 2. That is, the result will be 1 if the number of 1s was odd, and 0 if it was even.
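A small sketch (the binary data is made up) showing that XOR-accumulation matches dot-then-mod-2:

```python
import numpy as np

# Illustrative binary matrix and vector.
z = np.array([[1, 0, 1],
              [1, 1, 0]])
b = np.array([1, 1, 1])

# Dot product followed by mod 2.
via_dot = z.dot(b) % 2

# The same values via explicit chained XORs of the elementwise products.
via_xor = np.array([np.bitwise_xor.reduce(row & b) for row in z])

print(via_dot, via_xor)   # [0 0] [0 0]
```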

matrix,octave,matrix-multiplication,broadcasting

This is because Octave (in a notable difference from Matlab) automatically broadcasts. The * operator in Octave is the matrix multiplication operator. So in your case a*b would output (in Matlab as well):

```
a*b
ans =

   1   2   3
   2   4   6
   3   6   9
```

which should be expected. The...

c#,multithreading,matrix,matrix-multiplication

This does what you want using two BackgroundWorker objects:

```csharp
public class MatrixCalc
{
    readonly double[,] a, b, c;
    readonly int a_rows, a_cols, b_rows, b_cols, c_rows, c_cols;
    bool result_ok;
    int thread_count;
    BackgroundWorker bw1, bw2;
    AutoResetEvent re;

    public MatrixCalc(double[,] a, double[,] b, double[,] c)
    {
        a_rows = a.GetLength(0); a_cols = a.GetLength(1);
        b_rows = b.GetLength(0); b_cols = b.GetLength(1);
        c_rows = c.GetLength(0); c_cols = c.GetLength(1);
        // ...
```

python,arrays,numpy,matrix-multiplication,idl-programming-language

Reading the notes on IDL's definition of matrix multiplication, it seems they use the opposite notation to everyone else: IDL’s convention is to consider the first dimension to be the column and the second dimension to be the row So # can be achieved by the rather strange looking: numpy.dot(A.T,...
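The quoted translation is cut off above, but the transpose identity it relies on is easy to check in NumPy itself (a sketch with made-up integer matrices):

```python
import numpy as np

# Transposing both operands, swapping them, and transposing the result
# reproduces the ordinary product: (B' A')' == A B. This is the identity
# behind translating column-major (IDL-style) products into NumPy calls.
rng = np.random.default_rng(0)
A = rng.integers(0, 10, size=(2, 3))
B = rng.integers(0, 10, size=(3, 4))

lhs = np.dot(B.T, A.T).T   # swapped, double-transposed form
rhs = np.dot(A, B)         # ordinary row-major product
print(np.array_equal(lhs, rhs))   # True
```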

matrix,octave,matrix-multiplication

In Octave you can set/update any part of the original matrix. For example, here is how to add vector B to the second row of a matrix A:

```matlab
A(2,:) = A(2,:) + B;
```

c,cuda,parallel-processing,matrix-multiplication

This code will work for very specific dimensions but not for others. It will work for square matrix multiplication when width is exactly equal to the product of your block dimension (number of threads - 20 in the code you have shown) and your grid dimension (number of blocks -...

python,numpy,scipy,matrix-multiplication,sparse

Your question initially confused me, since for my version of scipy, A.dot(B) and np.dot(A, B) both work fine; the .dot method of the sparse matrix simply overrides np.dot. However it seems that this feature was added in this pull request, and is not present in versions of scipy older than...
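If you are stuck on an older scipy where np.dot(A, B) misbehaves on sparse inputs, the method form is the portable spelling (a minimal sketch; the matrices are made up):

```python
import numpy as np
from scipy import sparse

# Two small illustrative CSR matrices.
A = sparse.csr_matrix(np.array([[1, 0], [0, 2]]))
B = sparse.csr_matrix(np.array([[0, 3], [4, 0]]))

# The .dot METHOD of the sparse matrix always does a proper
# sparse matrix product, on old and new scipy alike.
C = A.dot(B)
print(C.toarray())
```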

Are you familiar with valgrind? It will draw your attention to the problematic line straight away. Your trouble appears to be this line: result[i] += (submatrix[(i*size)+j] * vector[j]); What was result[] initially? It was pulled off the heap. Sometimes, if you are lucky, it will be zero. Don't count on...

python,list,matrix-multiplication

Answer assumes Python 2.7x. Data:

```python
key = [[16, 4, 11], [8, 6, 18], [15, 19, 15]]
message = [[0], [12], [8], [6], [15], [2], [15], [13], [3], [21], [2], [20], [15], [18], [8]]
```

One thing that seems to complicate things is that message is a list of lists; to make things easier it will...

c++,multithreading,c++11,matrix-multiplication

What you're seeing is how expensive threads are. It's why most modern frameworks don't even use threads, they use thread pools, a list of already allocated threads just waiting for work to do. Most also provide theoretical support for fibers, like .NET, but actual support was never implemented because the...

c,multidimensional-array,slice,matrix-multiplication,transpose

Definition General array slicing can be implemented (whether or not built into the language) by referencing every array through a dope vector or descriptor — a record that contains the address of the first array element, and then the range of each index and the corresponding coefficient in the indexing...

c++,algorithm,matrix-multiplication,eigen

Make sure you compile with full optimizations, e.g. g++ -O3 -DEIGEN_NO_DEBUG if using g++. Also, turning on parallelization via OpenMP may help; use -fopenmp when compiling and linking. I use the following to compile most Eigen code (with g++):

```
g++ -I/path/to/eigen -O3 -DEIGEN_NO_DEBUG -fopenmp program.cpp
```

matlab,min,matrix-multiplication,bsxfun

I believe you need this correction in your code:

```matlab
[minindex_alongL2, minindex_alongL1] = ind2sub([size(L2,1) size(L1,1)], p)
```

For the solution, you need to add the size of p into the index finding in the last step, as the vector whose min is calculated has the "added influence" of alpha:

```matlab
[minindex_alongL2, minindex_alongL1, minindex_alongalpha]...
```

matlab,optimization,matrix-multiplication,floating-accuracy,numerical

Do you use the format function inside your script? It looks like you used format rat somewhere. You can always use Matlab's eps function, which returns the precision used inside Matlab. The absolute value of -1/18014398509481984 is smaller than this, according to my Matlab R2014B: format long a = abs(-1/18014398509481984)...

c++,input,types,matrix-multiplication

"I would like to ask if it is possible to somehow change the input type in two different calls of a function." You can make that function a template (provided all of the operations on the parameters work equally for any container type passed): template<typename Container, typename...

python,performance,numpy,list-comprehension,matrix-multiplication

Creation of numpy arrays is much slower than creation of lists:

```
In [153]: %timeit a = [[2,3,5],[3,6,2],[1,3,2]]
1000000 loops, best of 3: 308 ns per loop

In [154]: %timeit a = np.array([[2,3,5],[3,6,2],[1,3,2]])
100000 loops, best of 3: 2.27 µs per loop
```

There can also be fixed costs incurred by NumPy function...

rotation,directx-11,matrix-multiplication,quaternions

Your code XMVECTOR quaternion = XMVectorSet(random_x, random_y, 0); is not creating a valid quaternion. First, if you did not set the w component to 1, then the 4-vector quaternion doesn't actually represent a 3D rotation. Second, a quaternion's vector components are not Euler angles. You want to use XMQuaternionRotationRollPitchYaw which...

r,matrix,linear-algebra,matrix-multiplication,transpose

You're on the right track:

```r
result <- t(m) %*% m

         dolphins jets bills
dolphins        2    0     2
jets            0    1     1
bills           2    1     3
```

Alternatively:

```r
result <- crossprod(m)
```

Edit: I was reminded in the comment below that teams have the same outcome when they lose at the same week....

c,sse,precision,matrix-multiplication,rounding-error

I am summarizing the discussion in order to close this question as answered. So according to the article (What Every Computer Scientist Should Know About Floating-Point Arithmetic) in link, floating point always results in a rounding error which is a direct consequence of the approximate representation nature of the floating...

matlab,matrix,matrix-multiplication

```matlab
M = 1000; N = 1000; L = 3;
A = rand(M,N,L);
K = rand(L,L);
Q = reshape((K * reshape(A, [M*N, L]).').', [M, N, L]);
```

Error check:

```matlab
Z = zeros(M,N,L);
for mm = 1 : M
    for nn = 1 : N
        Z(mm,nn,:) = K * squeeze(...
```

opengl,matrix,linear-algebra,matrix-multiplication

I have dealt with this exact problem. There is more than one way to solve it and so I will just give you the solution that I came up with. In short, I store the position, rotation and scale in both local and world coordinates. I then calculate deltas so...

python,performance,numpy,matrix-multiplication

A typical installation of numpy will be dynamically linked against a BLAS library, which provides routines for matrix-matrix and matrix-vector multiplication. For example, when you use np.dot() on a pair of float64 arrays, numpy will call the BLAS dgemm routine in the background. Although these library functions run on the...
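As a sketch of what this means in practice (the sizes and the helper function are mine), np.dot on float64 arrays gives the same answer as a textbook triple loop, with the BLAS routine doing the work behind the scenes:

```python
import numpy as np

def naive_matmul(A, B):
    """Textbook triple-loop matrix product, for comparison with np.dot."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for t in range(k):
                C[i, j] += A[i, t] * B[t, j]
    return C

rng = np.random.default_rng(1)
A = rng.random((20, 30))   # float64 arrays -> BLAS dgemm path in np.dot
B = rng.random((30, 10))

print(np.allclose(np.dot(A, B), naive_matmul(A, B)))   # True
```

On realistically sized matrices the BLAS-backed call is orders of magnitude faster than the Python loop, which is the point of the answer above.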

javascript,arrays,matrix,matrix-multiplication

You're getting confused with your various temporary arrays. The undefined values are caused by out-of-bounds access on the line below your innermost loop. I recommend that you stick to making a single array for the result of the multiplication. As you're probably aware, the hitch is that JavaScript doesn't allow...

matlab,matrix,linear-algebra,matrix-multiplication

You can use the formula below to perform the concatenation with multiplication instead of language constructs. Your matrix P needs to be padded with zeros. Instead of using the zeros function, you can use the blkdiag function, which handles the zero padding internally: P = blkdiag(p1,p2); T...

javascript,matrix,three.js,linear-algebra,matrix-multiplication

The problem is in how you're iterating the matrix elements to print them. The .elements property returns the elements in column-major order (despite the constructor and .set() methods taking parameters in row-major order!). So you should:

```javascript
function logMatrix(matrix) {
    var e = matrix.elements;
    var $output = $('#output');
    $output.html(e[0] + ' ...
```

ios,matrix,identity,matrix-multiplication,catransform3d

I have found the reason for the problem. Everything is correct, because in CATransform3D all the transform matrices are transposed, whereas I thought that they look like the following one. So m44 != 1 because m34 is not the z coordinate or another kind of translation; it means "perspective", which...

algorithm,matrix,time,big-o,matrix-multiplication

Your algorithm for matrix multiplication is wrong and will yield a wrong answer, since (A*B)_{i,j} != A_{i,j} * B_{i,j} (except in special cases, such as the zero matrix). I assume the goal of the question is not to implement an efficient matrix multiplication, since it's a hard and still studied...

matlab,function,matrix,matrix-multiplication,matrix-inverse

Since (A*B)' = B'*A', you probably just need to call:

```matlab
matok(inv(D) * Ps)
```
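The transpose identity used here is easy to sanity-check numerically (a NumPy sketch, since the answer itself is MATLAB; the shapes are made up):

```python
import numpy as np

# Check that (A B)' == B' A' for random rectangular matrices.
rng = np.random.default_rng(2)
A = rng.random((3, 4))
B = rng.random((4, 2))

lhs = (A @ B).T
rhs = B.T @ A.T
print(np.allclose(lhs, rhs))   # True
```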

matlab,matrix,matrix-multiplication

```matlab
%// Data:
grid_min = 0;
grid_max = 5;
w = [.1 .2 .3];

%// Let's go:
n = numel(w);
gg = cell(1,n);
[gg{:}] = ndgrid(grid_min:grid_max);
gg = cat(n+1, gg{:});
result = sum(bsxfun(@times, gg, shiftdim(w(:), -n)), n+1);
```

How this works: The grid (variable gg) is generated with ndgrid, using as output...

scanf("%.lf", &obj[c][d]); is wrong: there is no "%.lf" conversion specifier. It must be "%lf" for double values. Also, you are using fflush(stdin), which is not a valid way to flush the standard input buffer; it is undefined behaviour according to the standard (C11 7.21.5.2). (Okay, some compilers support it, but it is still wrong.)...

r,list,matrix-multiplication,xts

Here is a possible solution:

```r
mapply(function(X, Y) t(c(X) * t(Y[, colnames(X)])), listab, listcd)
```

Produces:

```
[[1]]
      AMBV4 ARCZ6 BBAS3 BBDC4  BRAP4
[1,] 0.0105 0.002 0.009 0.009 0.1375
[2,] 0.0330 0.010 0.009 0.012 0.1075

[[2]]
     ACES4 AMBV4 CMIG3 CMIG4
[1,] 0.010 0.063 0.220 0.007
[2,] 0.024 0.066 0.172 0.022
```

Here, we use...

matlab,min,matrix-multiplication

I don't know your values, so I wasn't able to check my code. I am using loops because they are the most obvious solution, and I'm pretty sure that someone from the bsxfun brigade ( ;-D ) will find a shorter/more effective solution. alpha = 0:0.05:2; beta = 0:0.05:2; L1(kx3) = [t1(kx1) t2(kx1)...

Firstly, be really sure this is what you want to do. Without describing the manipulations you want to do, it's hard to comment on this, but be aware that matrix multiplication is an n-cubed operation. If your manipulations are not the same complexity, chances are you'll do better simply using...

c++,matrix-multiplication,eigen3

According to the documentation, you can evaluate the lower triangle of a matrix with:

```cpp
m1.triangularView<Eigen::Lower>() = m2 + m3;
```

or in your case:

```cpp
m1.triangularView<Eigen::Lower>() = matA.transpose() * matA;
```

(where it says "Writing to a specific triangular part: (only the referenced triangular part is evaluated)"). Otherwise, in the line you've written, Eigen will...

python,numpy,matrix,matrix-multiplication

A common strategy for eliminating for-loops in NumPy calculations is to work with higher-dimensional arrays. Consider for example, the line Arg = Kx[m,n]*u[i] + Ky[m,n]*v[j] This line depends on the indices m, n, i and j. So Arg depends on m, n, i and j. This means Arg can be...
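A hedged sketch of that strategy (the shapes are assumed; Kx, Ky, u, v follow the quoted line): give each of m, n, i, j its own axis and let broadcasting build the full 4-D Arg without loops:

```python
import numpy as np

# Assumed shapes: Kx, Ky are (M, N); u has length I, v has length J.
M, N, I, J = 2, 3, 4, 5
rng = np.random.default_rng(3)
Kx = rng.random((M, N))
Ky = rng.random((M, N))
u = rng.random(I)
v = rng.random(J)

# Loop version: Arg[m, n, i, j] = Kx[m, n]*u[i] + Ky[m, n]*v[j]
Arg_loop = np.empty((M, N, I, J))
for m in range(M):
    for n in range(N):
        for i in range(I):
            for j in range(J):
                Arg_loop[m, n, i, j] = Kx[m, n]*u[i] + Ky[m, n]*v[j]

# Broadcast version: insert singleton axes so each index owns one axis.
Arg = (Kx[:, :, None, None] * u[None, None, :, None]
       + Ky[:, :, None, None] * v[None, None, None, :])

print(np.allclose(Arg, Arg_loop))   # True
```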

matlab,matrix,multidimensional-array,matrix-multiplication,multiplication

Find the number of rows and columns of your final matrix:

```matlab
n = min(size(a,1), size(b,1));
m = min(size(a,2), size(b,2));
```

Then extract only the relevant sections of a and b (using the : operator) for your multiplication:

```matlab
c = a(1:n,1:m) .* b(1:n,1:m)
```

r,matrix,apply,matrix-multiplication

```r
t3 <- apply(t2, 2, function(v) v/max(v))
```

or

```r
for (i in 1:ncol(t2)) t2[,i] <- t2[,i]/t2[i,i]
```

I'm assuming you want the asymmetric matrix, i.e. the percentage of people who purchased product X who also purchased product Y (which is different from the percentage of people who purchased product Y who also purchased product X)....

python,numpy,matrix,matrix-multiplication

```python
In [59]: a = b = np.matrix([1,2,3,4])

In [60]: np.dot(a.T, b)   # 1
Out[60]:
matrix([[ 1,  2,  3,  4],
        [ 2,  4,  6,  8],
        [ 3,  6,  9, 12],
        [ 4,  8, 12, 16]])

In [63]: np.dot(a, b.T)   # 2
Out[63]: matrix([[30]])

In [64]: np.dot(a, b)     # 3
ValueError: shapes...
```

r,performance,sum,matrix-multiplication

One simple thing to try is using a larger vector. Using a million:

```r
library(microbenchmark)
A <- rnorm(1000000)
B <- rep(1, 1000000)

system.time(sum(A))
#  user  system elapsed
# 0.012   0.000    0.01

system.time(B %*% A)
#  user  system elapsed
# 0.044   0.000    0.04

microbenchmark(sum(A), B %*% A)
# Unit: microseconds
#    expr min lq mean median uq max neval
#  sum(A)...
```

python,numpy,matrix,matrix-multiplication,logarithm

logsumexp works by evaluating the right-hand side of the equation log(∑ exp[a]) = max(a) + log(∑ exp[a - max(a)]) I.e., it pulls out the max before starting to sum, to prevent overflow in exp. The same can be applied before doing vector dot products: log(exp[a] ⋅ exp[b]) = log(∑ exp[a]...
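Putting the two steps together gives a log-space dot product along these lines (the function name is mine):

```python
import numpy as np

def logdot(a, b):
    """log(exp(a) . exp(b)), computed without overflowing exp.

    Pull out the max of each vector before exponentiating, take the
    ordinary dot product of the shifted values, then add the shifts
    back in log space.
    """
    max_a, max_b = np.max(a), np.max(b)
    t = np.exp(a - max_a).dot(np.exp(b - max_b))
    return np.log(t) + max_a + max_b

# exp(1000) alone overflows a float64 to inf, but logdot stays finite.
a = np.array([1000.0, 1001.0])
b = np.array([1000.0, 999.0])

print(logdot(a, b))   # ~2000.693 == 2000 + log(2)
```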

matlab,numpy,matrix,matrix-multiplication,blas

Introduction and Solution Code np.einsum, is really hard to beat, but in rare cases, you can still beat it, if you can bring in matrix-multiplication into the computations. After few trials, it seems you can bring in matrix-multiplication with np.dot to surpass the performance with np.einsum('ia,aj,ka->ijk', A, B, C). The...
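As a sanity check of the equivalence (shapes are made up, and this is one way to phrase the dot-based contraction, not necessarily the answer's exact rewrite):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((3, 5))   # indices 'ia'
B = rng.random((5, 4))   # indices 'aj'
C = rng.random((2, 5))   # indices 'ka'

# Reference: out[i, j, k] = sum_a A[i, a] * B[a, j] * C[k, a]
ref = np.einsum('ia,aj,ka->ijk', A, B, C)

# Dot-based variant: fold A and C into one (i, k, a) array, then
# contract the trailing 'a' axis against B with np.dot.
X = A[:, None, :] * C[None, :, :]        # shape (i, k, a)
out = np.dot(X, B).transpose(0, 2, 1)    # (i, k, j) -> (i, j, k)

print(np.allclose(ref, out))   # True
```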

python,numpy,matrix-multiplication,numpy-einsum

Didn't you do something like similarity_matrix = np.empty((N,M), dtype=float) at the start of your calculations? You can't index an array, on the right or left side of an equation, before you create it. If that full (N,M) matrix is too big for memory, then just assign your einsum value to another variable,...