python,numpy,matrix,machine-learning,linear-algebra

This is a typical application for np.tensordot():

sum = np.tensordot(A, B, [[0,2],[0,1]])

Timing

Using the following code:

import numpy as np
n_examples = 100
A = np.random.randn(n_examples, 20, 30)
B = np.random.randn(n_examples, 30, 5)

def sol1():
    sum = np.zeros([20, 5])
    for i in range(len(A)):
        sum += np.dot(A[i], B[i])
    return sum

def sol2():
    return np.array(map(np.dot,...
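As a runnable sanity check of the equivalence (the array shapes here are taken from the snippet's timing code; everything else is my own sketch):

```python
import numpy as np

# Hypothetical data: 100 examples of 20x30 and 30x5 matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20, 30))
B = rng.standard_normal((100, 30, 5))

# Loop version: accumulate the per-example matrix products.
loop_sum = np.zeros((20, 5))
for i in range(len(A)):
    loop_sum += A[i] @ B[i]

# tensordot contracts A's axes (0, 2) against B's axes (0, 1):
# the example axis and the inner matrix dimension.
td_sum = np.tensordot(A, B, axes=[[0, 2], [0, 1]])

assert td_sum.shape == (20, 5)
assert np.allclose(loop_sum, td_sum)
```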

r,matrix,linear-algebra,matrix-multiplication,transpose

You're on the right track:

result <- t(m) %*% m

         dolphins jets bills
dolphins        2    0     2
jets            0    1     1
bills           2    1     3

Alternatively,

result <- crossprod(m)

Edit: I was reminded in the comment below that teams have the same outcome when they lose in the same week....

javascript,matrix,three.js,linear-algebra,matrix-multiplication

The problem is in how you're iterating the matrix elements to print them. The .elements property returns the elements in column-major order (despite the constructor and .set() method's parameters being in row-major order!). So you should:

function logMatrix(matrix) {
    var e = matrix.elements;
    var $output = $('#output');
    $output.html(e[0] + '...

javascript,math,physics,linear-algebra

This problem is a lot simpler if you determine the desired acceleration ahead of time, and use that during each refresh. Then the entire code for each frame (excluding the drawing logic and assuming one dimension) just becomes: function frame() { var t = new Date().getTime(), tDelta = t -...

opengl,graphics,linear-algebra,glm,arcball

Set the arcball radius to the distance between the point clicked and the center of the object. In other words the first point is a raycast on the cube and the subsequent points will be raycasts on an imaginary sphere centered on the object and with the above mentioned radius....

algorithm,matlab,linear-algebra,sparse-matrix

I think that you should use spparms. From the MATLAB help:

spparms - Set parameters for sparse matrix routines

This MATLAB function sets one or more of the tunable parameters used in the sparse routines.

spparms('key',value)
spparms
values = spparms
[keys,values] = spparms
spparms(values)
value = spparms('key')
spparms('default')...

arrays,r,matrix,linear-algebra

One possibility: reshape the dfarray to a usual matrix, multiply, and transform back to a 3D array.

mat <- matrix(1, 1, 13)
dim(dfarray) <- c(13, 1000*10)
dfarray1 <- mat %*% dfarray
dim(dfarray1) <- c(1, 1000, 10)
all(dfarray1 == dfarray2)
[1] TRUE

matlab,vector,geometry,linear-algebra

You can use the Matlab function pca (see for example here). For example, you can determine the basis of your plane, the normal vectors to your plane and a point m on the plane as follows:

coeff = pca(X);
basis = coeff(:,1:2);
normals = coeff(:,3:4);
m = mean(X);

To check...

ios,cocoa-touch,matrix,linear-algebra

Understanding transforms

The main thing to realize is that the origin for transforms is the center point of the view rectangle. This is best shown with an example. First we translate the view. v1 is the view at its original position, v2 is the view at its translated position. p...

Why? What are the steps that it follows to come to that result?

The source code for ColorMatrix can be viewed here: https://android.googlesource.com/platform/frameworks/base/+/master/graphics/java/android/graphics/ColorMatrix.java

Is it only an Android thing?

I believe it kinda is, more like only a ColorMatrix thing. In Android, I believe ColorMatrix is only provided as...

python,vector,linear-algebra,mathematical-optimization,approximate

I agree that in general this is a pretty tough optimization problem, especially at the scale you're describing. Each objective function evaluation requires O(nm + n^2) work for n points of dimension m -- O(nm) to compute distances from each point to the new point and O(n^2) to compute the...

What you need is an iterative eigenvalue solver. LAPACK uses a direct eigensolver, and having an estimate of the eigenvectors is of no use there. There is a QR iterative refinement in its routines; however, it requires Hessenberg matrices. I do not think you could use these routines. You could use...

python,numpy,matrix,scipy,linear-algebra

You've exceeded the maximum value representable in a 64-bit integer. You can try this instead: A = np.array([1L,1L,1L,0L], dtype=float).reshape(2,2) But note when you raise A to the 10000th power, it will give you infinity. At least that's better than negative numbers. If you really need this math to be done,...
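A small sketch of the overflow behaviour described above (the Fibonacci-style matrix here is my own illustration, not from the question):

```python
import numpy as np

A_int = np.array([[1, 1], [1, 0]], dtype=np.int64)
A_float = A_int.astype(float)

# The entries of A**100 exceed the int64 maximum (~9.2e18),
# so the integer result silently wraps around...
wrapped = np.linalg.matrix_power(A_int, 100)
# ...while the float result stays a (rounded) huge positive number.
approx = np.linalg.matrix_power(A_float, 100)

assert approx[0, 0] > np.iinfo(np.int64).max
assert float(wrapped[0, 0]) != approx[0, 0]
```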

My fault. The error told me everything: my matrix did not have the correct type. I changed the type to CV_64FC1 and everything went right.

python,numpy,matrix,linear-algebra

First, your 3rd row is linearly dependent on the 1st and 2nd rows. Also, your 1st and 4th columns are linearly dependent. Two methods you could use:

Eigenvalues: if one eigenvalue of the matrix is zero, its corresponding eigenvector is linearly dependent. The documentation for eig states that the eigenvalues are not necessarily...
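A sketch of both checks on a small singular matrix of my own (the 4x4 values are assumptions, not the asker's data):

```python
import numpy as np

M = np.array([[1., 2., 3., 1.],
              [2., 4., 6., 2.],   # 2x the first row: rows are dependent
              [0., 1., 0., 1.],
              [1., 0., 0., 0.]])

# Eigenvalue check (square matrices only): a zero eigenvalue
# signals linear dependence.
assert np.any(np.isclose(np.linalg.eigvals(M), 0))

# Singular values give the rank more robustly.
svals = np.linalg.svd(M, compute_uv=False)
rank = int(np.sum(svals > 1e-10 * svals[0]))
assert rank == 3   # one dependent row, so rank-deficient
```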

arrays,r,matrix,linear-algebra

Here is how I came up with a solution to your question. First, build the system of equations such that A %*% x = b (where x are the values to solve for, those inside T0):

n <- prod(dim(T0))
b <- c(Tm1, Tm2, Tm3)
m <- length(b)
Ti <- array(seq_along(T0),...

matlab,matrix,linear-algebra,sparse-matrix,pde

Matlab doesn't store sparse matrices as JxJ arrays but as lists of size O(J). See http://au.mathworks.com/help/matlab/math/constructing-sparse-matrices.html Since you are using the spdiags function to construct A, Matlab should already recognize A as sparse and you should indeed see such a list if you display A in console view. For a...

matlab,matrix,rotation,statistics,linear-algebra

This can be done using bsxfun twice: Compute rotated row indices by subtracting r with bsxfun and using mod. As usual, mod needs indices starting at 0, not 1. The rotated row indices are left as 0-based, because that's more convenient for step 2. Get a linear index from columns...

collision-detection,linear-algebra,glm-math

I think you may have misinterpreted the triangle-triangle collision test algorithm. The dot product of two vectors is a scalar. It does not make sense to add a scalar to a vector, ergo the compiler error. I think the problem lies in your understanding of d_2 -- from the paper...

javascript,matlab,linear-algebra,lapack

I ended up finding success using the Eigen library, combined with Emscripten. Right now, my test code is hard-coded to 5x5 matrices, but that's just a matter of template arguments. I'm passing data to and from the function by using row major 1D arrays. The code looks something like: #include...

haskell,matrix,linear-algebra,hmatrix

The problem arises from the difference in type signatures:

matrix :: Int -> [ℝ] -> Matrix ℝ
(><) :: Storable a => Int -> Int -> [a] -> Matrix a

So actually matrix 3 [1,2,3,4,5,6,7,8,9] has type Matrix ℝ while (3 >< 3) [1,2,3,4,5,6,7,8,9] has type...

opengl,matrix,linear-algebra,matrix-multiplication

I have dealt with this exact problem. There is more than one way to solve it and so I will just give you the solution that I came up with. In short, I store the position, rotation and scale in both local and world coordinates. I then calculate deltas so...

If you implement this:

inline urowvec operator|(const urowvec& lhs, urowvec& rhs){
    // ToDo - operate on an element by element basis, and return
    // a urowvec. Decide on something reasonable if the vectors
    // differ in size.
}

and make sure this is included in every compilation unit requiring the...

c++,algorithm,vector,linear-algebra

You could hardcode the rule of Sarrus like so if you're exclusively dealing with 3 x 3 matrices.

float det_3_x_3(float** A) {
    return A[0][0]*A[1][1]*A[2][2] + A[0][1]*A[1][2]*A[2][0] + A[0][2]*A[1][0]*A[2][1]
         - A[2][0]*A[1][1]*A[0][2] - A[2][1]*A[1][2]*A[0][0] - A[2][2]*A[1][0]*A[0][1];
}

If you want to save 3 multiplications, you can go

float det_3_x_3(float** A) {
    return...

python,linear-algebra,coordinate-systems,coordinate-transformation,homogenous-transformation

Six points alone is not enough to uniquely determine the affine transformation. However, based on what you had asked in a question earlier (shortly before it was deleted) as well as your comment, it would seem that you are not merely looking for an affine transformation, but a homogeneous affine...

python,numpy,scipy,linear-algebra,sparse-matrix

I was able to solve this problem after sleeping on it.

def seq_movement(a, b):
    import numpy as np
    def counter(items):
        counts = dict()
        for i in items:
            counts[i] = counts.get(i, 0) + 1
        return counts
    def find_2(dic):
        return [k for k, v in dic.items() if v == 2]
    sort = sorted(a)
    sparse...

python,numpy,vectorization,linear-algebra

You could use scipy.linalg.toeplitz:

In [12]: n = 5
In [13]: b = 0.5
In [14]: toeplitz(b**np.arange(n), np.zeros(n)).T
Out[14]:
array([[ 1.    ,  0.5   ,  0.25  ,  0.125 ,  0.0625],
       [ 0.    ,  1.    ,  0.5   ,  0.25  ,  0.125 ],
       [ 0.    ,  0.    ,  1.    ,  0.5   ,  0.25...
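A self-contained version of the same construction, with the checks spelled out:

```python
import numpy as np
from scipy.linalg import toeplitz

n, b = 5, 0.5
# First column is b**0..b**4, first row is all zeros; transposing
# makes entry (i, j) equal to b**(j - i) for j >= i, and 0 below
# the diagonal.
T = toeplitz(b ** np.arange(n), np.zeros(n)).T

assert np.isclose(T[0, 1], 0.5) and np.isclose(T[1, 4], 0.125)
assert np.all(np.tril(T, -1) == 0)   # strictly lower triangle is zero
```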

I found a solution that works with both Numpy and Theano: c = a[:, :, np.newaxis] * b[:, np.newaxis, :] ...
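A quick check of that broadcasting product against einsum (the shapes are my own example):

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(8.0).reshape(2, 4)

# Batched outer product: c[i, j, k] = a[i, j] * b[i, k]
c = a[:, :, np.newaxis] * b[:, np.newaxis, :]

assert c.shape == (2, 3, 4)
assert np.allclose(c, np.einsum('ij,ik->ijk', a, b))
```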

c++,windows,linear-algebra,eigen

I would not recommend doing this from a code design standpoint, as a linear algebra library is not something you are likely to replace. So encapsulating it will most likely not be beneficial and will make your code more complicated. However if you really want to do this, you would...

algorithm,linear-algebra,linear

In your case, since x and y only take values between 0 and 10, a brute-force algorithm may be the best option, as it takes less time to implement. However, if you have to find all pairs of integral solutions (x, y) in a larger range, you really should apply the...

c,linear-algebra,lapack,lapacke

When it comes to documentation for BLAS and/or LAPACK, Intel is probably the most comprehensive out there. You can look up the docs for ?ptsv, which explains what each parameter is for. (Hint: when searching for a BLAS or LAPACK in Google, be sure to drop the s/d/c/z prefix.) Here's...

c,arrays,matrix,linear-algebra,numerical-methods

jacobi_helper is called with the argument double *u, which is a pointer to a memory address where the results may be stored, but the first thing jacobi_helper does is u = a which means: forget that memory address u and use a instead. You set some values in the array...

python,arrays,numpy,matrix,linear-algebra

You are looking for Schur decomposition. Schur decomposition decomposes a matrix A as A = Q U Q^H, where U is an upper triangular matrix, Q is a unitary matrix (which effects the basis rotation) and Q^H is the Hermitian adjoint of Q. import numpy as np from scipy.linalg import...
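A minimal sketch of the decomposition with scipy (the 2x2 matrix is my own example; output='complex' guarantees a truly triangular T):

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[0., 2.], [-1., 3.]])
T, Z = schur(A, output='complex')

assert np.allclose(np.tril(T, -1), 0)        # T is upper triangular
assert np.allclose(Z @ T @ Z.conj().T, A)    # A = Z T Z^H
```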

matlab,linear-algebra,equation-solving,bisection

The following should solve both equations with y on the left-hand side:

y1 = solve(eqn1,y)
y2 = solve(eqn2,y)

Result:

y1 = -3*x - 1
y2 = x/3 - 1/3

As an aside, it would be much faster to solve this system by thinking of it as a matrix inversion...

python,numpy,matrix,linear-algebra

Writing good numpy code requires you to think in a vectorized fashion. Not every problem has a good vectorization, but for those that do, you can write clean and fast code pretty easily. In this case, we can decide on what rows we want to remove/keep and then use that...

c++,matlab,linear-algebra,armadillo

If you have two matrices A and B of the same dimension you could set all of the elements of A where the corresponding element of B is > 0 to a value with

using namespace arma;
// A and B are matrices of the same shape.
mat A =...

opengl,geometry,glsl,linear-algebra,perspectivecamera

You are right in the sense that there is just some affine, linear transformation and no real perspective distortion - when you just interpret the clip space as a 4-dimensional vector space. But the clip space is not the "end of it all". The perspective effect is a nonlinear transformation...

javascript,matrix,linear-algebra,rotational-matrices,euler-angles

Can I average the rotational matrices, or the euler angles themselves? Nope. Or am I going to need to convert the data into Quaternions and then apply some kind of averaging function? Yes, only quaternions are appropriate for inter/extrapolation. See 45:05 here (David Sachs, Google Tech Talk). I haven't...

java,algorithm,math,matrix,linear-algebra

First off, there is one way in which Cramer's rule is perfect: it gives the algebraic solution of a linear system as a rational function in its coefficients. However, practically, it has its limits. While the most perfect formula for a 2x2 system, and still good for a 3x3 system,...

mean = data:mean(1)
data:add(-1, mean:expandAs(data))

opengl,directx,shader,linear-algebra

Well, you already described your condition: (angle between normal and camera vector is < 90). You have to forward your normals to the fragment shader (don't forget to re-normalize them in the FS, the interpolation will change the length). And you need the viewing vector (in the same space as...

3d,collision-detection,linear-algebra

Intersections between two oriented bounding boxes (or more general between two objects) can be done by the separating axis theorem (here, here and here). For a general intersection tests between objects, one is searching for a plane such that the two objects lie in different halfspaces and the plane does...

python,numpy,scipy,linear-algebra,sparse-matrix

** has been implemented for csr_matrix. There is a __pow__ method. After handling some special cases, this __pow__ does:

tmp = self.__pow__(other//2)
if (other % 2):
    return self * tmp * tmp
else:
    return tmp * tmp

For sparse matrices, * is the matrix product (dot for ndarray). So it...
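So for csr_matrix, A ** k is the k-th matrix power, not elementwise exponentiation. A tiny check (the 2x2 matrix is my own example):

```python
import numpy as np
from scipy.sparse import csr_matrix

D = np.array([[1, 2], [0, 1]])
A = csr_matrix(D)

# ** on csr_matrix is repeated matrix multiplication...
cubed = (A ** 3).toarray()
# ...so it matches the dense matrix power, not D ** 3 elementwise.
assert np.array_equal(cubed, np.linalg.matrix_power(D, 3))
```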

From https://www.gnu.org/software/gsl/manual/html_node/Matrices.html, gsl_matrix is defined as:

typedef struct {
    size_t size1;
    size_t size2;
    size_t tda;
    double * data;
    gsl_block * block;
    int owner;
} gsl_matrix;

And: The number of rows is size1. The range of valid row indices runs from 0 to size1-1. Similarly size2 is the number of columns....

matlab,matrix,linear-algebra,matrix-multiplication

You can use the following block-diagonal formula to perform the concatenation with multiplication instead of language constructs. Your matrix P needs to be padded with zeros. Instead of using the zeros function, you can use the blkdiag function, which determines the zero padding internally: P = blkdiag(p1,p2); T...

matrix,linear-algebra,matlab,matlab-figure

The typical question is how do you modify the matrix without altering its eigenvalues and thus its definiteness. This is typically done with Givens rotations or Householder reduction. Although these operations are typically discussed in terms of tridiagonalization of a matrix, you could likely use them to do other...

java,python,linear-algebra,sympy,colt

The answer is: WHATEVER YOU DO, DON'T USE RREF in Java. Converting to reduced echelon form turns out to involve lots of comparisons to 0. If the value is 0 we do one thing. If the value is very close to 0, but not quite 0, we do something completely...

math,machine-learning,neural-network,linear-algebra,perceptron

A linear function is f(x) = a x + b. If we take another linear function g(z) = c z + d, and apply g(f(x)) (which would be the equivalent of feeding the output of one linear layer as the input to the next linear layer) we get g(f(x)) =...
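The composition argument can be checked numerically; here with concrete values for a, b, c, d of my own choosing:

```python
# Chaining two linear (affine) maps yields another linear map,
# which is why stacking linear layers adds no expressive power.
a, b = 2, 3
c, d = 5, -1

f = lambda x: a * x + b
g = lambda z: c * z + d

# g(f(x)) = c*(a*x + b) + d = (c*a)*x + (c*b + d)
h = lambda x: (c * a) * x + (c * b + d)

assert all(g(f(x)) == h(x) for x in range(-10, 11))
```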

python,numpy,matrix,linear-algebra

Here is a way to vectorize the calculation you specified. If you do a lot of this kind of thing, then it may be worth learning how to use, "numpy.tensordot". It multiplies all elements according to standard numpy broadcasting, and then it sums over the pairs of axes given with...

matlab,matrix,linear-algebra,svd

The answer is simply diag(S). Why? There's a theorem [1] that says that the error between a matrix A and its rank-k approximation A_k has a (spectral) norm [2] given by the (k+1)-th singular value of A. That is, the error is given by the first unused singular value. Isn't it...

Since the focus and directrix have equal distance from any point on the parabola, they in particular have equal distance from the origin (0,0). You may assume that distance to be d, so the focus would be at (0,d) and the directrix would be the line y=-d. For any other point...

c,matrix,arduino,calculator,linear-algebra

It's a floating-point error; the final value you are getting is very close to zero. Demo. Add a small epsilon value to your final test to allow for floating-point inaccuracies:

if (fabs(a[cant-1][cant-1]) < 0.000001) {
    lcd.print("No solucion"); /* if there is no solution print this */

The right singular vectors of a matrix are unique up to multiplication by a unit-phase factor if it has distinct singular values. When considering real singular vectors, this comes down to a change of sign (more information here). Also, since singular vectors correspond to certain singular values (diagonal entries of...

c++,matlab,linear-algebra,eigen,intel-mkl

There is no problem here with Eigen. In fact, for the second example run, Matlab and Eigen produced the very same result. Please remember from basic linear algebra that eigenvectors are determined up to an arbitrary scaling factor. (I.e. if v is an eigenvector, the same holds for alpha*v, where...

python,statistics,scikit-learn,linear-algebra,linear-regression

Using this kronecker product identity it becomes a classic linear regression problem. But even without that, it is just the transpose of a linear regression setup. import numpy as np m, n = 3, 4 N = 100 # num samples rng = np.random.RandomState(42) W = rng.randn(m, n) X =...

python,numpy,scipy,linear-algebra,sympy

Your equation represents a system of equations, where each element of v0 is expressed as a sum of the respective elements in the arrays v1,v2,v3,v4,v5. This is a perfectly determined case, i.e. the number of unknowns a1,a2,a3,s4,s5 equals the number of equations, which is the length of the vectors v1,v2,v3,v4,v5....
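A sketch of that perfectly determined setup: stack v1..v5 as the columns of a matrix and solve the square system (the vector values here are my own, not the asker's):

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((5, 5))      # columns play the role of v1..v5
true_a = np.array([2.0, -1.0, 0.5, 3.0, 1.0])
v0 = V @ true_a                      # v0 = a1*v1 + ... + a5*v5

# As many equations as unknowns, so an exact solve applies.
a = np.linalg.solve(V, v0)
assert np.allclose(a, true_a)
```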

algorithm,linear-algebra,linear-programming,cplex,traveling-salesman

According to this Wikipedia article the travelling salesman problem can be modelled as an integer linear program, which I believe to be the key issue of the question. The idea is to have decision variables of permitted values in {0,1} which model selected edges in the graph. Suitable constraints must...

arrays,r,matrix,linear-algebra

You can avoid looping and replication by (1) 3-dimensionally transposing the numerator array and (2) flattening the denominator array to a vector, such that the division operation will naturally cycle the incomplete denominator vector across the entirety of the transposed numerator array in such a way that the data lines...

c#,vector,linear-algebra,ilnumerics

I'm a bit unsure if I got this correctly. But you certainly can modify ILArray. Just make sure you understand the basics of working with ILArray and how to handle the different array types. In particular, avoid using var in conjunction with ILArray! Read about the core array features: http://ilnumerics.net/docs-core.html...

c++,matrix,linear-algebra,covariance

This statement:

As far as I'm aware, the next step is to transpose the matrix, and multiply the origin together, take the sum and finally divide by the dimensions X - 1.

And this implementation:

cov += d[i][j] * d[j][i] / (d[i].size() - 1);

don't say the same thing. Based...

You need to find the magnitude of the 2D velocity vector in the XZ plane as a scalar quantity, then apply your friction function to that and then convert back to a 2D vector. In C-like pseudo code: // the current velocity in each direction float velocityX, velocityY, velocityZ; //...

python,arrays,scipy,vectorization,linear-algebra

You should just divide your array by the sqrt of the sum of squares of your array's last dimension.

In [1]: import numpy as np
In [2]: x = np.random.rand(1000, 500, 3)
In [3]: normed = x / np.sqrt((x**2).sum(axis=-1))[:,:,None]  # None could be np.newaxis

Note that if you want to compute...
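A small check that this really produces unit vectors (smaller shapes than the original, and a small offset of my own to keep vectors away from zero):

```python
import numpy as np

x = np.random.rand(10, 5, 3) + 0.1   # offset avoids near-zero vectors
normed = x / np.sqrt((x ** 2).sum(axis=-1))[:, :, None]

# Every length-3 vector along the last axis now has unit norm.
assert np.allclose(np.linalg.norm(normed, axis=-1), 1.0)
```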

Read this article: Fanbin Bu and Yimin Wei, "The algorithm for computing the Drazin inverses of two-variable polynomial matrices", Applied Mathematics and Computation 147.3 (2004): 805-836. In the appendix there are several MATLAB codes. The first one is this:

function DrazinInverse1a = DrazinInverse1(a)
%-----------------------------------------
% Compute the Drazin Inverse of a matrix...

python,numpy,matrix,linear-algebra

Consider what taking the SVD of a matrix actually means. It means that for some matrix M, then we can express it as M=UDV* (here let's let * represent transpose, because I don't see a good way to do that in stack overflow). if M=UDV*: then: M^-1 = (UDV*)^-1 =...
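The identity spelled out numerically, with ^T for transpose as in the answer (the 2x2 matrix is my own example):

```python
import numpy as np

# If M = U D V^T with U, V orthogonal, then M^-1 = V D^-1 U^T.
M = np.array([[4., 7.], [2., 6.]])
U, s, Vt = np.linalg.svd(M)
M_inv = Vt.T @ np.diag(1.0 / s) @ U.T

assert np.allclose(M_inv, np.linalg.inv(M))
assert np.allclose(M @ M_inv, np.eye(2))
```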

It appears to me that you have a matrix of size 4x6, which means your indices for each dimension of your array go from 0 to 3 and 0 to 5 respectively. Then you have a for loop going from 0 to range(N), where N is a length of a...

Using the numbers you provided:

5 --> 10
2 --> 4
9 --> 18
7 --> 14

You want to find a, b, c and d that solve the system defined by:

ax^3 + bx^2 + cx + d = f(x)

So, in your case it is:

125a + 25b + 5c + d = 10
8a +...
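The same system built and solved with numpy (the right-hand sides are the values above, i.e. f(x) = 2x):

```python
import numpy as np

xs = np.array([5., 2., 9., 7.])
fx = 2 * xs                       # 10, 4, 18, 14
# Vandermonde-style matrix: rows are [x^3, x^2, x, 1].
A = np.column_stack([xs**3, xs**2, xs, np.ones_like(xs)])
a, b, c, d = np.linalg.solve(A, fx)

# f(x) = 2x is itself a (degenerate) cubic: a = b = d = 0, c = 2.
assert np.allclose([a, b, c, d], [0., 0., 2., 0.])
```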

c#,unity3d,linear-algebra,algebra

There is no built in way to solve equations in .NET. Symbolic equation parsing/solving is something for advanced math libraries (heck, it wasn't even in TI calculators until the TI-89). The following libraries may be of use: http://smartmathlibrary.codeplex.com/ http://mathnetnumerics.codeplex.com/...

c++,math,linear-algebra,vector-graphics

EDIT: Just figured out why I couldn't reproduce the problem. If I change x /= len in my code below to x *= ((t) 1.) / len then I end up with exactly what you gave for floats, i.e. inconsistent answers beyond the first six digits (which you should not...

Looks like it's a rounding/numerical-approximation error:

t[-1]
Out[493]: 0.10000000000000001

np.dot(ss,para)[-1]
Out[495]: 0.099999999999999964
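The usual remedy for this kind of drift is a tolerant comparison instead of exact equality, e.g.:

```python
import numpy as np

# Classic example of the same effect: 0.3 - 0.2 is not exactly 0.1
# in binary floating point.
assert 0.3 - 0.2 != 0.1              # exact comparison fails
assert np.isclose(0.3 - 0.2, 0.1)    # tolerant comparison succeeds
```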

arrays,matlab,matrix,octave,linear-algebra

The notation eye(10)(a,:) in Octave means: build the size-10 identity matrix (eye(10)) and then pick its rows in the order given by a (note that a is used as the first index, which corresponds to rows, and : as second index, which means "take all columns"). So, for example, the...

On Octave 3.8.2 at least, you get a bit more information:

octave-cli-3.8.2:2> which mldivide
'mldivide' is a built-in function from the file libinterp/corefcn/data.cc

This file can be found in the Octave repository. That specific function is on line 6083:

DEFUN (mldivide, args, , "-*- texinfo -*-\n\
@deftypefn {Built-in Function} {}...

matlab,linear-algebra,sparse-matrix

You're doing it wrong; you need to use:

x = A\b;

for the equation Ax = b...

Assuming MATLAB (or Octave):

A = [1 3 1 2; 0 2 1 4; 6 5 2 1];
[U,S,V] = svd(A);
S(3,3) = 0;
A_hat = U*S*V';

This gives:

A_hat =
   1.37047   2.50649   1.03003   2.30320
  -0.20009   2.26654   0.98378   3.83625
   5.90727   5.12352   1.99248   0.92411
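A numpy translation of the same rank-truncation idea, zeroing the smallest singular value to get the best rank-2 approximation:

```python
import numpy as np

A = np.array([[1., 3., 1., 2.],
              [0., 2., 1., 4.],
              [6., 5., 2., 1.]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s[-1] = 0.0                       # drop the smallest singular value
A_hat = U @ np.diag(s) @ Vt

assert np.linalg.matrix_rank(A_hat, tol=1e-8) == 2
# Matches the MATLAB output quoted above.
assert np.isclose(A_hat[0, 0], 1.37047, atol=1e-4)
```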

python,numpy,matrix,linear-algebra

Linear algebra can only solve for multiples of your variables, not powers (that is why it is called linear, i.e. the equation of a straight line, Ax + By + Cz = 0). For this set of equations you can use the quadratic formula to solve in terms of a:...

The first issue is that eigSym and EigSym are different. eigSym is an object that has an apply method that accepts a DenseMatrix, so we can write eigSym(A), which is syntactic sugar (provided by Scala—it's not Breeze-specific) for eigSym.apply(A). So the following will work: import breeze.linalg._, eigSym.EigSym val A =...

One very useful tool is the cross product (from high school analytic geometry). This takes as an input an ordered pair of 3-dimensional vectors v and w, and produces a 3-dimensional vector vxw perpendicular to both, whose length is the area of the parallelogram whose sides are v and w,...

python,numpy,matrix,linear-algebra

What about this:

def is_permuation_matrix(x):
    x = np.asanyarray(x)
    return (x.ndim == 2 and x.shape[0] == x.shape[1] and
            (x.sum(axis=0) == 1).all() and
            (x.sum(axis=1) == 1).all() and
            ((x == 1) | (x == 0)).all())

Quick test:

In [37]: is_permuation_matrix(np.eye(3))
Out[37]: True
In [38]: is_permuation_matrix([[0,1],[2,0]])
Out[38]: False
In [39]: is_permuation_matrix([[0,1],[1,0]])
Out[39]: True
In...

python,numpy,scipy,linear-algebra,least-squares

Look at this:

Ax = b
x^T A^T = b^T

where A^T indicates the transpose of A. Now define the symbols Ap = x^T and Xp = A^T and bp = b^T, and your problem becomes:

Ap Xp = bp

that is exactly in the form that you can treat with least squares...
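A sketch of that transpose trick: recovering the matrix A that maps x to b by running ordinary least squares on the transposed system (shapes and values here are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
A_true = rng.standard_normal((2, 3))
x = rng.standard_normal((3, 50))     # 50 sample columns
b = A_true @ x

# A x = b  <=>  x^T A^T = b^T, which lstsq solves directly.
At, *_ = np.linalg.lstsq(x.T, b.T, rcond=None)
assert np.allclose(At.T, A_true)
```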

I'm not aware of an in-place elementwise matrix multiplication, and I've had a good look through julia/base/*.jl but can't find one. We have in-place matrix multiplication (e.g. A_mul_B!), but that's more important because we can use BLAS for it. Element-wise matrix multiplication doesn't use BLAS, AFAIK, so might as...

scipy,linear-algebra,sparse,eigenvalue

Both eigs and eigsh require that M be positive definite (see the descriptions of M in the docstrings for more details). Your matrix M is not positive definite. Note the negative eigenvalues:

In [212]: M
Out[212]:
array([[ 25.1,   0. ,   0. ,  17.3,   0. ,   0. ],
       [  0. ,...

python,numpy,scipy,linear-algebra,sparse-matrix

Is the original array already sparse (plenty of zeros), or are those just a product of tril? If the latter, you might not be saving space or time by using sparse code. For example

def gen1(W):
    tmp = np.tril(W)
    d = tmp.sum(0) + tmp.sum(1) - tmp.diagonal()
    return np.diag(d) - tmp

is 8x faster than...

optimization,linear-algebra,julia-lang,factorization,levenberg-marquardt

It can be a bit tricky to figure out exactly which code path is taken when you are running into code that uses substitutions during parsing, as is the case for '. You could try

julia> ( J'*J + sqrt(100)*DtD ) \ -J'fcur

to see another substitution taking place. I...

numpy,scipy,linear-algebra,sparse

Without knowing the exact error, it's hard to say what's going wrong. I'm not overly familiar with scipy, but I suspect if there was no solution to these problems due to an inconsistent system, you would get a meaningful error. My best guess would be a memory issue. During Gaussian...

scala,matrix,linear-algebra,scala-breeze

First, gather the rows you still want:

val subset = matrix(::, 2 to 3)

then add the zeroes:

val newMatrix = DenseMatrix.horzcat(subset, DenseMatrix.zeros[Double](1,9))

I might have mixed up rows and columns in the last line....

python,numpy,scipy,signal-processing,linear-algebra

@nivag at Signal Processing pointed out that each dimension can be treated independently: http://dsp.stackexchange.com/questions/19519/extending-1d-window-functions-to-3d-or-higher Here is the code I came up with (with revision help from the scikit-image team): def _nd_window(data, filter_function): """ Performs an in-place windowing on N-dimensional spatial-domain data. This is done to mitigate boundary effects in the...

linear-algebra,quaternions,ros,inverse-kinematics

Isn't this just as simple as exchanging the order of the factors in the quaternion product? A unit quaternion q transforms a vector v in local coordinates to the rotated vector q*v*q' in global coordinates. A modified quaternion a*q*b (a, b also unit quaternions) transforms v as a*(q*(b*v*b')*q')*a', where the...

c++,cuda,linear-algebra,solver,cusolver

1.cusolverSpScsrlsvchol returns wrong results for x: 1 3.33333 2.33333 1 3.33333 2.33333 1.33333 1 2.33333 1.33333 0.666667 1 1 1 1 1 You said: A is positive definite and symmetric. No, it is not. It is not symmetric. cusolverSpcsrlsvqr() has no requirement that the A matrix be symmetric. cusolverSpcsrlsvchol()...

matlab,numpy,scipy,linear-algebra

Given symmetric A and symmetric, positive definite B, the generalized eigenvalue problem is to find nonsingular P and diagonal D such that A P = B P D The diagonal of D holds the generalized eigenvalues, and the columns of P are the corresponding generalized eigenvectors. For such a solution,...
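scipy solves this symmetric-definite generalized problem directly with scipy.linalg.eigh; A and B below are small examples of my own:

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2., 1.], [1., 3.]])   # symmetric
B = np.array([[4., 0.], [0., 1.]])   # symmetric positive definite

# eigh(A, B) solves A v = w B v; columns of P are the
# generalized eigenvectors, w the generalized eigenvalues.
w, P = eigh(A, B)
assert np.allclose(A @ P, B @ P @ np.diag(w))   # A P = B P D
```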

c++,linear-algebra,equation,linear

You don't allocate the second dimension for the EquationHolder. Since it is a 2D matrix, you have to allocate the second dimension also. Change your double for loop to the following:

float **EquationHolder = new float *[3];
for (int i = 0; i < NumEquations; i++) {
    EquationHolder[i] = new float[3];
    cout << "Please Enter The Information Of...

python,numpy,matrix,linear-algebra

Your Problem

In all your data points you have tc=30! When you try to fit your data with a function of to, tc, the algorithm is telling you (with the only language that it knows, the language of linear algebra) that you cannot estimate the variability of y as a...

multidimensional-array,scipy,vectorization,linear-algebra,least-squares

You can gain some speed by making use of the stack of matrices feature of numpy.linalg routines. This doesn't yet work for numpy.linalg.lstsq, but numpy.linalg.svd does, so you can implement lstsq yourself: import numpy as np def stacked_lstsq(L, b, rcond=1e-10): """ Solve L x = b, via SVD least squares...

python,vector,ipython,linear-algebra,ipython-notebook

If you call vector_sum(a), the local variable result will be the integer 1 in your first step, which is not iterable. So I guess you simply should call your function vector_sum like vector_sum([a,b,a]) to sum up multiple vectors. The latter gives [4,7,10] on my machine. If you want to sum up...

c++,matlab,linear-algebra,mex,lapack

I've tried to reproduce with an example on my end, but I'm not seeing any errors. In fact the result is identical to MATLAB's. mex_chol.cpp #include "mex.h" #include "lapack.h" void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) { // verify arguments if (nrhs != 1 || nlhs >...

arrays,vector,scipy,vectorization,linear-algebra

You could use the meshgrid function from numpy:

import numpy as np
M = 10
N = 10
D = 1
ux = 0.5
uy = 0.5
xo = 1
yo = 1
A = np.empty((M, N, 3))
x = range(M)
y = range(N)
xv, yv = np.meshgrid(x, y, sparse=False, indexing='ij')
A[:,:,0] = D*ux - (xv - xo)
A[:,:,1] = D*uy - (yv - yo)
A[:,:,2] = D

c++,arrays,numpy,linear-algebra,triangular

The equations going from linear index to (i,j) index are i = n - 2 - floor(sqrt(-8*k + 4*n*(n-1)-7)/2.0 - 0.5) j = k + i + 1 - n*(n-1)/2 + (n-i)*((n-i)-1)/2 The inverse operation, from (i,j) index to linear index is k = (n*(n-1)/2) - (n-i)*((n-i)-1)/2 + j -...
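An exhaustive check of both formulas over the strict upper triangle of a small n x n matrix (row-major order, diagonal excluded; n = 7 is my own choice):

```python
import math

n = 7
k = 0
for i in range(n):
    for j in range(i + 1, n):
        # (i, j) -> linear index k
        k_calc = n*(n-1)//2 - (n-i)*((n-i)-1)//2 + j - i - 1
        assert k_calc == k
        # linear index k -> (i, j)
        i_calc = n - 2 - int(math.floor(math.sqrt(-8*k + 4*n*(n-1) - 7) / 2.0 - 0.5))
        j_calc = k + i_calc + 1 - n*(n-1)//2 + (n-i_calc)*((n-i_calc)-1)//2
        assert (i_calc, j_calc) == (i, j)
        k += 1

assert k == n*(n-1)//2   # visited every strict upper-triangle entry
```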

If you have the Communications Systems Toolbox installed, the inverse in GF(2) can be obtained as simply as

>> l = 602;
>> w = gf(round(rand(l,l)));
>> whos
  Name      Size          Bytes    Class     Attributes
  l         1x1               8    double
  w         602x602     1450156    gf
>> b = inv(w);
>> all(all(b*w == eye(l,l)))
ans...