For eigen- and fisherfaces, the images have to get 'flattened' to a single row; this is only possible if your Mat is continuous. (LBPH is not constrained this way.) But I'd rather suspect that the respective Mat was empty because it was not an image at all. Please check your...

Briefly: There are currently 25 packages on CRAN which use RcppEigen. That makes 25 working case studies. You could look at one or two. LinkingTo: is generally enough. It may be a bug that the skeleton generator still adds Imports. We no longer do that in RcppArmadillo. When I just...


No, it does not matter, since a corresponding sign difference will appear also on the right singular vectors, so basically U * S * V^adjoint will give you the right result. More precisely, from Wikipedia: Non-degenerate singular values always have unique left- and right-singular vectors, up to multiplication by...

I think you could allocate the large matrix once, then for multiplication use block() to create a view of the part that contains meaningful data. You can then reuse the big matrix, which spares you reallocations. The following example fully demonstrates it. ./eigen_block_multiply.cpp: #include <Eigen/Dense> #include <iostream> using namespace std; using...

sm1.nonZeros() does not look at the values of the matrix, rather it returns the size of the inner array that was allocated to store values: /** \returns the number of non zero coefficients */ inline Index nonZeros() const { if(m_innerNonZeros) return innerNonZeros().sum(); return static_cast<Index>(m_data.size()); } If you were to look...

You need to compute a rank-revealing decomposition of your matrix, for instance FullPivLU. If you have a Matrix3f it looks like this: FullPivLU<Matrix3f> lu_decomp(your_matrix); auto rank = lu_decomp.rank(); Edit: decomposing the matrix is the most common way to get the rank. However, LU is not the most reliable way...

sourceClouds.push_back(sourceCloud); This line only copies the PointCloud::Ptr and does not copy the point cloud data. Try this: int main(int argc, char** argv) { //Create Point Clouds //PointCloud<PointXYZ>::Ptr sourceCloud(new PointCloud<PointXYZ>); vector < PointCloud<PointXYZ>::Ptr, Eigen::aligned_allocator <PointCloud <PointXYZ>::Ptr > > sourceClouds; //save PointClouds to array for (int i = 1; i < (argc...

There are several things wrong with that code. The return value of performcholesky, for example, is a pointer to a local variable that does not exist anymore after performcholesky has returned, and you overwrite a pointer to allocated memory with this dangling pointer, leaking that memory. Why don't you work...

Here is a solution I've made. I implemented diagonal(i) myself because this function is not available in my Eigen version (how can I know which version I use?). I obtain good results with this, but I don't know if it can be optimized further: void spdiags(Eigen::SparseMatrix<double> A) {...


When you write values using the 'int' value for the precision argument to fwrite, it writes them as 4-byte integers, so your 10x10 matrix will take up 10x10x4 = 400 bytes. But you are only allocating a buffer that is 10x10 = 100 bytes long.

You can use conservativeResize for that purpose: mat.conservativeResize(mat.rows(), mat.cols()+1); mat.col(mat.cols()-1) = vec; ...

If you don't/can't use auto in davidhigh's solution try this std::pair<Matrix4cd, Vector4d> eig(const Matrix4cd& A, const Matrix4cd& B) { Eigen::GeneralizedSelfAdjointEigenSolver<Matrix4cd> solver(A, B); Matrix4cd V = solver.eigenvectors(); Vector4d D = solver.eigenvalues(); return std::make_pair(V, D); } ...

You need to pass a row-major matrix type to Map, e.g.: Map<Matrix<double,4,4,RowMajor> > M(data); then you can use M as an Eigen matrix, and the values of data will be modified, e.g.: M = M.inverse(); If you want to copy the data to a true column-major Eigen matrix, then do:...

You should use DenseBase instead of the too-general EigenBase base class. This discrepancy will probably go away in 3.3.

L is an n x n lower triangular matrix. alpha is the solution of the linear system L * alpha = y. alpha is then recomputed in-place as the solution of the linear system adjoint(L) * x = alpha....

Try this: F_2(F_1_broken<double>,x,out); The compiler has no way to deduce the type here, so you have to help it :-)...

The issue is that (u1.transpose() * u2) is not actually (by default) a matrix object but rather a Eigen::SparseSparseProduct. You can get around this by either assigning it to a temporary variable: u3 = (u1.transpose() * u2); sequence(0,0) = u3.coeffRef(0,0); or by casting it: sequence(0,0) = (SparseMatrix<int>(u1.transpose() * u2)).coeff(0,0); ...

It seems like Eigen doesn't implement any of this functionality itself. In general, it looks like the best you can do is to replace NaN values with something else via select: for example, the following replaces elements of x less than 3 with 0 x = (x.array() < 3).select(0, x);...

You need to cast the second matrix to the form of the first, like this: Eigen::Matrix< float, 3, 1 > mf; Eigen::Matrix< unsigned int, 3, 1 > mi; mf.dot(mi.cast<float>()); Also, Eigen gives ready to use types for vectors, like Eigen::Vector3f for floats and Eigen::Vector3i for ints. (but none for unsigned...

On the lines of Dawid's answer (which has a small issue, see the comments), you can do: static Eigen::Matrix4d foo = [] { Eigen::Matrix4d tmp; tmp << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16; return tmp; }(); Return value optimization takes...


The accuracy would depend on the classifier you are using once you have the data in the PCA projected space. In the original Turk/Pentland eigenface paper http://www.face-rec.org/algorithms/PCA/jcn.pdf they just use kNN / Euclidean distance but a modern implementation might use SVMs e.g. with an rbf kernel as the classifier in...

Can you make sure that 2.7.6 is the version that gdb uses by making the following check from it? Start gdb. Type: import sys print (sys.version) End with CTRL+D ...

Your first approach is, as you pointed out, a mapping of the matrices. Your second example is a mapping of copies of the cv matrix (the split function copies). Therefore, the first approach will be more efficient.

No, you cannot "forward declare" type aliases: neither MatrixXd nor VectorXd are classes; they are type aliases. The best you can do is manually introduce the type aliases early yourself by writing out the typedef statement. This is probably a bad idea. BTW that last line of output is highly...

Let me summarize what is going on and why it's wrong. First of all, let's instantiate the auto keywords with the types they are taking: typedef GeneralProduct<MatrixXd,MatrixXd> Prod; Prod c = a * b; GeneralProduct<Transpose<Prod>,Prod> cTc = c.transpose() * c; Note that Eigen is an expression template library. Here, GeneralProduct<>...

You can use the Eigen::Transform<>::linear() method for that purpose: normMatrix = m_world.linear(); which is a shortcut for the MatrixBase::topLeftCorner() method: normMatrix = m_world.matrix().topLeftCorner<3,3>(); ...

Here's the problem: In MATLAB, B/A solves the equation xA=B. In Eigen, the solve method solves the equation Ax=B. In MATLAB, this would be expressed as x = A\B. These are very different - matrix multiplication is not commutative! In general, the matrix product Ax has the same number of...

If I understand correctly... You can just dereference the iterator to a temporary reference inside the loop for convenience and access coefficients inside like with any Eigen object: for(auto it = uvVertices.begin(); it != uvVertices.end(); ++it) { Matrix<float, 2, 1>& v = *it; //auto& v = *it; // should also...


If you want to apply a symmetric permutation P * Y * P^-1, then the best is to use the twistedBy method: SpMat z = y.twistedBy(perm); otherwise you have to apply one and then the other: SpMat z = (perm * y).eval() * perm; ...

You could try to declare vecCompare as a functional object rather than a function. typedef VectorXd Vec; struct vecCompare { bool operator()(const Vec &v, const Vec &w) const { int n = v.rows(); for (int i = 0; i < n; ++i) { if (v(i) < w(i)) { return true;...

It's not as straightforward as it should be, but you can make it work. Element access in Eigen looks to mostly be done through operator(): // (copied from http://eigen.tuxfamily.org/dox/GettingStarted.html) MatrixXd m(2,2); m(0,0) = 3; m(1,0) = 2.5; m(0,1) = -1; m(1,1) = m(1,0) + m(0,1); Therefore, we need to define...

Eigen provides a rank-revealing LU decomposition, which provides an isInvertible member function. See class Eigen::FullPivLU< MatrixType > ...

double zmat[N+1][N+1]; This is what is causing you trouble. Declaring a large matrix as a local variable in a function is not a good idea. Local variables are allocated on the stack. Many machines limit the size of the stack to a smallish amount of data. On my machine, the...

The unsupported Tensor module is only found in the unstable version for now (Current stable: 3.2.5).

You can use Eigen's aligned_allocator. See this page for the details, but basically you can do: std::vector<ClusterNode, Eigen::aligned_allocator<ClusterNode> > Also don't forget to overload operator new for your class. Again, all the details are in the Eigen's documentation....


Short answer You're timing the lazy (and therefore lack of) evaluation in the col multiplication version, vs. the lazy (but evaluated) evaluation in the direct version. Long answer Instead of code snippets, let's look at a full MCVE. First, "your" version: void ColMult(Matrix3Xd& dest, Matrix3Xd& points) { Eigen::Matrix3d T =...

You are probably looking for the Ref<> class which will allow you to write a function that accepts both columns of a matrix and vectors without templates nor copies: void threshold(Ref<ArrayXf> params) { params = (params >= 0 ).cast<float>(); } ArrayXf a; ArrayXXf A; /* ... */ threshold(a); threshold(A.col(j)); ...

operator< is defined only in the array world, so you have to use .array() to see your MatrixXd as an ArrayXXd (no copy here), and then the result is a array of boolean, so if you want double, then you have to cast explicitly: MatrixXd M(3,3); M << 0, 1,...


After the suggestion of Avi, I put my comment as an answer so that the question is closed. Fixed it. Apparently I was overcomplicating things. A simple RowVector3d adsf = difference * normTransform.inverse(); did the trick...

Boost is including the C++11 std::array header array but your include path is picking up the Array header (apparently obsolete) from Eigen. You must be on a file system that doesn't distinguish files by case. It appears that the proper way to include an Eigen header is: #include <Eigen/Eigen> See...

Your inner loop: for (int i = 0; i < 80 * x; i++){ f += 2; } is optimized away by compiler. Compiling on VC++ for x86 the whole loop folds into one single assembly instruction: lea esi, DWORD PTR [esi+ecx*2] Where ecx is value of 80*x, and esi...


I am currently using the following solution: #include <boost/utility/enable_if.hpp> #include <boost/mpl/has_xxx.hpp> #include <boost/mpl/and.hpp> //! The macro is used for enable_if if T is an Eigen Type #define ENABLE_IF_EIGEN_TYPE(T)\ typename boost::enable_if< is_eigen_type<T> >::type /** The macro enables Eigen new() when required. T can be any type * * Example Usage: *...

According to documentation, you must #include <Eigen/StdVector> and use std::vector<Matrix<double,4,1>, Eigen::aligned_allocator<Matrix<double,4,1> > > ...

First question is whether this process for including an RcppEigen function into an existing R package is correct? (I completely ignored any Makevars files or any .h files -- I don't really know what they do... Also don't really understand the changes to the NAMESPACE file. I tried to...


There is no problem here with Eigen. In fact for the second example run, Matlab and Eigen produced the very same result. Please remember from basic linear algebra that eigenvectors are determined up to an arbitrary scaling factor. (I.e. if v is an eigenvector the same holds for alpha*v, where...

You can use Eigen::Map for that purpose: VectorXf res = some_mat * VectorXf::Map(vec, size); Note that Map objects are read-write, so res could also be a Map....


Make sure you compile with full optimizations, e.g. g++ -O3 -DEIGEN_NO_DEBUG if using g++. Also, turning on parallelization via OpenMP may help, use -fopenmp when compiling and linking. I use the following to compile most Eigen code (with g++): g++ -I/path/to/eigen -O3 -DEIGEN_NO_DEBUG -fopenmp program.cpp ...


First of all, make sure you benchmarked with compiler optimizations ON, aka Release mode in visual studio. Second, what's the purpose of the outer_product_3d function, why not doing rvec*rvec.transpose()? Finally, you can significantly reduce the operation count as follows: rpy.block<3,3>(3 * (i + 1) - 3, 3 * (j +...

Assuming you don't want to migrate to a GPU and if you want to trust Eigen's benchmark page, Eigen is pretty fast. You specifically mentioned matrix vector products for which in the range you specified, Eigen is on top. Make sure that OpenMP is enabled as Eigen will take advantage...

This has nothing to do with Eigen, you simply omitted the class prefix Test:: in the cpp file: void Test::mySum(VectorXi vec){ cout << vec.sum() << endl; } Moreover, the trailing ; was not needed in proper C++, and you should rather pass the vec object by reference declaring the argument...

You can either cast it as a Transform<float,3,AffineCompact,RowMajor> which is ugly, or copy the coefficients using a Map: AffineCompact3f A; A = Map<Matrix<float,3,4,RowMajor> >(data); ...


Rather off-topic but since you stressed performance: Eigen assembly isn't always optimal - there is some overhead due to poor register reuse and writing back to memory (this is not to blame Eigen by any means - doing this in generic templates is an impossible task). If your kernels are...

Well, this doesn't completely tell you what's wrong, but hopefully will point you in the right direction. The problem is that quaternionbase_assign_impl doesn't support 3X1 matrices, it seems. (Unless there is other code that is getting into the translation unit somehow.) There is a forward declaration of it: template<typename Other,...

Basically, the documentation is slightly confusing, for me at least. The way to do it is simply: SpMat mat_3 = mat_1 * mat_2 No dense matrix is created along the way. Eigen rocks!...

Eigen can use MKL under the hood, so you could just use the Eigen interface for your matrices and let Eigen deal with MKL. All you have to do is #define EIGEN_USE_MKL_ALL before you include any Eigen headers.

Eigen::Block<...> (which the block() member function returns) does not make a copy of the values involved but acts as a reference to part of the matrix. You can see this with A.block(0, 0, 2, 4) *= 2; // this will change A Because of this, an Eigen::Block object remains valid...


multiprocessing uses pickle (or cPickle, depending on the version). Have you tried checking like this? >>> import pickle >>> pik = pickle.dumps(model) >>> _model = pickle.loads(pik) If that succeeds, it's serializable by pickle. If it's not, you might try using a more powerful serializer, and a fork of multiprocessing that...


According to the rules of construction in C++, the child constructor will always print a size of 1. The child's construction code simply does not execute until the parent's constructor is fully completed (more at C++: Construction and initialization order guarantees). It looks like the Child class is using a...

vecCompare is not a total ordering. It returns true if any coordinate in the left operand is less than the corresponding coordinate in the right operand. For the vectors a = (1, 2), b = (2, 1), for example, both vecCompare(a, b) and vecCompare(b, a) are true. If you meant...

Check for uninitialized variables. Quite often, if your program works as per expectations and then fails the next time you try running it again for the same workflow, the cause is uninitialized variables. Of course, there are many other things that can cause this, but uninitialized variables are a very...

No. The MatrixXd object has to be defined as row/column major. See the example below. #include <Eigen/Core> #include <iostream> #include <vector> using std::cout; using std::endl; int main(int argc, char *argv[]) { std::vector<int> dat(4); int i = 0; dat[i] = i + 1; i++; dat[i] = i + 1; i++; dat[i]...

On 64 bit platforms Eigen uses 64-bit integers to encode the dimensions of its matrices. The MKL wrapper uses 32-bit integers, which might overflow if your matrix size exceeds 2 billion rows or columns.

If what you want is read data from a file which does not specify the matrix size explicitly, then I would recommend to push back the entries in a std::vector and at the end of the parsing copy the data from the std::vector using Map: MatrixXf A; std::vector<float> entries; int...

' in Matlab does the transpose and takes the complex conjugate (http://uk.mathworks.com/help/matlab/ref/ctranspose.html). If you want to just do the transpose use .' (with a dot infront). Thus, if you change your MATLAB test to testTest*testTest.' the results should be the same. If you want the complex transpose in eigen then...

I suspect you compiled with assertions disabled. Eigen overloads operator<< to enable matrices to be initialized with a comma separated list. (Note that this also means there's an override of the comma operator lurking behind the scenes). That comma separated list of items can contain numbers, but it can also...

Yes, internally an Affine3f stores a Matrix4f, so you can do: Eigen::Affine3f A; Eigen::Matrix4f M; M = A.matrix(); A = M; // assume that M.row(3) == [0 0 0 1] ...

I believe the mistake is that the pointer should be const, not the pointee. I.e. try Eigen::aligned_allocator< std::pair<Frame* const, Sim3> > as the allocator type.

Two things. You are using the c'tor for mapping a matrix: Map ( PointerArgType dataPtr, Index nbRows, Index nbCols, const StrideType & a_stride = StrideType() ) Constructor in the dynamic-size matrix case. Parameters dataPtr pointer to the array to map nbRows the number of rows of the matrix expression nbCols...


For Euler to quaternion see this page, and for quaternion to Euler angles, first copy it to a 3x3 matrix and then see the eulerAngles method.

Where do you see reallocation? As you can see in the source code, the four helper vectors residual, p, z, and tmp are declared and allocated outside the while loop, that is, before the iterations take place. Moreover, recall that Eigen is an expression template library, so a line of code...

Stupid mistake on my part, debugging my code in release mode (vs2013). Be sure your settings are in debug if you want to step through the code. Changing this sped everything up to a few ms as ggael suggests.

Eigen::Map is used to wrap a C-style array so that it can be used as an Eigen::Matrix. Normally, this allows it to even write through to the underlying array. Since you only have a T const*, writing is not allowed. To remain const-correct, you need to tell Eigen the mapping...

For future reference, it would be a lot easier for people to help you (and you to help yourself) if you would prepare a Minimal, Complete, and Verifiable example, which I did below. For me, it helped me understand what you tried to ask. For you, it would help you...

For this particular case, you can use X.cwiseProduct(Y) (X + Y).cwiseMin(1) This depends on the elements being only 0 and 1. More generally, you can define custom binary functors: struct BinaryOr { typedef int result_type; int operator()(int a, int b) const { return a | b; } }; struct BinaryAnd...


Just for future reference, this is how I decided to implement my solution: I overloaded the operator+ in the following way: template< typename T1, typename T2 > auto operator+( const my_vec< T1 > & X, const my_vec< T2 > & Y ) ->decltype( X.data() + Y.data() ) { return X.data()...

What do you mean you don't know the size? You can use the member functions cols()/rows() to get the size. Also, I assume by concatenation you mean direct sum? In that case, you can do something like #include <iostream> #include <Eigen/Dense> int main() { Eigen::MatrixXd *A = new Eigen::MatrixXd(2, 2);...

The last line is indeed doing a copy, so you can use a Map as follow: Map<Matrix4d,Eigen::Aligned> a(alpha.data()); a behaves like a Matrix4d and it is read-write. The Eigen::Aligned flag tells Eigen that the pointer you pass to Map is properly aligned for vectorization. The only difference with a pure...

Does this count as elegant enough? #include <Eigen/Sparse> #include <iostream> using namespace Eigen; using std::cout; using std::endl; int main(int argc, char *argv[]) { Matrix4f m = Matrix4f::Random(); Matrix3f A = Matrix3f::Constant(0.1); Vector4f b = Vector4f::Constant(0.2), c = Vector4f::Constant(0.3); cout << m << endl << endl; cout << A << endl...

Try: params = (params >= 0.0f).template cast<float>(); This usage basically is just an overloading of the template keyword. Here, it's a hint to the compiler that you are trying to call a member template. See this for full details....

If all you want is to access the data of an Eigen::Matrix via a raw C pointer, then you can use the .data() function. Coefficients are stored sequentially in memory in column-major order by default, or row-major if you ask for it: MatrixXd A(10,10); double *A_data = A.data();...


I would not recommend doing this from a code design standpoint, as a linear algebra library is not something you are likely to replace. So encapsulating it will most likely not be beneficial and will make your code more complicated. However if you really want to do this, you would...