Ophion led me the right way. Despite the documentation, one has to pass the parameter of mkl_set_num_threads by reference. Now I have defined two functions, for getting and setting the thread count:

```python
import ctypes

mkl_rt = ctypes.CDLL('libmkl_rt.so')
mkl_get_max_threads = mkl_rt.mkl_get_max_threads

def mkl_set_num_threads(cores):
    # The argument must be passed by reference, despite the documentation.
    mkl_rt.mkl_set_num_threads(ctypes.byref(ctypes.c_int(cores)))

mkl_set_num_threads(4)
print(mkl_get_max_threads())  # says 4...
```

cuda,icc,nsight,intel-mkl,makefile-project

Nsight is based on CDT so this tutorial should work. If you have any CUDA C sources, use "CUDA C" project wizard and select CUDA Toolkit as a toolchain.

Eigen can use MKL under the hood, so you could just use the Eigen interface for your matrices and let Eigen deal with MKL. All you have to do is #define EIGEN_USE_MKL_ALL before you include any Eigen headers.
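The header ordering matters; a minimal fragment of that setup (assuming MKL is installed and on the include path, not compiled here):

```cpp
#define EIGEN_USE_MKL_ALL   // must come before ANY Eigen header
#include <Eigen/Dense>      // Eigen is now configured to route suitable ops to MKL
```

If the define comes after an Eigen include anywhere in the translation unit, Eigen is configured without the MKL backend.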

Put -lm as the last parameter; the order of parameters is important when linking.
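Concretely (hypothetical file name; the linker resolves symbols left to right, so a library must come after the objects that reference it):

```shell
# Fails: -lm is scanned before foo.c's object asks for sqrt(), so the
# symbol stays unresolved:
#   gcc -lm foo.c -o foo
# Works: the library comes last, after the references to it:
#   gcc foo.c -o foo -lm
```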

Here is what I use, and what works for me.

```c
char matdescra[6] = {'g', 'l', 'n', 'c', 'x', 'x'};
/* https://software.intel.com/sites/products/documentation/hpc/mkl/mklman/GUID-34C8DB79-0139-46E0-8B53-99F3BEE7B2D4.htm#TBL2-6
   G: General.  D: Diagonal.  L/U: Lower/Upper triangular (ignored with G)
   N: non-unit diagonal (ignored with G)
   C: zero-based indexing. */
```

Complete Example

Here is a complete example. I first...

To 1: The main reason many people are using Gohlke's MKL-based libraries is - afaik - that there's no free 64-bit Fortran compiler for Windows out there. So using MKL is not primarily based on performance reasons. Check e.g. the comments on this answer: http://stackoverflow.com/a/11200146/2319400 To 2: No you...

c++,distributed-computing,blas,intel-mkl,scalapack

The issue may come from:

```c
MPI_Bcast(&lda, 1, MPI_INT, 0, MPI_COMM_WORLD);
```

Before this line, lda is different on each process if the dimension of the matrix is odd. Two processes handle 2 rows and two processes handle 3 rows. But after the MPI_Bcast(), lda is the same everywhere (3). The...

c,mpi,intel-mkl,mpich,scalapack

This answer is courtesy of Ying from Intel; all the credits go to him! The int in C is supposed to be 32-bit, so you may try LP64 mode.

```shell
mpicc -o test_lp64 ex1.c -I/opt/intel/mkl/include \
  /opt/intel/mkl/lib/intel64/libmkl_scalapack_lp64.a \
  -L/opt/intel/mkl/lib/intel64 \
  -Wl,--start-group \
  /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.a \
  /opt/intel/mkl/lib/intel64/libmkl_core.a \
  /opt/intel/mkl/lib/intel64/libmkl_sequential.a \
  -Wl,--end-group \
  /opt/intel/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.a \
  -lpthread -lm -ldl
```

[[email protected] scalapack]$ mpirun -n 4...

The issue is with the lda. From the reference we get that lda is the size of the first dimension of matrix A. CblasRowMajor and CblasColMajor describe the memory storage sequence of a two-dimensional matrix. The CblasRowMajor storage of a matrix A(nrow,ncol) means that first are stored the ncol...

c,matrix,vector,blas,intel-mkl

The short answer is: yes, you can use dgemm for a rank-1 update. dger is of course suggested, since it is expected to be better optimized for this operation. As for the use of cblas_dgemm: as you know, the definition of leading dimension is: lda: The size of the...

r,performance,compiler-construction,intel-mkl,intel-parallel-studio

Very rough order of magnitude:
- a parallel / multi-core BLAS such as the MKL will scale sublinearly in the number of cores, but only for the parts of your operations that are actually BLAS calls, i.e. not for your basic "for-loops, bootstrap simulation and so on"
- byte-compiling your R code may...

Intel MKL documentation (for mkl_csrcoo) states: Converts a sparse matrix in the CSR format to the coordinate format and vice versa. And according to the above link you should set job: if job(1)=1, the matrix in the coordinate format is converted to the CSR format....

c,segmentation-fault,malloc,valgrind,intel-mkl

The documentation suggests including mkl.h. So change:

```c
#include "mkl_types.h"
#include "mkl_spblas.h"
```

to

```c
#include "mkl.h"
```

c++,matlab,linear-algebra,eigen,intel-mkl

There is no problem here with Eigen. In fact, for the second example run, Matlab and Eigen produced the very same result. Please remember from basic linear algebra that eigenvectors are determined only up to an arbitrary scaling factor. (I.e. if v is an eigenvector, the same holds for alpha*v, where...

On 64-bit platforms Eigen uses 64-bit integers to encode the dimensions of its matrices. The MKL wrapper uses 32-bit integers, which might overflow if your matrix size exceeds about 2 billion rows or columns.

python,numpy,scipy,warnings,intel-mkl

If a function decides to print an error message directly to stdout/stderr without using the normal Python error reporting mechanism (i.e. exception handling and warnings), there's little you can do to stop it from doing so. If it really annoys you, you can apparently suppress writing to stderr altogether. There...

c++,linux,makefile,linker,intel-mkl

"... but I want to know which is the correct way." The correct way is to keep to make's standard variable names, in particular CXXFLAGS and LDFLAGS. You don't want to specify libraries in the compiler flags like you try to do here (specifying objects, actually): FLAGS = ......
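A sketch of that convention (hypothetical target name; MKLROOT and the exact library list depend on your installation):

```make
CXX      = g++
CXXFLAGS = -O2 -Wall -I$(MKLROOT)/include   # compiler flags only
LDFLAGS  = -L$(MKLROOT)/lib/intel64         # linker search paths
LDLIBS   = -lmkl_rt -lpthread -lm -ldl      # libraries belong in LDLIBS

myprog: main.o
	$(CXX) $(LDFLAGS) -o $@ $^ $(LDLIBS)
```

Keeping libraries in LDLIBS also preserves the link order: they end up after the object files that reference them.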

c++,c,3d,convolution,intel-mkl

You can't reverse the FFT with real-valued frequency data (just the magnitude). A forward FFT needs to output complex data. This is done by setting the DFTI_FORWARD_DOMAIN setting to DFTI_COMPLEX:

```c
DftiCreateDescriptor(&fft_desc, DFTI_DOUBLE, DFTI_COMPLEX, 3, sizes);
```

Doing this implicitly sets the backward domain to complex too. You will also...

Well, they will still be functional (packages don't know the status of your license), but you'll be breaking the license by using them. If you mean to remove all MKL libraries from your PC when you stop complying with the license, then the packages would stop working (or parts of them would).

lapack,hpc,scientific-computing,intel-mkl

The function LAPACKE_dptsv() corresponds to the LAPACK function dptsv(), which does not feature the switch between LAPACK_ROW_MAJOR and LAPACK_COL_MAJOR. dptsv() is implemented for column-major ordering, corresponding to matrices in Fortran, while most C matrices are row-major. So LAPACKE_dptsv(LAPACK_ROW_MAJOR, ...) performs the following steps: transpose the right-hand side b, call...