machine-learning,artificial-intelligence,neural-network,convolution

After the last convolutional layer, you have N feature maps, each of resolution WxH. If you concatenate all the values, this can be seen as a feature vector X of size N*W*H. That is how you connect it to an MLP: X acts as the input of a linear transformation...
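A minimal NumPy sketch of that flattening step (the sizes N, W, H and the random weights here are made up for illustration):

```python
import numpy as np

# hypothetical sizes: N=8 feature maps of resolution W=4 x H=4
N, W, H = 8, 4, 4
feature_maps = np.random.rand(N, W, H)

# concatenate all values into a single feature vector X of length N*W*H
X = feature_maps.reshape(-1)                # shape: (128,)

# a fully connected (linear) layer is then just a matrix-vector product
n_hidden = 10
Wmat = np.random.rand(n_hidden, N * W * H)  # first MLP layer's weight matrix
b = np.random.rand(n_hidden)
hidden = Wmat @ X + b                       # shape: (10,)
```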

It seems like Eigen doesn't implement any of this functionality itself. In general, it looks like the best you can do is to replace NaN values with something else via select. For example, the following replaces elements of x less than 3 with 0: x = (x.array() < 3).select(0, x);...
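For comparison, the same masked-replacement pattern in NumPy (not part of the Eigen answer, just an analogue with made-up data) uses np.where:

```python
import numpy as np

x = np.array([1.0, np.nan, 2.5, np.nan])
# replace NaN entries with 0, analogous to Eigen's (mask).select(value, x)
x = np.where(np.isnan(x), 0.0, x)
```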

matlab,sum,continuous-integration,convolution,fourier-descriptors

That is because conv is only defined for numeric inputs. If you want to find the convolution symbolically, you'll have to input the equation yourself and evaluate it with symbolic integration. Recall that the convolution integral is defined as F(x) = integral from -inf to +inf of h(tau) * x1(x - tau) dtau (source: Wikipedia). Therefore, you would do this: syms x tau; F = int(h(tau)*x1(x-tau),'tau',-inf,+inf);...

matlab,image-processing,convolution

You could try a hard cap. Either save the locations of the white points before the convolution or find the location of all points > 1 and set them to 1 like this: B(B>1) = 1 ...
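The NumPy equivalent of that hard cap, with made-up values for illustration:

```python
import numpy as np

B = np.array([0.2, 1.4, 0.9, 3.0])
B[B > 1] = 1          # direct NumPy equivalent of MATLAB's B(B>1) = 1
# or equivalently: B = np.clip(B, None, 1)
```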

You are declaring img to be a 2D array of integers, that is to say every entry in the array is 4 bytes. You are also reading the file 4 bytes at a time using sizeof(int).
int img[256][256]; // create an array to store image data
for(int i=0; i<N; i++)...

python,numpy,matplotlib,signal-processing,convolution

When you convolve discrete signals, you need to scale appropriately to keep the signal's energy (integral over |x(t)|²) constant:
import numpy as np
import matplotlib.pyplot as plt
n = 1000
t = np.linspace(0, 8, n)
T = t[1] - t[0]  # sampling width
x1 = np.where(t<=4, 1, 0)  # build...
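A self-contained sketch of the scaling idea (the pulse widths and grid are assumptions for illustration, not taken from the truncated snippet): scaling np.convolve by the sampling width T approximates the continuous convolution integral.

```python
import numpy as np

n = 1000
t = np.linspace(0, 8, n)
T = t[1] - t[0]                      # sampling width
x1 = np.where(t <= 4, 1.0, 0.0)     # unit rectangular pulse on [0, 4]
x2 = np.where(t <= 4, 1.0, 0.0)

# scale the discrete convolution by T so it approximates the
# continuous convolution integral
y = np.convolve(x1, x2) * T          # length 2n-1

# the convolution of two unit rectangles of width 4 is a triangle
# whose peak value is 4; y.max() is ~4
```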

python,numpy,vectorization,convolution,probability-density

You can compute the convolution of all your PDFs efficiently using fast Fourier transforms (FFTs): the key fact is that the FFT of the convolution is the product of the FFTs of the individual probability density functions. So transform each PDF, multiply the transformed PDFs together, and then perform the...
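A rough sketch of that transform-multiply-invert recipe (the PDFs here are random made-up data on a common grid; note the padding so the product corresponds to a linear, not circular, convolution):

```python
import numpy as np

# hypothetical setup: k PDFs sampled on a common grid of n points
rng = np.random.default_rng(0)
n, k = 64, 4
pdfs = rng.random((k, n))
pdfs /= pdfs.sum(axis=1, keepdims=True)   # normalize each PDF

# pad so the full (linear, not circular) convolution fits
L = k * (n - 1) + 1
F = np.fft.rfft(pdfs, L, axis=1)          # FFT of each PDF
conv = np.fft.irfft(F.prod(axis=0), L)    # product of FFTs -> inverse FFT

# check against repeated direct convolution
direct = pdfs[0]
for p in pdfs[1:]:
    direct = np.convolve(direct, p)
assert np.allclose(conv, direct)
```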

python,numpy,matrix,convolution

Yes, that function computes the convolution correctly. You can check this using scipy.signal.convolve2d:
import numpy as np
from scipy.signal import convolve2d
kernel = np.array([(1, 1, -1), (1, 0, -1), (1, -1, -1)])
file = np.ones((5,5))
x = convolve2d(file, kernel)
print(x)
Which gives: [[ 1. 2. 1. 1. 1. 0....

c++,wolfram-mathematica,gaussian,convolution,fftw

Using an FFT to do convolutions is only efficient when you have very large convolution kernels. In most blurring applications the kernel is much smaller than the image, e.g. 3x3, so an FFT would be significantly slower. There are many implementations for doing small-kernel convolutions. Most modern hardware supports such intrinsic...
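To be clear, the two approaches differ only in speed, not in the result. A quick SciPy check (sizes are made up for illustration):

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
image = rng.random((128, 128))
kernel = rng.random((3, 3))          # small, blur-sized kernel

direct = convolve2d(image, kernel)   # direct sliding-window convolution
viafft = fftconvolve(image, kernel)  # FFT-based convolution

# both give the same output; for a 3x3 kernel the direct
# method is typically the faster of the two
assert np.allclose(direct, viafft)
```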

c++,c,3d,convolution,intel-mkl

You can't reverse the FFT with real-valued frequency data (just the magnitude). A forward FFT needs to output complex data. This is done by setting the DFTI_FORWARD_DOMAIN setting to DFTI_COMPLEX. DftiCreateDescriptor( &fft_desc, DFTI_DOUBLE, DFTI_COMPLEX, 3, sizes ); Doing this implicitly sets the backward domain to complex too. You will also...
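The underlying point - magnitude alone is not enough to invert a transform - can be illustrated with NumPy instead of MKL (this is an analogue, not the MKL API; the 3D array is made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 4, 4))            # real 3D input

X = np.fft.fftn(x)                   # complex frequency-domain data
back = np.fft.ifftn(X).real          # the full complex spectrum inverts cleanly
assert np.allclose(back, x)

# keeping only the magnitude discards the phase, so the inverse
# transform no longer recovers the input
broken = np.fft.ifftn(np.abs(X)).real
assert not np.allclose(broken, x)
```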

matlab,math,image-processing,signal-processing,convolution

I believe convolution is usually performed by "flipping" the kernel (left-right, up-down) and then sliding it across the matrix to perform a sum of multiplications. In other words, what MATLAB is actually computing is:
a = matrix(1,1) * kernel(3);
a = a + matrix(1,2) * kernel(2);
a = a +...
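That flip is exactly the difference between convolution and correlation, which is easy to verify in SciPy (random made-up data for illustration):

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(0)
matrix = rng.random((5, 5))
kernel = rng.random((3, 3))

# convolution flips the kernel left-right and up-down;
# correlation slides it as-is
conv = convolve2d(matrix, kernel)
corr = correlate2d(matrix, kernel[::-1, ::-1])
assert np.allclose(conv, corr)
```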

kernel<<<grid, 1>>> This is a significant issue; threads on NVIDIA GPUs work in warps of 32 threads. However, you've only assigned a single thread to each block, which means 31 of those threads will sit idle while a single thread does work. And usually, for kernels where you have the...

image-processing,computer-vision,convolution,edge-detection

Based on the information given in the paper "Vehicle Detection Method using Haar-like Feature on Real Time System", I can't tell how the group did it exactly. However, I can suggest a way it could be implemented. The main difference between a Haar-like feature and a convolution...

Caffe supports resuming as explained here: We all experience times when the power goes out [...] Since we are snapshotting intermediate results during training, we will be able to resume from snapshots. This is available via the --snapshot option of the main caffe command-line tool, e.g: ./build/tools/caffe train [...] --snapshot=caffenet_train_10000.solverstate...

The following code implements only a part of what I can see in the description. It generates the noise processes and does what is described in the first part. The autocorrelation is not calculated with the filter coefficients but with the actual signal.
% generate noise process y
y = ...
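A NumPy analogue of estimating the autocorrelation from the signal itself rather than from the filter coefficients (a sketch with made-up white noise; for white noise the estimate peaks at lag 0 at roughly the variance and is near zero elsewhere):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(10000)          # hypothetical noise process

# biased autocorrelation estimate computed from the actual signal
r = np.correlate(y, y, 'full') / len(y)
lags = np.arange(-len(y) + 1, len(y))

# unit-variance white noise: peak at lag 0, value ~1
assert np.argmax(r) == len(y) - 1
```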

convn will work with an n-dimensional matrix and a 2-dimensional filter. Simply: A = ones(5,5,5); B = convn(A, ones(2), 'same'); ...

neural-network,convolution,theano,conv-neural-network

I deduce from this that you intend to have tied weights, i.e. if the first operation were a matrix multiplication with W, then the output would be generated with W.T, the adjoint matrix. In your case you would thus be looking for the adjoint of the convolution operator followed by...
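In 1D the adjoint of a 'valid' convolution is a 'full' convolution with the flipped kernel, which can be checked via the adjoint identity <Ax, y> = <x, A^T y> (a NumPy sketch with made-up sizes, not Theano code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5
x = rng.standard_normal(n)
k = rng.standard_normal(m)
y = rng.standard_normal(n - m + 1)

# forward operator: 'valid' convolution with kernel k
Ax = np.convolve(x, k, 'valid')

# its adjoint: 'full' convolution with the flipped kernel
Aty = np.convolve(y, k[::-1], 'full')

# adjoint identity <Ax, y> == <x, A^T y>
assert np.allclose(Ax @ y, x @ Aty)
```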

performance,matlab,convolution

For each iteration, you can replace the elementwise multiplication and double summations with a fast matrix multiplication. That is -
z(i,j) = sum(sum(padded_x(i:i+My-1,j:j+Ny-1).*y));
would be replaced by -
M = padded_x(i:i+My-1,j:j+Ny-1);
z(i,j) = M(:).'*y(:);
Thus, the loopy portion of the original code could be replaced by -
z = zeros(Mx+My-1,Nx+Ny-1);...
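The identity behind that trick - an elementwise product plus double sum equals a dot product of the flattened arrays - in NumPy terms (made-up data for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((4, 4))
y = rng.random((4, 4))

# elementwise multiply + double sum ...
slow = (M * y).sum()
# ... is the same as a dot product on the flattened arrays
fast = M.ravel() @ y.ravel()
assert np.isclose(slow, fast)
```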

I just got the answer in the MATLAB forums: http://www.mathworks.com/matlabcentral/answers/169713-combine-convolution-filters-bandpass-into-a-single-kernel The gist is that you have to use padding to fill in both sides of the shorter filter, and then you can just combine the vectors. Convolution is a linear operation, so yes, you can combine the two filtering operations...
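A sketch of that pad-then-combine idea in NumPy (the filter lengths are assumptions for illustration; padding the shorter filter equally on both sides keeps the two filters centered on the same sample, so linearity makes summing the kernels equivalent to summing the filtered outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
h1 = rng.standard_normal(7)    # longer filter
h2 = rng.standard_normal(3)    # shorter filter

# zero-pad the shorter filter on both sides so the two share a center
h2p = np.pad(h2, (2, 2))

# because convolution is linear, summing the kernels first gives the
# same output as filtering twice and summing
combined = np.convolve(x, h1 + h2p, 'same')
separate = np.convolve(x, h1, 'same') + np.convolve(x, h2, 'same')
assert np.allclose(combined, separate)
```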

python,performance,cython,convolution

For one thing, the typing of M, eg and convolution doesn't allow fast indexing. The typing you've done is not particularly helpful at all, actually. But it doesn't matter, because you have two overheads. The first is converting between Cython and Python types. You should keep untyped arrays around if...

matlab,signal-processing,fft,convolution

That's quite simple: it's a convolution on a 2D signal that is only applied along one dimension. If we assume that the variable k is used to access the rows and t is used to access the columns, you can consider each row of H and S as separate signals where...
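A SciPy analogue of that row-wise 1D convolution (sizes are made up): giving convolve2d a 1xN kernel makes it slide along one dimension only, which matches convolving each row independently.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
H = rng.random((4, 16))          # each row is a separate 1D signal
k = rng.random(5)                # 1D kernel

# a 1xN kernel makes convolve2d operate along the rows only
rowwise = convolve2d(H, k[np.newaxis, :])

# same as convolving each row independently
expected = np.array([np.convolve(row, k) for row in H])
assert np.allclose(rowwise, expected)
```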

computer-vision,artificial-intelligence,neural-network,convolution

How is the convolution operation carried out when multiple channels are present at the input layer? (e.g. RGB) In such a case you have one 2D kernel per input channel (a.k.a. plane). So you perform each convolution (2D input, 2D kernel) separately and you sum the contributions, which gives...
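A minimal NumPy/SciPy sketch of that per-channel convolve-then-sum (the channel count and sizes are made up; a real conv layer would also repeat this once per output feature map):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
rgb = rng.random((3, 8, 8))          # 3 input channels (e.g. R, G, B)
kernels = rng.random((3, 3, 3))      # one 2D kernel per input channel

# convolve each channel with its own kernel, then sum the contributions
out = sum(convolve2d(rgb[c], kernels[c], 'valid') for c in range(3))
# out is a single 2D feature map (here 6x6)
assert out.shape == (6, 6)
```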

python,numpy,convolution,cross-correlation

Remember that Python indexing starts at zero rather than 1. You want index 19999 rather than 20000:
x = np.random.randn(20000)
y = np.random.randn(20000)
np.correlate(x, y, 'valid')[0]     # -29.778322045152521
np.correlate(x, y, 'full')[19999]  # -29.778322045152521

OpenCV's filter2D() meets your requirement. Pay attention to the int ddepth parameter when you apply floating-point kernels to a uchar image.

matlab,signal-processing,convolution

You almost have it correct. There are two things slightly wrong with your understanding: You chose 'valid' as the convolution flag. This means that the output returned from the convolution is sized so that when you are using the kernel to sweep over the matrix, it has to fit...
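The size implications of the flags are easy to check in 1D with NumPy (made-up lengths for illustration):

```python
import numpy as np

x = np.arange(10.0)       # signal of length 10
k = np.ones(3)            # kernel of length 3

# 'valid' keeps only positions where the kernel fully overlaps the
# signal, so the output has length 10 - 3 + 1 = 8
assert len(np.convolve(x, k, 'valid')) == 8
# 'full' includes every partial overlap: length 10 + 3 - 1 = 12
assert len(np.convolve(x, k, 'full')) == 12
```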

While I believe there's no conv1d in Theano, Lasagne (a neural network library on top of Theano) has several implementations of a Conv1D layer. Some are based on Theano's conv2d function with one of the dimensions equal to 1, some use single or multiple dot products. I would try all...

performance,matlab,image-processing,filtering,convolution

This is a function that does zero-padding for points around the boundaries and achieves the "selective convolution" -
function out = selected_conv(I,pts,h)
%// Parameters
hsz = size(h);
bxr = (hsz-1)/2;
Isz = size(I);
%// Get padding lengths across all 4 sides
low_padlens = max(bsxfun(@minus,bxr+1,pts),[],1);
low_padlens = (low_padlens + abs(low_padlens))./2;...

To do this, you need to create a Gaussian that's discretized at the same spatial scale as your curve, then just convolve. Specifically, say your original curve has N points that are uniformly spaced along the x-axis (where N will generally be somewhere between 50 and 10,000 or so). Then...
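A NumPy sketch of that recipe (the curve, sigma, and grid spacing here are assumptions for illustration): build the Gaussian on the same x-grid as the curve, normalize it so the smoothing preserves the mean, then convolve.

```python
import numpy as np

# hypothetical curve sampled at N uniformly spaced x-values
N = 500
x = np.linspace(0, 10, N)
y = np.sin(x) + 0.1 * np.random.default_rng(0).standard_normal(N)

# build a Gaussian discretized at the same spatial scale as the curve
sigma = 0.2                              # in x-units
dx = x[1] - x[0]
gx = np.arange(-4 * sigma, 4 * sigma + dx, dx)
g = np.exp(-gx**2 / (2 * sigma**2))
g /= g.sum()                             # normalize so the mean is preserved

smoothed = np.convolve(y, g, 'same')
assert smoothed.shape == y.shape
```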

matlab,image-processing,matrix,convolution

Don't use conv2. That is designed for general 2D signals. However, given your description, you have 15 x 15 image patches stored in a 3D matrix, as well as a 15 x 15 filter. imfilter is specifically designed to filter images, but this won't help you either because only full...

numpy,sample,convolution,smoothing

If you want the output of the convolution to be the same size as the input Kp1, you could do the convolution using the 'same' option: Kp1smo = np.convolve(Kp1, np.ones(5)/5, 'same') According to the documentation for numpy.convolve, this will return a result of size max(M,N), where M and N are the size of the...

python,matlab,scipy,convolution,moving-average

Solved. It turned out that the issue was really to do with the subtle differences between MATLAB's conv2 and SciPy's convolve2d. From the docs: C = conv2(h1,h2,A) first convolves each column of A with the vector h1 and then convolves each row of the result with the vector h2. This...
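The conv2(h1,h2,A) behavior can be reproduced in SciPy with two passes of convolve2d, one with h1 as a column vector and one with h2 as a row vector, which is the same as a single 2D convolution with their outer product (a sketch with made-up data):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
A = rng.random((6, 6))
h1 = rng.random(3)   # column filter
h2 = rng.random(3)   # row filter

# columns with h1, then rows with h2 (MATLAB's conv2(h1,h2,A)) ...
step = convolve2d(A, h1[:, np.newaxis])    # h1 as a column vector
sep = convolve2d(step, h2[np.newaxis, :])  # h2 as a row vector

# ... is equivalent to one 2D convolution with the outer product
assert np.allclose(sep, convolve2d(A, np.outer(h1, h2)))
```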

matlab,image-processing,convolution

See if this works for you -
h = fspecial('gaussian',20,4);
H = convmtx2(h,size(I));
I_conv = reshape(H*I(:),size(h)+size(I)-1);
s1 = round(size(h,1)/2);
blurred = I_conv(s1+1:s1+size(I,1),s1+1:s1+size(I,2));
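The idea of convmtx2 - expressing convolution as a matrix multiplication - has a 1D SciPy counterpart, scipy.linalg.convolution_matrix (SciPy >= 1.5; the filter and signal here are made-up data):

```python
import numpy as np
from scipy.linalg import convolution_matrix  # SciPy >= 1.5

rng = np.random.default_rng(0)
h = rng.random(4)            # filter
x = rng.random(10)           # signal

# build the matrix whose product with x performs the convolution,
# the 1D counterpart of MATLAB's convmtx2
A = convolution_matrix(h, len(x), mode='full')
assert np.allclose(A @ x, np.convolve(h, x, 'full'))
```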