matlab,image-processing,gaussian,feature-detection

You actually need to apply a gaussian filter with 2 different sets of parameters, then subtract the filters and perform a convolution of the input image with that new filter, i.e. the difference of gaussians. Here is an example with the coins.png demo image...The code is commented; don't hesitate to...
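In Python, the same idea can be sketched with scipy.ndimage (my translation of the approach, not the original MATLAB code): by linearity, subtracting two Gaussian-blurred copies of the image is the same as convolving the image with the difference-of-Gaussians kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Build a difference-of-Gaussians (DoG) kernel by filtering a unit impulse
# with two different sigmas and subtracting the results.
size = 21
impulse = np.zeros((size, size))
impulse[size // 2, size // 2] = 1.0
dog_kernel = gaussian_filter(impulse, sigma=1.0) - gaussian_filter(impulse, sigma=3.0)

# By linearity, convolving an image with dog_kernel equals subtracting
# two Gaussian-blurred copies of the image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
dog_image = gaussian_filter(img, sigma=1.0) - gaussian_filter(img, sigma=3.0)
```

The DoG kernel sums to roughly zero, so flat regions map to roughly zero while blob-like features of the right scale produce strong responses.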

The fit actually works perfectly - I get mu = 646.6 and std = 207.07, which are exactly the mean and standard deviation of your y values. I think you're just confused about what you're actually plotting. norm.pdf evaluates the probability density function of the Gaussian distribution. That...
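A quick way to see this (a sketch with numpy/scipy on synthetic data, not the asker's values): for a Gaussian, norm.fit simply returns the sample mean and the (ddof=0) standard deviation of whatever you pass in.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.normal(loc=646.6, scale=207.0, size=500)

# norm.fit returns the MLE of (mu, sigma), which for a Gaussian is just
# the sample mean and the population standard deviation of the input.
mu, std = norm.fit(y)

# norm.pdf(x, mu, std) then evaluates the density -- it is NOT a rescaled
# histogram, which is the usual source of the confusion.
```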

Let's use a struct to store the meta-parameters action.awake_in_bed = [1 5*60 1*60]; action.out_of_bad = [3 30 10]; action.out_of_bedroom = [2 2*60 15]; ACTIVITY = {'awake_in_bed','out_of_bad','out_of_bedroom'}; After these pre-definitions, we can sample an activity vector ACTIVITY_WAKE = cell(1,numel(ACTIVITY)); for ii = 1:numel( ACTIVITY ) %// foreach activity cp = action.(ACTIVITY{ii});...

You just need to make sure that the result of rand() isn't 0; checking the int means you won't have to redo the conversion to double and the division on each retry: int r = rand(); while (r == 0) r = rand(); u1 = (double) r / RAND_MAX; An even simpler solution...
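The answer above is C; the same guard can be sketched in Python (gaussian_pair is a hypothetical helper name, not from the question). random.random() returns values in [0, 1), so 0 is possible and would make log(u1) blow up in a Box-Muller transform.

```python
import math
import random

def gaussian_pair(rng=random):
    """Box-Muller transform; resample u1 until it is non-zero so log(u1) is finite."""
    u1 = rng.random()
    while u1 == 0.0:          # random() can return exactly 0.0
        u1 = rng.random()
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

random.seed(42)
z1, z2 = gaussian_pair()
```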

c++,matrix,segmentation-fault,gaussian

The problem is that in C/C++ the first element of an array should have index 0, so your for(int i=1; i<=order; i++) should be for(int i=0; i<order; i++) in the gaussElimination function....

So we have to distinguish between mean values before and after truncation. You apparently want to control the observable mean that truncated samples would presumably converge to, whereas rnorm() (and probably rtruncnorm(), which I do not know) expects the "before" mean; some statisticians at stats.stackexchange.com might provide you...

r,histogram,curve-fitting,gaussian

lines(density(rnorm(1000,mean=mean(f),sd = sd(f))),col=1,lwd=3)

If you consult the article on Wikipedia about the general elliptical version of the Gaussian 2D PDF, it doesn't look like you're rotating it properly. In general, the equation is f(x,y) = A*exp(-(a*(x-x0)^2 + 2*b*(x-x0)*(y-y0) + c*(y-y0)^2)) (source: Wikipedia), where a = cos(theta)^2/(2*sigma_x^2) + sin(theta)^2/(2*sigma_y^2), b = -sin(2*theta)/(4*sigma_x^2) + sin(2*theta)/(4*sigma_y^2), and c = sin(theta)^2/(2*sigma_x^2) + cos(theta)^2/(2*sigma_y^2). Usually, A = 1 and we'll adopt that here. The angle theta will rotate the PDF counter-clockwise,...

matlab,image-processing,3d,signal-processing,gaussian

The logic behind findpeaks for 1D arrays / vectors is that it looks at local 3 element neighbourhoods and sees whether the centre of the window is the maximum element. If it is, this could potentially be a peak. You would also need to apply a threshold to ensure that...
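The answer describes MATLAB's findpeaks; the same 3-element local-maximum logic plus a height threshold can be sketched with scipy's find_peaks (my analogue, not the poster's code):

```python
import numpy as np
from scipy.signal import find_peaks

y = np.array([0.0, 3.0, 0.0, 1.0, 0.0, 2.5, 0.0])

# Every sample greater than both of its neighbours is a candidate peak...
all_peaks, _ = find_peaks(y)

# ...and a height threshold rejects small local maxima caused by noise.
big_peaks, _ = find_peaks(y, height=2.0)
```

Here all_peaks finds indices 1, 3 and 5, while the threshold keeps only indices 1 and 5.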

python,numpy,integration,gaussian

As Will says you're getting confused between arrays and functions. You need to define the function you want to integrate separately and pass it into gauss. E.g. def my_f(x): return 2*x**2 - 3*x + 15 gauss(my_f,2,1,-1) You also don't need to loop as numpy arrays are vectorized objects. def gauss1(f,n): [x,w]...
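A minimal loop-free sketch of the idea (the poster's gauss signature is cut off, so the argument order here is my assumption), using numpy's Gauss-Legendre nodes and weights:

```python
import numpy as np

def my_f(x):
    # vectorized: works on scalars and numpy arrays alike
    return 2 * x**2 - 3 * x + 15

def gauss(f, n, a, b):
    """n-point Gauss-Legendre quadrature of f over [a, b] -- no Python loop needed."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

result = gauss(my_f, 2, -1, 1)   # exact for polynomials of degree <= 2n-1
```

For this quadratic, the 2-point rule is exact: the integral over [-1, 1] is 94/3.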

If you want a dynamic blurred effect you can use the UIVisualEffectView with a UIBlurEffect. Documentation: https://developer.apple.com/library/prerelease/ios/documentation/UIKit/Reference/UIBlurEffect_Ref/index.html ...

java,random,distribution,nan,gaussian

In your code dRandom1 can be negative, while real logarithms only take arguments from (0, +inf)

Answer found in previous post - works great. f<-function(x, theta) { m<-theta[1]; s<-theta[2]; a<-theta[3]; b<-theta[4]; a*exp(-0.5*((x-m)/s)^2) + b } fit<-nls(y~f(x,c(m,s,a,b)), data.frame(x,y), start=list(m=12, s=5, a=12, b=-2)) ...

c++,wolfram-mathematica,gaussian,convolution,fftw

Using FFT to do convolutions is only efficient when you have very large convolution kernels. In most blurring applications the kernel is much much smaller than the image, e.g. 3x3, so FFT would be significantly slower. There are many implementations for doing small-kernel convolutions. Most modern hardware supports such intrinsic...
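To see that the two approaches compute the same convolution (and that the choice only affects speed), here is a small scipy sketch; the function names are scipy's, not the poster's code:

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
image = rng.random((128, 128))
kernel = rng.random((3, 3))          # small blur-sized kernel

# Same mathematical convolution, two algorithms: for a 3x3 kernel the
# direct method does ~9 multiplies per pixel, while FFT pays for three
# full-image transforms regardless of kernel size.
direct = convolve2d(image, kernel, mode='same')
viafft = fftconvolve(image, kernel, mode='same')
```

The results agree to floating-point precision; FFT only starts winning once the kernel grows to a substantial fraction of the image.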

swift,gaussian,blurry,uiblureffect,uivisualeffectview

Since there is no other parameter in UIBlurEffect, I think the only way is to use the CIFilter preset CIGaussianBlur to blur the background view and use its key inputRadius to adjust the level. If you want to achieve the same effect as the so-called Light/Dark/ExtraLight styles, you can compose...

You only need to model one dimension of the data with a 1D gaussian distribution in this case. If you have two-dimensional data {(x1,x2)_i} whose covariance matrix is singular, this means that the data lies along a straight line. The {x2} data is a deterministic function of the {x1} data,...

Dr Vanderplas has written a blog post detailing how to do this with three separate libraries: Kernel Density Estimation in Python: Scipy, Statsmodels, and scikit-learn. Should be a good start....

python,numpy,scipy,curve-fitting,gaussian

You have to pass an initial guess via the p0 argument, otherwise the fit starts with [1,1,1] as initial guess, which is pretty poor for your dataset! The following gives reasonable results for me: popt, pcov = curve_fit(func, xk, Kp4, p0=[20,630,5]) The initial guess could be [np.mean(Kp4), np.mean(xk),5*(max(xk)-min(xk))/len(xk)], to have a general...

The obvious thing to do is remove the NaNs from data. Doing so, however, also requires that the corresponding positions in the 2D X, Y location arrays also be removed: X, Y = np.indices(data.shape) mask = ~np.isnan(data) x = X[mask] y = Y[mask] data = data[mask] Now you can use...

A Gaussian distribution isn't bounded, but you can make it unlikely that you will sample outside your range. For example, you can sample numbers with a mean of 400 and a standard deviation of 200/3, meaning being outside the range [200, 600] will be outside of 3 standard deviations. mean...
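A sketch of that 3-standard-deviation idea in Python, with an added rejection step to make the bounds strict (the rejection loop is my addition; the truncated answer only describes the mean/sd choice):

```python
import random

random.seed(0)
mean, sd = 400.0, 200.0 / 3.0   # 3 standard deviations span [200, 600]

def bounded_gauss(lo=200.0, hi=600.0):
    # ~99.7% of draws already land inside [lo, hi]; redraw the rare outliers
    while True:
        v = random.gauss(mean, sd)
        if lo <= v <= hi:
            return v

samples = [bounded_gauss() for _ in range(1000)]
```

Note that rejecting the tails slightly changes the distribution, but with bounds at 3 sigma the effect is negligible.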

As I understood from the trace, you are only allowed to use new Size(x,y) where x & y are odd

python,scikit-learn,gaussian,normal-distribution

Try looking into pypr. From the documentation, here is how you would find a GMM conditioned on one or more of the variables: # Now we will find the conditional distribution of x given y (con_cen, con_cov, new_p_k) = gmm.cond_dist(np.array([np.nan, y]), \ cen_lst, cov_lst, p_k) As far as I remember,...

Use this implementation and all will be happy: public class GInt { private int real; private int imag; public GInt(int r) { imag=0; real=r; } public GInt(int r, int i) { real = r; imag = i; } GInt add(GInt rhs) { GInt added; int nReal = this.real + rhs.real;...

python,numpy,scipy,gaussian,smooth

This will blow up for very large datasets, but the proper calculation you are asking for would be done as follows: import numpy as np import matplotlib.pyplot as plt np.random.seed(0) # for repeatability x = np.random.rand(30) x.sort() y = np.random.rand(30) x_eval = np.linspace(0, 1, 11) sigma = 0.1 delta_x =...
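The snippet cuts off at delta_x; the rest of the calculation, as I understand it, weights every data point by a Gaussian of its distance to each evaluation point (a reconstruction under that assumption, not the original answer's exact code):

```python
import numpy as np

np.random.seed(0)  # for repeatability
x = np.sort(np.random.rand(30))
y = np.random.rand(30)

x_eval = np.linspace(0, 1, 11)
sigma = 0.1

# Pairwise distances between evaluation points (rows) and data points (cols)
delta_x = x_eval[:, None] - x[None, :]
weights = np.exp(-delta_x**2 / (2 * sigma**2))
weights /= weights.sum(axis=1, keepdims=True)   # normalise each row

y_eval = weights @ y   # Gaussian-weighted average of y at each x_eval
```

The (len(x_eval), len(x)) weight matrix is exactly what "blows up for very large datasets".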

excel,distribution,gaussian,normal-distribution

The results from NORM.DIST are correct... if you directly implement the Gaussian function in your sheet using: =(1/($F$8*SQRT(2*PI())))*EXP(-((M3-$F$7)^2)/(2*$F$8^2)) which is an implementation of the standard Gaussian function, e.g. f(x) on: http://mathworld.wolfram.com/GaussianFunction.html then the results exactly match Excel's NORM.DIST built-in function. When you say the values "should be" in...

android,ios,opencv,blur,gaussian

By the looks of things, you have to convert it to a cv::Mat, then you can use the normal Gaussian blur C++ method and then convert it back to ULLImage. The above link demonstrates how to convert from and to the two image types. Once you have converted it to...

python,python-2.7,optimization,gaussian

Your callback is probably being called by curve_fit with a different number of parameters. Have a look at the documentation, where it says: The model function, f(x, ...). It must take the independent variable as the first argument and the parameters to fit as separate remaining arguments. To make sure this...

I assume uintMatrix is a two-dimensional array of 32-bit ints, and that you've packed the red, green, and blue channels into that. If so, that's your problem. You need to blur each channel independently....

math,image-processing,graphics,gaussian,imaging

One approach is to create an image with a width and height equal to the next 2^m+1,2^n+1, but instead of up-sampling the image to fill the expanded dimensions, just place it in the top-left corner and fill the empty space to the right and below with a constant value (the...

r,gaussian,normal-distribution

With qnorm: qnorm(.025) # [1] -1.959964 qnorm(.5) # [1] 0 qnorm(.975) # [1] 1.959964 ...

python,scikit-learn,gaussian,text-classification

You can do the following: class DenseTransformer(TransformerMixin): def transform(self, X, y=None, **fit_params): return X.todense() def fit_transform(self, X, y=None, **fit_params): self.fit(X, y, **fit_params) return self.transform(X) def fit(self, X, y=None, **fit_params): return self classifier = Pipeline([ ('vectorizer', CountVectorizer ()), ('TFIDF', TfidfTransformer ()), ('to_dense', DenseTransformer()), ('clf', OneVsRestClassifier (GaussianNB()))]) classifier.fit(X_train,Y) predicted = classifier.predict(X_test) Now,...

python,scikit-learn,gaussian,naivebayes

Yes, you will need to convert the strings to numerical values. The naive Bayes classifier cannot handle strings, as there is no way a string can enter a mathematical equation. If your strings have some "scalar value", for example "large, medium, small", you might want to classify...
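For the "large, medium, small" case, an ordinal mapping preserves the order; a plain-Python sketch (sklearn's LabelEncoder/OrdinalEncoder do the same job):

```python
# Ordinal features ("small" < "medium" < "large") keep their order as numbers:
size_map = {"small": 0, "medium": 1, "large": 2}
sizes = ["large", "small", "medium", "small"]
encoded = [size_map[s] for s in sizes]   # [2, 0, 1, 0]

# For unordered categories, one-hot encoding avoids implying a false order:
categories = sorted(set(sizes))          # ['large', 'medium', 'small']
one_hot = [[int(s == c) for c in categories] for s in sizes]
```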

You're experiencing the classical problem of supplying an incorrect guess to the curve fitting algorithm. That is entirely due to your unnecessary upside down flipping of the matrix T and then not taking into account the new locations of the gaussians (the parameter called center, passed to gaussian() - I...

In that case you should use nextDouble(). The Gaussian distribution is a distribution that ranges over the entire collection of double values (mathematically speaking, from minus infinity to positive infinity) with a peak around zero. The Gaussian distribution is thus not uniform. The nextDouble() method draws numbers uniformly between 0...

matlab,opencv,image-processing,javacv,gaussian

There are 2 reasons. The first one is purely mathematical. Say you have a row of 3 numbers (pixels). How many possible cumulative sums does it generate? The answer is 4: you can take the sum of 0 first pixels, 1 pixel, 2 pixels, or all 3 pixels. The amount of...
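That n+1 count is why integral images carry an extra zero row and column; a numpy sketch of the general technique (my illustration, not the poster's code):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(5, 7))

# Integral image with an extra leading row/column of zeros: entry (r, c)
# holds the sum of img[:r, :c] -- from 0 pixels up to all of them,
# hence n+1 distinct cumulative sums per axis.
ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def box_sum(r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from four corner lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```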

python,matplotlib,histogram,gaussian

You probably want to use numpy to generate a Gaussian, and then simply plot it on the same axes. There is a good example here: Fitting a Gaussian to a histogram with MatPlotLib and Numpy - wrong Y-scaling? If you actually want to automatically generate a fitted gaussian from the...
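One way to "automatically generate a fitted gaussian" is to fit the histogram's bin centres with curve_fit; a sketch on synthetic data (the function and parameter names here are my own, not the question's):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-(x - mu)**2 / (2 * sigma**2))

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)

counts, edges = np.histogram(data, bins=50)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit amplitude, mean and width to the binned counts
popt, _ = curve_fit(gaussian, centers, counts, p0=[counts.max(), centers.mean(), 1.0])
a_fit, mu_fit, sigma_fit = popt
```

Plot gaussian(centers, *popt) on the same axes as the histogram; note the fitted amplitude absorbs the bin width, so this matches counts rather than a normalised density.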

Your error is pretty simple. You're not pivoting properly - specifically here inside your if statement: for j = i+1:n if abs(A(array(i),i)) < abs(A(array(i),i)) %row interchange <------- temp = array(i); array(i) = array(j); array(j) = temp; end end Your check to see which coefficient to pivot from is not correct...

python,random,gaussian,normal-distribution

Thread-safe pieces of code must account for possible race conditions during execution. This introduces overhead as a result of synchronization schemes like mutexes, semaphores, etc. However, if you are writing non-reentrant code, no race conditions normally arise, which essentially means that you can write code that executes a bit faster....

There are two parts of the algorithm: uniform random number generator, and convert the uniform random number to a random number according to Gaussian distribution. In your case, e2 is your uniform random number generator given the seed rd, std::normal_distribution<float>(m, s) generates an object which does the 2nd part of...

cluster-analysis,data-mining,gaussian,elki

The delta parameter in EM is necessary to detect convergence. Since EM uses soft assignments internally, it will continue updating the values to arbitrary digits (technically, it will eventually run out of precision, and stop). As long as you choose a small enough value, you should be fine. However, EM...

python,pandas,time-series,gaussian

As there has been no specific Pandas solution posted for this question (or the similar linked question), I am posting a solution using standard numpy and scipy functions. This will produce a smoothed curve using gaussian weighting, and works for any magnitude data (does not have offset issues). def smooth_gaussian(data,window,std):...
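The smooth_gaussian definition is cut off above; here is how such a function might look, normalising the kernel locally so that data with a large constant offset passes through unchanged (my reconstruction under that assumption, not the original answer's code):

```python
import numpy as np

def smooth_gaussian(data, window, std):
    """Gaussian-weighted moving average; edge effects handled by renormalising."""
    data = np.asarray(data, dtype=float)
    half = window // 2
    t = np.arange(-half, half + 1)
    kernel = np.exp(-t**2 / (2 * std**2))
    # Dividing by the convolved window weights keeps flat/offset data unchanged
    num = np.convolve(data, kernel, mode='same')
    den = np.convolve(np.ones_like(data), kernel, mode='same')
    return num / den

# Works for any magnitude of data: the +100 offset survives untouched
smoothed = smooth_gaussian(np.sin(np.linspace(0, 6, 200)) + 100.0, window=21, std=3.0)
```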

I'm no expert with 3D plots in matplotlib, but I believe your data is wrong. As you can see in the source code in this tutorial, your X, Y and Z data have to be 2-dimensional arrays. Your X and Y are one-dimensional, and your Z is a simple list. Try reshaping your data...

machine-learning,classification,svm,gaussian,supervised-learning

The other answers are correct but don't really tell the right story here. Importantly, you are correct. If you have m distinct training points then the gaussian radial basis kernel makes the SVM operate in an m dimensional space. We say that the radial basis kernel maps to a space...

matlab,random,distribution,gaussian,sampling

Might use Irwin-Hall from https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution Basically, min(IH(n)) = 0 max(IH(n)) = n peak(IH(n)) = n/2 Scaling to your [1.9...2.1] range v = 1.9 + ((2.1-1.9)/n) * IH(n) It is bounded, very easy to sample, and at large n it is pretty much gaussian. You could vary n to get narrow...
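A numpy sketch of that scaling, v = 1.9 + ((2.1-1.9)/n) * IH(n) (the helper name is mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                      # larger n -> more Gaussian shape, narrower spread

def irwin_hall_scaled(size, lo=1.9, hi=2.1):
    # IH(n) = sum of n U(0,1) draws: bounded in [0, n] with its peak at n/2
    ih = rng.random((size, n)).sum(axis=1)
    return lo + (hi - lo) / n * ih   # maps [0, n] onto [lo, hi]

v = irwin_hall_scaled(10_000)        # every sample guaranteed inside [1.9, 2.1]
```

Unlike a truncated or rejected Gaussian, no draw can ever land outside the range, and the cost is just n uniform draws per sample.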