machine-learning,svm,libsvm,deep-learning

In machine learning applications it is hard to say whether an algorithm will improve the results, because the results depend heavily on the data. There is no single best algorithm. You should follow the steps below: analyze your data, then apply the appropriate algorithms with the help of your...

I probably found the answer. Question 1. What this tool does is: given sets of label/feature_parameters, it chooses the most "efficient" and "minimal" feature_parameters by performing a grid search. Am I correct? The answer is no. grid.py performs a grid search and estimates the best cost and gamma values. So it helps to make SVM...
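What grid.py does can be sketched with scikit-learn's GridSearchCV; this is an illustrative analogue, not grid.py itself. The exponent ranges below follow the ones suggested in the LIBSVM practical guide (C from 2^-5 to 2^15, gamma from 2^-15 to 2^3), thinned out here for speed.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# grid.py searches exponentially spaced C and gamma values
param_grid = {
    "C": [2.0**k for k in (-5, -1, 3, 7, 11)],
    "gamma": [2.0**k for k in (-15, -11, -7, -3, 1)],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)   # the "best" cost and gamma on this grid
```

The result is the (C, gamma) pair with the highest cross-validation score, which is exactly what grid.py reports.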

Finding the solution to something like this is tricky over the internet, but let's give it a try. This post consists of questions rather than answers. However, I believe that if you answer them all you will find your bug without further help, or at least be 90% of...

So what are your proposed search intervals for the gamma and cost parameters? Basically, you should do a heuristic grid search, taking some educated guesses at the grid cell sizes and hoping to find a good optimum. Take a look at the grid.py file in the LIBSVM package. It will...

python,scikit-learn,svm,libsvm,svc

So after a bit more digging and head scratching, I've figured it out. As I mentioned above z is a test datum that's been scaled. To scale it I had to extract .mean_ and .std_ attributes from the preprocessing.StandardScaler() object (after calling .fit() on my training data of course). I...
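The manual scaling of a test datum described above can be sketched like this; note that recent scikit-learn versions expose `scale_` rather than the old `std_` attribute:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaler = StandardScaler().fit(X_train)   # learns mean_ and scale_ (was std_)

z = np.array([[2.5, 15.0]])              # one unseen test datum
z_manual = (z - scaler.mean_) / scaler.scale_   # scaling by hand
z_auto = scaler.transform(z)                    # what transform() computes
assert np.allclose(z_manual, z_auto)
```

In practice, calling `scaler.transform(z)` directly is less error-prone than extracting the attributes yourself.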

If you are using libsvm, there are three possible return values from the svmpredict function. [predicted_label, accuracy, decision_values] = svmpredict(testing_label_vector, testing_instance_matrix, model [,'libsvm_options']); If you don't specify that you want all the return values by assigning the output to several variables, you will only get the first one, predicted_label....

You need float data (and integer labels): 1 row per sample, 1 label per row. float f1, f2; for (int i = 0 + (68*count_FOR); i < num_landCKplus + (68*count_FOR); i++) { fin_land >> f1; fin_land >> f2; trainData.push_back(f1); // pushing the 1st value determines the type of trainData trainData.push_back(f2); } trainData = trainData.reshape(1, numItems); SVM.train(trainData, trainLabels,...

Is there a way to easily export the model generated by scikit-learn and import it into LibSVM? No. The scikit-learn version of LIBSVM has been hacked up severely to fit it into the Python environment and the model is stored as NumPy/SciPy data structures. Your best shot is to...

python,list,dataset,libsvm,zero

What row.insert(0, row.pop()) actually does is move the last element of the list to the front, shifting the rest of the list to the right. Also, list_new.pop(0) removes the element you have just inserted. I suggest you put some print statements in to see what your code is...
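A minimal demonstration of that rotation, and of how pop(0) undoes it:

```python
row = [1, 2, 3, 4]
row.insert(0, row.pop())   # move the last element to the front
print(row)                 # [4, 1, 2, 3]

row.pop(0)                 # pop(0) removes the element just inserted
print(row)                 # [1, 2, 3]
```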

matlab,classification,svm,libsvm,vlfeat

There isn't a way to pass the data to the vl_svmtrain method other than as the D x N matrix it's documented to take. However, what you can do is unroll the cell array and transform each feature matrix into a column vector. You would then construct your...

First let me address the R solution. From what I understand, the e1071 package is simply a wrapper around the libsvm library. Therefore, assuming you use the same settings and steps in both, you should be getting the same results. I'm not a regular R user myself, but from I...

There are a variety of things that are commonly done in this setup, which is called imbalanced data. There are many important problems in computer science like this: search engines have millions of documents and only a handful are relevant to a search term, a face detector will have...

If you look into the documentation you'll see that the function you are using relies on "random numbers". The term "random" is somewhat ambiguous in computer science. In truth, there is an algorithm that creates what are called "pseudo-random" numbers. That algorithm (in basic terms) takes in one parameter (where it...
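The seeding behaviour is easy to see with Python's standard random module (used here purely as an illustration; the library in the question may expose a different seeding API):

```python
import random

random.seed(42)                          # fix the starting parameter
a = [random.random() for _ in range(3)]
random.seed(42)                          # same seed ...
b = [random.random() for _ in range(3)]
assert a == b                            # ... same "random" sequence
```

Without a fixed seed, each run starts the generator from a different point, which is why results can vary between runs.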

opencv,machine-learning,svm,libsvm

1) The length of the features does not matter per se; what matters is their predictive quality.
2) No, it does not depend on the number of samples, but it does depend on the number of features (prediction is generally very fast).
3) Normalization is required if the features are in very different ranges of...

machine-learning,weka,svm,libsvm

Yes, the default kernel is RBF with gamma equal to 1/k. See other defaults in javadocs here or here. NB: Weka contains its own implementation - SMO, but it also provides wrapper for libsvm, and "LibSVM runs faster than SMO" (note that it requires installed libsvm, see docs)....

machine-learning,svm,libsvm,cross-validation

There seems to be some confusion about overfitting here. In short, "overfitting" does NOT mean that your accuracy on the training set is (disproportionately) higher than on a generic test set; that is the effect, not the cause. "Overfitting" means that your model is trying too hard...

matlab,machine-learning,classification,svm,libsvm

If you want to use liblinear for multi-class classification, you can use the one-vs-all technique. For more information, look at this. But if you have a large database then using SVM is not recommended, as the run-time complexity of SVM is O(N * N * m), where N =...
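The one-vs-all scheme can be sketched with scikit-learn's liblinear wrapper (an illustrative analogue of using liblinear directly):

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
# one-vs-all: one binary liblinear classifier is trained per class
clf = OneVsRestClassifier(LinearSVC(max_iter=10000)).fit(X, y)
print(len(clf.estimators_))   # 3 classes -> 3 binary classifiers
```

At prediction time, each binary classifier scores the sample and the class with the highest score wins.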

java,libsvm,multilabel-classification

An overall accuracy isn't that informative if there is not an even distribution of classes, which may be true in your case. You could still calculate one though if you wanted, see: http://spokenlanguageprocessing.blogspot.com/2011/12/evaluating-multi-class-classification.html To answer your other question about how they are related, the results are being calculated on a...

python,machine-learning,svm,libsvm

Here's a step-by-step guide for how to train an SVM using your data and then evaluate using the same dataset. It's also available at http://nbviewer.ipython.org/gist/anonymous/2cf3b993aab10bf26d5f. At the url you can also see the output of the intermediate data and the resulting accuracy (it's an iPython notebook) Step 0: Install dependencies...

machine-learning,svm,libsvm,gate,svmlight

The problem is in the multiClassiﬁcation2Binary string. There is a single glyph ﬁ that contains two joined characters "fi" together. You probably copied the text from some pdf... Simply replace ﬁ by fi and the error should go away.

You could use sys.path.append('thatdirectory') and then import

I am not sure how you are running it in the shell; you can test it in irb by following the sample code provided in the documentation at https://github.com/febeling/rb-libsvm require 'libsvm' # This library is namespaced. problem = Libsvm::Problem.new parameter = Libsvm::SvmParameter.new parameter.cache_size = 1 # in megabytes parameter.eps = 0.001...

machine-learning,classification,svm,libsvm

In the case of C-SVM, you should use a linear kernel and a very large C value (or nu = 0.999... for nu-SVM). If you still have slack with this setting, your data is probably not linearly separable. Quick explanation: the C-SVM optimization function tries to find the hyperplane having...
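The "very large C" setting can be sketched with scikit-learn's libsvm wrapper on a separable toy set (an illustration, with made-up data):

```python
import numpy as np
from sklearn.svm import SVC

# linearly separable toy data
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [2.0, 3.0]])
y = np.array([0, 0, 1, 1])

# a very large C approximates a hard-margin SVM: slack is heavily penalized
clf = SVC(kernel="linear", C=1e6).fit(X, y)
assert (clf.predict(X) == y).all()   # separable data -> no training errors
```

If the same setting still misclassifies training points on your data, that is evidence the data is not linearly separable.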

If it really is a string (and not a nominal value), you can use StringToWordVector Converts String attributes into a set of attributes representing word occurrence (depending on the tokenizer) information from the text contained in the strings. The set of words (attributes) is determined by the first batch filtered...

I think this is what you want: library(e1071) data(iris) df <- iris df <- subset(df , Species=='setosa') #choose only one of the classes x <- subset(df, select = -Species) #make x variables y <- df$Species #make y variable(dependent) model <- svm(x, y,type='one-classification') #train an one-classification model print(model) summary(model) #print summary...

machine-learning,scikit-learn,libsvm

They are just different implementations of the same algorithm. The SVM module (SVC, NuSVC, etc) is a wrapper around the libsvm library and supports different kernels while LinearSVC is based on liblinear and only supports a linear kernel. So: SVC(kernel = 'linear') is in theory "equivalent" to: LinearSVC() Because the...
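The near-equivalence can be sketched like this; the two backends use different solvers and slightly different loss/regularization details, so expect similar but not identical decisions:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=100, random_state=0)
svc = SVC(kernel="linear", C=1.0).fit(X, y)           # libsvm backend
lin = LinearSVC(C=1.0, max_iter=10000).fit(X, y)      # liblinear backend

# fraction of training points on which the two models agree
agreement = (svc.predict(X) == lin.predict(X)).mean()
```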

I figured out that in order to use svm_predict_probability you should have set the "probability" attribute to 1 in the model before training (model.param.probability = 1). This will generate probA and probB in the model, and they will be used in svm_predict_probability. If there are no probA and probB in the...
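The same switch exists in scikit-learn's libsvm wrapper (shown here as an assumed analogue of the Java API in the answer): probability estimation must be enabled before fitting, because the Platt-scaling parameters (probA/probB) are produced during training.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)
clf = SVC(probability=True).fit(X, y)   # trains the extra probA/probB model
proba = clf.predict_proba(X[:5])        # would raise if probability=False
print(proba.shape)                      # (5, 2)
```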

The reason you're not getting output predictions is that you are calling svmpredict incorrectly. There are two ways to call it: [predicted_label, accuracy, decision_values/prob_estimates] = svmpredict(testing_label_vector, testing_instance_matrix, model, 'libsvm_options') [predicted_label] = svmpredict(testing_label_vector, testing_instance_matrix, model, 'libsvm_options') That is, with one output argument or with three, but not two. So to fix...

In the second line you are concatenating a uint8 with a double, which casts both to uint8. Minimal example: [256; uint8(1)] To solve this, use fprintf with multiple input arguments: fprintf(formatSpec, id, row(id)); ...

There isn't a built-in function for a confusion matrix in libsvm, but you can use MATLAB's confusion matrix function like this: [cMatrix, cOrder] = confusionmat(label, predictedLabel); ...

Best to look at the available literature first: http://www.pyoudeyer.com/emotionsIJHCS.pdf http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.467.7166&rep=rep1&type=pdf http://www.researchgate.net/profile/Theodoros_Iliou/publication/267698141_Classification_on_Speech_Emotion_Recognition_-_A_Comparative_Study/links/5519be060cf244e9a4584c07.pdf http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6531196...

machine-learning,svm,libsvm,rapidminer

First off, you seem to be confusing different implementations and algorithms. As far as I know, libsvm, mysvm and JmySVM are standard implementations which solve the SVM optimization problem with algorithms such as sequential minimal optimization. By contrast, the other SVMs you mentioned (additionally) use less common approaches like...

python,libsvm,cross-validation

I am going to answer my own question. I saved my data from the database in a csv file and used csv2libsvm.py to convert csv to libsvm data: csv2libsvm.py <input file> <output file> [<label index = 0>] [<skip headers = 0>] eg: python csv2libsvm.py mydata.csv libsvm.data 0 True Convert CSV...

Not every parameter has an exact equivalent when porting from LibSVM in matlab to OpenCV SVM. The term criteria is one of them. Keep in mind that the SVM of opencv might have some bugs depending on the version you use (not an issue with the latest version). You should...

gnuplot,classification,svm,libsvm

Replace your colours with numerical indices, e.g. like this:
5.1 3.5 1.4 0.2 0
4.9 3   1.4 0.2 0
7   3.2 4.7 1.4 1
6.4 3.2 4.5 1.5 1
7.1 3   5.9 2.1 2
6.3 2.9 5.6 1.8 2
A simple search-and-replace script should be able to do this for...
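A sketch of such a search-and-replace script; the label names in the mapping are assumptions, so substitute your own colour/class names:

```python
mapping = {"setosa": "0", "versicolor": "1", "virginica": "2"}  # assumed labels

def to_numeric(line):
    cols = line.split()
    cols[-1] = mapping.get(cols[-1], cols[-1])  # replace the label if known
    return " ".join(cols)

print(to_numeric("5.1 3.5 1.4 0.2 setosa"))  # -> 5.1 3.5 1.4 0.2 0
```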

First of all: the code described in the libsvm documentation does something different from your code: it maps every column independently onto the interval [0,1]. Your code, however, uses the global min and max to map all the columns with the same affine transformation, instead of a separate transformation for each...
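The per-column version can be sketched like this (illustrative data):

```python
import numpy as np

X = np.array([[1.0, 100.0],
              [2.0, 300.0],
              [3.0, 200.0]])

# per-column scaling: each column is mapped onto [0, 1] independently,
# using that column's own min and max (not the global ones)
col_min = X.min(axis=0)
col_max = X.max(axis=0)
X_scaled = (X - col_min) / (col_max - col_min)
```

With a single global min/max, the small-valued column would be squashed near 0 instead of spanning [0, 1].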

svm,libsvm,n-gram,rapidminer,concept

You have to use the Support Vector Machine (LibSVM) Operator. In contrast to the classic SVM which only supports two class problems, the LibSVM implementation (http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf) supports multi-class classification as well as regression.

matlab,sorting,text,awk,libsvm

Store all the info in an array a[] and then sort using indices:
awk '{
  delete a
  for (i=2; i<=NF; i++) a[$i+0] = $i
  n = asorti(a, sorted, "@ind_num_asc")
  printf "%s%s", $1, OFS
  for (i=1; i<=n; i++) printf "%s%s", a[sorted[i]], (i==n ? ORS : OFS)
}' file
Explanation This uses asorti() and @ind_num_asc to define the ordering mode. For every line, we...

Model selection in SVM is the process that helps you select the best model, based on different parameter values. In the LibSVM library, model selection is done using the cross validation method. What it does is partition your training data into several subsets and train the model with...
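The cross-validation step can be sketched with scikit-learn's wrapper around libsvm (libsvm itself exposes the same idea through its -v option):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# 5-fold CV: train on four subsets, validate on the held-out one, and rotate
scores = cross_val_score(SVC(), X, y, cv=5)
print(scores.mean())   # averaged validation accuracy over the 5 folds
```

Model selection then amounts to repeating this for each candidate parameter setting and keeping the one with the best averaged score.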

c++,machine-learning,classification,libsvm

Support Vector Machines, like almost all classifiers, require that the training samples are represented as feature vectors that lie in a feature space. In order to create such feature vectors you'll have to perform feature extraction on your signals. That is, you have to extract some measurable, discriminating, scale-invariant...

python,dictionary,scikit-learn,libsvm

The reason that the solution proposed to you in the previous question had insufficient results (I assume) is that the features were poor for this problem. If I understand correctly, what you want is the following: given the sentence Apple iPhone 5 White 16GB Dual-Core, you get...

machine-learning,scikit-learn,classification,weka,libsvm

You can look at RandomForest, which is a well-known and quite efficient classifier. In scikit-learn, you have some classes that can run over several cores, like RandomForestClassifier. It has a constructor parameter that can be used to define the number of cores, or a value that will use...
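That constructor parameter is n_jobs; a minimal sketch on made-up data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)
# n_jobs sets the number of cores used to fit the trees in parallel;
# n_jobs=-1 means "use every core available"
clf = RandomForestClassifier(n_estimators=50, n_jobs=-1, random_state=0)
clf.fit(X, y)
```

Since the trees of a random forest are independent, this parallelism is essentially free.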

I found the error; it was about the dimension of the problem. I leave my code here as an example of using libsvm on Android for beginners like me. I hope it will be useful to others. The solution is: int count = 0; for (j = 0; j < BAse.size(); j++) { photoList2 = new ArrayList<Photo>(); photoList2.addAll(BAse.get(j)); for (int...

Reducing a small set is a bad idea; keep all samples. If the classes are separable, everything is fine. If not, you can use the 'weight' feature to boost classes with little representation.

If you explicitly use the -classpath flag, the %CLASSPATH% variable is not used. You can either add libsvm to the -classpath (it's semicolon-separated on Windows) or add weka to the CLASSPATH variable.

c++,opencv,machine-learning,libsvm,multilabel-classification

Some implementations of the SVM algorithm do provide probability estimates. However, the SVM does not inherently provide probability estimates; that is functionality "tacked on" after the algorithm was created. These probability estimates are not "trustworthy", and if I remember correctly, the ability to compute probability estimates was...

matlab,time-series,libsvm,forecasting

A Support-Vector-Regression based predictor is used for exactly that; it holds for PH >= 1. The value of epsilon in the epsilon-SVR model specifies the epsilon-tube, within which no penalty is associated in the training loss function with points predicted within a distance epsilon from the actual value Y(t)....
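A minimal epsilon-SVR sketch (scikit-learn's wrapper around the libsvm implementation, with made-up data):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, 40)).reshape(-1, 1)
y = np.sin(X).ravel()

# epsilon defines the tube: points predicted within epsilon of the
# actual value contribute no penalty to the training loss
model = SVR(kernel="rbf", epsilon=0.1).fit(X, y)
pred = model.predict(X)
```

Widening epsilon makes the fit coarser but sparser (fewer support vectors); shrinking it does the opposite.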

matlab,machine-learning,classification,svm,libsvm

The outputs of an SVM are not probabilities! The score's sign indicates whether it belongs to class A or class B, and if the score is 1 or -1 it is on the margin, although that is not particularly useful to know. If you really need probabilities, you can convert...
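One common conversion is Platt scaling, which fits a sigmoid on the raw SVM scores; in scikit-learn this is exposed as CalibratedClassifierCV (shown as an illustrative route, not necessarily what the original answer goes on to recommend):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, random_state=0)
# Platt scaling: a sigmoid maps the unbounded SVM scores to [0, 1]
clf = CalibratedClassifierCV(LinearSVC(max_iter=10000), method="sigmoid", cv=3)
clf.fit(X, y)
proba = clf.predict_proba(X[:3])   # each row sums to 1
```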