recursion,clojure,hidden-markov-models

OK, it's a long shot, but it looks like your atom-updating functions, #(mod (inc @m) 2) and #(inc @islands), have arity 0 when they should have arity at least 1: swap! calls the function with the atom's current value as its argument. This also leads to the answer to your last question: the #(body) form is a shortcut for (fn [...] (body))....
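To make the arity point concrete, here is a rough Python analogy of what swap! does (the Atom class below is a made-up stand-in, not a real API): the update function receives the current value as an argument, so it must accept at least one parameter instead of dereferencing the atom itself.

```python
# Rough Python analogy for Clojure atoms (Atom here is a made-up stand-in).
class Atom:
    def __init__(self, value):
        self.value = value

    def swap(self, f):
        # Like swap!: f is called WITH the current value, so it must
        # accept at least one argument instead of dereferencing the atom.
        self.value = f(self.value)
        return self.value

m = Atom(0)
m.swap(lambda cur: (cur + 1) % 2)  # analogue of (swap! m (fn [m] (mod (inc m) 2)))
print(m.value)  # 1
```

Calling m.swap with a zero-argument function would fail for the same reason the 0-arity #(...) forms do.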

matlab,matrix,machine-learning,classification,hidden-markov-models

The problem statement asks you to build and train a hidden Markov model with the following components, specifically using murphyk's HMM toolbox: O = observation vector, Q = state vector, T = vector sequence, nex = number of sequences, M = number of mixtures. Demo code (from murphyk's...
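As a rough sketch (in Python rather than MATLAB, with invented numbers), here is the shape of training data those quantities describe:

```python
import random

O = 4     # dimensionality of each observation vector
Q = 2     # number of hidden states
M = 2     # number of mixture components per state
nex = 10  # number of training sequences
T = 15    # frames per sequence

# data[ex][t] is one O-dimensional observation vector, mirroring the
# O x T x nex array the toolbox's training routine consumes.
data = [[[random.gauss(0.0, 1.0) for _ in range(O)] for _ in range(T)]
        for _ in range(nex)]

print(len(data), len(data[0]), len(data[0][0]))  # 10 15 4
```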

speech-recognition,state-machines,hidden-markov-models

What does "3 states" mean? The model that describes the phone S consists of three states: S1, S2, and S3. What do S1, S2, and S3 actually mean? (I know they are states, but what do they represent?) S1 represents the probability distribution of the feature vectors at the beginning...
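A minimal sketch of what the three sub-phone states look like, assuming the usual left-to-right topology for phone models (the probabilities below are invented):

```python
# Invented left-to-right transition matrix for the sub-phone states
# S1 (beginning), S2 (middle), S3 (end): a state may stay where it is
# or advance to the next state, but never move backwards.
A = [
    [0.6, 0.4, 0.0],  # S1 -> S1 or S2
    [0.0, 0.7, 0.3],  # S2 -> S2 or S3
    [0.0, 0.0, 1.0],  # S3 stays until the phone ends
]
for row in A:
    assert abs(sum(row) - 1.0) < 1e-9  # each row is a probability distribution

# All entries below the diagonal are zero -> strictly left-to-right.
is_left_to_right = all(A[i][j] == 0.0 for i in range(3) for j in range(i))
print(is_left_to_right)  # True
```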

r,hidden-markov-models,predict

I'm not a user of this package and this is not really an answer, but a comment would obscure some of the structures. It appears that the "proportion" value of your model is missing (so the structures are different). The "mean" value looks like this: $ mean :List of 5...

matlab,computer-vision,hidden-markov-models

If you want to fit an HMM to your chicken example, you will assume successively that there is only 1 state, then 2 states, then 3, etc., governing this laying process. If you know a little about the chicken's way of life, you may assume the number of states based...

nlp,tagging,text-mining,hidden-markov-models,pos-tagging

For a second-order HMM, the maximum likelihood estimate gives: P(SomeTag | <BOS>,<BOS>) = count(<BOS>,<BOS>,SomeTag) / count(<BOS>,<BOS>). This corresponds to your second proposal: (number of sentences beginning with a given tag) / (number of sentences) ...
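To illustrate the count-based estimate, here is a small self-contained Python sketch (the tags and sentences are toy data, entirely invented) that pads each sentence with two <BOS> markers and computes the ratio:

```python
from collections import Counter

# Toy tagged corpus: each sentence is a list of POS tags (invented).
sentences = [["DT", "NN", "VB"], ["DT", "JJ", "NN"], ["PRP", "VB"]]

trigrams = Counter()
bigrams = Counter()
for tags in sentences:
    padded = ["<BOS>", "<BOS>"] + tags
    for a, b, c in zip(padded, padded[1:], padded[2:]):
        trigrams[(a, b, c)] += 1
        bigrams[(a, b)] += 1  # count of the conditioning context

# P(DT | <BOS>,<BOS>) = count(<BOS>,<BOS>,DT) / count(<BOS>,<BOS>)
p = trigrams[("<BOS>", "<BOS>", "DT")] / bigrams[("<BOS>", "<BOS>")]
print(round(p, 3))  # 0.667 -- 2 of the 3 sentences start with DT
```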

matlab,hidden-markov-models,htk

Ah, I figured out why there is such a time difference. The time stamp in the HTK result is the "total frame time", even though the frames overlap. Say, in my example, the window size is 25 ms, the window step is 10 ms, and there are 188 frames in total. For HTK, 188*0.025 = 4.7 (s). But this time...
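The arithmetic can be checked in a few lines of Python (values taken from the example above):

```python
# 25 ms windows, 10 ms step, 188 frames in total.
window = 0.025   # seconds
step = 0.010     # seconds
n_frames = 188

# HTK-style "total frame time": frames treated as if non-overlapping.
htk_time = n_frames * window

# Actual audio duration covered by the overlapping frames.
actual_time = (n_frames - 1) * step + window

print(round(htk_time, 3), round(actual_time, 3))  # 4.7 1.895
```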

python,python-3.x,dictionary,hidden-markov-models

Both '__name__' and '==' can serve as dictionary keys:

>>> d = {'__name__':1, '==':2}
>>> d['__name__']
1
>>> d['==']
2
...

debugging,machine-learning,neural-network,svm,hidden-markov-models

What you refer to as "debugging" is known as optimizing in the machine learning community. While there are certain ways to optimize a classifier, depending on the classifier and the problem, there is no standard way to do this. For example, in a text classification problem you might find out through...

matlab,machine-learning,sequence,prediction,hidden-markov-models

From what I understand, you're training 200 different classes (HMMs), and each class has 500 training examples (observation sequences). O is the dimensionality of the vectors, which seems to be correct. There is no need for a fixed T; it depends on the observation sequences you have. M is...
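A small Python sketch of that data layout, with invented dimensions (and far fewer examples per class than 500), where each class keeps its own list of variable-length sequences:

```python
import random

random.seed(0)
n_classes = 200  # one HMM per class, as in the question
n_train = 3      # cut down from 500 to keep the sketch small
O = 13           # dimensionality of each observation vector (invented)

def make_sequences(n, dim):
    # T varies per sequence -- there is no need for a fixed length.
    return [[[random.gauss(0.0, 1.0) for _ in range(dim)]
             for _ in range(random.randint(20, 40))]
            for _ in range(n)]

train = {c: make_sequences(n_train, O) for c in range(n_classes)}
lengths = [len(seq) for seqs in train.values() for seq in seqs]
print(len(train), min(lengths) >= 20, max(lengths) <= 40)  # 200 True True
```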

design,speech-recognition,cmusphinx,hidden-markov-models,htk

"If someone uses HTK, isn't there a high chance of using only one of these types? The usage of these types is probably mutually exclusive." Not necessarily: for speech you can use both semi-continuous and continuous models, often together. "So my question is, why not have separate training and recognition tools..."

statistics,scikit-learn,hidden-markov-models

First of all, HMM is deprecated in sklearn. You need to check out https://github.com/hmmlearn/hmmlearn, which is Hidden Markov Models in Python with a scikit-learn-like API. By the way, the problem you describe looks like a bug. When you set emissionprob_, _set_emissionprob is called. This tries to re-normalize by calling normalize(emissionprob): if not...
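For intuition, what a row-wise normalize on an emission matrix does can be sketched in plain Python (this is an illustration of the idea, not hmmlearn's actual implementation):

```python
# Row-wise normalization: divide each row by its sum so that each
# state's emission probabilities sum to 1 (pure-Python sketch).
def normalize_rows(matrix):
    out = []
    for row in matrix:
        s = sum(row)
        out.append([x / s for x in row] if s else list(row))
    return out

emissionprob = [[2.0, 2.0], [1.0, 3.0]]
print(normalize_rows(emissionprob))  # [[0.5, 0.5], [0.25, 0.75]]
```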

r,machine-learning,hidden-markov-models

The problem of initialization is critical not only for HMMs and HSMMs, but for all learning methods based on some form of the Expectation-Maximization algorithm. EM converges to a local optimum of the likelihood of the data under the model, but it is not guaranteed to reach the global optimum.
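The usual mitigation is to run the fit from several random initializations and keep the best. A toy Python analogy of why this helps (hill-climbing a made-up function with two local maxima, not actual EM):

```python
import random

# f has a poor local maximum (value 1 at x = -2) and a global one
# (value 4 at x = 3). A single greedy run can get stuck in either.
def f(x):
    return -(x - 3) ** 2 + 4 if x > 0 else -(x + 2) ** 2 + 1

def hill_climb(x, step=0.1, iters=200):
    # Greedy local search: move while a neighbor improves f.
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return x

random.seed(0)
starts = [random.uniform(-5, 5) for _ in range(10)]   # random restarts
best_x = max((hill_climb(x0) for x0 in starts), key=f)
print(round(f(best_x), 2))  # 4.0 -- the restarts find the global maximum
```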

This sounds like the standard HMM scaling problem. Have a look at "A Tutorial on Hidden Markov Models ..." (Rabiner, 1989), section V.A, "Scaling". Briefly: you can rescale alpha at each time step so it sums to 1, rescale beta using the same factor as the corresponding alpha, and everything should...
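A minimal Python sketch of the scaling trick for the forward pass (the model parameters below are invented toy numbers):

```python
import math

# Toy two-state model with a binary observation alphabet (all invented).
A = [[0.7, 0.3], [0.4, 0.6]]   # transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]   # emission probabilities B[state][symbol]
pi = [0.5, 0.5]                # initial state distribution
obs = [0, 1, 0, 0, 1]          # observed symbol indices

def scaled_forward(A, B, pi, obs):
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    log_lik = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                     for i in range(n)]
        c = sum(alpha)                  # scale factor for this time step
        alpha = [a / c for a in alpha]  # rescaled to sum to 1: no underflow
        log_lik += math.log(c)          # log-likelihood recovered from the c's
    return alpha, log_lik

alpha, log_lik = scaled_forward(A, B, pi, obs)
print(abs(sum(alpha) - 1.0) < 1e-9, log_lik < 0.0)  # True True
```

The beta recursion would reuse the same per-step factors c, as the excerpt describes.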

I asked the author of the package directly and got the following answer: see ?makeDepmix if you run the example with multivariate data like so:

# generate data from two different multivariate normal distributions
m1 <- c(0,1)
sd1 <- matrix(c(1,0.7,.7,1),2,2)
m2 <- c(1,0)
sd2 <- matrix(c(2,.1,.1,1),2,2)
set.seed(2)
y1 <- mvrnorm(50,m1,sd1)...