The reason seems to be the thinning you introduced into your Gibbs sampler. Thinning is used to reduce correlation between consecutive samples: Gibbs sampling generates a Markov chain in which nearby samples are correlated, while the intention is typically to draw samples that are (approximately) independent....
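To make the trade-off concrete, here is a minimal Python/NumPy sketch (the function and the AR(1) stand-in chain are my own, not from the original answer): keeping every k-th draw cuts autocorrelation, but you must run the sampler k times longer to retain the same number of samples, which is why a thinned run is slower.

```python
import numpy as np

def thin(chain, k):
    """Keep every k-th draw to reduce autocorrelation between stored samples."""
    return chain[::k]

# An AR(1) process as a stand-in for correlated Gibbs draws (illustrative only).
rng = np.random.default_rng(0)
n = 10_000
chain = np.empty(n)
chain[0] = 0.0
for i in range(1, n):
    chain[i] = 0.9 * chain[i - 1] + rng.normal()

thinned = thin(chain, 10)  # 10x fewer draws, much lower lag-1 autocorrelation
```

The retained draws are closer to independent, but producing them took the full 10,000 iterations of work.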

It appears to me that both workers are doing as much work as the sequential version performs. Each worker should only perform a fraction of the total work for the parallel code to run faster than the sequential version. That might be accomplished by dividing...
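The original answer is about R workers, but the idea is language-agnostic; here is a Python sketch (the `split` and `work` helpers are hypothetical stand-ins for your per-iteration computation) showing each worker receiving only its fraction of the iterations:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def split(n, workers):
    """Divide range(n) into one contiguous chunk per worker,
    so each worker does only a fraction of the total work."""
    size = math.ceil(n / workers)
    return [range(s, min(s + size, n)) for s in range(0, n, size)]

def work(chunk):
    # stand-in for the real per-iteration computation
    return sum(math.sqrt(i) for i in chunk)

chunks = split(10_000, 2)          # worker 0: 0..4999, worker 1: 5000..9999
with ThreadPoolExecutor(max_workers=2) as ex:
    parallel_total = sum(ex.map(work, chunks))

sequential_total = work(range(10_000))  # same answer, all work in one place
```

The parallel and sequential totals agree; the difference is that each worker now touches only half the iterations instead of all of them.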

The only way sequential updating works sensibly is with two separate models. Specifying both in the same model makes no sense, since we have no posteriors until the MCMC run has completed. In principle, you would examine the distribution of theta1 and specify a prior that best resembles it....
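A minimal sketch of that "examine the posterior, then specify a matching prior" step, in Python/NumPy (the draws are simulated stand-ins for the posterior samples of theta1 from the first model; the moment-matching to a normal is one simple choice, not the only one):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the posterior draws of theta1 from the first model's MCMC run.
theta1_draws = rng.normal(loc=2.5, scale=0.4, size=5000)

# Moment-match a normal prior to the posterior of theta1; these two numbers
# become the prior hyperparameters for theta1 in the second model.
prior_mean = theta1_draws.mean()
prior_sd = theta1_draws.std(ddof=1)
```

If the posterior is clearly skewed or heavy-tailed, a normal prior will resemble it poorly and another family would be a better fit.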

The best approach is to code a self-tuning algorithm that starts with an arbitrary value for the proposal (step-size) variance and tunes that variance as the algorithm progresses. You are shooting for an acceptance rate of roughly 25-50% for the Metropolis algorithm.
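A Python sketch of one such self-tuning scheme (the target density, adaptation schedule, and 0.8/1.2 scaling factors are my own illustrative choices): every `adapt_every` iterations, nudge the proposal standard deviation toward the 25-50% acceptance band.

```python
import numpy as np

def log_post(x):
    # Example target: standard normal log-density (up to a constant).
    return -0.5 * x * x

def tuned_metropolis(n_iter=20_000, adapt_every=100,
                     target_low=0.25, target_high=0.50):
    rng = np.random.default_rng(0)
    x = 0.0
    step_sd = 1.0            # arbitrary starting value for the proposal sd
    accepted = 0
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + rng.normal(scale=step_sd)
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
            accepted += 1
        draws[i] = x
        # Self-tuning: adjust the proposal sd toward the target band.
        if (i + 1) % adapt_every == 0:
            rate = accepted / adapt_every
            if rate < target_low:
                step_sd *= 0.8   # rejecting too much: take smaller steps
            elif rate > target_high:
                step_sd *= 1.2   # accepting too much: take bigger steps
            accepted = 0
    return draws, step_sd

draws, final_sd = tuned_metropolis()
```

In practice you would adapt only during burn-in and then freeze the step size, since continual adaptation can bias the chain.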

r,performance,matrix,data.table,mcmc

If you convert the outer for loop into a foreach loop with 10,000 tasks, performance is poor because the tasks are too small. It is often better to make the number of tasks equal to the number of workers. Here is a simple way to do that using the idiv function...
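The original answer uses R's iterators::idiv; here is a rough Python analogue of the same idea (the `idiv_chunks` helper is hypothetical), turning 10,000 tiny tasks into one sizeable task per worker:

```python
from itertools import islice

def idiv_chunks(seq, n_chunks):
    """Rough analogue of iterators::idiv — yield seq in n_chunks pieces,
    so each parallel task is one big chunk rather than one tiny element."""
    it = iter(seq)
    size, rem = divmod(len(seq), n_chunks)
    for i in range(n_chunks):
        yield list(islice(it, size + (1 if i < rem else 0)))

chunks = list(idiv_chunks(range(10_000), 4))  # 4 workers -> 4 tasks
```

With one chunk per worker, the per-task scheduling overhead is paid 4 times instead of 10,000 times.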

I don't do a lot of survival analysis (and you don't state which distribution you would like to use for this part - there are different options), but this code should get you started for the interval censoring part:

library("runjags")

# Your data
t1 <- c(1.73, NA, NA, NA, NA, 0.77,...

r,probability,markov-chains,mcmc,mixture-model

There are of course other ways to do this, but the distr package makes it pretty darned simple. (See also this answer for another example and some more details about distr and friends.)

library(distr)

## Construct the distribution object.
myMix <- UnivarMixingDistribution(Norm(mean=2, sd=8),
                                  Cauchy(location=25, scale=2),
                                  Norm(mean=10, sd=6),
                                  mixCoeff=c(0.4, 0.2, 0.4))...
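If you ever need the same mixture outside R, it is easy to sample by hand; a Python/NumPy sketch (component parameters copied from the distr call above, the `r_mix` helper is my own), drawing a component index first and then a value from that component:

```python
import numpy as np

rng = np.random.default_rng(42)

def r_mix(n):
    """Draw n samples from 0.4*N(2, 8) + 0.2*Cauchy(25, 2) + 0.4*N(10, 6)."""
    comp = rng.choice(3, size=n, p=[0.4, 0.2, 0.4])
    out = np.empty(n)
    out[comp == 0] = rng.normal(2, 8, (comp == 0).sum())
    out[comp == 1] = 25 + 2 * rng.standard_cauchy((comp == 1).sum())
    out[comp == 2] = rng.normal(10, 6, (comp == 2).sum())
    return out

samples = r_mix(100_000)
```

Note that the Cauchy component gives the mixture undefined moments, so summarize it with quantiles rather than the sample mean.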

python,fortran,mpi,mcmc,mpi4py

What you are trying to do is not exactly textbook MPI, so I don't have a textbook answer for you. It sounds like you do not know how long a "bad" result will take. You ask "Presumably if the code is always listening out (while loop?) it will slow down...