Some background information first: We have discussed adding deviance() or logLik() methods for ctree objects. So far we haven't done so because conditional inference trees are not associated with a particular loss function or even likelihood. Instead, only the associations between response and partitioning variables are assessed by means of...

matlab,curve-fitting,goodness-of-fit

Yep. Get all the output parameters of lsqcurvefit and use them in nlparci like so:

[x,resnorm,residual,exitflag,output,lambda,jacobian] = ...
    lsqcurvefit(@myfun,x0,xdata,ydata);
conf = nlparci(x,residual,'jacobian',jacobian)

Now conf contains an N x 2 matrix for your N fit parameters. Each row of conf gives the lower and upper bounds of the 95% confidence interval for the corresponding parameter....
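The same recipe translates to Python with SciPy, if that's an option: fit with scipy.optimize.least_squares, then build the confidence intervals from the residuals and the Jacobian, which is essentially what nlparci does internally. This is only a sketch; the exponential model, data, and starting values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import t

# Hypothetical model: y = a * exp(b * x); a and b are the fit parameters.
def model(p, x):
    return p[0] * np.exp(p[1] * x)

rng = np.random.default_rng(0)
xdata = np.linspace(0, 1, 20)
ydata = model([2.0, 1.5], xdata) + rng.normal(scale=0.05, size=xdata.size)

# Least-squares fit; res.fun holds the residuals, res.jac the Jacobian.
res = least_squares(lambda p: model(p, xdata) - ydata, x0=[1.0, 1.0])

# Covariance of the estimates from residuals and Jacobian,
# the same quantities nlparci uses: cov = s^2 * inv(J'J).
n, n_params = xdata.size, res.x.size
s_sq = np.sum(res.fun**2) / (n - n_params)
cov = s_sq * np.linalg.inv(res.jac.T @ res.jac)

# 95% confidence intervals, one [lower, upper] row per parameter,
# mirroring the shape of nlparci's output.
half_width = t.ppf(0.975, n - n_params) * np.sqrt(np.diag(cov))
conf = np.column_stack([res.x - half_width, res.x + half_width])
print(conf)
```

Each row of conf brackets the corresponding entry of res.x, just as with nlparci's N x 2 result.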

sas,model-fitting,goodness-of-fit

I do not have access to SAS/ETS so cannot confirm this with proc severity, but I imagine that the difference you are seeing comes down to the way the distribution parameters are fitted. With your proc univariate code you are not requesting estimation for several of the parameters (some are...

matlab,statistics,goodness-of-fit,exponential-distribution

At 36 values, you have a very small sample. From the second sentence of Wikipedia's article on the chi-squared test (emphasis added): It is suitable for unpaired data from large samples. Large in this case usually means at least around 100. Read about more assumptions of this test here....

python,scipy,statsmodels,goodness-of-fit

An approximate solution for equal-probability bins:

1. Estimate the parameters of the distribution.
2. Use the inverse cdf (ppf, if it's a scipy.stats distribution) to get the bin edges for a regular probability grid, e.g. distribution.ppf(np.linspace(0, 1, n_bins + 1), *args).
3. Then use np.histogram to count the number of observations in each bin...