Here's how you can add a ribbon. You can, of course, change the formulas for ymin and ymax to suit your needs: ggplot(df, aes(x=1:length(v), y=v, group=t, colour=t)) + geom_ribbon(aes(ymin=v-0.1*v, ymax=v+0.1*v, fill=t), alpha=0.2) + geom_line() ...
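A self-contained sketch of the idea (the data frame here is invented, since the original df isn't shown; only the ribbon formulas come from the answer):

```r
library(ggplot2)

# Toy data standing in for the question's df (v = value, t = group)
set.seed(1)
df <- data.frame(t = rep(c("a", "b"), each = 20),
                 v = c(cumsum(rnorm(20, 1)), cumsum(rnorm(20, 1.5))),
                 x = rep(1:20, 2))

# Ribbon spanning +/- 10% of each value, as in the answer
p <- ggplot(df, aes(x = x, y = v, group = t, colour = t)) +
  geom_ribbon(aes(ymin = v - 0.1 * v, ymax = v + 0.1 * v, fill = t), alpha = 0.2) +
  geom_line()
p
```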

matlab,ocr,text-extraction,matlab-cvst,confidence-interval

The easiest way would be to create a logical index based on your threshold value: bestWordsIdx = ocrtxt.WordConfidence > 0.8; bestWords = ocrtxt.Words(bestWordsIdx) And the same for the text, using the character confidences: bestTextIdx = ocrtxt.CharacterConfidence > 0.8; bestText = ocrtxt.Text(bestTextIdx) ...

You can request a discrete vertical axis, and specify the ordering method using the yaxis statement: yaxis discreteorder = data type = discrete; This will tell SAS to ignore the values in N and display them based on the order in which they are read from the dataset. You will...

You can selectively choose which data to pass to the regression plotter. Consider this example: set.seed(10) #Make sample data df <- data.frame( group=rep(c("A","B"), each=10), X = rep(1:10, 2)) df$Y <- 2*df$X + runif(20, -20, 20) #Create y values with lots of noise #Reduce the noise for group A df[df$group ==...

Here you go; this is code to calculate a confidence interval: /** * * @author alaaabuzaghleh */ public class TestCI { public static void main(String[] args) { int maximumNumber = 100000; int num = 0; double[] data = new double[maximumNumber]; // first pass: read in data, compute sample mean double...

r,variance,confidence-interval

You can calculate the standard errors of the differences among factor levels in a linear model using the "aov" function from the stats package. These can then easily be extracted for graphing (note that gear must be treated as a factor for TukeyHSD to work): # graphing differences among factor levels (with standard errors) # require(stats) m <- lm(mpg ~ factor(gear), data=mtcars) plot(TukeyHSD(aov(m)))...
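A runnable sketch of the same idea (mtcars ships with R; gear is coerced to a factor, which TukeyHSD requires):

```r
# Fit the model and get all pairwise differences with 95% CIs
m <- aov(mpg ~ factor(gear), data = mtcars)
tk <- TukeyHSD(m)
tk$`factor(gear)`  # columns diff, lwr, upr, p adj -- one row per pair of levels
plot(tk)           # draws each difference with its confidence interval
```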

You could do this in ggplot by first reshaping your data so each row is a confidence interval, getting the appropriate x-axis spacing for your grouped data: plot.dat <- data.frame(x=c(seq(.8, 1.2, length.out=6), seq(1.8, 2.2, length.out=6)), lb=unlist(c(dat[3,-1], dat[6,-1])), mid=unlist(c(dat[2,-1], dat[5,-1])), ub=unlist(c(dat[1,-1], dat[4,-1]))) plot.dat # x lb mid ub # 1 0.80...
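Once the data is in that one-row-per-interval shape, the plot itself is straightforward; a minimal sketch with made-up numbers (the real lb/mid/ub values come from dat in the question):

```r
library(ggplot2)

# One row per confidence interval, x already spaced for the two groups
plot.dat <- data.frame(x   = c(0.8, 1.0, 1.2, 1.8, 2.0, 2.2),
                       lb  = c(1, 2, 3, 4, 5, 6),
                       mid = c(2, 3, 4, 5, 6, 7),
                       ub  = c(3, 4, 5, 6, 7, 8))

p <- ggplot(plot.dat, aes(x = x, y = mid)) +
  geom_pointrange(aes(ymin = lb, ymax = ub))
p
```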

r,frequency,confidence-interval,discretization

Updated after some comments: Since you state that the minimum number of cases in each group would be fine for you, I'd go with Hmisc::cut2 v <- rnorm(10, 0, 1) Hmisc::cut2(v, m = 3) # minimum of 3 cases per group The documentation for cut2 states: m desired minimum number...
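If Hmisc isn't available, roughly the same equal-count binning can be sketched in base R with quantile-based breaks (cut2 handles ties and edge cases more carefully, so this is only an approximation):

```r
set.seed(1)
v <- rnorm(10, 0, 1)

# Breaks at the 0, 1/3, 2/3 and 1 quantiles give three bins of ~equal size
breaks <- quantile(v, probs = seq(0, 1, length.out = 4))
g <- cut(v, breaks = breaks, include.lowest = TRUE)
table(g)  # each bin holds at least 3 of the 10 values
```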

r,ggplot2,replication,correlation,confidence-interval

You can add the chart and axis titles yourself, but this code does what I think you're looking for using ggplot2 and the 'psychometric' package: library(ggplot2) library(psychometric) corSamp <- function(x) { # return the correlation between price and carat on diamonds for a given sample size index <- sample(1:nrow(diamonds), x)...

r,confidence-interval,cox-regression

As per @shadow's comment, the CI of the parameter estimate is based on the whole dataset; if you want age-conditional CIs, you need to subset your data. If instead you want to generate expected survival curves conditional on a set of covariates (including age), this is how you do...

You can use the following commands to extract the proportion and the confidence interval from the object prop.ci: # the proportion as.vector(prop.ci) # [1] 0.02185792 # the confidence interval attr(prop.ci, "ci") # 2.5% 97.5% # 0.0006639212 0.1077784084 If you want to access the values of the confidence interval separately, you...

excel,math,confidence-interval

You are conflating two things: (1) with a normal distribution, about 95% of individual values fall in the range mean +/- 2 standard deviations; (2) given the sample mean, what is a confidence interval for the true mean? Excel is telling you that there is a 95% chance that the true mean is in...
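The distinction is easy to check numerically; a quick sketch in R (R rather than Excel, just to keep the example reproducible):

```r
set.seed(1)
x <- rnorm(1000, mean = 10, sd = 2)

# Range covering ~95% of individual values
range_2sd <- mean(x) + c(-2, 2) * sd(x)

# 95% confidence interval for the true mean; Excel's CONFIDENCE function
# returns the margin-of-error half of this calculation
ci_mean <- mean(x) + c(-1.96, 1.96) * sd(x) / sqrt(length(x))

diff(ci_mean) < diff(range_2sd)  # TRUE: the CI for the mean is far narrower
```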

I assume you are looking for something like this: apply(as.matrix(df), 1, function(x){mean(x)+c(-1.96,1.96)*sd(x)/sqrt(length(x))}) Of course you can easily extend the example to confidence levels other than 95%...
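A self-contained version of that one-liner (the df here is invented; one sample per row):

```r
set.seed(1)
df <- as.data.frame(matrix(rnorm(50), nrow = 5))

# Normal-approximation 95% CI for the mean of each row
ci <- apply(as.matrix(df), 1,
            function(x) mean(x) + c(-1.96, 1.96) * sd(x) / sqrt(length(x)))
ci  # a 2 x 5 matrix: row 1 = lower bounds, row 2 = upper bounds
```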

r,function,confidence-interval

Have a look at up and low. The values are far too large for a standard-normal distribution: mean(up) # [1] 81.47071 mean(low) # [1] -81.43904 You probably have an error in your reasoning somewhere in your CI() function. If I understand it correctly, you have a multidimensional standard-normal distribution dat and want to get...

r,ggplot2,predict,confidence-interval

There's a difference between a prediction interval and a confidence interval. Observe predict(LinearModel.2,newdata50,interval="predict") # fit lwr upr # 1 82.24791 72.58054 91.91528 predict(LinearModel.2,newdata50,interval="confidence") # fit lwr upr # 1 82.24791 80.30089 84.19494 ggplot draws the confidence interval, not the prediction interval....
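The difference is easy to reproduce with any fitted lm (LinearModel.2 and newdata50 come from the question, so a toy model stands in here):

```r
set.seed(1)
d <- data.frame(x = 1:30)
d$y <- 2 * d$x + rnorm(30, sd = 5)
fit <- lm(y ~ x, data = d)
nd <- data.frame(x = 15)

pi <- predict(fit, nd, interval = "prediction")  # interval for a new observation
ci <- predict(fit, nd, interval = "confidence")  # interval for the mean response

# Same fitted value, but the prediction interval is always wider
(pi[, "upr"] - pi[, "lwr"]) > (ci[, "upr"] - ci[, "lwr"])  # TRUE
```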

r,ggplot2,prediction,confidence-interval,holtwinters

Typically you call confidence intervals for predictions "prediction intervals". The predict.HoltWinters function will give those to you if you ask for them with prediction.interval=T. So you can do pred <- predict(hw, n.ahead = 10, prediction.interval = TRUE) Now this will change the shape of the values returned. Rather than a...
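A quick sketch on a built-in series (co2 ships with R; hw in the answer is the asker's fitted model):

```r
# Fit Holt-Winters on a built-in seasonal series and predict with intervals
hw <- HoltWinters(co2)
pred <- predict(hw, n.ahead = 10, prediction.interval = TRUE)

colnames(pred)  # "fit" "upr" "lwr" -- three columns instead of one
head(pred)
```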

r,ggplot2,group,confidence-interval

#get the data.frame data <- read.table("DATA.csv",header=T,sep=";") require(ggplot2) #build and plot ggplot object g <- ggplot(data=data,aes(x=exposureperiod,y=coef)) g <- g + facet_grid(.~variable) g <- g + geom_errorbar(aes(ymin=coef_lb,ymax=coef_ub)) g ...

r,plot,statistics,confidence-interval

So, the hard part of this is transforming your data into the right shape, which is why it's nice to share something that really looks like your data, not just a single column. Let's say your data is a matrix with 10,000 rows and 10 columns. I'll just use...

r,data-visualization,confidence-interval,ggplot2,table

Another solution could be to use ReporteRs package using FlexTable API and send the object to a docx document : library( ReporteRs ) data = iris[45:55, ] MyFTable = FlexTable( data = data ) MyFTable[data$Petal.Length < 3, "Species"] = textProperties( color="red" , font.style="italic") MyFTable[data$Sepal.Length < 5, 1:4] = cellProperties( background.color="#999999")...

You could use bootstrapping for this. Simply re-sample your data with the boot package and record the principal components computed each time. Use the resulting empirical distribution to get your confidence intervals. The boot package makes this pretty easy. Here is an example calculating the confidence interval at 95% for...
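A minimal sketch of that approach (the data and the choice of statistic -- the share of variance explained by PC1 -- are made up for illustration):

```r
library(boot)
set.seed(1)
X <- matrix(rnorm(200), ncol = 4)

# Statistic: proportion of variance explained by the first principal component,
# recomputed on each bootstrap resample of the rows
pc1_share <- function(data, idx) {
  p <- prcomp(data[idx, ], scale. = TRUE)
  p$sdev[1]^2 / sum(p$sdev^2)
}

b <- boot(X, pc1_share, R = 500)
boot.ci(b, type = "perc", conf = 0.95)  # 95% percentile bootstrap CI
```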

Your code doesn't run as is, which is why no one has bothered to respond for the last 10 hours. Assuming you mean: component=c("PC1","PC1","PC1","PC1","PC1","PC2","PC2","PC2","PC2","PC2","PC3","PC3","PC3","PC3","PC3") and that you want the 95% CL for the correlation vs. distance, this will provide it: library(ggplot2) ggplot(data1,aes(x=distance,y=correlation,color=component))+ geom_line(size=1)+ geom_point(size=1.5)+ stat_smooth(aes(fill=component), alpha=.2, method=lm, formula=y~1, se=TRUE, level=0.95)+...

You have not said anything about where a and b come from. It is not clear that you are doing any statistics for which confidence bands make sense. If a and b are fixed/known constants then there is no need for confidence bands, or the confidence bands have 0 width...

I found an answer myself: 1) add an indicator column with values of 0 (if within) or 1 (if outside the 90% CI) using: df$IND <- ifelse(df$CONC < df$CI90low | df$CONC > df$CI90hi, 1, 0) 2) calculate the percentage outside by: Percent_Out <- sum(df$IND)/length(df$IND)*100 Note that sum(df$IND) will give the total number of observations outside...
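Put together as a runnable sketch (the data frame is invented; CONC, CI90low and CI90hi are the column names from the question):

```r
set.seed(1)
df <- data.frame(CONC = rnorm(100), CI90low = -1.645, CI90hi = 1.645)

# 1 if the observation falls outside the 90% interval, 0 otherwise
df$IND <- ifelse(df$CONC < df$CI90low | df$CONC > df$CI90hi, 1, 0)

Percent_Out <- sum(df$IND) / length(df$IND) * 100
Percent_Out  # percentage of observations outside the interval
```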