I think this does it:

n <- 1000
x <- rnorm(n,0,1)
y <- rnorm(n,0,1)
z <- rnorm(n,0,1)
dep <- 1 + 2*x + 3*z + rnorm(n,0,1)
m <- step(lm(dep ~ x + y + z), direction="backward")
matt <- attributes(m$terms)
matt$term.labels
#[1] "x" "z"
v <- c("x","y","z")
as.integer(v %in% matt$term.labels)
#[1] 1 0 1
...

The I() function means 'as is', whereas the ^n (to the power of n) operator means 'include these variables and all interactions up to n-way'. This means: I(X^2) literally regresses Y against X squared, whereas X^2 means include X and the 2-way interaction of X, but since...
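As a base-R illustration of the expansion rules (the names y, a, b, x are invented for this sketch; terms() works symbolically, so no data is needed):

```r
# ^2 expands to main effects plus all 2-way interactions
attr(terms(y ~ (a + b)^2), "term.labels")  # "a" "b" "a:b"

# I() protects the expression: the predictor really is X squared
attr(terms(y ~ I(x^2)), "term.labels")     # "I(x^2)"

# With a single variable there is nothing to interact with, so x^2 is just x
attr(terms(y ~ x^2), "term.labels")        # "x"
```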

You should use a GLfloat and not a glm::vec3, but here it is anyway. Note that the uniform name has to be built per index; the string literal "origins[i]" will not be found:

for (int i = 0; i != 10; i++) {
    std::string name = "origins[" + std::to_string(i) + "]";
    GLint originsLoc = glGetUniformLocation(programID, name.c_str());
    glUniform3f(originsLoc, origins[i].x, origins[i].y, origins[i].z);
}
...

I think the error is caused by failing to understand R's syntax for defining a function (and a further error of not knowing that column names such as "month" are not available as global variables). Try instead:

multifactorglm <- function(x){
    glm(rained ~ temp + humidity, data=x, family="binomial")
}

do.call(rbind, do(df,...

You didn't return anything in:

Thing getThingAtIndex(int index) { world.at(index); } // returns garbage!!

Correct it to:

Thing getThingAtIndex(int index) { return world.at(index); } // now returns something
...

There are several possible approaches. First, to evaluate the model out of sample, you have to pick a performance metric. Say it's MSE, and suppose your test set is called test, then you would use: mean((test$response - predict(m, newdata = test, type = "response"))^2) For logistic regression you could calculate...
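A self-contained sketch of that computation (the simulated data and the names m, test are invented here):

```r
set.seed(1)
dat <- data.frame(x = rnorm(200))
dat$response <- 1 + 2 * dat$x + rnorm(200)

train <- dat[1:150, ]    # fit on the first 150 rows
test  <- dat[151:200, ]  # hold out the last 50 rows

m <- glm(response ~ x, data = train, family = gaussian())

# Out-of-sample MSE: mean squared gap between observed and predicted response
mse <- mean((test$response - predict(m, newdata = test, type = "response"))^2)
mse  # should be near the true error variance of 1
```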

r,machine-learning,glm,prediction,random-forest

You need to specify type='response' for this to happen. Check this example:

y <- rep(c(0,1), c(100,100))
x <- runif(200)
df <- data.frame(y,x)
fitgbm <- gbm(y ~ x, data=df, distribution = "bernoulli", n.trees = 100)
predgbm <- predict(fitgbm, df, n.trees=100, type='response')

Too simplistic, but look at the summary of predgbm:

> summary(predgbm)...

I fixed the compile error by using asMETHODPR, rather than asMETHOD: RegisterObjectMethod("Vec3", "Vec3& opAssign(const Vec3 &in)", asMETHODPR(glm::vec3, operator=, (const glm::vec3&), glm::vec3&), asCALL_THISCALL); I also needed to change the opAssign method to return a Vec3&, instead of Vec3. And I changed the GetTranslation() method to return a Vec3& as well: RegisterObjectMethod("Transform",...

One very useful tool is the cross product (from high school analytic geometry). This takes as an input an ordered pair of 3-dimensional vectors v and w, and produces a 3-dimensional vector vxw perpendicular to both, whose length is the area of the parallelogram whose sides are v and w,...
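In coordinates, the definition and the area property read:

```latex
\mathbf{v} \times \mathbf{w}
  = \bigl( v_2 w_3 - v_3 w_2,\; v_3 w_1 - v_1 w_3,\; v_1 w_2 - v_2 w_1 \bigr),
\qquad
\lVert \mathbf{v} \times \mathbf{w} \rVert
  = \lVert \mathbf{v} \rVert \, \lVert \mathbf{w} \rVert \sin\theta,
```

where theta is the angle between v and w, so the length on the right is exactly the area of the parallelogram with sides v and w.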

c++,matrix,camera,transformation,glm

glm::mat4x4 cameraTransformation;
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(alpha) /*alpha*(float)M_PI/180*/, glm::vec3(0, 1, 0));
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(beta) /*beta*(float)M_PI/180*/, glm::vec3(1, 0, 0));

This can be simplified by using matrix multiplication and a different glm call:

glm::mat4x4 cameraTransformation = glm::rotate(glm::radians(alpha), glm::vec3(0,1,0)) * glm::rotate(glm::radians(beta), glm::vec3(1,0,0));

Next:

glm::vec4 cameraPosition =...

Many thanks to the folks in the comments. I'm not really sure what caused this error, but here's what fixed it:

1. Removed a problematic object, res.bestglm
2. Re-installed bestglm
3. Saved the image and closed the project
4. Re-opened the project and loaded packages

I'm not sure why the object in #1 was problematic,...

r,loops,glm,logistic-regression

Rather than messing around with building a formula dynamically, I might suggest subsetting the columns of your data.frame and not bothering with building strings with pluses.

#SAMPLE DATA
train.data <- data.frame(class=sample(1:5, 50, replace=T), matrix(runif(50*12), ncol=12))

library(VGAM)
varlist <- list("X2", c("X8","X2"), c("X8","X2","X11"))
models <- lapply(varlist, function(x) {
    vglm(class ~ ., data = train.data[,...

Since I got the answer from the segmented package maintainer, I decided to share it here. First, update the package to version 0.3-1.0 by

install.packages("segmented", type="source")

After updating, running the same commands leads to:

> Y <- c(13,21,12,11,16,9,7,5,8,8)
> X <- c(74,81,80,79,89,96,69,88,53,72)
> age <- c(50.45194,54.89382,46.52569,44.84934,53.25541,60.16029,50.33870,
+ 51.44643,38.20279,59.76469)
> dat <- data.frame(Y=Y, off.set.term=log(X), age=age)
>...

So, you can fix the problem by forcing dglm to evaluate the call where you input p. In the dglm function, on about line 73: if (family$family == "Tweedie") { tweedie.p <- call$family$var.power } should be: if (family$family == "Tweedie") { tweedie.p <- eval(call$family$var.power) } You can make your own...

c++,opengl,computational-geometry,glm

I think your main issue is actually the z coordinate. When you consider a point on the screen, this will not just specify a point in object space, but a straight line. When you use a perspective projection, you can draw a line from the eye position to any object...

opengl,graphics,linear-algebra,glm,arcball

Set the arcball radius to the distance between the point clicked and the center of the object. In other words the first point is a raycast on the cube and the subsequent points will be raycasts on an imaginary sphere centered on the object and with the above mentioned radius....

Yes, the default is the logit link function. You can find out the link function of a family object using $link: binomial()$link # [1] "logit" ...

Building your own math library for computer graphics is a great way to fully teach yourself all the required concepts. However, GLM has already done this job for you, and its implementations are quite efficient. So, unless you don't like its design philosophy, you'd likely want to use GLM for...

Most of the model components are descriptive and are not necessary for predict to work. A helper function (HT: R-Bloggers) can be used to remove the fat:

stripGlmLR = function(cm) {
  cm$y = c()
  cm$model = c()
  cm$residuals = c()
  cm$fitted.values = c()
  cm$effects = c()
  cm$qr$qr = c()
  cm$linear.predictors...

The following should work. non.part2$p_x1 <- predict(probit, yourDataToPredictOn, type = "response") ...

I received word from the package's developer that this was indeed a bug and that it has been fixed in the pre-release package here, which will presumably be pushed to CRAN in the next iteration -- or when his book is released.

You are using glm(...) incorrectly, which IMO is a much bigger problem than offsets. The main underlying assumption in least squares regression is that the error in the response is normally distributed with constant variance. If the error in Y is normally distributed, then log(Y) most certainly is not. So,...
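A hedged sketch of that distinction (simulated data; the names here are invented): fitting log(Y) with lm() assumes multiplicative, lognormal error, while a gaussian GLM with a log link keeps the error additive on the original scale.

```r
set.seed(42)
x <- runif(100, 1, 3)
# Multiplicative (lognormal) error: log(y) is then exactly normal, as lm assumes
y <- exp(0.5 + 0.8 * x) * exp(rnorm(100, 0, 0.2))
dat <- data.frame(x, y)

fit_lm  <- lm(log(y) ~ x, data = dat)                               # models E[log Y]
fit_glm <- glm(y ~ x, family = gaussian(link = "log"), data = dat)  # models log E[Y]

coef(fit_lm)
coef(fit_glm)  # similar slopes on these data, but very different error assumptions
```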

Although it is a bit strange that na.action=na.omit did not solve the NA problem, I decided to filter out the data.

library(epicalc)  # for lrtest
vars <- c("y", "x1", "x2")  # variables in the model
n.data <- data[, vars]      # filter data
f.model <- glm(data = n.data, formula = y ~ x1 + x2, family = "binomial")
n.model <- update(f.model, . ~ 1)
LR <- lrtest(n.model, f.model)...

You're confused about the difference between a binomial and a negative binomial model; this is a common confusion. For proportions, you should use a binomial (not a negative binomial) model ... model <- glm(cbind(Count, Rest)~ Region*Plasmid, family=binomial, data=initial) or initial <- transform(initial, total=Rest+Count, prop=Count/(Rest+Count)) model <- glm(prop ~ Region*Plasmid, weights=total,...

At least in the current version of glm (0.9.6) there is no glm::scale overload taking three floats as arguments. There is only one overload, which takes the matrix to be scaled and a vector containing the scaling factors. The correct solution for your code sample would (according to here) be...

summary(o)$coefficients[order(summary(o)$coefficients[,4]),]
#              Estimate Std. Error    z value  Pr(>|z|)
#Var1         1.1931750  1.1774564  1.0133497 0.3108932
#Var3        -0.1085742  0.2252867 -0.4819379 0.6298501
#Var2        -0.4337253  1.2724925 -0.3408470 0.7332187
#(Intercept)  0.2177110  1.5984713  0.1361995 0.8916635
...

Window-space ("screen coordinates") Z=1.0 is your far plane. You have solved for the most distant point that projects to (x, height - y). That is not particularly useful most of the time. There is no way to get a single point in world-space from 2D window-space coordinates; projection does not...

Well, you should first look a bit into regression analysis, as has been commented; you have some issues in understanding there. But this is what you want:

obsGroupA <- round(runif(40, 240, 63535))
obsGroupB <- round(runif(40, 2478, 95063))
obsGroupC <- round(runif(40, 3102, 104799))
propGroupA <- obsGroupA/(obsGroupA + obsGroupB + obsGroupC)
propGroupB...

The much better approach here is to not use a pointer at all. glm::vec3 is a fixed size type that probably uses 12 or 16 bytes. I see absolutely no need to use a separate dynamic allocation for it. So where you currently declare your class member as: glm::vec3 *position;...

The problem is that names of the kind s(age3) are not valid R names, so when R sees one it thinks you are trying to evaluate the function p$preplot$s on a variable called age3. You can do this to retrieve these values:

p[[c("preplot", "s(age3)", "y")]]

Hope this works....

python,statistics,glm,statsmodels

There isn't, unfortunately. However, you can roll your own by using the model's hypothesis testing methods on each of the terms. In fact, some of their ANOVA methods do not even use the attribute ssr (which is the model's sum of squared residuals, thus obviously undefined for a binomial GLM)....

I would not be comfortable using glm::vec3 in this way, as I don't recall seeing any documentation specifying its internal layout. The fact that there is a glm::value_ptr(obj) helper defined in type_ptr.hpp makes me even more suspicious. That said, you can inspect its source code and verify for yourself that...

machine-learning,glm,logistic-regression

The main benefit of GLM over logistic regression is overfitting avoidance. GLMs usually try to extract linearity between the input variables and thereby avoid overfitting your model. Overfitting means very good performance on training data and poor performance on test data.

The result from summary.glm(fit)$coefficients is what you want, and it would be a simple matter to change the 8th rowname to the desired character value. Perhaps using write.table would save some of the clunkiness that capture.output imposes (unless, of course, you really do want the wrap-around with the...

You're looking for the lsmeans package. Check it out:

lstrends(mod, specs = c('cat1', 'cat2', 'cat3'), var = 'cont1')

 cat1 cat2 cat3 cont1.trend         SE  df    lower.CL  upper.CL
 a    c    e    0.01199024 0.08441129 984 -0.15365660 0.1776371
 b    c    e    0.01083637 0.08374605 984 -0.15350502 0.1751778
 a    d    e    0.03534914 0.09077290 984 -0.14278157 0.2134799
...

This is not directly saved as a TRUE/FALSE flag in the model object. A way to make this work would be grepl("log", names(m1$model)[[1]]) grepl("log", names(m2$model)[[1]]) which will search for the word "log" in the model part of the lm-object....

What is causing this error is a mistake in the way you specify the formula. This will produce the error:

mod <- glm(mtcars$cyl ~ mtcars$mpg + ., data = mtcars, na.action = "na.exclude")
cv.glm(mtcars, mod, K=11)  # nrow(mtcars) is a multiple of 11

This will not:

mod <- glm(cyl ~ ., data...

machine-learning,glm,statsmodels

Sourceforge is down right now. When it's back up, you should read through the documentation and examples. There are plenty of usage notes for prediction and GLM. How to label your target is up to you and probably a question for cross-validated. Poisson is intended for counts but can be...

Your GetMatrix function is wrong. When you iterate through the loop, you're not skipping over the right number of elements in arr. The first iteration is

arr[0] = modelview[0][0];
arr[1] = modelview[0][1];
arr[2] = modelview[0][2];
arr[3] = modelview[0][3];

and the second goes

arr[1] = modelview[1][0];
arr[2] = modelview[1][1];
arr[3]...

Given it's still drawing, rotation in the shader is probably a valid matrix. If it were an issue with the uniform, it'd probably be all zeroes and nothing would draw. As @genpfault says, ctm needs to be initialized:

ctm = glm::mat4_cast(rotation);

See: Converting glm quaternion to rotation matrix and using...

I would probably recommend using predict() for this. The intercept is just the value at x=0, and the slope is the difference between the values at x=1 and x=0. So you can do

int <- predict(m, cbind(groups, x=0))
t1 <- predict(m, cbind(groups, x=1))
data.frame(group=groups$groups, int=int, slope=t1-int)

You didn't set a seed...

c++,opengl,directory,directory-structure,glm

GLM is not part of OpenGL. It's a C++ math library that has much of the same syntax as GLSL. In order to use it you need to download it from here or install it using your package manager (although if you don't have administrative rights on this machine, then...

r,loops,global-variables,environment-variables,glm

Try this example:

#dummy data
set.seed(123)
df <- data.frame(
  id=rep(c(1,2,3),10),
  response_var=rep(c(1,2),15),
  variableA=runif(30),
  variableB=runif(30),
  variableC=runif(30))

#split by id
df_list <- split(df, df$id)

#loop through every id
do.call(rbind, lapply(df_list, function(x){
  fit <- glm(response_var ~ variableA + variableB + variableC, family=gaussian(), data=x)
  coef(fit)
}))

#output
# (Intercept) variableA variableB variableC
# 1 0.630746 1.4443321...

Without looking at your data, I'm going to guess that you have close to complete separation of your response classes on your welfare variable. An estimate of (+/-) 13 on the logistic scale is essentially (+/-) infinity, which corresponds to estimated probabilities of zero or one. Julia's estimate of -9.9...

You are trying to get an idea of the in sample fit using a confusion matrix. Your first approach using the glm() function is fine. The problem with the second approach using train() lies in the returned object. You are trying to extract the in sample fitted values from it...

Take a cross-section of the viewing frustum (the blue circle is your mouse position). Theta is half of your FOV, and p is your projection plane distance (don't worry - it will cancel out). From simple ratios it is clear that: But from simple trigonometry So ... Just calculate the...
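The formulas elided above follow the standard screen-to-ray derivation; one common form (the symbol names here are mine, assuming normalized device coordinates in [-1, 1], vertical field of view fov, and aspect ratio a) is:

```latex
\tan\theta = \tan\!\Bigl(\frac{\mathrm{fov}}{2}\Bigr), \qquad
y_{\text{view}} = y_{\text{ndc}} \tan\theta, \qquad
x_{\text{view}} = x_{\text{ndc}} \, a \tan\theta,
```

so the view-space ray direction is (x_view, y_view, -1); the projection-plane distance p multiplies every component and cancels when the direction is normalized.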

The model fit object returned by glm() records the row numbers of the data that it excludes for their incompleteness. They are a bit buried, but you can retrieve them like this:

## Example data.frame with some missing data
df <- mtcars[1:6, 1:5]
df[cbind(1:5,1:5)] <- NA
df
# mpg cyl...

Here is the source code of pscl:::pR2.glm:

function (object, ...)
{
    llh <- logLik(object)
    objectNull <- update(object, ~1)
    llhNull <- logLik(objectNull)
    n <- dim(object$model)[1]
    pR2Work(llh, llhNull, n)
}
<environment: namespace:pscl>

If the offset is specified in the formula, it gets lost in the second line (update to compute the intercept-only...

I don't think you can fit a model where some of the independent variables have fixed parameters. What you can do is create a new variable y2 that equals the predicted value of your first model with x1+x2+x3. Then, you can fit a second model y~y2+x4 to include it as...

You want to use the confint function (which in this case will call the MASS:::confint.glm method), as in:

confint(Fit)

Since the standard errors in the model scale linearly with linear changes in the scale of the variable 'Exposure' in your model, you can simply multiply the confidence interval by...
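A small sketch of why that rescaling works (simulated data; the names y and Exposure are invented here): multiplying a predictor by a constant c divides its coefficient, and hence its profile confidence interval, by c.

```r
library(MASS)  # provides the profile-likelihood confint method for glm

set.seed(7)
Exposure <- runif(100, 0, 10)
y <- rbinom(100, 1, plogis(-1 + 0.3 * Exposure))

fit1 <- glm(y ~ Exposure, family = binomial)
fit2 <- glm(y ~ I(Exposure / 100), family = binomial)  # same model, rescaled predictor

ci1 <- suppressMessages(confint(fit1))
ci2 <- suppressMessages(confint(fit2))

ci1["Exposure", ]               # profile CI on the original scale
ci2["I(Exposure/100)", ] / 100  # matches after undoing the rescaling
```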

Try formula: formula(res.best.logistic$BestModel) ...

Here's a quick hack of a class of linear functions. I'm fairly sure something better must exist somewhere... But anyway:

linear <- function(betas){
  betas = matrix(betas, ncol=1)
  ret = list(
    pred = function(z){ (cbind(1,z) %*% betas)[,1] }
  )
  class(ret) = "linear"
  ret
}

predict.linear <- function(object, newdata, ...){
  object$pred(newdata)
}

Then you...

simplify in terms.formula does the opposite to what you think it does. You actually want simplify = FALSE, but there's no way to do that using the default stats::update.formula. Here's a version that does what you want. Note that the default method has just been changed to use my version...

Your lines

zeros[i] <- 0
zeros[i] ~ dpois(zeros.mean[i])

cause a problem. In JAGS you can't redefine the given value of a variable. I think you should drop the line

zeros[i] <- 0

from your code...

r,statistics,glm,categorical-data

Don't convert your categorical variable into numeric variables - this will create a very different model (and your attempts would not have worked anyway). There is no such thing as a "regression" estimate for the entire variable. If a categorical variable has n categories, the standard approach will create n-1...
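A base-R illustration of that n-1 coding (the factor f is invented for the example):

```r
f <- factor(c("a", "b", "c", "a", "b"))  # categorical variable with n = 3 categories
X <- model.matrix(~ f)                   # the design matrix R builds behind the scenes
colnames(X)  # "(Intercept)" "fb" "fc" -- the n - 1 = 2 dummy columns plus the intercept
X
```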

c++,opengl,vector,parameters,glm

As far as I can judge from your code, the issue could arise if the vector points that is passed as an argument is empty (contains no elements). In that case, referencing the element with index zero, as you do on the first line, is invalid. What you should...

You need to be more careful in matching up the variable names used in the model and those used during prediction. The error you are getting is because the names in the data.frame passed to the predict function do not match the names of the terms in your model, so you're...

If you want binary responses, you need to decide on a cutoff value -- this is not at all trivial (there is a whole statistical literature about ROC [receiver-operator curves] and the tradeoff between sensitivity and specificity), but a reasonable default option is to choose 0.5. Data: dat <- data.frame(a...
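A minimal sketch of applying that 0.5 cutoff (simulated data; all names here are invented):

```r
set.seed(101)
d <- data.frame(x = rnorm(200))
d$y <- rbinom(200, 1, plogis(d$x))

fit <- glm(y ~ x, family = binomial, data = d)

p <- predict(fit, type = "response")  # fitted probabilities in (0, 1)
yhat <- ifelse(p > 0.5, 1, 0)         # binary predictions via the 0.5 cutoff
table(observed = d$y, predicted = yhat)
```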

If you are talking about the interpretation of the glm() output and remain on the log-odds scale, then it is exactly analogous to how you would interpret the output from lm(). In both cases it is better to talk about predictions than to try to interpret the coefficients separately. When...