r,if-statement,dataframes,lapply

Just putting what everyone has said into a full answer: df1 <- data.frame(var1 = c(1,2,3), var2 = c(1,2,3)) df2 <- data.frame(var1 = c(1,2,3), var2 = c(1,2,3)) df3 <- data.frame(var1 = c(1,2,3), var2 = c(1,2,3), var3 = c(1,2,3)) dframes <- list(df1, df2, df3) dframes_fmt <- lapply(dframes, function(df) { if(! "var3" %in% colnames(df)) {...

I believe you want to use apply rather than lapply: lapply applies a function over a list, while apply works over the margins (rows or columns) of a matrix or data frame. Try this: Null_Counter <- apply(indata, 2, function(x) length(which(x == "" | is.na(x) | x == "NA" | x == "-999" | x == "0"))/length(x)) Null_Name <- colnames(indata)[Null_Counter >= 0.3] ...
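To make the apply() call concrete, here is a minimal, self-contained sketch with a made-up indata (the column names and the 30% threshold are illustrative only):

```r
# Hypothetical data frame with "null-like" entries in some columns
indata <- data.frame(a = c(1, NA, "", 4),
                     b = c("x", "y", "z", "w"),
                     c = c("-999", "0", "0", "0"),
                     stringsAsFactors = FALSE)

# MARGIN = 2 means "by column": fraction of null-like entries per column
Null_Counter <- apply(indata, 2, function(x)
  length(which(x == "" | is.na(x) | x == "NA" | x == "-999" | x == "0")) / length(x))

# Columns where at least 30% of the values are null-like
Null_Name <- colnames(indata)[Null_Counter >= 0.3]
Null_Name  # "a" "c"
```

Note that apply() coerces the data frame to a character matrix first, which is exactly why the string comparisons above work across mixed-type columns.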

Your function returns the data frames unchanged because df is the last thing evaluated in your function. Instead of: columnselect <- function(df){ df[,c("Block","Name","F635.Mean","F532.Mean","B635.Mean","B532")] df} It should be: columnselect <- function(df){ df[,c("Block","Name","F635.Mean","F532.Mean","B635.Mean","B532")] } Having the bare df as the last line of your function simply returned the full df that you passed into the function. As for the second...
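The return-the-last-expression rule is easy to demonstrate with a small made-up example:

```r
# R functions return the value of the last evaluated expression
keep_x <- function(df) {
  df[, "x", drop = FALSE]   # last expression: the subset is returned
}

broken_keep_x <- function(df) {
  df[, "x", drop = FALSE]   # result computed, then discarded...
  df                        # ...because this bare df is what gets returned
}

d <- data.frame(x = 1:3, y = 4:6)
ncol(keep_x(d))         # 1
ncol(broken_keep_x(d))  # 2: the full data frame came back
```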

I would keep the datasets in the list rather than updating the dataset objects in the global environment as most of the operations can be done within the list (including reading the files and writing to output files ). But, if you insist, we can use list2env which will update...
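A minimal list2env() sketch, using hypothetical names df1/df2:

```r
# A named list of data frames; the names become the object names
dframes <- list(df1 = data.frame(a = 1:2),
                df2 = data.frame(a = 3:4))

# Copy each list element into the global environment as its own object
list2env(dframes, envir = .GlobalEnv)

exists("df1")  # TRUE: df1 now lives in the global environment
```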

You can loop using names of the list object and save lapply(names(mylistdf), function(x) { x1 <- mylistdf[[x]] save(x1, file=paste0(getwd(),'/', x, '.RData')) }) ...

Just use a for loop to add the names. There's probably a fancy *apply way, but for is easy to use, remember, and understand. Start by adding names: names(mylist) = paste0("G", seq(from = 100, by = 1, length.out = length(mylist))) Add SITE column as before: for (i in seq_along(mylist)) {...

You can use Map Map(cbind, a, b) ...
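For example, with two parallel lists of data frames (made-up data), Map() pairs up the elements and cbinds each pair:

```r
a <- list(data.frame(x = 1:2), data.frame(x = 3:4))
b <- list(data.frame(y = 5:6), data.frame(y = 7:8))

# Element-wise cbind: ab[[i]] is cbind(a[[i]], b[[i]])
ab <- Map(cbind, a, b)
ab[[1]]
#   x y
# 1 1 5
# 2 2 6
```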

As in this answer, just use paste: a <- matrix(5, nrow=5, ncol=5) for (i in 1:10) { a <- a + 1 write.csv(a, paste("a",i,".csv",sep="")) } ...

Try this indx <- gsub("\\D", "", grep("A_X|B_X", names(DF), value = TRUE)) # Retrieving indexes indx2 <- DF[grep("A_X|B_X", names(DF))] # Considering only the columns of interest DF[paste0("D_X", unique(indx))] <- sapply(unique(indx), function(x) rowSums(indx2[which(indx == x)])*DF$Var) DF # A_X01 A_X02 B_X01 B_X02 C_X01 C_X02 Var D_X01 D_X02 # 1 34 2 24 4...

We could flatten the list elements do.call(c,..) get the number of columns (ncol) for each list element ("indx"), use this to split the list, rbindlist the resulting elements. library(data.table) my_list1 <- do.call(`c`, my_list) indx <- sapply(my_list1, ncol) lst <- lapply(split(my_list1, indx), rbindlist) lst #$`2` # SKU PROMOCIONES #1: 2069060020005P PROMOCIONES...

If I understand what you are trying to do correctly, you really only need to call sprintf once with all your columns to do the formatting. For example: writeData <- function(DataSet, FirstLine, FmtString, fName){ outLines <- do.call("sprintf", c(FmtString, DataSet)) writeLines(outLines, fName) return(0) } Most functions in R are meant to work with vectors...

You can use do.call to treat your list like a bunch of parameters to rbind. do.call("rbind", lista) Or you could use Reduce to bind them in one at a time Reduce(rbind, lista) ...
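Both approaches on a small made-up list:

```r
lista <- list(data.frame(x = 1), data.frame(x = 2), data.frame(x = 3))

all1 <- do.call("rbind", lista)  # one rbind() call with three arguments
all2 <- Reduce(rbind, lista)     # rbind() applied pairwise, left to right

nrow(all1)  # 3
```

do.call is usually preferable here: it binds everything in one call, whereas Reduce grows the result incrementally and copies more data as the list gets long.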

You can use the subset operator [<-: x <- texts is1997 <- str_detect(names(texts), "1997") x[is1997] <- lapply(texts[is1997], str_extract, regexp) x # $AB1997R.txt # [1] "abcdef" # # $BG2000S.txt # [1] "mnopqrstuvwxyz" # # $MN1999R.txt # [1] "ghijklmnopqrs" # # $DC1997S.txt # [1] "abcdef" # ...

r,parallel-processing,lapply,sapply

The ?mclapply help page says that this is possible (argument SIMPLIFY), although only for mcmapply. As you've already figured out, (mc)mapply with only one object passed is a special case and is equivalent to (mc)lapply.

I know code-only answers are discouraged, but I thought you were almost there and just needed a nudge to use the formula function (and to include the_response in the substitution): test.func<-function(some_data,the_response,the_predictors) { lapply(the_predictors,function(a) {print( form<- formula(substitute(resp~i, list(resp=as.name(the_response), i=as.name(a))))) glm(form, data=some_data,family=binomial) }) } Test: > test.func(df,dependent,independent) outcome ~ person...

Try DT[,list(count=length(unique(Locale))),by=c("Locus","Cohort")] Trying to sum the unique values of Locale when you want the length of the vector of unique values....

Here is the solution: For each group, identify subgroups using cut and drop the absentee subgroups using droplevels. Allocate weights as (x/2^n)/freq. Then identify the minimum weights and adjust them such that the weights in a group add up to 1. dat <- read.table("clipboard", header = T) groupIDs <- unique(dat$GroupID)...

Try: do.call(rbind, lapply(1:ncol(df1), function(m) USER_FUNCTION(df1[[m]],df2[[m]],x,y,z,l,m))) ...

You may need to use lapply(mferg_matrix, function(x) { x[] <- lapply( lapply(x,gsub, pattern = ",", replacement= "."), as.numeric) x}) data mferg_matrix <- list(structure(list(X2 = c("1,606E-07", "2,883E-07"), X3 = c("1,642E-07","2,789E-07"), X4 = c("1,731E-07", "2,554E-07")), .Names = c("X2", "X3", "X4"), class = "data.frame", row.names = c("84", "85")), structure(list(X2 = c("1,606E-07", "2,883E-07"), X3...

Reduce() works nicely with merge() on a list. Reduce(function(...) merge(..., by = "Group.1", all = TRUE), lst) # Group.1 x.x x.y x # 1 1 2000 NA 2000 # 2 3 4000 400 NA # 3 5 5000 500 2500 ...
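A self-contained sketch of the same pattern; the lst below is invented to reproduce the output shown above, with keys missing from some of the data frames:

```r
lst <- list(data.frame(Group.1 = c(1, 3, 5), x = c(2000, 4000, 5000)),
            data.frame(Group.1 = c(3, 5),    x = c(400, 500)),
            data.frame(Group.1 = c(1, 5),    x = c(2000, 2500)))

# merge() only takes two data frames at a time; Reduce() chains it over the list
merged <- Reduce(function(...) merge(..., by = "Group.1", all = TRUE), lst)
merged
#   Group.1  x.x x.y    x
# 1       1 2000  NA 2000
# 2       3 4000 400   NA
# 3       5 5000 500 2500
```

all = TRUE makes it a full outer join: rows whose key is absent from some data frames are kept, with NA filling the gaps.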

The issue with your code is that when you call cor on a whole data.frame (of all numeric columns), it will return a correlation matrix, containing the pairwise correlations of all columns - with the values on the diagonals being the respective column's correlation with itself (which is always equal...

r,parallel-processing,lapply,snow,snowfall

The problem is that the forecast package isn't loaded on the cluster workers, which causes lapply to iterate over the ts objects incorrectly. You can load forecast on the workers using clusterEvalQ: clusterEvalQ(clus, library(forecast)) To answer your second question, your attempt at nested parallelism failed because the workers don't have...

This is similar to the discussion on subset (In R, why is `[` better than `subset`?) in the sense that transform is meant for interactive use. Because your usage is more programmatic here (you are passing variable names via objects), you had better move away from transform and start using [[ to access...

r,nested,time-series,lapply,sapply

Using plyr: As a matrix (time in cols, rows corresponding to rows of df): aaply(df, 1, function(x) weisurv(t, x$sc, x$shp), .expand = FALSE) As a list: alply(df, 1, function(x) weisurv(t, x$sc, x$shp)) As a data frame (structure as per matrix above): adply(df, 1, function(x) setNames(weisurv(t, x$sc, x$shp), t)) As a...

r,loops,for-loop,rstudio,lapply

This seems simple enough. I didn't test as we weren't given data but you should get the gist. myFun<-function(col) { myDF$multiple[grep(" Mbps",myDF[,col])] <- 1000000 myDF[,col] <- gsub(" Mbps","",myDF[,col]) myDF$multiple[grep(" Kbps",myDF[,col])] <- 1000 myDF[,col] <- gsub(" Kbps","",myDF[,col]) myDF$multiple[grep(" bps",myDF[,col])] <- 1 myDF[,col] <- gsub(" bps","",myDF[,col]) myDF[,col] <- as.numeric(myDF[,col]) * myDF$multiple }...

I would melt your "all.data" list and then dcast it to a wide form. Something like: ## Sample data set1 <- set2 <- data.frame(sol1 = c("s1", "s1", "s1", "s1"), sol2 = c("s2", "s3", "s4", "s5"), Istat = c(0.435, 0.456, 0.845, 0.234)) set2$Istat <- set2$Istat + 1 ## Just to see...

r,error-handling,xts,lapply,quantmod

You need to put the try block inside your function: quotes <- lapply(tickers, function(x) try(getSymbols(x, ...))) Note we use the simpler try here. If there is an error, your quotes object will contain an object of class try-error at the location of the element that caused the error....
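A self-contained illustration of the pattern, with a stand-in for getSymbols (the fetch function and the tickers are made up):

```r
# Stand-in for getSymbols(): fails for one "ticker"
fetch <- function(x) if (x == "BAD") stop("not found") else nchar(x)

tickers <- c("AAPL", "BAD", "MSFT")
quotes <- lapply(tickers, function(x) try(fetch(x), silent = TRUE))

# Failed elements carry class "try-error"; the rest are normal results
ok <- !vapply(quotes, inherits, logical(1), what = "try-error")
ok  # TRUE FALSE TRUE
```

You can then drop or retry the failures via quotes[ok] without losing the successful downloads.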

Using base R xtabs(values~V1+V2, transform(stack(S), V2=sub('\\..*', '', ind), V1=sub('.*\\.', '', ind))) # V2 #V1 v31 v32 v33 # v41 0.50 0.25 0.35 # v42 0.50 0.25 0.35 # v43 0.50 0.25 0.35 data S <- structure(c(0.5, 0.25, 0.35, 0.5, 0.25, 0.35, 0.5, 0.25, 0.35 ), .Names = c("v31.v41", "v32.v41", "v33.v41",...

apply takes matrix arguments. data.frames will be coerced to matrix before anything else is done. Hence the conversion of everything to the same type (character). lapply takes list arguments. Therefore it coerces the data.frame to a list and does not have to convert the arguments. ...
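You can see the difference directly with a toy data frame:

```r
df <- data.frame(n = 1:3, s = c("a", "b", "c"), stringsAsFactors = FALSE)

# apply() coerces the data.frame to a matrix first: everything becomes character
cls_apply <- apply(df, 2, class)
cls_apply     # both columns report "character"

# lapply() treats the data.frame as a list of columns: the types survive
cls_lapply <- lapply(df, class)
cls_lapply$n  # "integer"
```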

This is a bit akward, but your function might return the matrix instead of just the names. lst <- lapply(lst, function(x) {colnames(x) <- LETTERS[1:5];x}) ...

Using data.table library(data.table) dcast(data.table(produce), produce~produce)[veggies] produce apple blueberry corn horseradish rutabega tomato #1: carrot NA NA NA NA NA NA #2: corn 0 0 1 0 0 0 #3: horseradish 0 0 0 1 0 0 #4: rutabega 0 0 0 0 2 0 ...

You are doing two expensive things in an otherwise reasonable algorithm: You are recreating a matrix from your list for every iteration; this is likely slow You are recomputing the entire row sums repeatedly, when in reality you just need to calculate the marginal changes Here is an alternative. We...

r,parallel-processing,lapply,mclapply

This is an update of my related answer. library(parallel) finalResult <- local({ f <- fifo(tempfile(), open="w+b", blocking=T) if (inherits(parallel:::mcfork(), "masterProcess")) { # Child progress <- 0.0 while (progress < 1 && !isIncomplete(f)) { msg <- readBin(f, "double") progress <- progress + as.numeric(msg) cat(sprintf("Progress: %.2f%%\n", progress * 100)) } parallel:::mcexit() }...

r,loops,merge,excel-2007,lapply

Try library(tools) source <- file_path_sans_ext(filelist) files1 <- Map(cbind, files, Source=source) files1 #[[1]] # Col1 Col2 Source #1 A 0.5365853 Filname1 #2 A 0.4196231 Filname1 #[[2]] # Col1 Col2 Source #1 A 0.847460 Filname2 #2 C 0.266022 Filname2 #[[3]] # Col1 Col2 Source #1 C -0.4664951 Filname3 #2 C -0.8483700 Filname3...

You were almost there: lapply(x, function(z) z[! (z %in% bad.words)]) Alternatively, you could do lapply(x, function(z) setdiff(z,bad.words)) which seems more elegant to me....

Remove the split column first: split(DF[-1], DF[[1]]) or split(subset(DF, select = -A), DF$A) Update: Added last line....
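For instance, with a small made-up DF whose first column A holds the groups:

```r
DF <- data.frame(A = c("g1", "g1", "g2"), x = 1:3, y = 4:6)

# DF[-1] drops the grouping column, so it doesn't reappear inside each piece
pieces <- split(DF[-1], DF[[1]])
pieces$g1
#   x y
# 1 1 4
# 2 2 5
```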

Since list.files() just returns a character vector, you can use function like grep to search for particular values in the list. If you want to find files in "SubFolderB" (and you don't want to just re-run list.files() in that directory), you can do foldB <- grep("/SubFolderB/", SF, value=T) foldB #...

You can use cbind to add the new column to the data frames in the list: lapply(l, function(x) cbind(x, x[,2]*2)) # [[1]] # c.1..2..3. c.2..3..4. x[, 2] * 2 # 1 1 2 4 # 2 2 3 6 # 3 3 4 8 # # [[2]] # c.7..8..9. c.5..6..2....

Assuming you meant for both elements of test to contain a 3-columned matrix, you can use mapply() and provide separately the list and the list's names: test <- list("a" = matrix(1, ncol = 3), "b" = matrix(2, ncol = 3)) make_df <- function(x, y) { output <- data.frame(x) names(output) <-...

As you can read in ?jpeg you can use a filename with a "C integer format" and jpeg will create a new file for each plot, e.g.: jpeg(filename="Rplot%03d.jpeg") plot(1:10) # will become Rplot001.jpeg plot(11:20) # will become Rplot002.jpeg dev.off() In your case the following should work: jpeg(filename="Rplot%03d.jpeg") lapply(L1, gc) dev.off()...

You could try data_list <- lapply(data_list, function(x) {x$year <- substr(x$year, 1,4) x}) ...

r,function,matrix,lapply,cbind

First of all, in general, var <- expr evaluates the R expression expr and assigns the result to the variable var. If the statement occurs inside a function, then var becomes a function-local variable, otherwise, it becomes a global variable. c(0,0,0,1,0,1,0,1,1,1,1,0) Combines 12 double literals into a double vector in...

r,parallel-processing,lapply,mclapply

The problem is the <<- which is bad practice in general as far as I can gather. The function can be rearranged thusly: readNhist <- function(n,mconst) { l <- raster(filename, varname=var, band=n, na.rm=T) gain(l) <- mconst hist <- hist(l, plot=F, breaks=histbreaks,right=FALSE) return(hist) } And called like this: hists <- mclapply(...

The problem with the lapply(unique(dat$site), get_maf, data = dat) expression is that it tries to pass two arguments to get_maf: the first comes from lapply, and the second comes from data=dat. You can fix it like this: lapply(unique(dat$site), function(s) get_maf(data = dat[dat$site == s, ])). Alternatively, you can use library(dplyr) dat %>% group_by(site) %>% get_maf PS:...

r,functional-programming,lapply,sapply

lapply returns a list by default: From documentation: lapply returns a list of the same length as X, each element of which is the result of applying FUN to the corresponding element of X. sapply returns a vector by default: From documentation: sapply is a user-friendly version and wrapper of...
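The difference in one toy example:

```r
x <- list(a = 1:3, b = 4:6)

lapply(x, sum)  # a list: $a is 6, $b is 15
sapply(x, sum)  # simplified to a named vector: a = 6, b = 15
```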

You were close. After indexing, the elements can be collapsed to a single string. Here, I am using a wrapper (toString) for paste(., collapse=', ') f1 <- function(x,y) toString(x[y]) mapply(f1,x,indices) #[1] "a, b" "a, c" ...

I don't understand the final outcome but confusion matrices would be obtained by the following. library(caret) set.seed(10) dat <- data.frame(Group = c(rep(1, 10), rep(2, 10), rep(3, 10)), Actual = round(runif(30, 0, 1)), Preds1 = round(runif(30, 0, 1)), Preds2 = round(runif(30, 0, 1)), Preds3 = round(runif(30, 0, 1))) dat[] <- lapply(dat,...

data.table is a good package for stuff like this, especially since it has the super-fast fread() function to read files. Something like this should give you a data table (which is also a data frame): data1 <- rbindlist(lapply(dir(), fread)) ...

Try looping over the colnames or the index of columns lapply(seq_along(samp), function(i) samp[,i,with=FALSE]) ...

You could try by assigning the output to a new object (spt2) or update the old object (spt1) and then use write.table spt2 <- lapply(spt1, fun) lapply(names(spt2), function(x) {write.table(spt2[[x]], file = paste("met", x, ".met", sep = ""),append=T, sep=paste(rep(" ",6), collapse=""), quote=F, row.names = FALSE, col.names=FALSE)}) head(read.table('met1985.met', header=FALSE),3) # V1 V2...

bucket declared outside of the function and bucket inside of the function are not necessarily the same thing. When inside the function, your call of bucket <- c(bucket, genelist.info.u[x, "Gene"]) updates the bucket in that function. Because you do not return bucket at the end, the one you initialized at...

Maybe not the one liner you want but this works for me # Add a column to each data frame with the row index for (i in seq_along(mylist)) { mylist[[i]]$rowID <- 1:nrow(mylist[[i]]) } # Stick all the data frames into one single data frame allData <- do.call(rbind, mylist) # Split...

r,web-scraping,readline,lapply

Again, a decorator is one possible good way to go: strongify <- function(f) { function(...) { tryCatch(f(...), error = function(e) NA) } } strongReadLines <- strongify(readLines) player1_html <- lapply(player1, strongReadLines) Giving you NA when an error occurs. Obviously the function you decorate should not itself return NA ...or pimp your decorator!...

If v is the vector set.seed(24) v+floor(runif(N, min=-4, max=4)) #[1] -1 2 4 8 which is the same as set.seed(24) for(i in 1:N){ v[i] <- v[i]+ floor(runif(1, min = -4, max = 4)) } v #[1] -1 2 4 8 If you need an apply family solution set.seed(24) mapply(`+`, v, floor(runif(N,...

I made some minor changes to your function. You should just return the object and save the result of the function rather than using <<- #example data element1 <- c("control", "control", "variation", "variation") element2 <- c("control", "variation", "variation", "control") element3 <- c("variation", "control", "variation", "variation") metric <- c(10,15,20,25) other <-...

I think this is what you want library(dplyr) agg.measurements <- df %>% group_by(firstMeasurement) %>% summarise(records=n()) That should do it for the one. ...

You can build the list using the foreach package. The foreach function will return you a list and you can assign its names afterwards. library(foreach) vectorNames <- seq(from=5, to=50, by=5) outlist <- foreach(p = vectorNames) %do% { x <- c(rnorm(20)) min <- min(x) max <- max(x) median <- median(x) quant...

This seems to achieve what you want lapply(d, function(x){x[2:3][x[2:3]>100] <-100;x}) ...

You can try Map Map(cbind, ls, text_id=names(ls)) ...

In this case, an anonymous function seems to make the most sense: > lapply(dat.fake,function(x) is(x)[1]) $x [1] "numeric" $y [1] "character" $z [1] "factor" To answer the question posed in the title, use basic extraction: > temp <- lapply(dat.fake, is) > lapply(temp, "[", 1) $x [1] "numeric" $y [1] "character"...

You can do this in a single gsub using back references (the parenthesised parts of the pattern expression). x <- names(df)[17:26] gsub( "X([0-9]+)." , "Reach\\1" , x ) # [1] "Reach1" "Reach2" "Reach3" "Reach4" "Reach5" "Reach6" "Reach7" "Reach8" "Reach9" "Reach10" We match the digits in your names vector using [0-9]+...

Still not sure I completely get the problem. Really sorry if I missed some point. I added the set number for x_val in the data.frame's called set_nbr. I modified the test data creation to get a full list like this: data.list <- lapply(seq(3.2,8,0.2), function(x) { nrep <- sample(10:20, 1) numbers<-c(seq(1,-1,length.out...

Try listAB <- Map(`rbind`, listA, listB) sapply(listAB, dim) # x y #[1,] 15 15 #[2,] 5 5 ...

You could do: library(dplyr) df %>% # create an hypothetical "customer.name" column mutate(customer.name = sample(LETTERS[1:10], size = n(), replace = TRUE)) %>% # group data by "Parcel.." group_by(Parcel..) %>% # apply sum() to the selected columns mutate_each(funs(sum(.)), one_of("X.11", "X.13", "X.15", "num_units")) %>% # likewise for mean() mutate_each(funs(mean(.)), one_of("Acres", "Ttl_sq_ft", "Mtr.Size"))...

Here is a full implementation of the suggestion I made in the comments. First we simulate some data: listOfDataframes<- list( df1 = data.frame(X = runif(100), Y = runif(100), Z = runif(100)), df2 = data.frame(X = runif(100), Y = runif(100), Z = runif(100)), df3 = data.frame(X = runif(100), Y = runif(100),...

You just need an if/else in your function: rankall <- function(rank) { split_by_state <- split(df, df$state) ranked_hospitals <- lapply(split_by_state, function(x) { indx <- x$rank == rank if (any(indx)) { return(x[indx, ]) } else { out <- x[1, ] out$hospital <- NA return(out) } }) } ...

If I understand correctly you want to evaluate the first expression with the first value of x, the second with the second etc. You could do: mapply(function(ex, x) eval(ex, envir = list(x = x)), funs.list[1:2], c(7, 60)) ...

df.meeshu has the output of dput. Your code didn't have anything about converting your csv file to the dput output, so I just used the dput output. Use your own data frame, which is finaltab, in place of df.meeshu: df.list.select <- lapply(df.meeshu, function(x) x[4,]) df.select <- do.call("rbind", df.list.select) head(df.select) You can also...

Try drop=FALSE. When there is a single column, you can keep the structure intact with drop=FALSE as it can change from matrix to vector by dropping the dimension attribute. Reduce(function(a,b){ ab <- cbind(a, b[match(rownames(a), rownames(b)), ,drop=FALSE]) ab <- ab[order(rownames(ab)), ] ab },Filter(is.matrix, lst)) # a b c d e #X2005.06...

Try lst <- lapply(df2, function(x) {df1[difftime(df1[,1], x - days(365)) >= 0 & difftime(df1[,1], x) <= 0, ]}) n1 <- max(sapply(lst, nrow)) output <- data.frame(lapply(lst, function(x) x[seq_len(n1),])) ...

r,list,data.table,lapply,set-operations

This is kind of like a graph problem so I like to use the igraph library for this, using your sample data, you can do library(igraph) #build edgelist el <- do.call("rbind",lapply(data, embed, 2)) #make a graph gg <- graph.edgelist(el, directed=F) #partition the graph into disjoint sets split(V(gg)$name, clusters(gg)$membership) # $`1`...

You may use mapply. It can take corresponding elements from both the vector and the list and divide them with /. In this case, there is only a single element for 'denominator', so it will be recycled. If the lengths of the elements are the same in both 'numerator' and 'denominator', the corresponding...

Update for your Toy Example You need to subset both sides of your assignment, and also convert your conditions to logical subsetting vectors. logical1 <- !is.na(test1[match(test$A, test1$A),2]) # TRUE/FALSE logical2 <- !is.na(test1[match(test$A, test2$A),2]) test[logical1,] <- test1[logical1,] # selects only TRUE rows test[logical2,] <- test2[logical2,] I recommend you look at each...

Update : Here's the easiest dplyr method I've found so far. And I'll add a stringi function to speed things up. Provided there are no identical sentences in df$text, we can group by that column and then apply mutate() Note: Package versions are dplyr 0.4.1 and stringi 0.4.1 library(dplyr) library(stringi)...

If there are no gaps between groups, i.e. after each "e" follows an "a" for the next group, you can use cumsum easily: df$x.3 <- cumsum(df$x.1 == "a") df # x.1 x.2 x.3 #1 a 1 1 #3 c 6 1 #4 d 6 1 #5 e 9 1 #6...
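The trick in isolation, on a made-up vector:

```r
# Each "a" starts a new group; cumsum over the logical gives the group id
x <- c("a", "c", "e", "a", "b", "e")
grp <- cumsum(x == "a")
grp  # 1 1 1 2 2 2
```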

Try df$xx <- sapply(df$last2, function(x) toString((stri_split_fixed(x, ", ")[[1]] == "00") + 0)) df # dir E_numdir last2 xx # 1 a 1 1 0 # 2 PJE INDEPENDENCIA 96 5 96, 5 96, 5 0, 0 # 3 PJE INDEPENDENCIA 96 5 96, 5 96, 5 0, 0 # 4...

You just need the header for this. So I would do this: library("tuneR") filist <- list.files("dir", recursive=TRUE, pattern="\\.wav$", full.names = TRUE) file_len <- function(fil) { if (file.info(fil)$size != 0) { wavHeader <- readWave(fil, header = TRUE) wavHeader$samples / wavHeader$sample.rate } else { 0 } } len_file <- sapply(filist, file_len) I've...

This should get you close. I'll use a new vector of file names so we don't overwrite your current files. myfiles <- list.files(pattern = "\\.csv$") ## make a vector of new file names 'cat*.csv' where * is 1:length(myfiles) newfiles <- sprintf("cat%d.csv", seq_along(myfiles)) Map(function(x, y) { df <- read.table(x, header =...

One attempt using Map and sweep, which I think gives the intended result: Map(function(x,y) abs(sweep(x,2,y,FUN="-"))/(sweep(abs(x),2,abs(y),FUN="+")), listA, listB) E.g.: listA <- list(x=matrix(1:9, nrow=3), y=matrix(1:9, nrow=3)) listB <- list(x=matrix(1:3, nrow=1), y=matrix(4:6, nrow=1)) Map(function(x,y) abs(sweep(x,2,y,FUN="-"))/(sweep(abs(x),2,abs(y),FUN="+")), listA, listB) #$x # [,1] [,2] [,3] #[1,] 0.0000000 0.3333333 0.4000000 #[2,] 0.3333333 0.4285714 0.4545455 #[3,] 0.5000000 0.5000000...

r,for-loop,data.frame,apply,lapply

Use list2env, you'll need to save your lapply results first and then give names to the list objects: temp <- lapply( setdiff(levels(dat$city), "b"), function(i){ ret <- dat[dat$city %in% c("b", i), ] ret[order(ret$var, ret$city), ] }) names(temp) <- c("citybcitya", "citybcityc") list2env(temp, envir = .GlobalEnv) citybcitya # city var value # 1...

I made a real bonehead mistake...I should have been referencing y$mydate at the end. Here is what that lapply function should look like: d<-lapply(adult2,function(x){ mydate<-seq(from=x$FDay,to=x$Lday,by='week') newband<-rep(x$BandNo,length(mydate)) newfate<-rep("Survive",length(mydate)) newfate[length(mydate)]<-x$Fate y<-data.frame(newband,mydate,newfate) y$FieldName<-x$FieldName y$Sex<-x$Sex y$Landscape<-x$Landscape y$WeekID<-week(y$mydate) y$Year<-year(y$mydate) return(y)} ) ...

If we need the changes to reflect in the dataframe objects as well, list2env or assign can be used. But, I would do all the computations within the list itself. list2env(lapply(mget(c('H','G')), function(x) {x$perc<-round((x$diff/x$X10)*100,1);x}), envir=.GlobalEnv) ...

Here is my solution, it requires that your output of lapply is stored with the name ll but it can be easily modified I think. ll <- lapply(1:40, function(x){ (1+x)}) nam <- "list_output_" val <- c(1:length(ll)) for(i in 1:length(val)){ assign( paste(nam, val, sep = "")[i], ll[[i]] ) } The output...

y1 <- rnorm(100) x1 <- rnorm(100) model.out <- lm(y1~x1) sink("~/Desktop/TEST.txt", type=c("output", "message")) model.out sink(NULL) Based on this answer: How to save all console output to file in R?...

Try lst <- lapply(data.list, function(x) { x$group <- cumsum(x$time==90) x}) lst1 <- split(as.data.frame(min_time), min_time$set_nbr) res <- Map(function(x, y) { val <- mean(y$Mz) Rows <- x[ceiling(x$time-y$time)==0,] val1 <- Rows$Mz-val subset(x, group==Rows$group[which.min(val1)])}, lst, lst1) ...

You can use: lapply(number.of.days, `-.Date`, e1=dates) Part of the problem is that - is a primitive, which doesn't do argument matching. Notice how these are the same: > `-`(e1=5, e2=3) [1] 2 > `-`(e2=5, e1=3) [1] 2 From R Language Definition: This subsection applies to closures but not to primitive functions....

There's no need for an external function or for a package for this. Just use an anonymous function in lapply, like this: df[recode.list] <- lapply(df[recode.list], function(x) 6-x) Using [] lets us replace just those columns directly in the original dataset. This is needed since just using lapply would result in...
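A runnable sketch with an invented data frame and recode.list:

```r
df <- data.frame(q1 = c(1, 5), q2 = c(2, 6), other = c(9, 9))
recode.list <- c("q1", "q2")

# df[recode.list] on the left replaces only those columns in place
df[recode.list] <- lapply(df[recode.list], function(x) 6 - x)
df
#   q1 q2 other
# 1  5  4     9
# 2  1  0     9
```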

If you put the count step in your data_bin step, I think it accomplishes what you want; I am a little hazy on exactly what you mean, but I think this works: (Note that you can remove the . assignment from the first argument of lapply, that's the default...

r,null,scope,environment-variables,lapply

The reason is that the assignment is taking place inside a function, and you've used the normal assignment operator <-, rather than the superassignment operator <<-. When inside a function scope, IOW when a function is executed, the normal assignment operator always assigns to a local variable in the evaluation...
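A minimal demonstration of the two operators (the names here are invented):

```r
counter <- 0

bump_local  <- function() counter <- counter + 1   # binds a new local counter
bump_global <- function() counter <<- counter + 1  # reassigns the outer counter

bump_local()
counter  # still 0: only the function-local copy changed
bump_global()
counter  # now 1: <<- searched the enclosing environments and reassigned there
```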

This can be done with subset(), reshape(), and merge(): merge(Dataset,reshape(subset(Dataset,time%in%c(1990,1991)),dir='w',idvar='geo',sep='_')); ## geo time var1 var2 var1_1990 var2_1990 var1_1991 var2_1991 ## 1 AT 1990 1 7 1 7 2 8 ## 2 AT 1991 2 8 1 7 2 8 ## 3 AT 1992 3 9 1 7 2 8 ##...

How about dd <- as.data.frame(mat) dd[sapply(dd,function(x) all(x>=0))] ? sapply(...) returns a logical vector (in this case TRUE TRUE FALSE TRUE) that states whether the columns have all non-negative values. when used with a data frame (not a matrix), single-bracket indexing with a logical vector treats the data frame as a...

r,for-loop,plot,lattice,lapply

This is in the R FAQ. You need a print() call around grid graphics (lattice or ggplot2) when they are used inside a function, and a for loop suppresses auto-printing the same way: # Needed require(data.table) # before defining the object. pdf() # pdf is a multipage device. for (i in 3:5) { # generate a...

You should return a structure that includes all your outputs. It is better to return a named list. You can also return a data.frame if your outputs all have the same dimensions. output <- lapply(dflis,function(lismember){ outputvss <- vss(lismember,n=9,rotate="oblimin",diagonal=F,fm="ml") nefa <- (EFA.Comp.Data(Data=lismember, F.Max=9, Graph=T)) list(outputvss=outputvss,nefa=nefa) ## or data.frame(outputvss=outputvss,nefa=nefa) }) When you return a...

lapply(X, FUN, ...) applies the function FUN over each element of the list X. It appears the solveCache function operates on the list as a whole, not just a single element of the list.