I assume it would be quicker to use the built-in numpy function. np.rollaxis(array_name,0,3).shape ...
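For instance, a minimal sketch of what that call does to the axes; the `(2, 3, 4)` shape here is just an assumption standing in for `array_name`:

```python
import numpy as np

# Hypothetical 3-D array standing in for array_name.
a = np.zeros((2, 3, 4))

# Roll axis 0 until it lies before position 3, i.e. move it to the end.
print(np.rollaxis(a, 0, 3).shape)   # (3, 4, 2)

# np.moveaxis expresses the same move more readably in modern NumPy.
print(np.moveaxis(a, 0, -1).shape)  # (3, 4, 2)
```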

x2<-reshape(mydata,idvar=c("id.1","sex.1","group.1"),direction="long",varying=list(c(7,10),c(8,11),c(9,12)), v.names=c("status","beg","end")) head(x2) id sex group id.1 sex.1 group.1 time status beg end 1000.1.a.1 1000 1 a 1000 1 a 1 Vocational <NA> S2007 1001.1.a.1 1001 1 a 1001 1 a 1 Vocational <NA> S2007 1004.1.a.1 1004 1 a 1004 1 a 1 Vocational <NA> S2008 1006.2.a.1 1006 2 a...

How about this method using dplyr: library(dplyr) sample %>% group_by(gene, gender) %>% do(slope=lm(expression~time, .)$coef[2]) %>% ungroup() This will return gene gender slope 1 gene1 female -20.91111 2 gene1 male -11.33333 3 gene2 female -20.91111 4 gene2 male -11.33333 ...


If you merge rows with duplicated user values back to the ones with no dupes you get the information you need and then a bit of massaging delivers the desired arrangement: > merge(df[!duplicated(df$user), ], df[duplicated(df$user), ], by="user") user item.x time.x item.y time.y 1 u1 i1 1 i2 2 2 u2...

I would melt your "all.data" list and then dcast it to a wide form. Something like: ## Sample data set1 <- set2 <- data.frame(sol1 = c("s1", "s1", "s1", "s1"), sol2 = c("s2", "s3", "s4", "s5"), Istat = c(0.435, 0.456, 0.845, 0.234)) set2$Istat <- set2$Istat + 1 ## Just to see...

When you create the array, concatenate the lists with + instead of packing them in another list: x = np.array([0,-1,0]*12 + [-1,0,0]*4) ...
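To see the difference concretely (a small sketch; the repeat counts come from the snippet above):

```python
import numpy as np

# + concatenates the Python lists before NumPy ever sees them,
# so the result is one flat 1-D array of 36 + 12 = 48 elements.
x = np.array([0, -1, 0] * 12 + [-1, 0, 0] * 4)
print(x.shape)  # (48,)

# Packing the two lists inside another list instead would hand NumPy a
# ragged nested structure, not this flat vector.
```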

Original data (notice correction of 'PRICE1' on second row). df = pd.DataFrame({'BORDER':['GERMANY','FRANCE','ITALY','USA','CANADA','MEXICO','INDIA','CHINA','JAPAN' ], 'ASID':[21, 32, 99, 77,66,55,44,88,111], 'HOUR1':[2 ,2 ,2 ,4 ,4 ,4 ,6 ,6, 6],'HOUR2':[3 ,3 ,3, 5 ,5 ,5, 7, 7, 7], 'HOUR3':[8 ,8 ,8, 12 ,12 ,12, 99, 99, 99], 'PRICE1':[2 ,2 ,2 ,4 ,4 ,4 ,6...

Here's a possible solution using the data.table package library(data.table) setDT(df)[, Ingredient := paste0("Ingredient_", seq_len(.N)), Product] dcast(df, Product ~ Ingredient, value.var = "Ingredients") # Product Ingredient_1 Ingredient_2 Ingredient_3 # 1: A Chocolate Vanilla Berry # 2: B Chocolate Berry2 NA # 3: C Vanilla NA NA Alternatively, we could do this with...

r,bioinformatics,collapse,reshape

You could try library(data.table) setDT(df)[,list(chr=chr[1], start=start[1], stop=stop[.N]) , by=list(gain, loss, pvalue_gain, pvalue_loss)] Or using dplyr library(dplyr) df %>% group_by(gain, loss, pvalue_gain, pvalue_loss) %>% summarise(chr=chr[1], start=start[1], stop=stop[n()]) Update Based on @Michael Lawrence's comments about non-overlapping matches, one way to correct this would be: setDT(df)[, .ind:= cumsum(c(TRUE,start[-1]!=stop[-.N])), list(gain, loss, pvalue_gain, pvalue_loss)][, list(chr=chr[1],...

Try library(reshape2) df1 <- transform(df, result=as.character(result), red= factor(red, levels= unique(red))) dcast(df1, mult~red, value.var='result', fill='')[-1] # 1 0.9 0.8 0.7 #1 value1 #2 value2 #3 value3 #4 value4 ...

performance,matlab,matrix,reshape

Either of the following approaches assumes that column 1 of in is sorted, as in the example. If that's not the case, apply this initially to sort in according to that criterion: in = sortrows(in,1); Approach 1 (using accumarray) Compute the required number of rows, using mode; Use accumarray to...

If you need annual mean of those columns by client (it wasn't clear), dplyr can do it: library(dplyr) dat <- read.table(text="month store client he vo ep fe pr jan 1 54010 12 392 1 7 Basic jan 2 54011 12 376 2 2 Premium jan 1 54012 11 385 2...

dd <- read.table(text="id s f x 1 0 3 A 2 2 1 A 3 1 2 B", header=TRUE) with(dd,data.frame( id=rep(id,s+f), n=rep(rep(c("s","f"),nrow(dd)),c(rbind(s,f))), x=rep(x,s+f))) ...

I would use a combination of expandRows from my "splitstackshape" package and melt from "reshape2". Assuming your data are called "mydf", try: library(splitstackshape) library(reshape2) dfLong <- expandRows( melt(mydf, measure.vars = c("Yes", "No"), variable.name = "answer"), "value") Here are the first 20 rows: head(dfLong, 20) # age color place answer #...

Base R solution: reshape(mydf, timevar = "years", idvar= "MemberID", direction = "wide") MemberID a.Y1 b.Y1 c.Y1 d.Y1 a.Y2 b.Y2 c.Y2 d.Y2 1 123 0 0 1 0 0 1 0 0 3 234 1 0 0 0 0 0 1 0 Solution using reshape2 (and magrittr): mydf %>% melt(c('MemberID','years')) %>%...

r,data.frame,data.table,reshape

Using the base reshape function: reshape(dt, timevar = "shipSpdKt", idvar = "heading", direction = "wide") Using the reshape2 package: reshape2::dcast(dt, heading ~ shipSpdKt, value.var = "V1") Using the tidyr package: tidyr::spread(dt, shipSpdKt, V1) Using the data.table package: data.table::dcast.data.table(dt, heading ~ shipSpdKt, value.var = "V1") ...

Here's a possible data.table/zoo packages combination solution library(data.table) ; library(zoo) setDT(DF)[is.na(x), head := name] na.omit(DF[, head := na.locf(head)], "x") # name x head # 1: b 1 A # 2: c 2 A # 3: d 3 A # 4: e 4 B # 5: f 5 B Or as...

Since np.dstack needs a tuple of all of the matrices to stack you are going to have to store them separately as you go along anyway. A simple solution to your problem would be to put the reshaped matrices in place in the stacked structure as you generate them. stacked...
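A minimal sketch of that preallocate-and-fill pattern; the sizes and the way each matrix is "generated" are assumptions for illustration:

```python
import numpy as np

# Build the (rows, cols, n) stack up front and write each reshaped matrix
# into its slice, instead of collecting a tuple for np.dstack.
rows, cols, n = 3, 4, 5
stacked = np.empty((rows, cols, n))
for k in range(n):
    m = np.arange(rows * cols) + k        # stand-in for the generated matrix
    stacked[:, :, k] = m.reshape(rows, cols)

print(stacked.shape)  # (3, 4, 5)
```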

r,ggplot2,data.frame,reshape,boxplot

Here is how I would do this. I think it makes sense to melt your data first. A quick tutorial on melting your data is available here. # First, make this reproducible by using dput for the data frame df <- structure(list(date = 20140101:20140130, MAE_f0 = c(0.2, 1.9, 0.1, 7.8,...

Here's an option with tidyr and dplyr: library(dplyr) library(tidyr) DF %>% gather(Variable, Value, AG..GKV.:ML..GKV.) %>% filter(Value != "") %>% group_by(GN, Datum, Land) %>% summarise_each(funs(paste(unique(.), collapse = "\n"))) #Source: local data frame [11 x 6] #Groups: GN, Datum # # GN Datum Land Wert Variable Value #1 11693 2012-01-05 Kenia 159700...

Your current code throws an error, since the number of loop iterations gets determined at the beginning of the loop. As you are removing rows of expData, you run out of rows to index at some point. The quick fix would be to start looping from the back, i.e. use...

You can use this code: library(dplyr) d %>% mutate(before=ifelse(event,lag(amount),NA), after =ifelse(event,lead(amount),NA)) # amount event before after #1 3 FALSE NA NA #2 4 FALSE NA NA #3 6 TRUE 4 7 #4 7 FALSE NA NA #5 3 FALSE NA NA #6 4 TRUE 3 8 #7 8 FALSE NA...

r,table,formatting,reshape,long-form

You can use reshape from base-r: reshape(q, v.names="response", idvar="person", timevar="qstn", direction="wide") person response.A response.B response.C 1 101 0 0 NA 3 102 1 0 NA 5 103 0 1 1 ...

It's not pretty, but here's another solution in pure base R that uses a couple of calls to reshape(): reshape(transform(subset(reshape(df1,varying=grep('^User',names(df1)),dir='l',v.names='User'),User!=''),id=NULL,time=ave(c(User),User,FUN=seq_along),User=factor(User)),dir='w',idvar='User',sep=''); ## User Colour1 Code1 Colour2 Code2 ## 1.1 John Green N Blue U ## 2.1 Brad Red U <NA> <NA> ## 3.1 Peter Blue U <NA> <NA> ## 1.2 Meg...

As indicated in the comments under the answer, melt from "reshape2" should be sufficient (and pretty direct) for this problem. It actually also works very nicely with t(df) since it calls the matrix method for melt, which makes use of rownames in creating variables in the resulting data.frame. Here's the...

r,reshape,reshape2,strsplit,tidyr

You may try cSplit_e from package splitstackshape: library(splitstackshape) cSplit_e(data = df, split.col = "Location", sep = "/", type = "character", drop = TRUE, fill = 0) # subject Location_A Location_B Location_C Location_D # 1 1 1 0 0 0 # 2 2 1 1 0 0 # 3 3 0...

This is a simple reshaping from long to wide problem library(reshape2) dcast(df, ID + date ~ term) # ID date intercept x1 x2 x3 x4 # 1 unit1 1/1/2015 1.01 2.01 3.01 4.01 5.01 # 2 unit1 1/2/2015 1.01 2.01 3.01 4.01 5.01 # 3 unit2 1/1/2015 1.01 -1.01 1.01...

Here is the answer C_Em_df<-ddply(C_Em_df, .(Type, Driver)) ...

python,numpy,grid,scipy,reshape

Griddata in matplotlib may be more in line with your requirements. You can do the following: from numpy.random import uniform, seed from matplotlib.mlab import griddata import matplotlib.pyplot as plt import numpy as np datatype = 'grid' npts = 900 xs = uniform(-2, 2, npts) ys = uniform(-2, 2, npts) #...

What about this? as.data.frame(matrix(unlist(df), ncol=3, byrow = T)) V1 V2 V3 1 [email protected] [email protected] 43 2 [email protected] [email protected] 13 3 [email protected] [email protected] 31 4 [email protected] [email protected] 32 ...

r,reshape,reshape2,strsplit,agrep

library("tidyr") df1 <- data.frame(ID1=c("Gindalinc","Xaviertechnolgies","anine.inc(Nasq)","Xyzinc"), y=1:4) df2 <- separate(df1 , ID1 ,c("ID1_s1" , "ID1_s2") , sep = "(?=\\()" , extra = "drop") # ID1_s1 ID1_s2 y # 1 Gindalinc <NA> 1 # 2 Xaviertechnolgies <NA> 2 # 3 anine.inc (Nasq) 3 # 4 Xyzinc <NA> 4 # if you want to...

You can try df$indx <- with(df, ave(seq_along(Id), Id, FUN=seq_along)) df1 <- reshape(df, idvar='Id', timevar='indx', direction='wide') df1 # Id date.1 result.1 date.2 result.2 date.3 result.3 #1 1 12/2/1997 1 04/5/2000 0 <NA> NA #3 2 06/4/1998 1 18/6/1999 1 20/3/2000 0 Update If you want to add a new column using...

This can be done with a combination of reshape and permute. This approach works for numeric arrays or for cell arrays. Let A denote your array. Then, the desired result is B = reshape(permute(reshape(A.',N,M,[]),[2 1 3]),M,[]); Or, as noted by Divakar, you can save the transpose, which will reduce running...

You can use reshape after creating a unique "time" variable. That's easy to do with getanID from my "splitstackshape" package: library(splitstackshape) getanID(mydf, "Check_ID") # Check_ID Category Items Cost .id # 1: 0 Sugar 1 1 1 # 2: 1 Milk 1 10 1 # 3: 1 Butter 2 20 2...

You could try data.frame(year=rep(df$year,each=length(df)-1),x1=c(t(df[,-1]))) Or use melt from reshape2. But, it will give the result in different order library(reshape2) melt(df, id.var='year')[,-2] ...

Try res <- data.frame(X1=1, sapply(df1[-1], function(x) { indx <- which(!is.na(x)) x[min(indx):max(indx)]})) res # X1 V1 V2 V3 #1 1 1 0.00 0.000 #2 1 NA 0.25 0.125 #3 1 1 0.50 0.750 #4 1 0 1.00 1.000 ...

try: require(reshape2) data <- data.frame(choice = c('I', 'F', 'I', 'O', 'F', 'O'), length = c('subadults', 'subadults', 'subadults', 'adults', 'adults', 'adults'), gender = c('M', 'M', 'F', 'F', 'M', 'F')) melt_data = melt(data, value.name = "value", id.vars = c("length", "gender")) dcast(melt_data, gender+length ~ value) gender length F I O 1 F adults...

Preface each AXX <number> line with \nTitle:, select out the lines with a colon and read the result with read.dcf. The line marked ## can be omitted if it's OK that the first letter of each column name is capitalized. No packages are needed: s <- as.character(df[[1]]) ix <- grep("AXX...

In short: you cannot always rely on the ndarray.flags['OWNDATA']. >>> import numpy as np >>> x = np.random.rand(2,2) >>> y = x.T >>> q = y.reshape(4) >>> y[0,0] 0.86751629121019136 >>> y[0,0] = 1 >>> q array([ 0.86751629, 0.87671107, 0.65239976, 0.41761267]) >>> x array([[ 1. , 0.65239976], [ 0.87671107, 0.41761267]]) >>>...
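The copy the snippet demonstrates can also be checked directly with `np.shares_memory`, which avoids reading tea leaves from the `OWNDATA` flag (a minimal sketch of the same situation):

```python
import numpy as np

x = np.random.rand(2, 2)
y = x.T            # a view on x
q = y.reshape(4)   # y is not C-contiguous, so reshape must copy

print(np.shares_memory(x, y))  # True: y still aliases x
print(np.shares_memory(x, q))  # False: q got its own data
```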

This is also a possible solution: dat <- data.frame(Model_A=rnorm(10, 10), Model_B=rnorm(10, 12), Data_A=rnorm(10, 11), Data_B=rnorm(10, 13)) model.names <- grep("Model",names(dat),value=TRUE) data.names <- grep("Data",names(dat),value=TRUE) new.dat <- lapply(model.names,function(m) { lapply(data.names,function(d) { md <- cbind(dat[,m],dat[,d]) md.name <- rep(paste0(m,"_",d),nrow(md)) data.frame(md.name,md) }) }) new.dat <- do.call(rbind,lapply(new.dat,function(l) do.call(rbind,l))) names(new.dat) <-...

matlab,matrix,multidimensional-array,vectorization,reshape

Just do the reverse of what you used to reshape the original array. The permute commands stay the same (switching the first and second dimension), while the reshape commands go back up to 512 reshaped_i_image = reshape(permute(reshape(permute(sub_images, [2 1 3]), 8, 512, []), [2 1 3]), 512, 512); ...

Here's a method using the recently released tidyr package. Note that I changed your example data slightly, do you need it to be exactly as you provided it? Example data: set.seed(123) df = data.frame(id=1001:1003,matrix(rnorm(36),3,12),d=runif(3),e=runif(3),f=runif(3)) colnames(df) = c('df',paste('a',1:4,sep=''), paste('b',1:4,sep=''), paste('c',1:4,sep=''), paste('d',1,sep=''), paste('e',1,sep=''), paste('f',1,sep='')) df Code: library(tidyr) library(dplyr) df %>% gather(key, value,...

Your sandbox data has implicit missing values, so the first two lines get omitted the way I read this in. I take that as being incidental. As @Roberto Ferrer clearly explained, this is an (utterly standard) reshape long. clear input Name Company1 Company2 Company3 Company4 Company5 Company6 1985 6.0781 2.4766...

One of the key defining properties of a np.matrix is that it will remain as a two-dimensional object through common transformations. (After all, that's almost the definition of a matrix.) np.random.rand returns an np.ndarray, which does not share this property. (I like to think of an ndarray as very similar...
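That defining property is easy to demonstrate side by side (a small sketch; note that `np.matrix` is discouraged in modern NumPy in favor of plain 2-D `ndarray`s):

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])
a = np.array([[1, 2], [3, 4]])

# np.matrix insists on staying two-dimensional through transformations...
print(m.ravel().shape)  # (1, 4)
# ...while a plain ndarray happily drops to 1-D.
print(a.ravel().shape)  # (4,)
```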

You can use stri_list2matrix from the stringi package: library(stringi) rawData <- as.data.frame(matrix(c(1,2,3,4,5,6,"a,b,c","d,e","f"),nrow=3,ncol=3),stringsAsFactors = F) d1 <- t(rawData[,1:2]) rownames(d1) <- NULL d2 <- stri_list2matrix(strsplit(rawData$V3,split=',')) rbind(d1,d2) # [,1] [,2] [,3] # [1,] "1" "2" "3" # [2,] "4" "5" "6" # [3,] "a" "d" "f" # [4,] "b" "e" NA # [5,]...

If we have the following 3D matrix: const int ROWS=2, COLS=3, PLANES=4; int dims[3] = {ROWS, COLS, PLANES}; cv::Mat m = cv::Mat(3, dims, CV_32SC1); // works with other types (e.g. float, double,...) This only works for continuous mat objects (i.e. m.isContinuous() == true) To get a vector of the...

You can use getanID to create a unique .id for the grouping variable id. Then, try with dcast.data.table and if needed change the column names using setnames library(splitstackshape) res <- dcast.data.table(getanID(DT, 'id'), id~.id,value.var='score') setnames(res, 2:3, paste0('score', 1:2))[] # id score1 score2 #1: 1 5 4 #2: 2 5 NA #3:...

You may need a grouping variable. Instead of creating it manually as showed in the example, we can use rleid and then try with dcast from the devel version of data.table. i.e. v1.9.5+. Instructions to install the devel version are here library(data.table) dcast(setDT(x)[, gr:=rleid(id)], id+gr~nr, value.var='nr')[,gr:=NULL][] # id 1 2...

stack,fortran,heap,stack-overflow,reshape

The Fortran standard does not speak about stack and heap at all; that is an implementation detail. In which part of memory something is placed, and whether there are any limits, is implementation defined. Therefore it is impossible to control the stack or heap behaviour from the Fortran code itself....

You can employ bsxfun's masking capability here - %// Random inputs A = randi(9,1,15) ncols = [4 6 5] %// Initialize output array of transposed size as compared to the desired %// output array size, as we need to insert values into it row-wise and MATLAB %// follows column-major indexing...

Yes. There are several ways to do this in R. Here are a few. Options which get the order you specify in your question library(data.table) as.data.table(mydf)[, list(Condition = unlist(.SD)), by = Item] library(splitstackshape) merged.stack(mydf, var.stubs = "Condition", sep = "var.stubs")[, .time_1 := NULL][] data.frame(Item = rep(mydf[[1]], each = ncol(mydf[-1])), Condition...

Try data$id <- with(data, ave(seq_along(NUMBER), NUMBER, FUN=seq_along)) reshape(data, idvar=c('NUMBER', 'Gender'), timevar='id', direction='wide') If you want the Date.Tested variable to be included in the 'idvar' and you need only the 1st value for the group ('NUMBER' or 'GENDER') data$Date.Tested <- with(data, ave(Date.Tested, NUMBER, FUN=function(x) head(x,1))) reshape(data, idvar=c('NUMBER', 'Gender', 'Date.Tested'), timevar='id', direction='wide')...

Here's a possible data.table solution library(data.table) # v 1.9.5 setDT(df)[, indx := c2[2L], by = cumsum(c2 == "Valuelabels")] df2 <- df[!grepl("\\D", c2)][, indx2 := seq_len(.N), by = indx] dcast(df2, indx2 ~ indx, value.var = c("c2", "c3")) # indx2 V1_c2 V2_c2 V3_c2 V1_c3 V2_c3 V3_c3 # 1: 1 1 1 1...

It's amusing to note that the OP's solution appears to be the fastest one: f1 <- function(mydf) { mapply(function(x, y) { paste(x, y, sep = ".")}, mydf[ ,seq(1, ncol(mydf), by = 2)], mydf[ ,seq(2, ncol(mydf), by = 2)]) } f.thelatemail <- function(mydf) { mapply(paste,mydf[c(TRUE,FALSE)],mydf[c(FALSE,TRUE)],sep=".") } require(dplyr) f.on_the_shores_of_linux_sea <- function(mydf) {...

Might be as easy as something like this: dat2 <- cbind(dat[1:4], stack(dat[5:length(dat)])) ...

If you need a single column matrix matrix(m, dimnames=list(t(outer(colnames(m), rownames(m), FUN=paste)), NULL)) # [,1] #a d 1 #a e 4 #b d 2 #b e 5 #c d 3 #c e 6 For a data.frame output, you can use melt from reshape2 library(reshape2) melt(m) ...

From the docs on audioread, the output y is: Audio data in the file, returned as an m-by-n matrix, where m is the number of audio samples read and n is the number of audio channels in the file. Therefore it looks like your file has 2 audio channels. As...

I've looked for a good/updated dupe for this and didn't find anything good (probably because of non-informative titles), so here are 3 common approaches to handle this situation Base R using reshape. Very nasty solution and generally not recommended in this situation, both because of performance and complexity. I'd also...

Approach #1 You can use vec2mat that's part of the Communications System Toolbox, assuming A as the input vector - ncols = 6; %// number of columns needed in the output out = vec2mat(A,ncols) Sample run - >> A' ans = 4 9 8 9 6 1 8 9 7...

arrays,matlab,vectorization,reshape,submatrix

You could do - [m,n,r] = size(A); X = P*reshape(A(:,:,1),m*n,[]) If you are doing it iteratively along the third dimension of A, i.e. for A(:, :, iter), where iter is the iterator, you could get all X's in a vectorized manner in an array like so - X_all = P*reshape(A,m*n,[])...

base R solution #prepare data myDf1 = data.frame(Seconds=seq(0,1,.25), s1=seq(0,8,2), s2=seq(1,9,2)) myDf2 = data.frame(Seconds=seq(0,1,.25), s1=seq(10,18,2), s2=seq(11,19,2)) myDfList=list(myDf1,myDf2) #allocate memory myCombinedNewDf=data.frame(matrix(NA_integer_,nrow=length(myDfList),ncol=(ncol(myDf1)-1)*nrow(myDf1))) #reformat for (idx in 1:length(myDfList)) myCombinedNewDf[idx,]=c(t(myDfList[[idx]][,-1])) #set colnames colnames(myCombinedNewDf)=paste0("r",sort(rep.int(1:nrow(myDf1),2)),colnames(myDf1)[-1]) As per...

library(dplyr) Plots help visualize what you mean by the datasets: plot(data$mag,type="l") plot(data$time, type = "l") lapply(list(seq(1,30)),function(i) text(-600+601*i,0,i)) Give a number to the data sets data$lag <- data$time - lag(data$time) <0 data$lag[is.na(data$lag)] <- 0 data$set <- cumsum(data$lag) For information length(unique(data$set)) # 30 Reply to point 1) Find out which datasets are...

Just using base R functions, you can do subset(reshape(df, list(paste0("name", 1:3), paste0("age", 1:3)), v.names=c("name","age"), direction="long"), !is.na(name), select=-c(time, id)) to get city state name age 1.1 New York NY Tim 40 1.2 New York NY Bob 30 2.2 Philadelphia PA Jim 29 3.2 Chicago IL Bill 34 3.3 Chicago IL Jeff...

numpy.reshape(input_in_4D, (1800,3)) Of course this just converts the input in the default order (meaning you "unroll" the input array from inner to outer); if you need special ordering, you should read up on slicing....

This is exactly what I had to do in my current project. I use reshape2 in conjunction with data.table. That last package is not necessary but I am used to it and wrote the code for it (doesn't change things much though). What you need to do first is some...

Try indx <- grep('profits', names(df1)) indx2 <- cbind(1:nrow(df1), match(df1$year, as.numeric(sub('\\D+', '', names(df1)[indx])))) df1$profits <- df1[indx][indx2] df1[-indx] # name1 year name2 count profits #1 AA 2009 AA 20 15 #2 AA 2010 AA 3 10 #3 BB 2009 BB 34 NA #4 BB 2010 BB 4 4 ...

Try lst <- split(1:ncol(df1),cumsum(grepl('Name', colnames(df1)))) lst2 <- lapply(lst, function(i) {x1 <- na.omit(df1[i]) colnames(x1)[1] <- 'Name' aggregate(.~ Name, x1, mean)}) res <- Reduce(function(...) merge(..., by='Name', all=TRUE), lst2) head(res,2) # Name Taska Bonda Goala Taskb Bondb Goalb Rapport Client #1 Adam Tharker 24.0 24.0 24.0 24 26.0 23.0 NA NA #2 Adam...

You could use melt/dcast from reshape2 library(reshape2) df2 <- melt(df1, id.var=c('GeoFIPS', 'GeoName', 'IndustryID', 'Description')) df2 <- transform(df2, Year=sub('^X', '', variable))[-c(3,5)] dcast(df2, ...~Description, value.var='value') # GeoFIPS GeoName Year Mining Utilities #1 10180 Abilene, TX 2001 96002 33588 #2 10180 Abilene, TX 2002 92407 34116 #3 10180 Abilene, TX 2003 127138 33105...

What you trying to do is to "melt" you data while making V1 your id variable, try the following library(reshape2) res <- na.omit(melt(Data, "V1")[-2]) res[order(res$V1), ] # V1 value # 12 A1458 none # 25 A1458 160 # 38 A1458 161 # 51 A1458 162 # 64 A1458 163 #...

Edit: the single-line version might be a bit complicated, so I've also added one based on a for loop. Two reshapes and a permute should do it (we first split the matrices and store them in 3-D), and then stack them. In order to stack them we first need...

MatA=reshape(ArrayA,N,1440); should work. Did you use reshape() in this way? ...

You can reshape 10000 points into 100x100, you cannot reshape 10000 points into 200x200. It is simple math. You'd have to change your call to im = np.random.random_integers(0, 255, 40000).reshape((200, 200)) Note you are now sampling 40000 (200*200) points instead of 10000 (100*100)...
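The arithmetic in code form (a sketch; `np.random.randint` is used here in place of the older `random_integers` from the snippet, which has since been deprecated):

```python
import numpy as np

# reshape never adds elements, it only reinterprets the existing ones:
# 10000 values fill a 100x100 grid, and you need 40000 for 200x200.
small = np.random.randint(0, 256, 10000).reshape((100, 100))
large = np.random.randint(0, 256, 40000).reshape((200, 200))
print(small.shape, large.shape)  # (100, 100) (200, 200)
```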

You could use spread from tidyr library(dplyr) library(tidyr) mutate(df1, var=factor(items, levels=unique(items), labels=paste0('items', seq(n_distinct(items))))) %>% spread(var, items, fill='') # customer_code items1 items2 items3 #1 1 sugar salt #2 2 sugar accessories #3 3 salt ...

Here's my comment converted to an answer per request library(reshape2) temp$id <- rownames(temp) melt(temp, "id") Or a simpler version by @akrun melt(as.matrix(temp)) ...

r,subset,reshape,reshape2,melt

Looking at your data, this is actually probably easiest using grepl to help with the subsetting. We use grepl to search through the "Series.Name" column for any rows that include the string "Income share held". That creates a logical vector indicating the rows we want. The columns we want are...

I think you're getting confused because you actually have two tables of data linked by a common ID: library(dplyr) df <- tbl_df(df) years <- df %>% filter(attributes == "YR") %>% select(id = ID, year = values) years #> Source: local data frame [6 x 2] #> #> id year #>...

You can just use unnest from "tidyr": library(tidyr) unnest(df, b) # a b # 1 1 1 # 2 2 1 # 3 2 2 # 4 3 1 # 5 3 2 # 6 3 3 ...

Here's an approach that seems to work. It uses expandRows and getanID from my "splitstackshape" package, and then dcast.data.table from "data.table" to spread the values into a wide form: as.is$Start.Date <- as.Date(as.character(as.is$Start.Date), "%d.%m.%Y") library(splitstackshape) dcast.data.table( getanID( expandRows(as.is, "Duration"), c("Project", "Start.Date"))[ , Start.Date := Start.Date + (.id-1) * 7], Project ~...

r,data,reshape,categorical-data

The xtabs function is reasonably simple to use. The only cognitive jump is to realize that there is no LHS unless you want to do summation of a third variable: > xtabs( ~var2+var3, data=db) var3 var2 G H K X 1 1 0 Y 2 0 1 You don't want...

Using reshape2, you need to do: v <- melt(values, id.vars = "t") ...

You can use split to produce a list of data.frames: > split(df, do.call(paste0, df[,1:3])) $A1Z Type1 Type2 Type3 Info1 Info2 Info3 1 A 1 Z a a a 4 A 1 Z d d d $A2Y Type1 Type2 Type3 Info1 Info2 Info3 2 A 2 Y b b b $B4X...

In base R, if the values to be reshaped are not factors, you can also just use stack: cbind(mydf[1], stack(mydf[-1])) # date values ind # 1 12/12/2001 2 ObjA # 2 11/12/2001 5 ObjA # 3 12/12/2001 3 ObjB # 4 11/12/2001 7 ObjB # 5 12/12/2001 4 ObjC #...

You need to put NA instead of blanks in your original data and, as Davide said, use melt, ignoring NAs, to obtain the result you want: > df id start mid1 mid2 finish 1 id1 date1 date2 date3 date4 2 id2 date5 date6 <NA> date7 3 id3 date8 date9...

Slightly modifying your notations (you have 2 d): a = [1 2; 3 4]; b = [5 6; 7 8]; c = cat(3,a,b); [n,m,d] = size(c); dd = reshape(c, [n*m , d]); cc = reshape(dd, [n, m , d]); and you can check that cc is equal to c....

If you are just looking for an one-liner using melt, below is an approach (the order desired is kept): # assume DF is your data frame DF_new = data.frame(trigger = melt(t(DF[,2:3]))[,3], PID = rep(DF[,1], each=2)) DF_new # trigger PID # 1 1 1 # 2 5 1 # 3 2...

Matlab stores the matrix values in column-major format (this is important during reshape). Since you want row-major order, you need to do z = reshape(z, [220 44]).'; i.e. transpose afterwards....
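For comparison, NumPy exposes the same column-major/row-major distinction through reshape's `order` argument (a hedged NumPy sketch of the concept, not the OP's MATLAB code):

```python
import numpy as np

v = np.arange(6)
# C order (row-major) fills rows first; F order (column-major) fills
# columns first, matching MATLAB's default reshape behaviour.
print(v.reshape(2, 3))             # [[0 1 2] [3 4 5]]
print(v.reshape(2, 3, order='F'))  # [[0 2 4] [1 3 5]]
```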

r,performance,matlab,vectorization,reshape

The first step is to convert your array w from 6x9 to 3x3x6 size, which in your case can be done by transposing and then changing the dimension: neww <- t(w) dim(neww) <- c(sqrt(somPEs), sqrt(somPEs), inputPEs) This is almost what we want, except that the first two dimensions are flipped....

cbind(data[1], color = apply(data[-1], 1, function(x) names(data[-1])[x])) user color 1 1 blue 2 2 red 3 3 blue 4 4 green ...

One value in 'start' was '0'. So, I changed to '1', created a matrix ('m1') of 1000 columns and 6 rows (length of unique elements in the 'id' column). Using Map, created a sequence for each 'start', 'end' value, the output is a list ('lst'). We rbind the 'lst' ('d2'),...

I think you are very close. I'd suggest reading the raster package material, the vignette and description. You'll have to pay attention to your data and how it is organized. By row? By column? raster cell values will be populated by row. Using something similar to the example provided in...

You want dcast(long.data, date + ID ~ stat, value.var="average"). dcast will give you a new column for every unique value (or combination of values, for multiple variables) of the variable to the right of the ~.

You could do this in base R reshape(data, idvar='day', timevar='site',direction='wide') # day value.1.a value.2.a value.1.b value.2.b #1 1 1 5 9 6 #2 2 2 4 4 9 #3 3 5 7 2 4 #4 4 7 6 8 2 #5 5 5 2 1 5 #6 6 3 4...

You can also just use the aggregate function. aggregate(formula = . ~ ID, data = Data , FUN = sum) ## ID Value ## 1 2850508 1104.961 And to get your desired output, you have to cbind and rearrange: cbind(aggregate(formula = . ~ ID, data = Data , FUN =...