You can use conditional aggregation for this. MySQL treats booleans as integers, with true being `1`, so you can just sum the boolean expression for each time window. I am guessing it looks something like this:

```sql
select feedid,
       (10 * sum(createdat >= date_sub(now(), interval 1 day)) +
         5 * sum(createdat >= date_sub(now(), interval...
```
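For a self-contained illustration of the same trick, here is a sketch using SQLite from Python (SQLite, like MySQL, evaluates a comparison to 0 or 1, so `SUM(condition)` counts matching rows). The `feed_events` table, the fixed "now" date, and the score weights are illustrative assumptions, not from the question:

```python
import sqlite3

# Sketch of conditional aggregation: SUM over boolean expressions
# counts rows per time window, then the counts are weighted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feed_events (feedid INTEGER, createdat TEXT)")
conn.executemany(
    "INSERT INTO feed_events VALUES (?, ?)",
    [(1, "2024-01-10"), (1, "2024-01-05"), (2, "2024-01-09")],
)

# Pretend "now" is 2024-01-10: events within 1 day score 10,
# events within 7 days score 5.
rows = conn.execute("""
    SELECT feedid,
           10 * SUM(createdat >= date('2024-01-10', '-1 day')) +
            5 * SUM(createdat >= date('2024-01-10', '-7 day')) AS score
    FROM feed_events
    GROUP BY feedid
    ORDER BY feedid
""").fetchall()
print(rows)   # [(1, 20), (2, 15)]
```

Feed 1 has one event inside the 1-day window and two inside the 7-day window (10 + 10 = 20); feed 2 has one event in each window (10 + 5 = 15).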

This does appear to be a textbook example of how to do things as slowly as possible, though naturally you are not doing that on purpose. All it lacks is a loop over observations to calculate totals. So, the good news is that you should indeed be able to...

There are many ways. I would prefer doing it via `data.table`. First convert your data into a data.table:

```r
require(data.table)   # tested in data.table 1.9.4
setDT(mydata)
> mydata
   Fruit.Type Year Primary.Wgt Primary.Loss.PCT Retail.Wgt Retail.Loss.PCT
1:  Oranges.F 1970       16.16                3      15.68            11.6
2:  Oranges.F 1971       15.73                3      15.26            11.6
3:  Oranges.F 1972       14.47                3...
```

sql,ms-access,calculated-columns,weighted-average

You cannot double-aggregate, i.e. `SUM(COUNT(*))`; you have to get that count from a separate subquery. Change your query to:

```sql
SELECT COUNT(*) AS [Quantity],
       Sales.Description AS Brand,
       FORMAT(SUM(Sales.Amt), "Currency") AS Revenue,
       Format(SUM(Sales.Amt) / COUNT(*), "Currency") AS Rev_Per_Brand,
       SUM(Sales.Amt) * (COUNT(*) / (SELECT Count(*) FROM Sales)) AS [wAvg_Rev_Per_Brand]
FROM Sales
WHERE Sales.Date > DateAdd("m", -1, NOW())
  AND "This query...
```
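To see the restriction and the subquery workaround outside Access, here is a sketch using SQLite from Python; the tiny `sales` table is made up for illustration, and Access-specific functions (`Format`, `DateAdd`) are left out:

```python
import sqlite3

# Sketch: aggregates cannot be nested, but a scalar subquery can supply
# the overall row count used in the weighting.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (brand TEXT, amt REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("A", 10.0), ("A", 20.0), ("B", 30.0)])

# SUM(COUNT(*)) is rejected: you cannot aggregate an aggregate.
try:
    conn.execute("SELECT brand, SUM(COUNT(*)) FROM sales GROUP BY brand")
except sqlite3.OperationalError as e:
    print("error:", e)

# The total row count comes from a scalar subquery instead.
rows = conn.execute("""
    SELECT brand,
           COUNT(*) AS quantity,
           SUM(amt) * (COUNT(*) * 1.0 / (SELECT COUNT(*) FROM sales)) AS weighted
    FROM sales
    GROUP BY brand
    ORDER BY brand
""").fetchall()
print(rows)
```

Brand A contributes 30.0 × (2/3) = 20.0 and brand B contributes 30.0 × (1/3) = 10.0.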

statistics,standard-deviation,weighted-average

I just found this Wikipedia page discussing data of equal significance vs. weighted data. The correct way to calculate the biased weighted estimator of variance is

    var = sum_i( w_i * (x_i - mu*)^2 ) / sum_i( w_i ),  where mu* = sum_i( w_i * x_i ) / sum_i( w_i ),

though this on-the-fly implementation is more efficient computationally, as it does not require calculating the weighted average before looping over the sum on...
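A minimal one-pass sketch of that on-the-fly idea, in Python (the function name is mine, not from the answer): accumulate the running sums of w, w·x, and w·x², so the weighted mean never needs a separate first pass.

```python
# One-pass biased weighted variance: var = E[x^2] - (E[x])^2 under the
# weighted measure, computed from three running sums.
def weighted_var_onepass(values, weights):
    sw = swx = swxx = 0.0
    for x, w in zip(values, weights):
        sw += w
        swx += w * x
        swxx += w * x * x
    mean = swx / sw
    return swxx / sw - mean * mean   # biased (population-style) estimate

x = [1.0, 2.0, 4.0]
w = [1.0, 1.0, 2.0]
print(weighted_var_onepass(x, w))   # 1.6875
```

The two-pass formula gives the same value here: the weighted mean is 2.75, and sum(w·(x − 2.75)²)/sum(w) = 6.75/4 = 1.6875. (The single-pass form can lose precision when the variance is tiny relative to the mean, which is the usual caveat for this shortcut.)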

r,standard-deviation,weighted-average

Here's one way:

```r
mean_1 <- 6.27
sd_1   <- 5.9
mean_2 <- 5.91
sd_2   <- 4.9
n_1    <- 34
n_2    <- 6

# the combined mean
mean_combined <- weighted.mean(c(mean_1, mean_2), c(n_1, n_2))
# [1] 6.216

# the combined standard deviation (if the samples are not correlated)
sd_combined <- sqrt(sd_1^2 + sd_2^2)...
```
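Since the answer is cut off at the standard-deviation step, here is a hedged Python sketch of one standard way to combine per-group summaries: reconstruct each group's raw second moment from sd² + mean² and pool those. This is a general moment-based formula (assuming the per-group sds are population sds), not necessarily what the truncated answer goes on to do; the check against the pooled raw data is the point.

```python
from statistics import fmean, pvariance

# Combine per-group (mean, sd, n) summaries into an overall mean and sd
# via raw second moments: E[X^2] per group is sd^2 + mean^2.
def combine(means, sds, ns):
    N = sum(ns)
    mean = sum(n * m for n, m in zip(ns, means)) / N
    ex2 = sum(n * (s * s + m * m) for n, m, s in zip(ns, means, sds)) / N
    return mean, (ex2 - mean * mean) ** 0.5

g1 = [1.0, 2.0, 3.0, 4.0]
g2 = [10.0, 12.0]
means = [fmean(g1), fmean(g2)]
sds = [pvariance(g1) ** 0.5, pvariance(g2) ** 0.5]
m, s = combine(means, sds, [len(g1), len(g2)])
print(m, s)
print(fmean(g1 + g2), pvariance(g1 + g2) ** 0.5)   # same values
```

The combined mean here is exactly the `weighted.mean` of the group means with the group sizes as weights, matching the first half of the answer above.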

r,regression,correlation,weighted-average

The answer can be found in this CERN paper: ftp://ftp.desy.de/pub/preprints/cern/ppe/ppe94-185.ps.gz. The procedure is a generalised least-squares regression; see equation (2) on page 1 for the result....
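The paper treats the fully correlated case; as a runnable sketch of only the simplest (uncorrelated, diagonal-covariance) special case, generalised least squares for a single constant reduces to the familiar inverse-variance weighted average (the function name and numbers below are mine, purely illustrative):

```python
# GLS for a constant with uncorrelated measurements x_i +/- sigma_i:
# weight each point by 1/sigma_i^2; the estimate's uncertainty is
# 1/sqrt(sum of weights).
def gls_average(xs, sigmas):
    ws = [1.0 / (s * s) for s in sigmas]
    wsum = sum(ws)
    est = sum(w * x for w, x in zip(ws, xs)) / wsum
    err = wsum ** -0.5
    return est, err

est, err = gls_average([10.0, 12.0], [1.0, 2.0])
print(est, err)   # 10.4, ~0.894
```

With correlated measurements, the weights come from the full inverse covariance matrix instead, which is the generalisation the paper works out.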

Elaborating on @jdharrison's comment:

```r
> x
[1] -5  6  2  4 -3
> sum(x)
[1] 4
> mean(x)
[1] 0.8
> x - mean(x)
[1] -5.8  5.2  1.2  3.2 -3.8
> sum(x - mean(x))
[1] 6.661338e-16   # floating-point 0
```

So `x - mean(x)` will do the trick....
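The same floating-point effect shows up in any language; a quick Python sketch of the identical calculation:

```python
# Deviations from the mean sum to zero mathematically, but in floating
# point the result is only zero up to rounding error.
x = [-5, 6, 2, 4, -3]
m = sum(x) / len(x)              # 0.8, not exactly representable in binary
resid = sum(v - m for v in x)
print(m, resid, abs(resid) < 1e-12)
```

The residual is on the order of machine epsilon, so comparisons against zero should use a tolerance rather than `==`.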

If you want to use base functions, here's one possibility:

```r
as.vector(by(ages[c("Age", "W")], list(ages$Indiv),
             function(x) do.call(weighted.mean, unname(x))))
```

Since `aggregate` won't subset multiple columns, I used the more general `by` and simplified the result to a vector....

The problem is with how you're using `lapply`. Here's the corrected code:

```r
lapply(eu[eu$plicht == 'ja', 2:13], weighted.mean,
       eu[eu$plicht == 'ja', 'inwoners'], na.rm = TRUE)
lapply(eu[eu$plicht == 'nee', 2:13], weighted.mean,
       eu[eu$plicht == 'nee', 'inwoners'], na.rm = TRUE)
```

Notice how `weighted.mean` itself is passed to `lapply`, with the weights supplied as an extra argument, rather than being wrapped in an anonymous function. You could equivalently do:

```r
lapply(eu[eu$plicht == 'ja', 2:13], function(x)
  weighted.mean(x, eu[eu$plicht == 'ja', 'inwoners'], na.rm = TRUE))
lapply(eu[eu$plicht == 'nee', 2:13], function(x)...
```
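For a language-neutral view of the same per-group weighted mean, here is a plain-Python sketch; `rows` is a made-up stand-in for the `eu` data frame, with one (group, value, weight) triple per record:

```python
from collections import defaultdict

# Per-group weighted mean: accumulate sum(w*x) and sum(w) per group,
# then divide.
rows = [("ja", 10.0, 1.0), ("ja", 20.0, 3.0), ("nee", 5.0, 2.0)]

acc = defaultdict(lambda: [0.0, 0.0])   # group -> [sum(w*x), sum(w)]
for grp, x, w in rows:
    acc[grp][0] += w * x
    acc[grp][1] += w

wmeans = {g: num / den for g, (num, den) in acc.items()}
print(wmeans)   # {'ja': 17.5, 'nee': 5.0}
```

Group "ja" gives (1·10 + 3·20)/(1 + 3) = 17.5, exactly what `weighted.mean` computes per column in the R code above.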