r,time-series,rstudio,forecasting

mod<-arima(yourData,order=c(0,1,1)) ...

A while ago I dealt with a similar question. If I remember correctly, you first need to estimate the model using the old dataset with a reduced lag, so instead of using lags 3:6 you should use lags 2:6: reg <- midas_r(qrt ~ mls(qrt, 1, 1) + mls(mth, 2:6, m...

As far as I can see there is no predict method. You sacrifice convenience for speed. However, it's quite easy to calculate the yhat values: data[, yhat := c(cbind(1, x) %*% coef(model))] # x y yhat # 1: 1 5 5 # 2: 2 6 6 # 3: 3 7...
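The same manual fitted-value computation can be sketched in plain Python. This is a minimal sketch, not the package's API: the coefficients (intercept 4, slope 1) are an assumption chosen so the output matches the x/y/yhat columns shown above.

```python
# Manually compute fitted values yhat = intercept + slope * x,
# mirroring cbind(1, x) %*% coef(model) from the R snippet.
coefs = (4.0, 1.0)  # (intercept, slope) -- hypothetical values for illustration
x = [1, 2, 3]
yhat = [coefs[0] + coefs[1] * xi for xi in x]
print(yhat)  # [5.0, 6.0, 7.0]
```

The point is simply that prepending a column of ones and multiplying by the coefficient vector is all `predict` would do for a linear model, so losing the convenience method costs very little.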

python,time-series,forecasting,statsmodels

The code: predict_price1 = arma_mod1.predict(start_pred, end_pred, exog=True, dynamic=True) print('Predicted Price (ARMAX): {}'.format(predict_price1)) has to be changed into: predict_price1 = arma_mod1.predict(start_pred, end_pred, external_df, dynamic=True) print('Predicted Price (ARMAX): {}'.format(predict_price1)) That way it works! I compared the values without external_df and they were different, which can be seen as a...

I'll start off by noting that there is an error present in your for loop. Instead of n*24*80 you probably meant (n+80)*24. The counter in your loop should also go from 0 to 99 instead of 1 to 100 if you want to include the prediction for the 81st day...
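The indexing fix can be sketched in a few lines of Python. All names here are assumptions for illustration; the idea is that day n's prediction window starts (n + 80) * 24 hours into the series, and the counter runs 0..99 so that the 81st day is included.

```python
HOURS_PER_DAY = 24
TRAIN_DAYS = 80  # assumed length of the initial training window, in days

# Wrong: n * 24 * 80 jumps by 1920 hours every iteration.
# Right: (n + 80) * 24 advances one day at a time past the training window.
starts = [(n + TRAIN_DAYS) * HOURS_PER_DAY for n in range(100)]  # n = 0..99

print(starts[0], starts[-1])  # 1920 4296
```

With `range(100)` the first window begins right after the 80-day training period (hour 1920) and the last begins at day 179, covering all 100 forecast days.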

Finally, I solved my problem with this code: fl <- structure(list(mean=K, x=Dem2, fitted=f1$fitted), class="forecast") It creates an object of class "forecast" which contains the elements needed to compute the MASE measure. Thank you...

The problem seems to be you are including independent variables, and therefore, estimating an ARMAX model. For the out-of-sample forecasts, you need also values for the independent variables AvgPov and AvgEnrol. The model doesn't estimate them; recall the dependent variable is D4.AvgU5MR.

With PNP4Nagios, your custom template can be used to define all the graph definition -- with the exception of the time window, which is added to the parameter lists in $opt[] and $def[]. So, you cannot easily override the time window 'end' as it is already defined as 'now' by...

r,datetime,time-series,forecasting

Here is a simple example assuming weekly data: x <- ts(rnorm(200), frequency=52) endx <- end(x) window(x, end=c(endx[1],endx[2]-3)) Of course, there are not actually 52 weeks in a year, but that is probably a complication that can be overlooked for most analyses....

read.csv reads in a data frame. Even if it only has one column, it is still a data frame, not a vector. To extract the vector use d <- ts(read.csv("deseason_vVectForTS.csv", header = TRUE)[,1], start=c(2004,1), end=c(2010,12), frequency = 12) Also, please check your facts. stl is in the stats package, not the forecast package. This is...

r,time-series,shiny,forecasting

The issue wound up being that I was using the arima(...) function instead of Arima(...). It turns out that they are two different functions. The issue that I was experiencing was a result of differences in how the functions store their data. More information about this can be found in...

arima_output is a seasonal ARIMA model: > arima_output Series: train_data ARIMA(1,0,1)(0,1,0)[52] Arima() then attempts to refit this particular model to validation_data. But to fit a seasonal model to a time series, you need at least one full year of observations, since seasonal ARIMA depends on seasonal differencing. As an illustration,...

This actually is quite a lot of data, at least for R. You could look at ets() in the forecast package. I like recommending this free online forecasting textbook from the same authors. You could of course think about your data. Do you actually expect dynamics that can only be...

The underlying values of dates in R are numeric values. What you're seeing are not random values, but the numeric values of day for two different date formats. If day is in POSIXct format, then the value is the number of seconds since January 1, 1970. If day is in...
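The same underlying-value distinction can be illustrated with Python's standard library. This is a sketch of the convention R uses (an assumption stated here, not shown in the snippet): POSIXct stores seconds since 1970-01-01 UTC, while Date stores days since the same epoch.

```python
from datetime import datetime, timezone

day = datetime(2015, 6, 1, tzinfo=timezone.utc)
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

seconds = (day - epoch).total_seconds()  # POSIXct-style value: seconds since epoch
days = (day - epoch).days                # Date-style value: days since epoch

print(seconds, days)  # 1433116800.0 16587
```

So the "random" numbers differ by a factor of 86400: the same instant expressed in seconds versus days.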

time-series,forecasting,state-space

The state vector is exactly the same in the multiplicative case as in the additive case. All the equations are given here: https://www.otexts.org/fpp/7/7 For the ETS(M,Md,N) model, ...

python,matplotlib,plot,time-series,forecasting

When plotting a single data point, you cannot plot using lines. This is obvious when you think about it, because when plotting lines you actually plot between data points, and so if you only have one data point then you have nothing to connect your line to. You can plot...

Your design matrix is rank deficient so the regression is singular. To see this: > eigen(t(xreg1) %*% xreg1)$val [1] 1321.223 0.000 0.000 0.000 You cannot fit a regression model with a rank deficient design matrix....
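A minimal illustration of the same diagnosis in plain Python, using a hypothetical two-column design matrix where the second column is an exact multiple of the first, so X'X is singular:

```python
# Two predictor columns, the second an exact multiple of the first,
# making the design matrix rank deficient.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [2.0 * v for v in x1]

# Build the 2x2 matrix X'X for X = [x1 x2] and take its determinant.
a = sum(v * v for v in x1)              # x1'x1
b = sum(u * v for u, v in zip(x1, x2))  # x1'x2
d = sum(v * v for v in x2)              # x2'x2
det = a * d - b * b

print(det)  # 0.0 -- X'X is singular, so the regression cannot be fit
```

A zero determinant (equivalently, a zero eigenvalue, as in the R output above) means the normal equations have no unique solution; you must drop or combine the dependent columns.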

r,time-series,forecasting,moving-average

1) Assuming that the series starts at 3pm, that days are consecutive and all hours from 3pm to 10pm are present: tser <- ts(DF[-1], freq = 8) giving: > tser Time Series: Start = c(1, 1) End = c(1, 8) Frequency = 8 hour Count Year Month Day 1.000 15...

Actually, it's a bug in the package. I'll try to fix it. Thanks for letting me know. Given there are 9 time series in total, parallel computation will not help to speed up the forecasting process much.

r,time-series,forecasting,hierarchical

I'm posting because after a closer look at the hts documentation (insert well-deserved RTFM here), I think I found a work-around using the combinef() function from hts, which can be used to optimally combine forecasts outside of the forecast.gts() environment. I'll leave it up for a while before accepting as...

When you do a prediction the names of all the columns used as predictors in the model must be the same as the column in the new data. Using your sample data.frames above, this should work #change name to match the model data names(df.newData)<-"ts.in" #this should be true # >...

math,memory,statistics,forecasting

You could fit a linear regression model. Since this is a programming site, here is some R code: > d <- read.table("data.tsv", sep="\t", header=T) > summary(lm(log(Bytes.RAM) ~ log(Rows) + log(Columns), d)) Call: lm(formula = log(Bytes.RAM) ~ log(Rows) + log(Columns), data = d) Residuals: Min 1Q Median 3Q Max -0.4800 -0.2409...

python,time-series,forecasting,statsmodels

The documentation in the Notes section explicitly states how you can speed things up...See the docstring for fit_kw to change arguments given to the ARMA.fit method. This is going to be slow for high numbers of models. It's a naive implementation and just does a pairwise fit of them all....

Please read the warning message provided: > x_ts <- ts(x,frequency=52,start=c(1,1)) > nfit <- ets(x_ts,damped=FALSE) Warning message: In ets(x_ts, damped = FALSE) : I can't handle data with frequency greater than 24. Seasonality will be ignored. Try stlf() if you need seasonal forecasts. ...

Got the answer on this page http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm It seems arima() reports the mean but calls it intercept!

I wouldn't suggest using forecast.gts() in your example, since handling different xreg for different time series is still under development. But there's another way to get there. Rather than using forecast.gts() directly, you need to generate forecasts first using forecast::auto.arima and then apply combinef()...

matlab,time-series,libsvm,forecasting

A Support-Vector-Regression based predictor is used for exactly that; it works for any prediction horizon PH >= 1. The value of epsilon in the epsilon-SVR model specifies the epsilon-tube, within which no penalty is associated in the training loss function with points predicted within a distance epsilon from the actual value Y(t)....

matlab,time-series,forecasting

The term y_real-y_pred is the vector of errors. The expression squares each element of it and then takes the square root of each element, which has the effect of abs(). Then std() is run on the vector of errors. Thus, this is computing the S.D. of the (absolute) error. That is a...
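The square-then-sqrt identity can be checked with a short stdlib-Python sketch (the y_real/y_pred values are made up for illustration):

```python
import math
import statistics

# Hypothetical actual and predicted values.
y_real = [10.0, 12.0, 9.0, 14.0]
y_pred = [11.0, 10.0, 9.5, 13.0]

errors = [r - p for r, p in zip(y_real, y_pred)]

# sqrt(e^2) elementwise is just abs(e):
abs_via_sqrt = [math.sqrt(e * e) for e in errors]
assert abs_via_sqrt == [abs(e) for e in errors]

# So the whole expression is the standard deviation of the absolute errors.
sd_abs_error = statistics.stdev(abs_via_sqrt)
print(sd_abs_error)
```

Whether the S.D. of absolute errors is a meaningful accuracy measure is a separate question; the sketch only shows what the MATLAB expression computes.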

I think what you want to do is essentially store 12 values in the f list and also in the accuracy list when the loop will have finished. The way to do it is the following (to save it in a list): f <- list() for (i in 1:12) {...

There is a utility called forecastcli which you will find in the filesystem of your Control-M/EM Server installation and the EM Client on your client workstation. I assume you need the Control-M Forecast add-on which requires an extra license as far as I know. The utility lets you filter jobs...

residuals(test.arima) However, you should be aware that the size of $\sigma^2$ depends on the scale of the data. Divide your data by 1000, and your $\sigma^2$ value will be 170.690303....
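The scale dependence of $\sigma^2$ is simple arithmetic: variance scales with the square of the data's scale, so dividing the data by 1000 divides $\sigma^2$ by 10^6. A stdlib-Python sketch with made-up numbers:

```python
import statistics

# Hypothetical series; any data will do.
d = [120.0, 340.0, 560.0, 780.0, 900.0]

v = statistics.pvariance(d)
v_scaled = statistics.pvariance([x / 1000 for x in d])

# Rescaling the data by 1/1000 rescales the variance by 1/1000^2.
assert abs(v_scaled - v / 1_000_000) < 1e-12
print(v, v_scaled)
```

This is why $\sigma^2$ on its own says nothing about model quality; compare models on the same scale, or use scale-free measures.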

python,time-series,forecasting,statsmodels

Two problems. As the error message indicates, '2014-1-3' isn't in your data. You need to start the prediction within one time step of your data, as the docs should mention. Second problem, your data doesn't have a defined frequency. By removing the holidays from the business day frequency data, you...

I solved the direct question so this is technically the answer while I don't completely understand why. I read through the HTS code on using the trace() function and found the line causing issues: else if (fmethod == "arima") { models <- auto.arima(x, lambda = lambda, xreg = xreg, parallel...

Do not use the dates in your plot; use a numeric sequence as the x axis. You can use the dates as labels. Try something like this: y=GED$Mfg.Shipments.Total..USA. n=length(y) model_a1 <- auto.arima(y) plot(x=1:n,y,xaxt="n",xlab="") axis(1,at=seq(1,n,length.out=20),labels=index(y)[seq(1,n,length.out=20)], las=2,cex.axis=.5) lines(fitted(model_a1), col = 2) Depending on your data, the result will be something similar to this: ...

If you want to pack it into one call, you can bind the data into a single data.frame and then split it up again in the do call. df <- rbind(df.h, data.frame(df.f, price=NA)) res <- group_by(df, hour) %>% do({ hist <- .[!is.na(.$price), ] fore <- .[is.na(.$price), c('hour', 'wind', 'temp')] fit...

There are several points that are different between Mr Davenport's analysis and the plot you are trying to make. The first one is that he is comparing the arima forecast to some observed data, which is why he trains the model on a portion of the whole time series,...

performance,regression,prediction,forecasting

Yes - I would use linear regression as a starting point. For an example, see How can I predict memory usage and time based on historical values. I found Data Analysis Using Regression and Multilevel/Hierarchical Models to be a highly readable introduction to the subject (you probably won't need multilevel...

There are some papers that show some ways to do it: Financial time series forecasting using support vector machine Using Support Vector Machines in Financial Time Series Forecasting Financial Forecasting Using Support Vector Machines I really recommend that you go through the existent literature, but just for fun I will...

Since you already prepared the data with in-sample and full-sample outside of R, there is no need to convert it to time series objects. Here is the cleaned-up version of your code, which assumes that data files are in R working directory: library(midasr) yvellareg <- scan("ymonthlyjackson.csv") xvellareg <- scan("xdailyjackson.csv") #yvellareg...

r,time-series,hierarchy,hierarchical-data,forecasting

Your notation (which may not be your choice) is making this very confusing. It seems like the same numerical sequence can refer to either a county or an industry. However, the basic idea is clear enough: you have two hierarchies and you want both types of aggregation to be taking...

r,math,statistics,time-series,forecasting

You seem to be confused between modelling and simulation. You are also wrong about auto.arima(). auto.arima() does allow exogenous variables via the xreg argument. Read the help file. You can include the exogenous variables for future periods using forecast.Arima(). Again, read the help file. It is not clear at all...

This is a bug, now fixed on the github version at http://github.com/robjhyndman/forecast/. Note that the following will yield a time series of 0s: BoxCox(y, lambda=0.5) - (coef(fit)['xreg'] * xreg + coef(fit)['intercept']) - arima.errors(fit) That is, arima.errors is on the transformed scale, not the original scale....

The sample space of an ARIMA process is the whole real line, so it is impossible to guarantee that the simulated values will be positive. You could just add a constant to all values to make them positive. Or you could take the exponential of the simulated values....
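The exponential trick can be sketched in stdlib Python. As a stand-in for an ARIMA simulation (an assumption, not the actual model in question), a Gaussian random walk plays the role of a process on the log scale; exponentiating guarantees every simulated value is strictly positive.

```python
import math
import random

random.seed(1)

# Stand-in for an ARIMA simulation: a Gaussian random walk on the log scale.
log_path = [0.0]
for _ in range(99):
    log_path.append(log_path[-1] + random.gauss(0, 0.1))

# Exponentiating maps the whole real line to (0, inf), so every value is positive.
path = [math.exp(v) for v in log_path]
assert all(v > 0 for v in path)
print(min(path))
```

Equivalently, in R you would simulate the ARIMA model on log-transformed data and apply exp() to the simulated paths.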

Reading the help files is always a good idea. You will find there that ets has a lambda argument that does what you want. library(forecast) AP <- AirPassengers fit1 <- ets(AP, model="AAA", lambda=0) fit2 <- ets(AP, model="AAA", lambda = BoxCox.lambda(AP)) plot(forecast(fit1)) plot(forecast(fit2)) ...