python,pandas,interpolation,extrapolation

So here is a mask that should solve the problem: interpolate first, then apply the mask to reset the appropriate values to NaN. Honestly, this was more work than I expected, because I had to loop through each column, and then groupby didn't quite...
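The approach above can be sketched as follows. This is a minimal example under assumed data (the frame `df` and its columns are hypothetical): `interpolate()` also pads values past each column's last valid observation, so a mask built from `bfill()` marks those trailing positions and resets them to NaN.

```python
import numpy as np
import pandas as pd

# Hypothetical frame: each column has interior and trailing NaNs.
df = pd.DataFrame({
    "a": [np.nan, 1.0, np.nan, 3.0, np.nan],
    "b": [2.0, np.nan, 4.0, np.nan, np.nan],
})

# interpolate() fills interior gaps but also carries the last valid
# value forward past the end of the data.
filled = df.interpolate()

# bfill() leaves NaN only past each column's last valid value, so
# isna() there marks exactly the unwanted trailing fills.
mask = df.bfill().isna()
filled[mask] = np.nan
```

Here the interior NaNs are interpolated (e.g. `a` becomes `1, 2, 3`) while the trailing positions stay NaN instead of being padded.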

python,arrays,numpy,extrapolation

This seems to be a straightforward interpolation/extrapolation problem.

```python
import numpy as np

# here y2 is the new extrapolated array
y2 = np.interp(x2, x1, y1)
```
...
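A self-contained sketch of that call, with made-up sample arrays (`x1`, `y1`, `x2` here are assumptions, not the asker's data). One caveat worth noting: `np.interp` clamps rather than truly extrapolates, returning the endpoint values outside the range of `x1`.

```python
import numpy as np

# Hypothetical known samples: y1 = 2*x1 on a coarse grid.
x1 = np.array([0.0, 1.0, 2.0, 3.0])
y1 = 2.0 * x1

# Evaluate on new points, including some outside [0, 3].
x2 = np.array([-1.0, 0.5, 1.5, 4.0])
y2 = np.interp(x2, x1, y1)

# Outside the range of x1, np.interp returns the endpoint values
# y1[0] and y1[-1] rather than extrapolating the trend.
print(y2)  # [0. 1. 3. 6.]
```

For genuine linear extrapolation beyond the endpoints, something like `scipy.interpolate.interp1d(..., fill_value="extrapolate")` is the usual alternative.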

This gives you the first column with the linearly-extrapolated values filled in for NA. You can adapt it for the last column.

```r
firstNAfill <- function(x) {
  ans <- ifelse(!is.na(x[1]), x[1],
           ifelse(sum(!is.na(x)) < 2, NA,
             2 * x[which(!is.na(x[1, ]))[1]] - x[which(!is.na(x[1, ]))[2]]))
  return(ans)
}
dat$Year1 <- unlist(lapply(seq(1:nrow(dat)),
                           function(x) firstNAfill(dat[x, ])))
```

Result: `Year1 Year2 Year3...`
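For readers working in Python, the same idea can be sketched with NumPy (the array `dat` and helper name are hypothetical; the formula assumes the missing first entry sits one step before the first valid value, as in the R version).

```python
import numpy as np

def first_na_fill(row):
    """Back-extrapolate a leading NaN linearly from the first two
    valid values in the row: 2*v1 - v2, i.e. one step before v1."""
    if not np.isnan(row[0]):
        return row[0]
    valid = row[~np.isnan(row)]
    if valid.size < 2:
        return np.nan
    return 2.0 * valid[0] - valid[1]

# Hypothetical data: rows are records, columns are years.
dat = np.array([[np.nan, 3.0, 5.0],
                [1.0, 2.0, 3.0]])
dat[:, 0] = [first_na_fill(r) for r in dat]
print(dat[:, 0])  # [1. 1.]
```

The first row's leading NaN becomes `2*3 - 5 = 1`; the second row's first value is already present and is left alone.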

python,numpy,graph,scipy,extrapolation

You can extrapolate data with scipy.interpolate.UnivariateSpline, as illustrated in this answer. However, since your data has a nice quadratic behavior, a better solution is to fit it with a global polynomial, which is simpler and yields more predictable results:

```python
poly = np.polyfit(x[:, i], y[:, i], deg=3)
y_int = ...
```
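A complete version of that polynomial approach might look like this (the data here is a made-up quadratic, and `deg=2` is chosen to match it; the answer above used `deg=3`). The fitted coefficients can be evaluated anywhere with `np.polyval`, including outside the original range, which is what gives the extrapolation.

```python
import numpy as np

# Hypothetical quadratic data: y = x**2 + 1 on [0, 4].
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = x**2 + 1.0

# Fit a global polynomial; deg=2 matches the quadratic trend.
coeffs = np.polyfit(x, y, deg=2)

# np.polyval evaluates the fitted polynomial at any points,
# including ones beyond the fitted range (extrapolation).
y_ext = np.polyval(coeffs, np.array([5.0, 6.0]))
print(np.round(y_ext, 6))  # [26. 37.]
```

Unlike a spline, the global fit behaves predictably far from the data, but only as long as the polynomial degree genuinely matches the underlying trend.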