When using a training/test split or time-series cross-validation, are you choosing a specific model or a model class?
This question comes up almost every time I teach a forecasting workshop, and it was raised again in the following email I received today:
I have a time series that I have split into training and test datasets with an 80%-20% ratio. I fit a series of different models (ETS, BATS, ARIMA, NN, etc.) to the training data and generate my forecasts from each model. When evaluating the forecasts against …
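For concreteness, here is a minimal sketch of the workflow the email describes, assuming the R forecast package and using AirPassengers as a stand-in for the reader's series: split the data 80/20, fit several model classes to the training portion, and compare their forecast accuracy on the test portion.

```r
library(forecast)

y <- AirPassengers                      # stand-in series for illustration
n <- length(y)
n_train <- floor(0.8 * n)               # 80% training, 20% test

train <- window(y, end = time(y)[n_train])
test  <- window(y, start = time(y)[n_train + 1])
h <- length(test)

# Fit several model classes to the training data
fits <- list(
  ets   = ets(train),
  arima = auto.arima(train),
  bats  = bats(train),
  nnet  = nnetar(train)
)

# Forecast over the test period and compare test-set RMSE
fcs <- lapply(fits, forecast, h = h)
sapply(fcs, function(fc) accuracy(fc, test)["Test set", "RMSE"])
```

Note that each call here (ets, auto.arima, bats, nnetar) selects a particular model within its class based on the training data, which is exactly where the question of "specific model or model class" arises.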