I’d like to take an initial look at an innovative forecasting methodology built on exponential smoothing models. Forecasts, as the saying goes about models in general, are always wrong but often useful. This is especially true for long lead-time items or for cases where there are known trends, seasonality, promotions, and so on. The better a business can forecast demand for its products, both a specific amount and the error associated with that amount, the more profitable it can become.
There are two main families of forecasting models used in business: exponential smoothing and ARIMA models. Both are designed to capture the “signal” in historical demand and extrapolate that pattern forward, though they have different strengths and weaknesses. Within each family there are also several model types, such as single, double, and triple exponential smoothing. Add the effectively infinite range of parameter values that can be applied to each model, and you quickly have a difficult problem: selecting the best model for a given set of demand data.
This is where Rob Hyndman and his colleagues in Australia have done us a great service with the development of the “forecast” package for the open-source software tool “R”. Dr. Hyndman combined state space modeling techniques with information criteria to develop an automatic forecasting methodology (referred to here as “ETS”) that selects the optimal exponential smoothing model from 30 candidates. The package codes each model with the letters “E”, “T”, and “S” for its error, trend, and seasonal components, and adds the designation “N” (none), “M” (multiplicative), “A” (additive), “Md” (multiplicative damped), or “Ad” (additive damped) to further specify how the model captures the data’s dynamics.
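To make this concrete, here is a minimal sketch of that automatic selection in R, assuming the “forecast” package is installed; the monthly demand series is simulated purely for illustration.

```r
# Minimal sketch of automatic model selection with the "forecast" package.
# The monthly demand series below is simulated for illustration only.
library(forecast)

set.seed(42)
demand <- ts(100 + rnorm(36, sd = 10), frequency = 12, start = c(2021, 1))

fit <- ets(demand)  # searches the candidate models and picks one by information criterion
summary(fit)        # prints the selected model (e.g. "ETS(M,N,N)") and its parameters
```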
The designation “ETS(M,N,N)”, for example, indicates an exponential smoothing model with multiplicative errors and no trend or seasonal component detected: only the level and error term are being used.
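Continuing the sketch above (again, only an illustration of how that label surfaces in the output), forecast() extends whichever model ets() selected:

```r
fc <- forecast(fit, h = 12)  # 12-period-ahead forecast with prediction intervals
fc$method                    # the model label, e.g. "ETS(M,N,N)" for level-only data
autoplot(fc)                 # plot the history, point forecast, and intervals
```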


Also worth noting is how the accuracy measurements in the upper left-hand corner of the charts get smaller from the first forecast to the third. These measurements (MAPE, the mean absolute percentage error, and MASE, the mean absolute scaled error) indicate improving accuracy as they get smaller; they will be covered in detail in a future blog. The lesson here is that when an item has stable demand, forecasting models become more accurate as more demand history is available.
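For readers who want to peek ahead, here is a hedged sketch of how those measures can be pulled from a fitted model in R; the manual MAPE line is only for intuition and assumes the simulated series from the earlier sketch.

```r
round(accuracy(fit), 3)  # reported columns include MAPE and MASE, among others

# MAPE by hand, for intuition: average absolute error as a percentage of actual demand.
mape_by_hand <- mean(abs((demand - fitted(fit)) / demand)) * 100

# MASE instead scales the absolute errors by those of a naive benchmark forecast,
# which makes it comparable across items with very different demand volumes.
```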