Forecast Solutions

Forecast Accuracy Measurement and Improvement

Forecast accuracy measurement is important for a number of reasons, including the investigation of existing or potential problems in the supply chain and ensuring that the forecasting system is under control. It is also an important component in safety stock calculation. Measuring accuracy starts with some measure of forecast error. Sometimes the error itself is reported; alternatively, forecast accuracy is calculated from it as 100 minus the absolute percentage error. A number of different forecast error measures can be used, including MAPE and WMAPE. In the past, Mean Absolute Percentage Error (MAPE) was often regarded as the best-practice measurement, but it comes with certain limitations, and a number of better options are now available.

Forecast Accuracy Health Check

Forecast Solutions can carry out a forecast accuracy health check. It comprises a review of methods and procedures together with an analysis of the levels of forecast accuracy being achieved in the current sales forecasting system. Forecasts can be checked for bias. In a full forecast accuracy analysis, a forecast simulation can be set up using powerful sales forecasting software, so that the accuracy it achieves can be compared with that of the existing process. The health check can be an important component in the early stages of a sales forecasting improvement program aimed at improving forecast accuracy.

Forecasting Improvement Program

The first step in an improvement program may well be the accuracy health check mentioned above. A full understanding of the current state is gained, then documented and shared with all parties. Improvements should then be identified, fully discussed and agreed. If new software is needed, there will be the major stage of evaluating alternatives and making the best possible selection. Where possible, any improvements should be tested prior to implementation. The revised process should be subject to regular monitoring, with modification if necessary.

Forecast Error Measures

A list of forecast error measures and some explanation is given below:

  • E (forecast error) - e.g. actual sales minus forecast sales

  • AE (absolute error) - this is the error but dropping the + or - sign

  • PE (percentage error) - the error expressed as a percentage of actual sales, forecast sales or some other 'base'

  • APE (absolute percentage error) - same as PE but dropping the + or - sign

  • MAE (mean absolute error) or MAD (mean absolute deviation) - the average of the absolute errors across products or time periods

  • MSE (mean squared error) - the average of a number of squared errors

  • RMSE (root mean squared error) - the square root of MSE

  • MAPE (mean absolute percentage error) - see below

  • WMAPE (weighted mean absolute percentage error) - see below

  • sMAPE (symmetric MAPE) - see below

  • MASE (mean absolute scaled error) - see below
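The basic measures in the list above can be illustrated with a short calculation. The sketch below uses invented sales figures purely for demonstration, with actual sales as the base for the percentage errors:

```python
# Illustrative calculation of the basic error measures listed above.
# The sales figures are invented for demonstration purposes only.

actual   = [100.0, 120.0, 80.0, 95.0]   # actual sales per period
forecast = [ 90.0, 130.0, 85.0, 95.0]   # forecast sales per period

errors     = [a - f for a, f in zip(actual, forecast)]        # E = actual - forecast
abs_errors = [abs(e) for e in errors]                         # AE: drop the sign
pct_errors = [e * 100.0 / a for e, a in zip(errors, actual)]  # PE, base = actual sales
abs_pct    = [abs(p) for p in pct_errors]                     # APE: drop the sign

mae  = sum(abs_errors) / len(abs_errors)         # MAE (also known as MAD)
mse  = sum(e * e for e in errors) / len(errors)  # MSE: average of squared errors
rmse = mse ** 0.5                                # RMSE: square root of MSE
```

With these figures the errors are 10, -10, -5 and 0, giving an MAE of 6.25 and an RMSE of 7.5. Note how squaring in MSE/RMSE penalises the larger errors more heavily than MAE does.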

So should we report forecast error or forecast accuracy? Some companies feel it is better to strike a positive note and report accuracy rather than error. Forecast accuracy is calculated by deducting a measure of absolute percentage error from 100%, although this creates a mathematical oddity whenever the absolute percentage error exceeds 100%, since the resulting 'accuracy' is then negative.

MAPE (mean absolute percent error)

Mean Absolute Percent Error is widely used as a method of summarising forecast error across a number of time periods or products. First, each individual percent error is calculated as a percentage of Actual Sales or as a percentage of Forecast Sales, so the 'base' (the denominator) in the calculation is either Actual Sales or Forecast Sales. MAPE is the average of the absolute percent errors. If a measure of accuracy is preferred over a measure of error, it is calculated as 100 - MAPE.
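The calculation can be sketched in a few lines. This version uses actual sales as the base, and the example figures are invented:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, with actual sales as the base."""
    apes = [abs(a - f) * 100.0 / a for a, f in zip(actual, forecast)]
    return sum(apes) / len(apes)

# e.g. mape([100, 200], [110, 180]) gives (10% + 10%) / 2 = 10.0
# and the corresponding accuracy would be 100 - 10.0 = 90.0
```

If any value in `actual` is zero, the division fails, which is exactly the divide-by-zero problem discussed below.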

A major difficulty with MAPE is that if the base in any individual percent error calculation is zero, the result cannot be calculated. This is often referred to as the divide-by-zero problem. Various work-arounds have been used to deal with this issue, but none of them is mathematically satisfactory. Perhaps the biggest problem arises when MAPE is used to assess the historical errors associated with different forecasting models with a view to selecting the best model. An item with an intermittent demand pattern has zero sales in some periods, so MAPE is totally unsuitable for assessing such items in this way.

If MAPE is being used to summarise accuracy across a number of products, there is the additional disadvantage that it does not give any greater weighting to fast-moving as compared to slow-moving products. Because slow-moving products tend to exhibit high percentage errors, MAPE may tend to overstate the average error across a product family or total business.

WMAPE (weighted mean absolute % error)

Weighted Mean Absolute Percentage Error, as the name suggests, is a measure that gives greater importance to faster-selling products, thus overcoming one of the potential drawbacks of MAPE. There is a very simple way to calculate WMAPE: add together the absolute errors at the detailed level, then express the total error as a percentage of total sales. This method of calculation has the additional benefit of being robust to individual instances where the base is zero, thus overcoming the divide-by-zero problem that often afflicts MAPE.

WMAPE is a highly useful measure and is becoming increasingly popular both in corporate KPIs and for operational use. It is easily calculated and gives a concise forecast accuracy measurement that can be used to summarise performance at any detailed level across any grouping of products and/or time periods. If a measure of accuracy is required, it is calculated as 100 - WMAPE.
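The simple calculation method described above can be sketched as follows, with invented figures. The first product has zero actual sales, which would break MAPE but causes WMAPE no difficulty:

```python
def wmape(actual, forecast):
    """Weighted MAPE: total absolute error as a percentage of total actual sales."""
    total_abs_err = sum(abs(a - f) for a, f in zip(actual, forecast))
    return total_abs_err * 100.0 / sum(actual)

# Three products; the first sold nothing in the period (invented data).
# Total absolute error = 5 + 10 + 20 = 35; total sales = 200.
# wmape([0, 100, 100], [5, 90, 120]) gives 17.5
```

Because the division happens only once, over the totals, an individual zero in the actuals is harmless as long as total sales are non-zero.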

sMAPE (symmetric MAPE)

Symmetric MAPE presents an alternative way of overcoming the divide-by-zero problem of MAPE. Instead of using a base in the percentage error calculation which is either Actual Sales or Forecast Sales, it uses the average of Actual Sales and Forecast Sales. It is a rather strange measure to explain to colleagues, and therefore does not often appear in company forecast accuracy reports, but it sometimes plays a part within certain software packages in the selection of a recommended forecasting model.
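A minimal sketch of this averaged-base variant, again with invented figures (note that sMAPE has several published variants; this one uses the plain average of actual and forecast as the base):

```python
def smape(actual, forecast):
    """Symmetric MAPE: the base is the average of actual and forecast sales."""
    terms = [abs(a - f) / ((a + f) / 2.0) * 100.0
             for a, f in zip(actual, forecast)]
    return sum(terms) / len(terms)

# With actual = 100 and forecast = 50, the base is 75,
# so the single-period sMAPE is 50 / 75 * 100, roughly 66.7.
```

The base is zero only if actual and forecast are both zero (or cancel out), which is why the divide-by-zero problem largely disappears.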

MASE (mean absolute scaled error)

Mean Absolute Scaled Error has more recently been proposed as an alternative to MAPE and its variants. The scaled error is calculated as the ratio of the error to the Mean Absolute Error from a 'simple forecasting technique'. The MASE is then calculated as the average of the absolute scaled errors. Various alternatives can be used as the 'simple forecasting technique', for example a simple moving average, the Naive method (the forecast is the same as the last data point) or Seasonal Naive (the forecast is the same as the corresponding period last year).
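As a sketch, the version below uses the Naive method as the 'simple forecasting technique': the scale is the mean absolute error of one-step naive forecasts over the historical training data. All figures are invented:

```python
def mase(actual, forecast, train):
    """MASE with the Naive method as the scaling benchmark.

    The scale is the MAE of one-step naive forecasts over `train`,
    i.e. each historical point is 'forecast' by the point before it.
    """
    scale = (sum(abs(train[i] - train[i - 1]) for i in range(1, len(train)))
             / (len(train) - 1))
    scaled = [abs(a - f) / scale for a, f in zip(actual, forecast)]
    return sum(scaled) / len(scaled)

# train = [10, 20, 30]: the naive one-step errors are 10 and 10, so scale = 10.
# A forecast of 35 against an actual of 40 then gives MASE = 5 / 10 = 0.5.
```

A MASE below 1 means the forecast beat the naive benchmark on average; above 1 means it did worse. Because the scale is an average of absolute differences rather than a single sales figure, zero sales periods do not break the calculation.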

Why measure forecast accuracy?

Investigation or anticipation of problems

A problem may have occurred in the supply chain and scrutiny of forecast accuracy may be a helpful step in analysing and dealing with it. It is often useful to screen for exceptional instances of high forecast error, in order to anticipate and prevent problems from occurring. Simple measures can usually be used for these purposes, using routine or ad hoc reports to describe forecast accuracy for single entities in single time periods.

Monitoring forecast accuracy against targets or KPIs

Companies need to monitor the ongoing forecast accuracy being achieved, in order to ensure that the forecasting process is under control and is meeting any KPIs that have been adopted. Ideally one would wish to see continual improvement over time. For complex businesses it is not practical to report forecast error for every individual product or listing in every individual time period, so summary measures are needed to allow concise reporting.

Choosing between alternative forecast models

A common approach when selecting a forecasting model is to simulate forecast error over a range of historical time periods, then choose the one that would have resulted in the lowest errors. True, performance in the past is not necessarily a sure guide to performance in the future, but it is nevertheless a good starting point. A number of forecast error measures can be used for this purpose, of which WMAPE is a good example.
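The selection logic described above can be sketched as follows. The two candidate models, the history and the holdout periods are all invented for illustration; a real simulation would roll forward through many periods rather than score a single holdout:

```python
def wmape(actual, forecast):
    """Total absolute error as a percentage of total actual sales."""
    total_err = sum(abs(a - f) for a, f in zip(actual, forecast))
    return total_err * 100.0 / sum(actual)

def naive(history, n):
    """Forecast each future period as the last observed value."""
    return [history[-1]] * n

def moving_average(history, n, window=3):
    """Forecast each future period as the mean of the last `window` points."""
    return [sum(history[-window:]) / window] * n

history = [100, 104, 98, 102]   # invented fitting data
holdout = [101, 99]             # invented later actuals held back for scoring

candidates = {
    "naive": naive(history, len(holdout)),
    "moving_average": moving_average(history, len(holdout)),
}
best = min(candidates, key=lambda name: wmape(holdout, candidates[name]))
```

Here the moving average scores a lower WMAPE over the holdout than the naive model, so it would be the recommended model for this (invented) item.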

The most common family of methods for forecasting is time series forecasting, where historical data is analysed in order to calculate seasonal indices and quantify any persistent trends in sales volume. A number of different forecasting models are then available to project sales into the future, including moving average, curve fitting and exponential smoothing models. Another approach is causal modelling, where statistical techniques are used to quantify the effect on sales or market share of potential causal factors such as pricing, weather or economic indices.

Setting safety stock

When stock replenishment for fast-moving products is planned in line with the short-term forecast, safety stock is calculated using a formula that includes the variability of the forecast error as measured by its standard deviation. Safety stock can then be expressed as a quantity or in terms of days of cover, the latter being preferable for seasonal products. Achieving the lowest possible forecast error is therefore vital in striving for the minimum safety stock.
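One common textbook form of this calculation is sketched below. It is an assumption-laden simplification: real systems also adjust for review periods, lead-time variability and demand distribution shape, and the error figures and service factor here are invented:

```python
from math import sqrt
from statistics import stdev

def safety_stock(errors, service_z, lead_time_periods):
    """Safety stock from forecast error variability (a common textbook form).

    errors: historical forecast errors (actual - forecast), one per period
    service_z: z-score for the target service level (e.g. about 1.65 for 95%)
    lead_time_periods: replenishment lead time, in forecast periods
    """
    sigma = stdev(errors)  # sample standard deviation of the forecast errors
    return service_z * sigma * sqrt(lead_time_periods)
```

Because the standard deviation of the forecast error multiplies directly into the result, halving the forecast error variability halves the safety stock at the same service level, which is the point made above.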