Forecast Solutions

Forecast Accuracy Measurement with MAPE and WMAPE

Achieving the best possible forecast accuracy is vital to keeping stock levels as low as possible while still delivering the required customer service level.  Measuring forecast accuracy is also important for investigating existing or potential problems in the supply chain and for ensuring that the forecasting system is under control.

Forecast accuracy is the complement of forecast error.  Its calculation therefore starts with measurement of the error; the accuracy is then calculated from that error.  There are a number of ways to summarise forecast error across a number of products or time periods, the traditional method being the mean absolute percentage error, or MAPE.  More recent alternatives are often preferable, particularly the weighted mean absolute percentage error, or WMAPE.

At Forecast Solutions we can advise on suitable forecast accuracy measures and the development of meaningful reports.  The subject of forecast accuracy measurement is also covered in depth in one of our training courses.

Some Forecast Error Measures

A list of some forecast error measures is given below, together with some explanation (a short calculation sketch follows the list):

  • E (forecast error) - e.g. actual sales minus forecast sales

  • AE (absolute error) - this is the error but dropping the + or - sign

  • PE (percentage error) - the error expressed as a percentage of actual sales, forecast sales or some other 'base'

  • APE (absolute percentage error) - same as PE but dropping the + or - sign

  • MAE (mean absolute error) or MAD (mean absolute deviation) - the average of the absolute errors across products or time periods

  • MSE (mean squared error) - the average of a number of squared errors

  • RMSE (root mean squared error) - the square root of MSE

  • MAPE (mean absolute percentage error) - see below

  • WMAPE (weighted mean absolute percentage error) - see below

  • sMAPE (symmetric MAPE) - see below

  • MASE (mean absolute scaled error) - see below
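
To make the first few measures concrete, here is a minimal Python sketch of their calculation for a single product across four periods. The sales and forecast figures are invented purely for illustration.

```python
# Illustrative sketch only: basic forecast error measures for one product
# across several periods. The figures are made-up example data.
from math import sqrt

actuals   = [120, 95, 140, 110]   # actual sales per period
forecasts = [100, 100, 150, 100]  # forecast sales per period

errors     = [a - f for a, f in zip(actuals, forecasts)]      # E  = actual - forecast
abs_errors = [abs(e) for e in errors]                         # AE = |E|
pct_errors = [100 * e / a for e, a in zip(errors, actuals)]   # PE, with actual sales as the base
abs_pct    = [abs(p) for p in pct_errors]                     # APE

mae  = sum(abs_errors) / len(abs_errors)                      # MAE (or MAD)
mse  = sum(e ** 2 for e in errors) / len(errors)              # MSE
rmse = sqrt(mse)                                              # RMSE

print(f"MAE = {mae:.1f}   MSE = {mse:.1f}   RMSE = {rmse:.1f}")
```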

MAPE (mean absolute percentage error)

Mean Absolute Percentage Error is widely used as a method of summarising forecast error across a number of time periods or products. First, the absolute value (dropping the + or - sign) of the forecast error for each item is expressed as a percentage of a 'base', which is either actual sales or the forecast. MAPE is then simply the average of the individual percentage errors.

A major difficulty with MAPE is that if the base in any individual percentage error calculation is zero, the result cannot be calculated. This is often referred to as the divide-by-zero problem. Various work-arounds have been used to deal with this issue, but none of them is mathematically correct.

If MAPE is used to summarise accuracy across a number of products, there is the additional disadvantage that it gives no greater weighting to fast-moving than to slow-moving products.  Because slow-moving products tend to exhibit high percentage errors, MAPE may overstate the average error across a product family or the total business.
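
Continuing the illustrative figures above, a minimal sketch of the MAPE calculation, with actual sales as the base:

```python
# Illustrative sketch only: MAPE across four periods, with actual sales as the base.
actuals   = [120, 95, 140, 110]
forecasts = [100, 100, 150, 100]

apes = [100 * abs(a - f) / a for a, f in zip(actuals, forecasts)]  # APE per period
mape = sum(apes) / len(apes)                                       # unweighted average
print(f"MAPE = {mape:.1f}%")

# Note: if any actual were zero, the APE for that period would be undefined -
# the divide-by-zero problem described above.
```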

WMAPE (weighted mean absolute percentage error)

Weighted Mean Absolute Percentage Error, as the name suggests, weights the errors by product volume, thus overcoming one of the main drawbacks of MAPE. There is a very easy way to calculate WMAPE: simply add together the absolute errors at the detailed level, then calculate the total absolute error as a percentage of total sales (or total forecast).  This method of calculation also avoids the divide-by-zero problem that can be such an issue with MAPE.

WMAPE is a highly useful measure and is becoming increasingly popular both in corporate KPIs and for operational use. It is easily calculated and gives a concise forecast accuracy measurement that can be used to summarise performance at any level of detail, across any grouping of products and/or time periods. If a measure of accuracy is required, it is calculated as 100% - WMAPE.
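
A minimal sketch of the easy calculation method described above, again using invented figures; note that a zero-sales period causes no difficulty:

```python
# Illustrative sketch only: WMAPE as total absolute error over total actual sales.
actuals   = [120, 95, 140, 110, 0]    # a zero-sales period is not a problem here
forecasts = [100, 100, 150, 100, 10]

total_abs_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
wmape    = 100 * total_abs_error / sum(actuals)
accuracy = 100 - wmape                # forecast accuracy, if preferred
print(f"WMAPE = {wmape:.1f}%   accuracy = {accuracy:.1f}%")
```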

sMAPE (symmetric MAPE)

Symmetric MAPE presents an alternative way of overcoming the divide-by-zero problem of MAPE.  Instead of using either actual sales or forecast sales as the base in the percentage error calculation, it uses the average of actual and forecast.  It is a rather strange measure to explain to colleagues, and therefore does not often appear in company forecast accuracy reports.  It is, however, sometimes used in software packages to select a recommended forecasting model.
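
A minimal sketch of the sMAPE calculation as described above, with the average of actual and forecast as the base (invented figures again):

```python
# Illustrative sketch only: sMAPE, using the average of actual and forecast as the base.
actuals   = [120, 95, 140, 110]
forecasts = [100, 100, 150, 100]

sapes = [100 * abs(a - f) / ((a + f) / 2) for a, f in zip(actuals, forecasts)]
smape = sum(sapes) / len(sapes)
print(f"sMAPE = {smape:.1f}%")
```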

MASE (mean absolute scaled error)

Mean Absolute Scaled Error has more recently been proposed as an alternative to MAPE and its variants. The scaled error is calculated as the ratio of the error to the Mean Absolute Error of a 'simple forecasting technique'. The MASE is then calculated as the average of the absolute scaled errors. Various alternatives can be used as the 'simple forecasting technique', for example a simple moving average, the Naive method (the forecast is the same as the last data point) or Seasonal Naive (the forecast is the same as the corresponding period last year).
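
A minimal sketch of MASE using the Naive method as the simple benchmark; the sales history and forecasts are invented for illustration:

```python
# Illustrative sketch only: MASE with the Naive method (forecast = previous actual)
# as the simple benchmark technique.
history   = [100, 120, 95, 140, 110]   # actual sales, oldest first
forecasts = [110, 100, 150, 100]       # forecasts being evaluated, for periods 2 to 5

# MAE of the naive forecast: each period predicted by the previous actual
naive_abs_errors = [abs(history[t] - history[t - 1]) for t in range(1, len(history))]
naive_mae = sum(naive_abs_errors) / len(naive_abs_errors)

# Scale each forecast error by the naive MAE, then average the absolute values
abs_scaled = [abs(a - f) / naive_mae for a, f in zip(history[1:], forecasts)]
mase = sum(abs_scaled) / len(abs_scaled)
print(f"MASE = {mase:.2f}   (values below 1 beat the naive benchmark)")
```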

Should we report Error or Accuracy?

Some companies feel there is a more positive spin to reporting % accuracy rather than % error.  That is fine, although there are some downsides to reporting % accuracy and we might just as well concentrate on forecast error and the need to minimise it.

If accuracy is preferred, the % forecast accuracy is calculated by deducting the absolute % error from 100%.  So an error of 20% results in an accuracy of 80%.  One notable problem with % accuracy is the mathematical issue that occurs when the forecast error is greater than 100%: the resulting negative accuracy is difficult to interpret.

Why measure forecast accuracy?

Investigation or anticipation of problems

A problem may have occurred in the supply chain and scrutiny of forecast accuracy may be a helpful step in analysing and dealing with it. It is often useful to screen for exceptional instances of high forecast error, in order to anticipate and prevent problems from occurring. Simple measures can usually be used for these purposes, using routine or ad hoc reports to describe forecast accuracy for single entities in single time periods.
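
As an illustration of this kind of exception screening, here is a minimal sketch that flags products whose latest absolute percentage error exceeds a threshold. The product codes, figures and the 40% threshold are invented assumptions, not recommendations.

```python
# Illustrative sketch only: screening for exceptional forecast errors in the
# latest period. Product codes, figures and threshold are made up.
THRESHOLD_APE = 40.0    # flag items whose absolute % error exceeds this

latest = {                       # product: (actual sales, forecast sales)
    "SKU-001": (500, 480),
    "SKU-002": (60, 110),
    "SKU-003": (0, 25),
}

for sku, (actual, forecast) in latest.items():
    abs_error = abs(actual - forecast)
    if actual == 0:
        print(f"{sku}: no sales against a forecast of {forecast} - review")
    elif 100 * abs_error / actual > THRESHOLD_APE:
        print(f"{sku}: APE of {100 * abs_error / actual:.0f}% exceeds threshold - review")
```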

Monitoring forecast accuracy against targets or KPIs

Companies need to monitor the ongoing forecast accuracy that is being achieved in order to ensure the forecasting process is under control and meeting any KPIs that have been adopted. Ideally one would wish to see continual improvement over time. For complex businesses it is not practical to report forecast error for every individual product or listing in every individual time period, so summary measures are needed to allow concise reporting.

Choosing between alternative forecast models

A common approach when selecting a forecasting model is to simulate forecast error over a range of historical time periods, then choose the model that would have resulted in the lowest errors. True, performance in the past is not necessarily a sure guide to performance in the future, but it is nevertheless a good starting point. A number of forecast error measures can be used for this purpose, of which WMAPE is a good example (the MAPE measure often falls over due to the divide-by-zero problem).
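
A minimal sketch of this kind of comparison, scoring two simple candidate methods over a historical hold-out window using WMAPE. The methods, window length and figures are invented for illustration.

```python
# Illustrative sketch only: comparing two candidate forecasting methods over a
# historical hold-out window using simulated WMAPE.
history = [100, 120, 95, 140, 110, 130, 105, 150]   # actual sales, oldest first

def naive(past):                  # forecast = last observed value
    return past[-1]

def moving_average(past, n=3):    # forecast = average of the last n values
    return sum(past[-n:]) / n

def simulated_wmape(method, history, holdout=4):
    """Forecast each of the last `holdout` periods one step ahead,
    then summarise the error with WMAPE."""
    start = len(history) - holdout
    actuals   = history[start:]
    forecasts = [method(history[:t]) for t in range(start, len(history))]
    total_abs_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    return 100 * total_abs_error / sum(actuals)

for name, method in [("Naive", naive), ("3-period moving average", moving_average)]:
    print(f"{name}: simulated WMAPE = {simulated_wmape(method, history):.1f}%")
```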

Setting safety stock

When stock replenishment for fast-moving products is planned in line with the short-term forecast, safety stock is calculated using a formula that includes the variability of the forecast error as measured by its standard deviation. Safety stock can then be expressed as a quantity or in terms of days of cover, the latter being preferable for seasonal products. Achieving the lowest possible forecast error is therefore vital in striving for the minimum safety stock.
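
As an illustration only, here is a sketch of one common textbook form of that calculation, safety stock = z x standard deviation of forecast error x square root of lead time. The service level, lead time, error figures and daily demand below are invented assumptions, and this page does not prescribe this exact formula.

```python
# Illustrative sketch only: one common textbook safety stock formula,
#   safety stock = z * sigma * sqrt(lead time in periods),
# where sigma is the standard deviation of the forecast error per period.
# All figures below are made-up assumptions.
from math import sqrt
from statistics import NormalDist, stdev

forecast_errors = [20, -5, 10, -30, 15, -10]   # actual - forecast, recent periods
sigma = stdev(forecast_errors)                 # variability of the forecast error
z = NormalDist().inv_cdf(0.95)                 # service factor for a 95% service level
lead_time_periods = 2

safety_stock = z * sigma * sqrt(lead_time_periods)
daily_demand = 50                              # assumed average daily sales
print(f"Safety stock approx. {safety_stock:.0f} units "
      f"(about {safety_stock / daily_demand:.1f} days of cover)")
```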