What Is a Good MAPE? Benchmarks by Context

A good MAPE (Mean Absolute Percentage Error) depends heavily on what you’re forecasting, but as a general starting point: a MAPE under 10% is considered excellent, 10% to 20% is good, 20% to 50% is reasonable, and anything above 50% signals your forecast needs serious improvement. Those ranges shift dramatically based on your forecast horizon, the granularity of your data, and how volatile the thing you’re measuring actually is.

How MAPE Works

MAPE measures the average size of your forecast errors as a percentage of the actual values. For each data point, you take the absolute difference between your forecast and the actual result, divide by the actual result, then average all those percentages together. A MAPE of 15% means your forecasts were off by an average of 15% from reality.
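If it helps to see the arithmetic, here is a minimal sketch in plain Python (the function name and sample numbers are just for illustration):

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, as a percentage.

    Assumes no actual value is zero; see the blind spots discussed below.
    """
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

# Forecasts off by 10, 20, and 0 units against actuals of 100:
print(mape([100, 100, 100], [90, 120, 100]))  # 10.0 -> off by 10% on average
```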

The appeal is simplicity. Unlike raw error metrics measured in units (dollars, widgets, people), MAPE gives you a single percentage that’s easy to interpret and compare across different scales. A supply chain manager forecasting thousands of SKUs and an analyst projecting quarterly revenue can both use MAPE to gauge accuracy.

What Shifts the Target

There is no single universal threshold for a “good” MAPE because the achievable accuracy depends on several factors that vary wildly across industries and use cases.

Forecast horizon. The further out you predict, the less accurate you’ll be. Regional population projections, for example, show a MAPE of about 3% over a 5-year window but climb to 9% over 20 years. Employment projections are even more volatile, jumping from roughly 5% at 5 years to 15% at 15 years. If you’re forecasting next week’s sales, a 5% MAPE might be realistic. If you’re projecting demand 12 months out, 20% could be perfectly acceptable.

Data granularity. Aggregated data is easier to forecast than granular data. Forecasting total national demand for a product category will yield a much lower MAPE than forecasting demand for a single SKU at a single warehouse. Population projections at the state level have been shown to produce roughly half the MAPE of county-level projections over the same time horizon. The more you zoom in, the more noise you encounter, and the higher your MAPE will naturally run.

Volatility. Stable, predictable series produce lower MAPEs. A utility company forecasting electricity baseload demand might consistently hit under 5%. A fashion retailer forecasting demand for a new seasonal product line might consider 30% a win. Fast-growing categories, new product launches, and markets sensitive to external shocks will all push MAPE higher regardless of how good your model is.

Rough Benchmarks by Context

  • Mature, stable products with short horizons: Under 10% is achievable and expected. Think staple grocery items forecasted a week or two ahead.
  • Supply chain and demand planning (monthly): 10% to 20% is solid for high-volume products. Slower-moving or more variable items often land in the 20% to 40% range.
  • Financial and economic projections: 5% to 15% over shorter periods. Longer-horizon economic forecasts regularly exceed 10% to 20%, and that’s considered normal.
  • New products, intermittent demand, or long horizons: 30% to 50% or higher may be the realistic ceiling. The data simply doesn’t support pinpoint accuracy in these cases.

The most useful benchmark is often your own historical performance. If your team’s MAPE on a particular forecast has hovered around 25% and you bring it down to 18%, that’s meaningful progress, even if 18% sounds high in the abstract.

Where MAPE Can Mislead You

MAPE has a few well-known blind spots that can make your accuracy look worse (or better) than it really is.

The biggest problem is division by zero. Since MAPE divides each error by the actual value, any period where actual demand is zero or near zero produces an infinite or undefined result. This is common with seasonal products in their off-season, new product launches with no sales history, or intermittent demand items. Many teams simply exclude those data points, but that biases your sample by removing some of the hardest-to-forecast items.
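Here is the failure mode in miniature (the numbers are made up for illustration). Computing MAPE naively raises a division-by-zero error on the zero periods, and the common workaround of dropping them quietly shrinks the sample toward the easy cases:

```python
actuals   = [120, 0, 95, 0, 110]   # two off-season periods with zero demand
forecasts = [115, 8, 100, 3, 104]

# Dropping zero-actual periods avoids the ZeroDivisionError, but the
# remaining sample no longer represents the hardest-to-forecast items:
kept = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
errors = [abs(a - f) / a for a, f in kept]
print(f"MAPE on {len(kept)} of {len(actuals)} periods: "
      f"{100 * sum(errors) / len(errors):.1f}%")  # 5.0%
```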

MAPE also treats over-forecasting and under-forecasting asymmetrically. If actual demand is 100 and you forecast 50, the error is 50%. But if actual demand is 50 and you forecast 100, the error is 100%. Both forecasts miss by the same 50 units, yet MAPE penalizes the overestimate twice as much, because the denominator is the smaller actual value. This asymmetry can subtly encourage models that lean toward under-forecasting.

Finally, MAPE weights all items equally regardless of their business importance. A low-volume product that sells 2 units per month will generate enormous percentage errors from small absolute misses, potentially dominating your overall MAPE even though it represents a tiny fraction of revenue or cost.

When to Use an Alternative Metric

If any of those limitations affect your situation, a closely related metric may serve you better.

WMAPE (Weighted MAPE) solves the equal-weighting problem. Instead of averaging percentage errors across items, it divides total absolute error by total actual volume. The formula is straightforward: sum up all the absolute errors, divide by the sum of all actual values, and multiply by 100. Each product or period contributes to the metric in proportion to its actual volume, so high-volume items that drive most of your business costs naturally carry more weight. In demand planning, WMAPE answers a more practical question: “What proportion of my total demand did I forecast incorrectly?”
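A minimal sketch, using a deliberately extreme two-item example to show how WMAPE differs from plain MAPE when a low-volume item misses badly:

```python
def wmape(actuals, forecasts):
    """Weighted MAPE: total absolute error over total actual volume."""
    total_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    return 100 * total_error / sum(actuals)

# A high-volume item forecast well, a low-volume item forecast badly:
actuals   = [1000, 2]
forecasts = [950, 6]
# Plain MAPE averages 5% and 200% into 102.5%; WMAPE weights by volume.
print(wmape(actuals, forecasts))  # ~5.4%
```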

MAE (Mean Absolute Error) drops the percentage entirely and reports error in the same units as your data (dollars, units, headcount). It avoids the division-by-zero issue and the asymmetry problem, though it’s harder to compare across products or datasets of different scales.

SMAPE (Symmetric MAPE) attempts to fix the asymmetry by dividing each error by the average of the forecast and actual values instead of just the actual. It reduces the penalty gap between over-forecasts and under-forecasts, though it introduces its own quirks and still struggles with zero values.
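Sketches of both, under the same assumptions as the earlier snippets (SMAPE in particular has several competing definitions across tools; this is one common form):

```python
def mae(actuals, forecasts):
    """Mean Absolute Error, in the data's own units."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def smape(actuals, forecasts):
    """Symmetric MAPE: error over the mean of forecast and actual.

    Still undefined when both values in a pair are zero.
    """
    terms = [abs(a - f) / ((abs(a) + abs(f)) / 2)
             for a, f in zip(actuals, forecasts)]
    return 100 * sum(terms) / len(terms)

# The over/under pair from earlier: SMAPE scores both misses at ~66.7%.
print(smape([100], [50]), smape([50], [100]))
```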

For most business forecasting, WMAPE has become the preferred successor to MAPE because it tracks the cost of your errors more directly. If your organization uses standard MAPE and you notice that a handful of low-volume items are skewing your results, switching to WMAPE will give you a more honest picture of forecast performance where it matters most.

How to Improve Your MAPE

If your current MAPE is higher than the benchmarks for your context, focus on the areas where improvement will have the largest impact. Start with your highest-volume items, since reducing error on those products moves the needle more than perfecting a forecast for a niche SKU. Review your forecast horizon and consider whether you’re trying to predict further out than your data supports. Shortening the horizon, or refreshing forecasts more frequently, is often the simplest way to bring MAPE down.

Check whether your data granularity matches your decision-making needs. If you only need a regional demand number to make allocation decisions, don’t evaluate your accuracy at the store level and panic over a high MAPE. Aggregate where your decisions are actually made. Finally, segment your products by volume and variability, then set different MAPE targets for each segment. Holding a volatile, low-volume item to the same standard as a stable bestseller will either frustrate your team or distort your model selection.
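Picking up the aggregation point above, here is a sketch of evaluating the same forecasts at two levels, assuming a pandas DataFrame with hypothetical region, actual, and forecast columns:

```python
import pandas as pd

# Hypothetical store-level data; names and numbers are illustrative.
df = pd.DataFrame({
    "region":   ["East", "East", "West", "West"],
    "actual":   [80, 120, 60, 140],
    "forecast": [100, 100, 90, 110],
})

def mape_pct(actual, forecast):
    return 100 * (abs(actual - forecast) / actual).mean()

# Store-level MAPE looks alarming...
print(mape_pct(df["actual"], df["forecast"]))  # ~28.3%

# ...but at the regional level, where allocation decisions are made,
# the misses partly cancel and the forecast is spot on.
regional = df.groupby("region")[["actual", "forecast"]].sum()
print(mape_pct(regional["actual"], regional["forecast"]))  # 0.0%
```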