Net Promoter Score (NPS) is a customer loyalty metric that measures how likely your customers are to recommend your company to someone else. It condenses that likelihood into a single number ranging from -100 to +100, giving businesses a quick read on overall customer sentiment. Developed by Fred Reichheld and introduced in a 2003 Harvard Business Review article, NPS has become one of the most widely used customer experience metrics across industries.
How NPS Is Measured
The core of NPS is a single survey question: “On a scale from 0 to 10, how likely are you to recommend [Company/Product/Service] to a friend or colleague?” That’s it. The power of the metric comes from its simplicity. Customers answer quickly, which means response rates tend to be higher than with longer satisfaction surveys.
Most companies pair that question with an open-ended follow-up like “What is the primary reason for your score?” or “What could we do to improve your experience?” The number gives you a trend line. The follow-up gives you the context behind it.
The Three Customer Groups
Based on their response, each customer falls into one of three categories:
- Promoters (9 or 10): Your most loyal and enthusiastic customers. They’re likely to keep buying, spend more over time, and actively refer others.
- Passives (7 or 8): Satisfied for now, but not especially enthusiastic. Their repurchase and referral rates can run as much as 50% lower than promoters'. They're vulnerable to competitors.
- Detractors (0 through 6): Unhappy customers who account for more than 80% of negative word of mouth. They churn at high rates and can actively discourage others from doing business with you.
The wide detractor range (seven out of eleven possible scores) often surprises people. A customer who gives you a 6 might feel lukewarm, but NPS treats them the same as someone who gives a 0. That’s a deliberate design choice: the framework assumes that anything short of genuine enthusiasm signals a risk to growth.
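The cutoffs above translate directly into code. A minimal sketch in Python (the function name `classify` is just illustrative):

```python
def classify(score: int) -> str:
    """Map a 0-10 survey response to its NPS category."""
    if not 0 <= score <= 10:
        raise ValueError(f"score must be between 0 and 10, got {score}")
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

# A 6 lands in the same bucket as a 0: anything below 7 is a detractor.
print(classify(10))  # promoter
print(classify(7))   # passive
print(classify(6))   # detractor
```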
How to Calculate the Score
The formula is straightforward. Take the percentage of respondents who are promoters and subtract the percentage who are detractors. Ignore passives entirely in the math.
Say you survey 200 customers. 100 give you a 9 or 10 (promoters), 60 give you a 7 or 8 (passives), and 40 give you a 0 through 6 (detractors). Your promoter percentage is 50%, your detractor percentage is 20%, and your NPS is +30.
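The worked example above can be verified with a short script (the function name `nps` is illustrative):

```python
def nps(scores):
    """Compute Net Promoter Score from raw 0-10 responses."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # Passives count toward n but toward neither group, so they
    # dilute the score without shifting it in either direction.
    return round(100 * (promoters - detractors) / n)

# The survey from the text: 100 promoters, 60 passives, 40 detractors.
responses = [9] * 100 + [7] * 60 + [5] * 40
print(nps(responses))  # 30
```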
The result can range from -100 (every single respondent is a detractor) to +100 (every respondent is a promoter). A positive score means you have more promoters than detractors. Scores above +50 are generally considered excellent, and anything above +70 is world-class, though what counts as “good” varies significantly by industry.
What a “Good” Score Looks Like
NPS benchmarks differ dramatically depending on the type of business. Consumer brands with strong emotional connections, like certain retailers or streaming services, tend to score higher. Business software companies tend to score much lower. A 2025 analysis of 23 business software products found an average NPS of -5, with individual scores ranging from -38 to +24. That average has been fairly consistent, sitting at -3 in 2020 and -12 in 2022.
This means a score of +10 might be disappointing for a consumer brand but genuinely strong for enterprise software. The most useful comparison is against direct competitors in your category, or against your own score over time. Chasing an absolute number without that context can lead to misguided priorities.
Relational vs. Transactional NPS
There are two distinct ways to deploy an NPS survey, and they answer different questions.
Relational NPS asks customers how they feel about your company overall. You send it on a regular schedule, often quarterly or annually, to get a high-level view of brand loyalty. It’s useful for tracking trends over time and benchmarking against competitors.
Transactional NPS asks how customers feel after a specific interaction, like a support call, a purchase, or onboarding as a new customer. It gives each department or touchpoint its own measurable metric and makes it easier to pinpoint exactly where experiences break down. Timing matters:
- Post-purchase: it's common to wait a week or two so customers have time to evaluate.
- Contact center interactions: sending the survey immediately, while the experience is fresh, tends to work better.
- After a product update or redesign: giving customers a few days or weeks to adjust avoids capturing knee-jerk reactions to change.
Many companies run both types simultaneously. Relational surveys provide the big picture. Transactional surveys surface the specific, actionable problems.
Where NPS Falls Short
NPS is popular partly because a single number is easy to report in meetings, track on dashboards, and set goals around. But that simplicity comes with real trade-offs.
The biggest limitation is that one number can’t reliably predict future growth, loyalty, or advocacy on its own. Researchers have pointed out for years that NPS has weak predictive power and confuses correlation with causality. A rising score doesn’t guarantee revenue growth, and a dipping score doesn’t always signal trouble.
Sample size is another practical challenge. If you have 1,000 customers and only 8% respond to your survey, you’re working with roughly 80 responses. Monthly fluctuations in a sample that small may be pure noise rather than meaningful signals. Smaller companies can find it nearly impossible to draw reliable conclusions from month-to-month NPS changes.
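That noise can be quantified with a rough confidence interval. NPS is a difference of two proportions, so a standard normal-approximation formula applies; the sketch below is back-of-the-envelope, and the 50/20 split is a hypothetical plugged into the 80-response scenario from the text:

```python
import math

def nps_margin(p_promoters: float, p_detractors: float, n: int,
               z: float = 1.96) -> float:
    """Approximate 95% margin of error for an NPS estimate, in points.

    Uses the normal approximation for a difference of proportions:
    Var(NPS) = p + d - (p - d)^2, divided by the sample size.
    """
    point_estimate = p_promoters - p_detractors
    variance = p_promoters + p_detractors - point_estimate ** 2
    return z * math.sqrt(variance / n) * 100

# 80 responses split 50% promoters / 20% detractors (NPS = +30):
print(round(nps_margin(0.5, 0.2, 80)))  # 17 -- roughly +/- 17 points
```

With a margin that wide, a month-over-month swing of 10 points is indistinguishable from sampling noise.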
There’s also the question of whether NPS actually surfaces anything you don’t already know. Companies that talk to their customers regularly and use their own products often find that NPS comments echo feedback they’ve already heard through other channels. The real value, in many cases, comes not from the score itself but from the qualitative comments attached to it. Those open-ended responses can reveal specific pain points, feature requests, and language your customers use to describe their experience.
Finally, frequent surveying can annoy the very customers you’re trying to delight. Each pop-up or email asking “How likely are you to recommend us?” is a small interruption. Companies that over-survey risk lower response rates over time and a skewed sample of only the most opinionated respondents.
Making the Score Useful
NPS works best as a starting point, not an endpoint. The score tells you whether sentiment is moving in the right direction. The follow-up comments tell you why. And closing the loop (actually responding to detractors and acting on their feedback) is where the real business impact happens.
Track your score over consistent time periods using consistent methodology. If you change when, how, or whom you survey, your numbers will shift in ways that have nothing to do with actual customer sentiment. Segment your results by customer type, product line, or interaction channel to find patterns the overall number hides. A company-wide NPS of +20 might mask the fact that your support team scores +50 while your onboarding process sits at -15.
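Segmenting is mechanical once each response is tagged. A sketch, assuming responses arrive as (segment, score) pairs; the segment labels and sample data here are hypothetical:

```python
from collections import defaultdict

def nps_by_segment(responses):
    """Compute NPS separately for each segment.

    `responses` is an iterable of (segment_label, score) pairs,
    where score is the 0-10 survey answer.
    """
    buckets = defaultdict(lambda: {"promoters": 0, "detractors": 0, "n": 0})
    for segment, score in responses:
        b = buckets[segment]
        b["n"] += 1
        if score >= 9:
            b["promoters"] += 1
        elif score <= 6:
            b["detractors"] += 1
    return {
        seg: round(100 * (b["promoters"] - b["detractors"]) / b["n"])
        for seg, b in buckets.items()
    }

# Two touchpoints whose blended score would hide a problem:
data = ([("support", 9)] * 6 + [("support", 5)] * 1 +
        [("onboarding", 9)] * 2 + [("onboarding", 5)] * 4)
print(nps_by_segment(data))
```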
Pair NPS with other metrics like customer retention rates, support ticket volume, and actual referral behavior. No single number captures the full picture of customer loyalty. Used alongside those signals, NPS gives you one more data point to work with. Used in isolation, it risks becoming a number people optimize for without improving the experiences behind it.

