A customer satisfaction survey is a structured set of questions that a business sends to its customers to measure how happy they are with a product, service, or specific interaction. These surveys produce quantifiable scores that companies use to identify what’s working, what’s frustrating customers, and where to focus improvements. They range from a single-question pop-up after a support chat to a detailed questionnaire emailed after a purchase.
How Customer Satisfaction Surveys Work
At their core, these surveys ask customers to rate their experience on a defined scale, then aggregate those ratings into a score the business can track over time. The simplest version is a binary thumbs-up or thumbs-down. More detailed versions use numbered scales (1 to 5 or 1 to 10) and include open-ended questions where customers can explain their rating in their own words.
The timing matters. A survey sent immediately after a support call captures how the customer felt about that specific interaction. One sent a month after purchase captures how well the product held up in daily life. Businesses choose when to send surveys based on what they’re trying to learn, and many run multiple types simultaneously to get a fuller picture.
Three Core Metrics
Most customer satisfaction surveys feed into one of three standard scoring systems. Each measures something slightly different, and many companies use more than one.
CSAT (Customer Satisfaction Score)
CSAT measures satisfaction with a specific interaction or purchase. Customers typically rate their experience on a 1-to-5 scale, where 1 means “very unsatisfied” and 5 means “very satisfied.” Some companies use a 1-to-10 scale or a simple satisfied/unsatisfied choice instead. The formula is straightforward: divide the number of satisfied customers (on a 1-to-5 scale, typically those who answer 4 or 5) by the total number of responses, then multiply by 100. If 80 out of 100 respondents say they’re satisfied, your CSAT is 80%. This metric is best for measuring how well you handled a particular moment, like a checkout experience or a product return.
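In code, the calculation is trivial. Here’s a minimal Python sketch, assuming the common top-two-box convention on a 1-to-5 scale (the response distribution is invented for illustration):

```python
def csat(ratings, satisfied_threshold=4):
    """CSAT: percentage of respondents rating at or above the threshold."""
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return 100 * satisfied / len(ratings)

# 80 satisfied responses (4s and 5s) out of 100 -> CSAT of 80%
ratings = [5] * 50 + [4] * 30 + [3] * 12 + [2] * 5 + [1] * 3
print(f"CSAT: {csat(ratings):.0f}%")  # prints "CSAT: 80%"
```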
NPS (Net Promoter Score)
NPS measures long-term loyalty and brand health rather than a single interaction. It asks one question: “How likely are you to recommend us to a friend or colleague?” on a 0-to-10 scale. Respondents who answer 9 or 10 are “Promoters.” Those who answer 7 or 8 are “Passives.” Anyone who answers 0 through 6 is a “Detractor.” The score equals the percentage of Promoters minus the percentage of Detractors, producing a number between -100 and +100. A positive score means you have more advocates than critics. NPS is popular because it’s simple to administer and gives a broad read on customer loyalty.
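The arithmetic looks like this in a short Python sketch (again, the response distribution is made up for illustration):

```python
def nps(scores):
    """NPS: % of Promoters (9-10) minus % of Detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 40 Promoters, 40 Passives, 20 Detractors out of 100 -> NPS of +20
scores = [10] * 25 + [9] * 15 + [8] * 25 + [7] * 15 + [5] * 12 + [2] * 8
print(f"NPS: {nps(scores):+.0f}")  # prints "NPS: +20"
```

Note that Passives drop out of the formula entirely; they dilute the percentages but don’t count for or against you.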
CES (Customer Effort Score)
CES gauges how easy or difficult it was for a customer to complete a task, like resolving a billing issue or finding information on your website. Customers rate their effort on a scale of 1 to 7, where 1 means “very low effort” and 7 means “very high effort” (though some companies invert the scale). The score is calculated by averaging all responses. CES is strongly predictive of loyalty because customers who have to work hard to get a problem solved are far more likely to take their business elsewhere, even if the problem eventually gets resolved.
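Since CES is just a mean, the calculation is one line. This sketch assumes the 1-means-low-effort orientation described above:

```python
def ces(ratings):
    """CES: average effort rating (here 1 = very low effort, 7 = very high)."""
    return sum(ratings) / len(ratings)

responses = [2, 1, 3, 2, 6, 1, 2]
print(f"CES: {ces(responses):.2f}")  # prints "CES: 2.43" -- lower is better here
```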
Common Question Formats
The questions themselves generally fall into three categories, and most effective surveys combine at least two of them.
Likert scale questions ask customers to choose from a range of agreement or satisfaction levels. A typical five-point version reads: “How satisfied are you with our service?” with options from “Very satisfied” down to “Very dissatisfied.” Some companies remove the neutral middle option (creating a four-point scale) to force respondents to lean positive or negative, which can produce clearer data but may frustrate people who genuinely feel neutral.
Binary questions offer just two choices: “Were you satisfied with your experience?” paired with a smiley face and an unhappy face, or “Did you find what you were looking for today?” with a simple yes or no. These are fast for the customer and produce clean data, but they sacrifice nuance.
Open-ended questions like “What could we have improved on today?” give customers a text box to write freely. These responses take more effort to analyze but often surface specific problems that rating scales miss entirely. A customer might rate a support interaction 4 out of 5 but mention in the text box that they were transferred three times before reaching the right person.
How Surveys Reach Customers
The channel you use to deliver a survey affects who responds and how thoughtfully they respond. Each has trade-offs.
- Email is the most common method for post-purchase or post-service surveys. It lets customers respond at their convenience, and most survey platforms integrate with CRM systems so responses link back to individual customer records. Segmenting your email list lets you send different surveys to different customer groups.
- Text message surveys reach a broad audience, including people who don’t regularly check email or use social media. The immediacy of a text often produces faster response times, making this a good choice for quick, short surveys right after an interaction.
- Web links and QR codes can be shared almost anywhere: in a follow-up email, on a receipt, on a poster in a store, or embedded in an app. QR codes work especially well for physical locations like restaurants or retail stores where you want feedback while the experience is fresh.
- Social media gives access to a large audience and lets you target specific demographics or interest groups. The trade-off is less control over who responds, since followers and non-customers alike may participate.
- In-person surveys work well when you have face-to-face contact with customers. A surveyor can clarify confusing questions on the spot, and the personal interaction tends to produce more honest, detailed answers. This method also reaches people who aren’t comfortable with digital tools.
What Companies Do With the Results
Collecting scores is only useful if they lead to action. Companies typically track their CSAT, NPS, or CES over time to spot trends. A sudden drop in satisfaction after a website redesign, for example, signals a usability problem. A gradual decline in NPS over several quarters might point to increasing competition or product stagnation.
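That kind of trend-watching is straightforward to automate. Here’s a minimal sketch in which any month that falls well below its recent baseline gets flagged (the monthly scores and the 5-point threshold are hypothetical choices, not a standard):

```python
# Flag any month whose CSAT drops more than 5 points below the
# average of the preceding three months.
monthly_csat = {"Jan": 82, "Feb": 81, "Mar": 83, "Apr": 74, "May": 75}

months = list(monthly_csat)
for i in range(3, len(months)):
    baseline = sum(monthly_csat[m] for m in months[i - 3:i]) / 3
    score = monthly_csat[months[i]]
    if score < baseline - 5:
        print(f"{months[i]}: {score} vs. baseline {baseline:.1f} -- investigate")
# prints "Apr: 74 vs. baseline 82.0 -- investigate"
```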
The quantitative scores tell you something changed. The open-ended responses tell you what. This is where analysis gets more complex. Reading through thousands of free-text comments manually isn’t practical for larger companies, so many now use AI-powered tools that apply natural language processing to categorize sentiment and identify recurring themes across large volumes of unstructured text. These platforms can interpret tone, intent, and even informal language like slang and emojis, then surface the most common pain points through visual dashboards. Qualtrics, for instance, applies this kind of automated sentiment analysis across survey and support data simultaneously.
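Those platforms rely on trained language models, but the underlying categorize-and-count idea can be illustrated with a deliberately simplified keyword matcher (the theme lists and comments below are invented, and real tools infer themes rather than matching hand-written lists):

```python
from collections import Counter

# Hypothetical theme keywords; NLP-based tools learn these patterns
# from data instead of relying on fixed lists.
THEMES = {
    "wait time": ["wait", "slow", "hold", "queue"],
    "transfers": ["transfer", "passed around", "another agent"],
    "pricing":   ["price", "expensive", "cost"],
}

comments = [
    "Great help, but I was transferred three times",
    "Way too expensive for what you get",
    "Long hold time before anyone picked up",
]

# Tally how many comments touch each theme.
theme_counts = Counter()
for comment in comments:
    text = comment.lower()
    for theme, keywords in THEMES.items():
        if any(k in text for k in keywords):
            theme_counts[theme] += 1

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```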
The most actionable approach ties survey results back to specific operational changes. If CES data shows that customers find your return process frustrating, the next step is redesigning that process, then resurveying to see if the score improves. Without that feedback loop, surveys become a vanity metric rather than a management tool.
Designing Surveys That Get Useful Responses
Response rates drop sharply as surveys get longer. A survey with one to three questions will typically see far higher completion than one with 15. If you need detailed feedback, front-load the most important questions so you still get usable data from people who abandon partway through.
Timing also shapes the quality of responses. Surveying immediately after an interaction captures emotional reactions and specific details. Waiting a few days or weeks captures how lasting the impression was. Neither approach is wrong, but they measure different things, and mixing them up can muddy your data.
Question wording matters more than most companies realize. Leading questions (“How great was your experience today?”) bias responses upward. Double-barreled questions (“How satisfied are you with our pricing and product quality?”) force customers to answer two things at once, making the response uninterpretable. Each question should ask about one thing, using neutral language.
Finally, consider who’s actually responding. Customers with very positive or very negative experiences are most motivated to fill out surveys, which can skew results toward extremes. Offering a small incentive or keeping the survey extremely short helps capture the middle ground of customers whose feedback is often the most representative.

