Conducting a customer satisfaction survey starts with defining what you want to learn, choosing the right metric, writing clear questions, distributing the survey at the right moment, and then turning the responses into action. The process is straightforward, but small decisions at each stage, like how you word a question or when you send it, have an outsized effect on whether you get useful data or misleading noise.
Define What You Want to Measure
Before writing a single question, get specific about the business question you’re trying to answer. “How do customers feel about us?” is too broad to be useful. Instead, narrow your focus. Are you trying to understand why customers cancel? Whether your support team is resolving issues effectively? How a recent product update landed? The answers shape everything downstream: which metric you use, who you survey, when you send it, and what questions you ask.
Write down your goal in one sentence. Something like “We want to know whether customers find our checkout process easy to complete” or “We want to understand how likely repeat buyers are to recommend us.” This keeps the survey focused and prevents the common temptation to ask about everything at once.
Choose the Right Metric
Three standard metrics dominate customer satisfaction measurement, and each one answers a different question. Picking the wrong one gives you data that doesn’t connect to your goal.
- CSAT (Customer Satisfaction Score) asks “On a scale from 1 to 5, how satisfied were you with [this interaction or experience]?” You calculate it by taking the percentage of respondents who chose 4 (satisfied) or 5 (very satisfied). CSAT works best for measuring specific touchpoints: a support call, a purchase, a product feature.
- NPS (Net Promoter Score) asks “On a scale from 0 to 10, how likely are you to recommend us to a friend or colleague?” Respondents who answer 9 or 10 are promoters. Those who answer 7 or 8 are passives. Anyone at 6 or below is a detractor. Your NPS equals the percentage of promoters minus the percentage of detractors, giving you a score between -100 and 100. NPS captures overall brand loyalty and is useful for tracking broad sentiment over time.
- CES (Customer Effort Score) asks “On a scale of 1 to 5, how easy did we make it for you to [complete a task]?” You calculate it the same way as CSAT: the percentage of people who chose 4 (easy) or 5 (very easy). CES is ideal after support interactions or self-service experiences where ease matters more than delight.
If you’re evaluating a specific experience, use CSAT. If you want a big-picture loyalty snapshot, use NPS. If you want to identify friction in a process, use CES. You can use more than one metric in the same survey, but keep the core question count low.
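To make the scoring concrete, here is a minimal Python sketch of all three calculations. The function names and sample responses are illustrative, not taken from any survey platform:

```python
# Each function takes raw integer responses and returns the metric
# as defined above.

def csat(responses):
    """CSAT: share of 1-5 responses that are 4 or 5, as a percentage."""
    return 100 * sum(r >= 4 for r in responses) / len(responses)

def nps(responses):
    """NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(r >= 9 for r in responses)
    detractors = sum(r <= 6 for r in responses)
    return 100 * (promoters - detractors) / len(responses)

def ces(responses):
    """CES: scored like CSAT, share of 1-5 responses that are 4 or 5."""
    return 100 * sum(r >= 4 for r in responses) / len(responses)

print(csat([5, 4, 3, 5, 2]))      # 60.0 — three of five chose 4 or 5
print(nps([10, 9, 8, 6, 3, 10]))  # 16.67 — 50% promoters minus 33.3% detractors
```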
Write Questions That Get Honest Answers
Even small differences in how you phrase a question can substantially change the answers people give. Pew Research Center has documented this extensively in its survey methodology work: wording, response options, and question order all influence results. A few principles keep your questions clean.
Use simple, specific language. “How would you rate the quality of your recent customer service experience?” is better than “How do you feel about the service you received?” The first version tells the respondent exactly what you’re asking about. The second is vague enough that people interpret it differently.
Avoid leading questions. “How excellent was your experience today?” pushes respondents toward a positive answer. “How would you rate your experience today?” does not. Similarly, avoid double-barreled questions that ask about two things at once, like “How satisfied are you with our pricing and product selection?” A customer who loves your products but hates your prices has no good way to answer that.
Be deliberate about your response scales. When you offer a list of choices, the order matters. People tend to gravitate toward options they see first (in visual surveys) or last (when options are read aloud). One way to reduce this bias is to randomize the order of response options across respondents, so the effect gets distributed evenly rather than skewing your results in one direction.
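As a sketch of what that randomization looks like in practice, the snippet below gives each respondent an independently shuffled copy of an unordered option list. The question and options are hypothetical, and note that ordered scales (like a 1-to-5 rating) should keep their natural order:

```python
import random

def shuffled_options(options):
    # Copy first so the canonical order is preserved for analysis.
    shuffled = list(options)
    random.shuffle(shuffled)
    return shuffled

# Hypothetical unordered multiple-choice question:
# "Which feature do you use most?"
features = ["Search", "Reports", "Integrations", "Mobile app"]
print(shuffled_options(features))  # e.g. ['Reports', 'Mobile app', 'Search', 'Integrations']
```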
Include at least one open-ended question, such as “What could we have done better?” or “Is there anything else you’d like to share?” Closed-ended questions (where people pick from preset options) are easier to analyze at scale, but open-ended responses reveal problems and ideas you didn’t think to ask about. Some organizations run a small pilot study with open-ended questions first, then use the most common responses to build the closed-ended options for the full survey.
Keep It Short
Survey length is one of the biggest factors in whether people finish. For most customer satisfaction surveys, aim for 5 to 10 questions. If you can get what you need in 3 questions, even better. Every additional question increases the chance someone abandons the survey partway through, and partial responses can skew your data.
Start with your most important question. If someone only answers the first item and closes the tab, you still have the data point that matters most. Save demographic or segmentation questions for the end, and only include them if you genuinely plan to slice the data by those categories.
Pick the Right Timing and Channel
When you send the survey matters as much as what you ask. Survey customers as close to the experience as possible. If you want feedback on a support interaction, send the survey within an hour of the ticket being resolved. If you want feedback on a product, wait until the customer has had enough time to use it, but not so long that the experience fades from memory. For a physical product, a week or two after delivery is reasonable. For software, a few days after a new feature launch works well.
Match the channel to how customers already interact with you. Email surveys work for most businesses and give you room for slightly longer questionnaires. In-app surveys (a small popup or embedded widget) catch users in the moment and tend to get higher response rates for quick, one-question checks. SMS works well for transactional businesses like delivery services or appointment-based companies, but keep it to one or two questions. Post-call IVR surveys (the “press 1 to rate your experience” prompt) capture feedback from phone-based support immediately.
Avoid surveying the same customer repeatedly in a short window. If someone contacts support three times in a week, sending a survey after each interaction creates fatigue and annoyance. Set rules to limit how frequently any individual receives a survey request, typically no more than once per quarter for relationship-level surveys and no more than once per interaction type within a set period.
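One way to enforce those rules is a simple cooldown check before each send. This is an illustrative sketch, not any platform’s API; the survey types and windows are assumptions based on the guidance above:

```python
from datetime import datetime, timedelta

# Minimum gap between requests, per survey type (assumed values).
COOLDOWNS = {
    "relationship": timedelta(days=90),  # roughly once per quarter
    "post_support": timedelta(days=7),   # once per interaction type per week
}

def can_send(last_sent, survey_type, now=None):
    """Return True if this customer is past the cooldown for this survey type."""
    now = now or datetime.now()
    if last_sent is None:
        return True
    return now - last_sent >= COOLDOWNS[survey_type]
```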
Decide Who to Survey
You rarely need to survey every customer. A well-chosen sample gives you reliable data without overwhelming your audience. For transactional surveys (post-purchase, post-support), sending to every customer who completes that interaction is fine because the survey is tied to a specific event. For broader relationship surveys, pick a random sample of active customers large enough to be statistically meaningful.
What counts as “large enough” depends on the size of your customer base. For a company with 10,000 active customers, surveying a random sample of 500 to 1,000 gives you a strong read on overall sentiment. If you want to compare satisfaction across segments (by product line, customer tenure, or plan tier), you need enough responses in each segment to draw conclusions, so factor that into your sample size.
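For a rough target, the standard sample-size formula for a proportion, with a finite population correction, is a reasonable starting point. This sketch is not from any survey tool; z = 1.96 corresponds to 95% confidence, and p = 0.5 is the most conservative assumption:

```python
import math

def sample_size(population, margin_of_error, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                # finite population correction
    return math.ceil(n)

print(sample_size(10_000, 0.05))  # 370 — completed responses for roughly a ±5% read
print(sample_size(10_000, 0.03))  # 965 — ±3%, in line with the 500-1,000 range above
```

Keep in mind these are completed responses, not invitations; divide by your expected response rate to get the number of surveys to send.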
Pretest Before You Launch
Running a small pilot before sending your survey to the full audience is one of the most valuable steps in the process, and one of the most skipped. Pew Research Center considers pretesting essential to questionnaire design, using focus groups, cognitive interviews, and small-sample test runs to catch problems before they affect real data.
You don’t need that level of rigor for a customer survey, but you should at minimum test the survey with 10 to 20 people. Ask them to complete it while you watch (or review their responses afterward) and look for patterns: Did anyone misunderstand a question? Did the survey take longer than expected? Did anyone skip a question or give an answer that didn’t match what you were trying to measure? Fix those issues before the full launch.
Analyze Responses for Patterns
Once responses come in, start with your core metric. Calculate your CSAT, NPS, or CES score to establish a baseline. If you’ve run this survey before, compare to previous results, but only if you used the same question wording and similar context. Changing the phrasing or moving a question to a different spot in the survey can shift results in ways that have nothing to do with actual changes in customer sentiment.
Next, segment the data. Break responses down by customer type, product, region, or whatever categories matter to your business. An overall CSAT of 78% might mask the fact that new customers are at 90% while long-term customers are at 60%, which tells a very different story than the average alone.
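Segmenting is straightforward once responses live in a table. Here is a minimal pandas sketch, with hypothetical column names, computing CSAT per segment:

```python
import pandas as pd

# One row per response; "segment" and "rating" are assumed column names.
df = pd.DataFrame({
    "segment": ["new", "new", "long_term", "long_term", "long_term"],
    "rating":  [5, 4, 3, 2, 4],
})

# CSAT per segment: share of 4-5 ratings, as a percentage.
csat_by_segment = (df["rating"] >= 4).groupby(df["segment"]).mean().mul(100)
print(csat_by_segment)
# segment
# long_term     33.3 (approx.)
# new          100.0
```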
Read every open-ended response, or at least a large sample if you have thousands. Group them into themes: pricing complaints, feature requests, praise for a specific team, confusion about a process. These qualitative patterns often explain the “why” behind your quantitative scores. A low CES score tells you something is hard. The open-ended responses tell you what, specifically, is hard.
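For a first pass at theming, a crude keyword tagger can surface which themes dominate before you do a closer read. The themes and keywords below are hypothetical examples, and this kind of tagging supplements a human read rather than replacing it:

```python
from collections import Counter

# Hand-written theme -> keyword lists (hypothetical).
THEMES = {
    "pricing": ["price", "expensive", "cost", "billing"],
    "ease_of_use": ["confusing", "hard to", "couldn't find"],
    "support": ["support", "agent", "response time"],
}

def tag_themes(response):
    text = response.lower()
    return [theme for theme, kws in THEMES.items() if any(k in text for k in kws)]

responses = ["Too expensive for what it does", "Support agent was slow to respond"]
print(Counter(t for r in responses for t in tag_themes(r)))
# Counter({'pricing': 1, 'support': 1})
```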
Turn Results Into Action
Data without follow-up is worse than no data at all, because it trains customers to believe their feedback doesn’t matter, making them less likely to respond next time. Build a plan for what happens after results come in before you launch the survey.
Share results with the teams who can act on them. If support interactions are generating low satisfaction scores, the support team needs to see the specific feedback, not just a number. If a product feature is causing confusion, the product team needs the verbatim comments.
Close the loop with customers when possible. If someone leaves a negative response, follow up individually. A simple message acknowledging their feedback and explaining what you’re doing about it can turn a detractor into a loyal customer. For positive feedback, a quick thank-you reinforces the behavior and keeps customers engaged.
Set a recurring schedule. Run transactional surveys continuously and relationship surveys on a regular cadence, quarterly or biannually for most businesses. Tracking the same metrics over time with consistent methodology is how you measure whether the changes you made actually improved the customer experience.