Bad survey questions do not just get ignored. They corrupt your data. Unclear questions produce random answers. Leading questions produce flattering but useless answers. Jargon alienates respondents before they finish reading.
The difference between a survey that gives you real insight and one that wastes everyone's time comes down to how you write each individual question.
Why Most Survey Questions Fail
Most survey questions fail for one reason: they try to do more than one thing at a time. They ask about product quality and price in the same sentence. They use terms the respondent may not know. They assume knowledge the respondent does not have.
"The rule is simple but demands discipline: one idea per question. Not two. Not one-and-a-half. One."
A question that asks "How satisfied are you with our product quality and price?" is actually two questions disguised as one. If someone answers "3 out of 5," do they mean the quality is mediocre but the price is fair, or the quality is great but the price is too high? You cannot know. Separate them:
Avoid: "How satisfied are you with the product quality and price?"
Better: "How satisfied are you with the product quality?" then separately: "How satisfied are you with the pricing?"
Ready to put this into practice?
Create a Survey at VoteGenerator →

Open vs Closed Questions
The first choice when writing any question is whether to ask for a free-text response or to provide answer options. Each has a different job.
| Type | Format | Best for | Watch out for |
|---|---|---|---|
| Open-ended | Free text field | Exploring a new topic, hearing respondents' own language, capturing unexpected ideas | Hard to analyse at scale; too many open questions cause survey abandonment |
| Closed-ended | Multiple choice, yes/no, scale | Quantifiable data, comparison across surveys, statistical analysis, large audiences | Can miss important nuance; may not offer the option the respondent actually wants |
When open-ended questions work best
"What is the biggest challenge you face with our product?" -- Exploration, capturing their actual language, discovering issues you did not know about.
When closed-ended questions work best
"How satisfied are you with our support response?" with a 5-point scale -- Easy to analyse, track over time, and compare across teams.
Best practice: Use closed-ended questions for the main body of your survey. Add one or two open questions at the end for texture and unexpected detail. Never stack more than two open questions in a row.
Writing Neutral Questions
Bias in survey questions is the silent data destroyer. The respondent's answer is shaped by how you ask -- not what they actually think. Here are the five most common traps.
1. The leading question
A leading question guides respondents toward a particular answer before they have formed their own.
Avoid: "Most customers love this feature. Do you love it too?"
Better: "How useful is this feature to you?"
2. The double-barrel question
A double-barrel question asks two things at once. The respondent cannot answer one part without their answer misrepresenting the other.
Avoid: "Is the interface intuitive and fast?"
Better: "Is the interface intuitive?" and separately "Is the interface fast?"
3. Jargon and technical language
Using terminology your respondents may not know excludes them and produces unreliable results.
Avoid: "How robust is the API integration for your use case?"
Better: "How easy is it to connect our tool to the other software you use?"
4. Unverified assumptions
A question that assumes respondents have done something, or know something, that they might not skews results from the start.
Avoid: "What did you think of the new dashboard layout?"
Better: "Have you used the new dashboard? If yes, what did you think of it?"
5. Absolute language
Words like "always," "never," "all," and "none" force an absolute answer to a reality that is almost never absolute.
Avoid: "Do you always use our software for all your work?"
Better: "How often do you use our software?"
Scale Questions: How Many Points?
Scale questions measure sentiment on a numeric range. The length of that range affects both the quality of responses and how easy the data is to analyse.
| Scale | Common use | Verdict |
|---|---|---|
| 3-point | Simple binary-ish decisions | Too limited. Does not capture meaningful variation in opinion. |
| 5-point | General surveys (standard) | The sweet spot. Respondents understand it. Captures necessary nuance. |
| 7-point | Research and academic surveys | More granular. Good for studies that require statistical depth. |
| 10-point | NPS (Net Promoter Score) | Respondents struggle to reliably distinguish between adjacent points. Use only for NPS. |
The three standard 5-point scales:
Strongly Disagree · Disagree · Neutral · Agree · Strongly Agree
Very Unsatisfied · Unsatisfied · Neutral · Satisfied · Very Satisfied
Very Unlikely · Unlikely · Neutral · Likely · Very Likely
If you do not want neutral responses, use a 4-point or 6-point scale (no middle option). Do not use a 5-point scale and then ask respondents to avoid choosing the middle. That is contradictory and produces invalid data.
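As an aside on the NPS row in the table above: Net Promoter Score is derived from responses on a 0-to-10 scale by comparing promoters (9-10) against detractors (0-6), with passives (7-8) ignored. The function name and sample responses below are illustrative, not part of any particular tool:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 responses.

    NPS = (% promoters) - (% detractors), expressed as a whole number
    from -100 to +100. Promoters score 9-10, detractors 0-6.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch: 5 promoters, 3 passives, 2 detractors
# -> 100 * (5 - 2) / 10 = 30
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 3, 6]))
```

Note that the passives still count in the denominator, which is why a survey full of 7s and 8s yields an NPS of 0 rather than a high score.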
Question Order Matters
The order your questions appear in shapes how respondents answer each one. A question about problems primes people to think negatively about everything that follows. A question about satisfaction after several negative questions lands differently than it would at the start.
A poorly ordered survey:
- "What's your annual income?" (Too personal, too early)
- "Have you had problems with our support?" (Negative priming)
- "How satisfied are you overall?" (Now primed toward negative)
- "What do you like about us?" (Feels forced after negativity)

A better order:
- "How long have you used our product?" (Easy, factual warm-up)
- "What do you like most about it?" (Positive engagement)
- "What could we improve?" (Balanced, specific)
- "Have you contacted our support team?" (Narrower scope)
- "What is your industry?" (Demographic, end of survey)
Five golden rules: Start easy. Ask general before specific. Put sensitive questions last. Avoid priming one question with the previous one. End with an open "Anything else?" as a catch-all.
How Many Questions Is Too Many?
Every question you add to a survey reduces the chance someone completes it. This is true regardless of topic, audience, or how good your questions are. The relationship between length and completion is well-established: more questions means fewer finishers.
| Number of questions | General completion range | Approx. time |
|---|---|---|
| 1 to 3 | Very high | Under 1 minute |
| 4 to 7 | High | 2 to 3 minutes |
| 8 to 15 | Moderate | 5 to 10 minutes |
| 16 to 25 | Lower | 10 to 20 minutes |
| 25 or more | Substantially lower | 20+ minutes |
Note: these are general patterns, not precise statistics. Actual rates vary by audience, incentives, and topic. The principle is consistent: each additional question carries a cost.
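One way to build intuition for that cost is to model completion as a constant per-question drop-off. The 3% rate below is a purely hypothetical assumption for illustration, not a measured statistic:

```python
def completion_rate(n_questions, drop_per_question=0.03):
    """Fraction of starters who finish, assuming each question
    independently loses a fixed share of remaining respondents."""
    return (1 - drop_per_question) ** n_questions

for n in (3, 7, 15, 25):
    print(f"{n:>2} questions -> ~{completion_rate(n):.0%} of starters finish")
```

Even under this mild assumption, the compounding is stark: roughly nine in ten starters finish a 3-question survey, but fewer than half finish a 25-question one.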
The discipline test: For every question on your draft survey, ask "What decision will this answer change?" If you cannot name a decision, cut the question. Extra data that is marginally useful is not worth halving your response rate.
Write your questions, then create the survey.
Start at VoteGenerator →

20 Survey Question Templates
Each template below is ready to copy into your own survey, complete with question wording, answer type, and scale.
Put These Templates to Work
Create a survey with your chosen questions at VoteGenerator. No signup required.
Create a Survey Free →