Chapter 5: Survey Development
Sitaji Gurung
Learning Objectives:
- Understand the process of survey development, including the steps involved in creating a survey, from defining the research question to selecting an appropriate survey design, sampling method, and response format.
- Differentiate between standardized and non-standardized surveys, including their advantages and disadvantages, and identify situations in which each type of survey may be most appropriate.
- Compare and contrast structured and semi-structured survey items, and explain when each type of item may be most appropriate for collecting data.
- Describe the different types of response scales used in surveys, including Likert scales, semantic differential scales, and visual analog scales, and explain the advantages and disadvantages of each.
- Understand the importance of pretesting and pilot testing surveys, and identify strategies for improving survey response rates and minimizing response bias.
Surveys are one of the most widely used tools in health research, public health, psychology, and other disciplines. They allow researchers to efficiently gather data from a large number of participants. Whether measuring behaviors, beliefs, or health outcomes, surveys provide valuable insights that can inform policies and interventions. Surveys are also relatively cost-effective compared to other data collection methods, especially when conducted electronically. When designed well, they can reach a wide audience and provide both descriptive and inferential data.
Defining the Research Question
The first and most critical step in survey development is crafting a clear, focused research question. This question guides the entire process, from designing the layout to determining what kind of data needs to be collected. A poorly defined question can yield responses that cannot answer the study’s aims. It’s important to ensure the question is both measurable and relevant to your field of study. Taking time to refine the question early prevents wasted effort and the collection of data that doesn’t serve the study’s goals.
Discussion Questions
- What makes a good research question for a survey?
- How does a research question impact other survey choices?
Selecting a Survey Design
Survey design includes determining whether the study will be cross-sectional (data collected at one time) or longitudinal (data collected over time), as well as the mode of administration (online, paper, telephone, or in-person). Each design has implications for cost, access, and data quality. Cross-sectional surveys are efficient for quick snapshots, while longitudinal ones are better for tracking change. Researchers also need to consider the target population’s comfort and accessibility with different modes of delivery.
“Video 1: Designing a Survey” by SAGE Video is licensed under CC BY-NC-ND 4.0
Discussion Questions
- What’s the difference between a cross-sectional and longitudinal survey?
- What factors might affect your choice of administration method?
Choosing a Sampling Method
Sampling methods determine who is included in your survey. Probability sampling (such as simple random sampling) gives every member of the population a known, nonzero chance of selection, improving generalizability. Non-probability sampling (such as convenience or snowball sampling) is more practical but may introduce bias. The representativeness of your sample affects the credibility of your conclusions. Sample size calculations are also important to ensure your study has enough power to detect meaningful differences or associations.
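As an illustrative sketch (not from this chapter), the snippet below draws a simple random sample from a hypothetical participant list and applies a standard sample size formula for estimating a proportion, n = z²p(1−p)/e²; the frame size and parameter values are assumptions chosen for the example.

```python
import math
import random

# Hypothetical sampling frame of 500 participant IDs (illustrative only)
frame = [f"participant_{i:03d}" for i in range(500)]

# Simple random sampling: every member has an equal, known chance of selection
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(frame, k=50)

def sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2.

    z: critical value (1.96 for 95% confidence); p: expected proportion
    (0.5 is most conservative); e: desired margin of error.
    """
    return math.ceil(z**2 * p * (1 - p) / e**2)

print(len(sample))    # 50 sampled IDs, no duplicates
print(sample_size())  # 385 at 95% confidence, ±5% margin of error
```

The default p = 0.5 maximizes p(1 − p), so it yields the largest (safest) sample size when the true proportion is unknown.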
Discussion Questions
- What is the difference between probability and non-probability sampling?
- Why is choosing the right sampling method important?
Selecting a Response Format
Response formats include open-ended questions, closed-ended questions, and rating scales. Open-ended questions are good for qualitative insights, while closed-ended questions and rating scales (such as Likert scales) are best for structured, quantitative analysis.
The choice of format can influence the ease of analysis, as open-ended responses require qualitative coding. Researchers should align the response format with both the research question and the participants’ literacy and language needs.
Discussion Questions
- When would you use open-ended vs. closed-ended questions?
- What are the benefits of using rating scales in a survey?
Structured vs. Semi-Structured Items
Structured survey items use predefined answers, making analysis easier. Semi-structured items allow for more open-ended responses. Often, combining both provides rich insights and statistical usability. Structured items are especially useful for large-scale surveys that require comparison across groups. In contrast, semi-structured items are valuable for exploring new or under-researched topics in more depth.
Discussion Questions
- What is a structured survey item?
- How might semi-structured questions add value?
Standardized vs. Non-Standardized Surveys
Standardized surveys use consistent questions and response options, offering strong reliability and ease of comparison. Non-standardized surveys offer flexibility and customization, useful in exploratory research. Standardized tools are ideal when using validated instruments like the Patient Health Questionnaire-9 (PHQ-9) or Consumer Assessment of Healthcare Providers and Systems (CAHPS). Non-standardized surveys may be better suited for pilot studies or culturally adapted projects where tailoring is essential.
Table 1: Advantages and Disadvantages of Standardized Surveys
| Feature | Advantages | Disadvantages |
|---|---|---|
| Consistency | High reliability and comparability | May lack individual/contextual sensitivity |
| Ease of Analysis | Easier to code and analyze | Limits nuanced or personalized responses |
| Administration | Quick to administer and score | Less adaptable across cultures or languages |
Table 2: When to Use Each Survey Type
| Survey Type | Best Used When |
|---|---|
| Standardized | You need comparable data, validated tools, limited time/resources |
| Non-Standardized | You want depth, flexibility, and rich data for complex or new topics |
Discussion Questions
- What are the benefits of using standardized surveys?
- In what situations might a non-standardized survey be more useful?
Response Scales in Surveys
Common scales include Likert scales (e.g., agree–disagree), semantic differential scales (e.g., good–bad), and visual analog scales (e.g., pain level sliders). Each scale is suited for different types of responses and research goals. The choice of scale affects not only how participants respond, but also how researchers analyze and interpret the data. It’s important to keep scales consistent throughout the survey to avoid confusion or measurement error.
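As a minimal sketch of how Likert-scale responses are typically coded for quantitative analysis, the mapping and response list below are hypothetical examples, not data from this chapter.

```python
# Hypothetical 5-point Likert coding scheme (illustrative only)
LIKERT_CODES = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

# Four made-up responses to a single agree-disagree item
responses = ["Agree", "Strongly agree", "Neutral", "Agree"]

# Convert text labels to numeric codes, then summarize
coded = [LIKERT_CODES[r] for r in responses]
mean_score = sum(coded) / len(coded)

print(coded)       # [4, 5, 3, 4]
print(mean_score)  # 4.0
```

Keeping one coding scheme across all items in the survey mirrors the advice above about consistent scales: it prevents a "4" meaning agreement on one item and disagreement on another.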
Discussion Questions
- Which response scale is most familiar to you?
- How might a scale type affect the answers you get?
Pretesting and Pilot Testing
Before launching a survey widely, researchers should pretest and pilot test it. This helps identify confusing wording, missing options, or format problems. Pilot testing with a small group improves the survey’s reliability and validity. It also offers a chance to test the estimated time to complete the survey and observe whether certain questions are skipped or misunderstood. Feedback from pilot participants can be used to revise and improve the final version.
Discussion Questions
- Why is pilot testing important?
- What problems might pretesting help you avoid?
Strategies to Improve Survey Response Rates
To improve survey response rates, researchers can adopt several key strategies. Personalizing the survey invitation helps participants feel recognized and valued, increasing the likelihood they will respond. Keeping the survey short and focused reduces respondent burden, making it easier to complete. Offering appropriate incentives, such as gift cards or entry into a prize drawing, can motivate participation, especially when surveys are lengthy or complex. Using multiple contact methods, like email, phone, or mail, ensures broader outreach and a better chance of connecting with diverse participants. Additionally, sending reminders to non-responders acts as a gentle prompt, often boosting completion rates among those who may have forgotten or delayed.
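The arithmetic behind response rates is straightforward; the sketch below uses hypothetical outreach numbers (not data from this chapter) to show how completions from reminder waves can contribute to the final rate.

```python
# Hypothetical completion counts by contact wave (illustrative only)
invited = 1200
completions_by_wave = {
    "initial email": 240,
    "reminder 1": 120,
    "reminder 2": 54,
}

# Overall response rate: completed surveys / invited participants
total_completed = sum(completions_by_wave.values())
response_rate = total_completed / invited * 100
print(f"Response rate: {response_rate:.1f}%")  # Response rate: 34.5%

# Share of completions attributable to reminder waves
from_reminders = completions_by_wave["reminder 1"] + completions_by_wave["reminder 2"]
print(f"From reminders: {from_reminders / total_completed:.0%}")  # From reminders: 42%
```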
Discussion Questions
- Which strategy (personalization, brevity, incentives, or reminders) do you think would be most effective for increasing response rates in a community health survey, and why?
- How might using multiple contact methods improve participation among populations with limited internet access?
Strategies to Minimize Response Bias
To minimize response bias, it’s important to apply thoughtful design and sampling techniques. Using random or stratified sampling improves the representativeness of the sample by ensuring diverse groups are included. Avoiding leading or loaded questions helps prevent skewed results by encouraging genuine responses. Similarly, using neutral and inclusive language ensures that questions are interpreted consistently across different populations. Researchers should also pilot test their surveys with a diverse group to catch any confusing or biased questions before full deployment. Lastly, while incentives can boost participation, they should be applied carefully to avoid attracting respondents who are only motivated by the reward, which can distort the data.
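As one sketch of the stratified approach mentioned above, the snippet below draws a proportionate 10% random sample from each of two hypothetical strata; the strata names and sizes are illustrative assumptions.

```python
import random

# Hypothetical sampling frame grouped by stratum (illustrative only)
strata = {
    "urban": [f"u{i}" for i in range(300)],
    "rural": [f"r{i}" for i in range(100)],
}

random.seed(1)  # fixed seed for a reproducible draw

# Proportionate stratified sampling: sample 10% from each stratum,
# so each group is represented in proportion to its size in the frame
sample = []
for name, members in strata.items():
    k = max(1, round(0.10 * len(members)))
    sample.extend(random.sample(members, k))

print(len(sample))  # 40 total (30 urban + 10 rural)
```

Because each stratum is sampled separately, a small group like the rural stratum cannot be missed entirely, which is the representativeness benefit described above.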
Discussion Questions
- Why is it important to use neutral language and avoid leading questions when designing a survey?
- How could offering incentives introduce bias into a survey, and what are some ways to reduce that risk?
Key Terms
Survey – A method of collecting information by asking questions.
Research Question – The central question your survey is trying to answer.
Sampling – How you choose who will take the survey.
Probability Sampling – A method in which every member of the population has a known, nonzero chance of being selected.
Non-Probability Sampling – A method that uses convenience or judgment to select participants.
Structured Question – A question with fixed answer options.
Semi-Structured Question – A question that allows open responses.
Likert Scale – A scale measuring agreement or disagreement.
Semantic Differential Scale – A scale using word pairs like “strong–weak.”
Visual Analog Scale – A slider used to mark intensity of something (e.g., pain).
Standardized Survey – A fixed survey used across many studies or groups.
Non-Standardized Survey – A flexible survey adapted to specific needs.
Pilot Testing – Trying out the survey on a small group to find issues.
Response Rate – The percent of people who complete the survey.
Response Bias – When survey responses don’t reflect the true views of a population.
References
Jhangiani, R. S., Chiang, I.-C. A., Cuttler, C., & Leighton, D. C. (2019). Research methods in psychology (Chapter 7: Survey research). BCcampus.
DeCarlo, M., Cummings, C., & Agnelli, K. (2021). Graduate research methods in social work (Chapter 12: Survey design). Open Social Work Education.
SAGE Video. (n.d.). Designing a survey [Video]. SAGE Publications.
National Center for Health Statistics, Centers for Disease Control and Prevention. (n.d.). Webinar: Designing survey questions about COVID-19 [Video].