13 Surveys
Sometimes students in research methods classes believe that understanding what a survey is and how to write one is so obvious that there’s no need to dedicate any class time to learning about it. This feeling is understandable. Surveys have become very much a part of our everyday lives; we’ve probably all taken one, heard about their results in the news, or even administered one ourselves. As we’ll discuss in this chapter, however, constructing a good survey takes a great deal of thoughtful planning and many rounds of revision, and there are many benefits to choosing survey research as one’s method of data collection. In this chapter, we’ll define survey research and discuss when to use it, the strengths and weaknesses of the methodology, some types of surveys, and elements of effective survey questions and questionnaires.
What is Survey Research?
Survey research is a quantitative methodology in which researchers use standardized questionnaires to systematically collect data about people and their preferences, thoughts, and behaviors. Survey research shares some elements with quantitative interviews, but it is a distinct methodology with its own guidelines, strengths, and weaknesses. As with quantitative interviews, a survey researcher poses a set of predetermined questions to an entire sample of individuals. Unlike interviews, surveys are often administered impersonally, with the person collecting the data only interacting with respondents to get their consent to participate in the research. Respondents then complete the questionnaire on their own.
Survey research is useful when a researcher aims to describe or explain trends or common features of a large group or groups. This method may also be used to quickly gain general details about one’s population of interest to help prepare for a more focused, in-depth study, using time-intensive methods such as in-depth interviews or field research. In this case, a survey may help a researcher identify specific individuals or locations to collect additional data.
Strengths and Weaknesses of Survey Research
Survey research has several benefits compared to other research methods. First, surveys are an excellent way to measure a wide variety of unobservable data, such as people’s preferences (e.g., political ideologies), traits (e.g., self-esteem), attitudes (e.g., toward people with criminal records), beliefs (e.g., about a new law), behaviors (e.g., smoking or drinking behavior), or demographic information (e.g., income).
Second, survey research allows for the remote collection of data, from many people, relatively quickly and with minimal expense. With surveys, a large area (such as an entire county or country) can be covered using representative sampling techniques to administer mail-in, e-mail, or telephone surveys to samples of the population. Mailing a written questionnaire to 500 people entails significantly fewer costs and less time than visiting and interviewing each person individually. Plus, some respondents may prefer surveys’ convenient, unobtrusive nature to more time-intensive data collection methods, such as interviews.
Related to the benefit of cost-effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 8. Of all the data-collection methods described in this text, survey research is probably the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.
Survey research also tends to be a reliable method of inquiry because it uses standardized questionnaires in which every respondent receives the same questions phrased in the same way. Other methods, such as qualitative and quantitative interviewing, do not offer the same consistency as a quantitative survey. This is not to say that surveys are always reliable. For example, a poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Thus, assuming well-constructed questions and a well-designed questionnaire, one strength of survey methodology is its potential to produce reliable results.
As with all data collection methods, survey research also comes with some drawbacks. First, surveys may seem flexible because researchers can ask many questions on different topics, but once a researcher has written and distributed the questionnaire, they’re generally stuck with a single instrument for collecting data (the questionnaire), regardless of any issues that may arise later. For example, imagine you mail out a survey to 1,000 people, and as responses arrive, you discover that respondents find the phrasing of a particular question confusing. At this stage, it would be too late to start over or to change the question for the respondents who haven’t returned their surveys. By contrast, when conducting in-depth interviews, a researcher can provide further explanations of confusing questions and tweak the questions for future interviews as they learn more about how respondents seem to understand them.
Validity can also be a problem with surveys. Because survey questions are standardized, it can be difficult to ask anything other than general questions that most people will understand. As a result, survey findings may not be as valid as results obtained using methods of data collection that allow a researcher to comprehensively examine the topic. Let’s say, for example, that you want to learn something about voters’ willingness to elect a politician who supports the death penalty. On a questionnaire, you might ask, “If a candidate for your state’s legislature supported death penalty legislation, would you vote for the candidate if they were qualified for the job?” and provide the options of answering “yes” or “no.” What if someone’s answer was more complex than could be captured with a simple yes or no? In an interview, the respondent and interviewer could discuss the intricacies of a respondent’s answer to this type of question; however, standardized questionnaires often cannot allow for the same range and depth of responses as might be found in other research methodologies. Table 13.1 summarizes these strengths and weaknesses.
Table 13.1 Strengths and Weaknesses of Survey Research

| Strengths | Weaknesses |
| Can measure a wide variety of unobservable data | Can’t change questions after the questionnaire has already been distributed |
| Allows for collecting data from many people quickly and with minimal expense | May be less valid due to a lack of variation and depth in responses |
| Strong potential for generalizing to larger populations | |
| The use of standardized questionnaires allows for consistency | |
Types of Surveys
Surveys come in many forms. This section examines the different types of surveys that arise from differences in time (when or with what frequency a survey is administered) and administration (how a survey is delivered to respondents).
Time
In terms of time, there are two main types of surveys: cross-sectional and longitudinal. Cross-sectional surveys are administered at a single point in time, with no follow-up surveys. These surveys offer researchers a snapshot of respondents’ lives, opinions, and behaviors at the moment the survey is administered. One issue with cross-sectional surveys is that the events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain static. Thus, generalizing from a cross-sectional survey can be tricky; a researcher may be able to say something about how things were when they administered their survey, but they can’t know how long things remained that way after the survey period ended. Consider, for example, a survey administered in 2019 that asked about people’s perceptions of the police. In the summer of 2020, the death of George Floyd at the hands of police officer Derek Chauvin sparked national (and even international) protests. Imagine how responses to the same set of questions might have been different if people had been surveyed during or after that summer. This example demonstrates that while cross-sectional surveys have many important uses, researchers must remember that such a survey can only capture a snapshot of life and opinions as they were when it was administered.
Longitudinal surveys try to overcome this problematic aspect of cross-sectional surveys. Longitudinal surveys are administered multiple times. We’ll discuss three types of longitudinal surveys: trend, panel, and cohort surveys. Researchers conducting trend surveys are interested in how people’s inclinations change over time. Gallup opinion polls are an excellent example of trend surveys. To learn about how public opinion changes over time, Gallup administers the same questions to people at different points in time. For example, for several years Gallup has polled Americans to determine their confidence in the police. Gallup’s polling has shown confidence in police remained relatively stable from 1993 through 2019. Confidence dipped in 2020, especially among Black Americans, but the percentage of Americans who said they had at least some confidence in police had already started to increase in 2021, from historic lows in 2020. Thus, through Gallup’s use of trend survey methodology, we’ve learned that while Americans’ confidence in police does change somewhat according to national conversations about policing, it also tends to revert relatively quickly to previous norms.
Trend surveys are unique among longitudinal survey techniques because the same people may not answer the researcher’s questions each year. For example, when we administered our public opinion survey on local police, we surveyed people who lived in our city in 2016, and again in 2019. We did not track who completed the survey each year; some respondents may have completed the survey in both years, and others might only have participated in one year. While our analyses of results from the two years indicated overall trends in the public’s opinion of the police, we could not say whether individual people’s opinions had changed over time. This is not necessarily a problem for trend surveys, because the goal is to examine changes in how the general population thinks about an issue over time. In short, the same people don’t have to participate in trend surveys each time.
Unlike in a trend survey, in a panel survey, the same people participate in the survey each time it is administered. For this reason, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year for 5 years in a row; keeping track of where people live, when they move, and when they die takes resources that researchers often don’t have. When they do, however, the results can be quite powerful. The University of Minnesota’s Youth Development Study (YDS) offers an excellent example of a panel study. Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS.
The third type of longitudinal survey offers a middle ground between trend and panel surveys. In a cohort survey, a researcher identifies some category of people of interest, and then regularly surveys people who fall into that category. For example, researchers may identify people of specific generations or graduating classes, people who began work in a given industry at the same time, or perhaps people with specific life experiences in common. Similar to a trend survey, the same people don’t necessarily participate from year to year, but all participants must meet the categorical criteria for inclusion in the study.
All three types of longitudinal surveys share the strength of allowing a researcher to make observations over time. This means that if the behavior or other phenomenon of interest changes over time, either because of some world event or because people age, the researcher will notice those changes.
In sum, when or with what frequency a survey is administered will determine whether a survey is cross-sectional or longitudinal. Longitudinal surveys may be preferable, in terms of their ability to track changes over time, but the time and cost required to administer a longitudinal survey can be prohibitive.
Administration
Surveys vary not just in terms of when they are administered, but also how they are administered. Researchers commonly use self-administered questionnaires to gather survey data. In a self-administered questionnaire, respondents receive a written set of questions to which they respond. Self-administered questionnaires can be delivered in hard-copy format or online. We’ll consider both modes of delivery here.
Hard-copy self-administered questionnaires may be delivered to participants in person or via snail mail. Researchers may deliver surveys by going door-to-door and asking people to fill them out right away, to mail the completed survey back, or to leave it for the researcher to pick up at a later date. We used this method in our policing survey: we knocked on doors, explained the purpose of the study, asked people if they wanted to participate, and, if they consented, handed them a paper questionnaire with instructions about when and where to leave it for later pickup. Though the advent of online survey tools has made door-to-door delivery of surveys less common, some researchers still choose this method. In our survey, we wanted a sample more representative of the population than only those who would have access to or hear about an online survey.
Distributing surveys door-to-door can be extremely time-consuming, so many researchers send their surveys through the mail. While this mode of delivery may not be ideal, sometimes it is the most practical or only available option. Often, survey researchers who deliver their surveys via snail mail may provide some advance notice to respondents about the survey to get people thinking about and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done to remind those who have not yet completed the survey to please do so, and to thank those who have already returned the survey. This sort of follow-up can greatly increase response rates.
Online surveying has become increasingly common because of its ease of use, cost-effectiveness, and speed of data collection. It’s much simpler to create a survey online, send the link to potential respondents, and then wait for the responses to roll in. With online surveys, researchers may employ similar strategies as with mail surveys to increase response rates, including sending advance notice and following up with reminders to complete the survey. To deliver a survey online, a researcher may subscribe to a service that offers online survey construction and administration. Some services offer both free and paid plans, and some provide results in formats already readable by data analysis programs. This saves the researcher the step of manually entering data, as they would have to do if they administered their survey in hard-copy format.
There are pros and cons to each of the delivery options we’ve discussed. For example, while online surveys may be faster and cheaper than mailed surveys, a researcher can’t be certain that every person in their sample will have the necessary computer hardware, software, and Internet access to complete an online survey. On the other hand, mailed surveys may be more likely to reach the entire sample, but they are also more likely to be thrown away, lost, or not returned. The choice of delivery mechanism depends on factors such as the researcher’s resources, respondents’ resources, and the time available to distribute surveys and wait for responses.
Biases in Survey Research
Survey research also has some unique considerations related to generalizing findings from the sample to the broader population. These potential biases include non-response bias, sampling bias, social desirability bias, and recall bias. While some of these biases apply to multiple research methods, they are particularly relevant in survey research that aims to generalize from a sample to a population, which is why we’ll discuss them in this chapter.
Non-Response Bias
Survey research can yield notoriously low response rates. For example, a response rate of 15-20% is typical in a mail survey, even after sending two or three reminders to potential respondents. If a large percentage of potential respondents fail to respond to a survey, researchers must consider whether people aren’t responding for some common reason, which may raise questions about the validity of the study’s findings. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, so they may be more likely to respond to surveys than satisfied customers. Hence, any sample of respondents in a survey about customer service may have more dissatisfied customers than the broader population from which the researcher draws the sample. This means that the researcher must be very careful when discussing the generalizability of the survey results, because the observed data may be an artifact of the biased sample rather than an accurate representation of the population.
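To make the idea of a response rate concrete, it is simply the proportion of distributed surveys that come back completed. A minimal, purely illustrative Python sketch (the function name and numbers are ours, not from the chapter):

```python
# Illustrative sketch: a response rate is the share of distributed
# questionnaires that are completed and returned.

def response_rate(returned, distributed):
    """Proportion of distributed questionnaires that came back completed."""
    return returned / distributed

# For example, 150 completed questionnaires from a 1,000-person mailing
# yields a 15% response rate, within the typical range for mail surveys.
print(response_rate(150, 1000))  # 0.15
```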
Knowing this in advance can help survey researchers strategize about how to improve response rates. Sending a short letter or message to potential respondents before the survey begins can prepare them in advance and improve their likelihood of responding, especially if the letter explains the purpose and importance of the study, describes how the survey will be administered (e.g., by mail or online), and includes a note of appreciation for their participation. This can also help respondents see how the issues in the survey may be relevant to their lives, further encouraging them to respond.
Other strategies to improve response rates include making the survey as short as possible, with clear questions that are easy to respond to, sending multiple follow-up requests for participation in the survey, providing incentives (e.g., cash or gift cards, giveaways, entry into a drawing, or discount coupons) to compensate people for the time and inconvenience of participating, and assuring potential respondents of the confidentiality and privacy of their data.
Sampling Bias
As discussed in Chapter 8, sampling bias occurs when the people selected for inclusion in a study don’t represent the larger population the researcher is interested in studying. A particular concern in survey research relates to how the researcher administers the survey. For example, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet; they systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Further, any surveys that respondents must read and answer independently will exclude those unable to read, understand, or meaningfully respond to the questions.
Social Desirability Bias
Many people try to avoid expressing negative opinions or embarrassing comments about themselves, employers, family, or friends. On a survey, researchers may not get truthful responses to questions that require expressing these negative views. Instead, respondents might spin the truth to portray themselves or people they know in a positive, more socially desirable light. For example, respondents might try to protect their family, friends, and neighbors by saying that they disagree with statements such as “My family tends to get on my nerves,” “There are a lot of political conflicts in my neighborhood,” or “My friends often engage in activities that are against the law,” even though they may agree with some of the statements. While researchers can never know for sure how social desirability bias might impact responses to survey questions, they can try to lessen it by assuring confidentiality (and anonymity, if possible), allowing respondents to complete their surveys in private and return them in sealed envelopes, and telling respondents that they can skip any question they do not want to answer.
Survey researchers can also mitigate some of the effects of social desirability bias by thoughtfully constructing their survey. For example, asking multiple questions to measure a single topic (e.g., asking about family dynamics with a set of questions instead of just one) gives more data points to assess the topic. In another strategy, researchers who ask teenagers about their use of various drugs might include a drug with a fake name to see if respondents want to look cool rather than accurately answer the questions. If a respondent says they’ve taken the drug, the respondent’s other answers might be removed before analysis. A similar tactic would be to discard unrealistic responses, such as someone indicating that they commit 100 crimes daily.
Recall Bias
Chapter 12 mentioned the idea of recall bias as a weakness of interviews. In this type of bias, respondents may not fully or accurately remember past events, motivations, or behaviors concerning those events. You might experience recall bias when someone asks about your weekend. Even if it’s Monday, when someone says, “How was your weekend?” or “What did you do this weekend?” you may not be able to answer right away. After some thought, you can probably retrieve the memory, but you might not remember every detail, emotion, or motivation behind your actions over the weekend. What if someone asks you about some event last month, last year, or even years ago? How likely is it that you’d remember the event in detail?
The same issue with remembering events happens in survey research. For example, if a survey asks respondents to note how often they used alcohol and drugs during high school, they might not remember exactly how often they engaged in those behaviors. Sometimes, researchers can somewhat mitigate recall bias by anchoring respondents’ memories in specific events as they happened. For example, a survey might ask respondents to think about an occasion when they drank alcohol while in high school and report on specific aspects of that event. Then, the survey could ask respondents to estimate how often those specifics occurred throughout their years in high school. While not a perfect solution, this kind of anchoring can help mitigate some of the concerns with recall bias.
Designing Effective Questions and Questionnaires
At some point, a researcher must write survey questions and create the questionnaire they will send to potential respondents. While it may seem easy to design a bunch of questions and send them out, survey construction involves careful and thoughtful planning to mitigate potential biases in the research, ensuring that respondents can read, understand, and respond to the survey questions meaningfully. Some decisions researchers must make at this stage include the content of questions, wording, response formats, and sequencing. All these decisions can have significant consequences for survey responses.
Question Content
Question content refers to the topics you want to discuss in a survey. In other words, the researcher must identify what they want to know. As silly as this sounds, it can be easy to forget to include essential questions in a survey. For example, let’s say you want to understand how people transition out of prison. Perhaps you wish to identify which people were comparatively more or less successful in transitioning and which factors contributed to the success or lack thereof. To understand which factors shaped successful transitions, you’ll need to include questions in your survey about all possible factors that might contribute. Consulting the literature on the topic will help, as will brainstorming on your own and talking with others about what they think may be critical in the transition out of prison. Time or space limitations won’t allow you to include every single item you’ve come up with, so consider ranking your questions to include those that seem most important.
Although including questions on all key topics makes sense, researchers don’t want to include every possible question they can think of. Doing so would place an unnecessary burden on survey respondents. Survey researchers have asked respondents to give their time and attention to the survey. Because of this, asking them to complete an extremely long questionnaire just because the questions sound interesting to the researcher can be disrespectful to the respondents.
Question Wording
Once a researcher has identified all the topics they’d like to cover in the survey, they need to write the questions. Question wording refers to the decisions a survey researcher must make about how to write each question. Responses obtained in survey research are very sensitive to the types of questions asked, and poorly framed or ambiguous questions may result in meaningless responses with very little value. For these reasons, survey researchers often use common rules to evaluate their questions. We’ll discuss these below as a set of questions you can ask about each survey question to ensure its quality.
1. Is the question clear and understandable?
Survey questions should be as clear and to the point as possible. A survey is a technical instrument, and its questions should use simple language, in an active voice, without complicated words or jargon. As discussed earlier, survey respondents have agreed to give their time and attention to the survey, and the best way to show appreciation for their time is not to waste it. Ensuring questions are clear goes a long way toward showing respondents the respect they deserve.
2. Is the question worded negatively?
Negatively worded questions tend to confuse respondents and can lead to inaccurate responses. For example, a question such as “Should the police department not wear body cameras?” is confusing and may frustrate respondents, as they must do the mental gymnastics required to answer the question. Survey researchers must avoid these types of questions and those that include double negatives. For example, what if a question asked, “Did you not drink during high school?” A response of “no” would mean that the respondent did drink because they did not not drink. Did you have to read that last sentence twice to see the logic? Imagine if you had to answer these kinds of questions on a survey; your brain would quickly tire of all the deciphering, and you’d likely end up not finishing the survey. In general, avoiding negative terms in the question wording helps increase respondents’ understanding.
3. Is the question ambiguous?
Survey questions should not include words or expressions that may be interpreted differently by different respondents. For instance, if a question asks respondents to report their annual income, it must be clear whether it refers to salary/wages, dividend, rental, or other income, and whether it’s asking for individual income, family income, or personal and business income. Different interpretations by different respondents will lead to incomparable responses that cannot be accurately analyzed.
Regionally or culturally specific phrases can be ambiguous, especially to respondents outside that region or culture. For example, when I moved from Florida to Colorado as a teenager, people used the word “pop” to refer to all types of soda. In the South, we’d always used the term “Coke” to refer to any variety of soda. So, imagine the confusion ensuing from a question about Coke consumption in a region where “Coke” means Coca-Cola. The results from that survey question would mean different things in different regions, which would provide data of little value to a researcher interested in people’s consumption of all kinds of soda.
4. Is the question double-barreled?
Double-barreled questions ask multiple questions as though they are a single question, which can be confusing and frustrating for survey respondents. For example, consider how a respondent might answer this question: “How well do you think the police protect and serve the people in your neighborhood?” What if they thought the police were doing a good job protecting people in the neighborhood but not serving them? Or what if they thought the police were doing a good job serving people in the neighborhood but not protecting them? Because it combines protecting and serving, this item is really asking two separate questions: 1) how well do you think the police are doing at protecting your neighborhood, and 2) how well do you think the police are doing at serving your neighborhood?
5. Is the question too general or too specific?
There’s a fine line between being too general and too specific in question wording. Questions that are too general may not accurately convey respondents’ perceptions. If a researcher asked someone how well they liked a particular program and provided responses ranging from “not at all” to “extremely well,” it would be unclear what the responses mean. Instead, asking more specific behavioral questions, such as whether they would recommend the program to others or plan to enroll in other programs offered by the same group, can better assess people’s perceptions of the program. Likewise, instead of asking how big a respondent’s neighborhood is, a researcher could ask how many people live on the respondent’s block or street.
Questions that are too specific may be unnecessarily detailed and serve no particular research purpose. Consider a researcher interested in annual household income. Asking a respondent to report the adjusted gross income on their last tax return may be too specific unless it serves a particular purpose for the research goals. Generally, asking respondents to estimate their annual household income or choose from a range of possible income options would be sufficient for gathering basic demographic information. At the same time, if a researcher thinks the detailed data might be beneficial, it’s better to err on the side of collecting too much detail rather than too little.
Response Formats
Response options are the answers that you provide to the people taking your survey. Providing respondents with unambiguous response options is important when designing effective survey questions. Generally, surveys ask respondents to choose a single (or best) response to each question. In certain cases, respondents can be asked to select multiple response options.
Offering response options assumes that your questions will be closed-ended. In a quantitative written survey, chances are good that most questions will be closed-ended. This means the researcher provides respondents with limited options for their responses. When writing effective closed-ended questions, researchers must follow a few guidelines. First, the response options must be mutually exclusive. In other words, the categories must not overlap. For example, if a question asks a respondent to report how many times they’ve interacted with the police in the past year and provides the options of 1-3 times, 3-5 times, and 5-7 times, what category would a person choose if they’d interacted with the police 3 or 5 times? To ensure that the options are mutually exclusive, the researcher could rewrite the response options as 1-3 times, 4-6 times, and 7-9 times.
You might have noticed another problem with the response options presented above. What if a person had interacted with the police 0 times or 10 times? These options aren’t provided, so what option would they choose? This points to another guideline: response options must be exhaustive. In other words, the set of responses provided must cover every possible response. In the example above, the researcher could add categories for 0 times and 10 or more times to make the list exhaustive.
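The two guidelines together mean that every possible answer falls into exactly one category. A purely illustrative Python sketch of the police-interactions item (the category labels and cut-points here are our own, chosen for illustration):

```python
# Hypothetical coding scheme for the "police interactions" item, with
# response options that are mutually exclusive (no overlap) and
# exhaustive (every possible count maps to exactly one option).

def code_interactions(times):
    """Map a raw count of police interactions to exactly one response option."""
    if times == 0:
        return "0 times"
    elif times <= 3:
        return "1-3 times"
    elif times <= 6:
        return "4-6 times"
    elif times <= 9:
        return "7-9 times"
    return "10 or more times"  # catch-all keeps the option set exhaustive

# Counts of 3 and 5, which straddled two categories in the flawed scheme,
# now each fall into exactly one option:
print(code_interactions(3))   # 1-3 times
print(code_interactions(5))   # 4-6 times
print(code_interactions(12))  # 10 or more times
```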
Another consideration for response options involves the number and type of options, also called levels of measurement. Researchers can choose between three levels of measurement: nominal, ordinal, or interval/ratio response options. With nominal response options, the survey question presents two or more options that have no inherent order. Dichotomous response options (a type of nominal level of measurement) are those in which a respondent must choose one of two possible choices, such as yes/no or agree/disagree. For example, the question “Do you think that the death penalty is justified under some circumstances (circle one): yes/no” is dichotomous because there are only two answer choices given. Nominal-level response options can also involve more than two answer choices. For example, the question “What is your industry of employment: manufacturing / consumer services / retail / education / healthcare / tourism & hospitality / other” presents nominal response options because there are more than two categories, and they have no inherent order.
By contrast, ordinal response options present more than two options that can be ordered. For example, the question “What is your highest level of education (choose one): some high school / high school diploma or GED / some college, no degree / associate’s degree / bachelor’s degree / some graduate school / graduate degree” has more than two options, and those options can be ordered (from least to most education).
Interval/ratio response options involve options for which respondents enter a number as their answer. For example, asking for a respondent’s age and providing a blank space for them to write in their answer would be an interval/ratio response option.
Thus far, we’ve discussed response formats for closed-ended questions. Sometimes, survey researchers include open-ended questions in their questionnaires to gather additional information from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. Survey researchers use these questions to learn more about the participant’s experiences or feelings about whatever they are being asked to report. For example, our policing survey included an open-ended question at the end, asking respondents to provide any other details, perceptions, or experiences with the police that they would like the researchers to know. Allowing participants to share some of their responses in their own words can make completing the survey more satisfying for respondents, and it may reveal motivations or explanations that had not occurred to the researcher.
Question Sequencing
In addition to constructing quality questions and posing clear response options, researchers must consider how to present their written questions and response options. One of the first steps after writing survey questions is to group the questions thematically. In the example of the transition from prison, perhaps we’d have some questions about daily routines, others about support systems, and still others about exercise and eating habits. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present a series of questions about pre-prison life and habits followed by a series about life after prison. There’s no fixed way to organize the questions, but researchers must deliberately choose an order that makes sense given the research goals.
Once a researcher has grouped similar questions, the next consideration is the order in which to present the question groups. In general, questions should flow logically from one to the next: from the least sensitive to the most sensitive, from the factual and behavioral to the attitudinal, and from the more general to the more specific. Researchers disagree on where to put demographic questions about a person’s age, gender, and race. On one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is trivial and not worth completing. On the other hand, if the survey deals with sensitive or difficult topics, such as child sexual abuse or other criminal activity, you don’t want to scare respondents off by beginning with intrusive questions. Some other general rules for question sequencing include starting with a closed-ended question, asking questions in chronological order if they relate to a sequence of events, and asking about one topic at a time rather than switching between topics with every question.
In the end, the order in which a researcher presents survey questions depends on the unique characteristics of the research. Only the researcher, hopefully in consultation with people willing to provide feedback, can determine how best to order the questions. To do so, the researcher might consider the unique characteristics of the topic, the questions, and most importantly, the sample. Remembering the characteristics and needs of the people asked to complete the survey can help guide decisions about the most appropriate order to present the survey questions.
When researchers think they have a satisfactory questionnaire ready for respondents, they often pre-test the survey before sending it out. Pre-testing refers to the process of having a few people take the survey as if they were real respondents, to identify any issues with the question content, wording, response options, or sequencing. While pre-testing can be expensive and time-consuming, even pre-testing with a small group of colleagues or friends can result in a vastly improved questionnaire. By pre-testing a questionnaire, researchers can find out how understandable the questions are, get feedback on question wording and order, and learn whether any questions are unintentionally boring or offensive. The researcher can also ask pre-testers to track how long it takes them to complete the survey, providing valuable information on whether the researcher needs to cut some questions and how long respondents should expect to spend on it. In general, surveys should take no longer than 10-15 minutes to complete. Any longer, and respondents may be more likely to refuse to participate, or they may not complete the entire questionnaire.
In sum, designing effective questions and questionnaires requires thoughtful planning that accounts for the research goals and respects respondents’ time, attention, trust, and the confidentiality of their personal information. Keeping the survey as short as possible, limiting the questions to only those necessary for the research project, and providing information about confidentiality, how responses will be used, and how data will be reported will increase the chances that the researcher gathers quality data.
Summary
- Unlike interviews, survey research involves the researcher sending questionnaires to potential respondents, who then complete the survey on their own.
- Survey research is a quantitative data collection method in which researchers use standardized questionnaires to systematically collect data about people in their sample.
- Researchers use surveys when they want to describe trends or common features of a large group of people, or when they want to quickly gain general information about a population of interest in preparation for a more focused, in-depth study.
- Some benefits of survey research include measuring a wide variety of information, collecting data from many people quickly with relatively minimal expense, generalizing to larger populations, and consistency across questions and answers.
- Some drawbacks of survey research include being unable to change questions once surveys have been delivered, and potentially lower validity of answers compared to more in-depth research methods.
- Two types of surveys are cross-sectional surveys, administered at one point in time, and longitudinal surveys, administered multiple times. Some types of longitudinal surveys include trend surveys that investigate changes over time in a general population, panel surveys that survey the same people at multiple time points, and cohort surveys in which researchers regularly survey people who fall into specific categories.
- Surveys are usually self-administered, either delivered in hard-copy format or online.
- Researchers should strive to reduce the chances of various types of biases that can arise in survey research. These include non-response, sampling, social desirability, and recall bias.
Key Terms
Closed-Ended Questions | Mutually Exclusive | Question Wording
Cohort Survey | Nominal Options | Recall Bias
Cross-Sectional Survey | Non-Response Bias | Response Options |
Dichotomous Options | Open-Ended Questions | Sampling Bias |
Exhaustive | Ordinal Options | Self-Administered Questionnaire |
Interval/Ratio Options | Panel Survey | Social Desirability Bias |
Levels of Measurement | Pre-Testing | Survey Research |
Longitudinal Survey | Question Content | Trend Survey |
Discussion Questions
- What are some ways that researchers might overcome some of the weaknesses of survey research?
- Based on a research question you identified through earlier exercises in this text, write a few closed-ended questions you could ask in a questionnaire. Use the information in this chapter to critique your questions based on the content, wording, and response options.
- Why might a researcher choose a cross-sectional survey over a longitudinal survey?
- Give an example of a research question that would best be answered by each type of longitudinal survey (trend, panel, and cohort). How do the research questions have to change for each type of survey?
- If you were to conduct survey research, would you deliver the questionnaire in hard-copy format or online? Why?
- How might each of the four types of bias come about in online survey methods? How would they be different for questionnaires administered in hard-copy format?
- If you were to develop a questionnaire based on a research question you have identified through earlier exercises in this text, which topics would you cover at the beginning, middle, and end of your survey? Why would you choose that particular sequence of topics?