"

Survey Research

When you think of “science,” do you think “experiment”? If you do, you’re not alone! Experiments are indeed one method of scientific inquiry, and we’ll talk about them later when we discuss causal methods. In the social sciences, however, experiments are but one of many options, and sometimes they’re not even possible for the kinds of topics social scientists want to study. So before we get to experiments, let’s first discuss the most common methods used when taking a quantitative approach to research.

People can sometimes get a little hung up on believing that non-experimental designs are less scientific than experiments. That is not the case at all! Remember that all designs have both strengths and weaknesses, and a researcher’s goal should be to select the method that is most appropriate for their particular question and study goal (remember the goal of describing rather than explaining? Those kinds of questions are sometimes better served by non-experimental designs than by experiments).

Surveys

A useful and endlessly adaptable design that is often used for non-experimental work is the survey. You have almost certainly taken (or been invited to take) a survey of some sort. Have you ever been on a phone call with customer service and been asked to rate your experience, or received an email asking for your opinion on something? The purpose and quality of surveys vary widely, so if you’ve taken a bad survey before, you know that there are better ways to get data!

Survey research, as with all methods of data collection, comes with both strengths and weaknesses. We’ll examine both in this section.

Strengths of survey methods

 Researchers employing survey methods to collect data enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people. In a study of older people’s experiences in the workplace, researchers were able to mail a written questionnaire to around 500 people who lived throughout the state of Maine at a cost of just over $1,000. This cost included printing copies of a seven-page survey, printing a cover letter, addressing and stuffing envelopes, mailing the survey, and buying return postage for the survey. While $1,000 is nothing to sneeze at, of course, just imagine what it might have cost to visit each of those people individually to interview them in person. You would have to dedicate a few weeks of your life at least, drive around the state, and pay for meals and lodging to interview each person individually. We could double, triple, or even quadruple our costs pretty quickly by opting for an in-person method of data collection over a mailed survey. Thus, surveys are relatively cost-effective.

 

Related to the benefit of cost-effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from very large samples at a relatively low cost, survey methods lend themselves to the probability sampling techniques discussed earlier. Of all the data collection methods described in this textbook, survey research is probably the best to use when the researcher wishes to gain a representative picture of the attitudes and characteristics of a large group.

Survey research also tends to be a reliable method of inquiry. This is because surveys are standardized in that the same questions, phrased in exactly the same way, are posed to participants. Other methods like qualitative interviewing, which we’ll learn about later, do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Assuming well-constructed questions and survey design, one strength of this methodology is its potential to produce reliable results.

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions, which means that understanding how to construct and administer surveys is a useful skill to have. Lawyers might use surveys in their efforts to select juries. Social services and other organizations (e.g., churches, clubs, fundraising groups, activist groups) use them to evaluate the effectiveness of their efforts. Businesses utilize surveys to inform marketing strategies for their products. Governments use surveys to understand community opinions and needs. Politicians and media outlets use surveys to understand their constituencies.

In sum, the following are benefits of survey research:

  • Cost-effectiveness
  • Generalizability
  • Reliability
  • Versatility

Weaknesses of survey methods

As with all methods of data collection, survey research comes with a few drawbacks. While some may argue that surveys are flexible because researchers can ask many different questions on a plethora of topics, survey researchers are generally confined to a single instrument for collecting data, the questionnaire. Surveys are in many ways rather inflexible. Let’s say you mail a survey out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their surveys. When conducting in-depth interviews, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak their questions as they learn more about how respondents seem to understand them.

Depth can also be a problem with surveys. Survey questions are standardized; thus, it can be difficult to ask anything other than very general questions that a broad range of people will understand. Due to the general nature of the questions, survey results may not be as valid as results obtained using other methods of data collection that allow a researcher to examine the topic more comprehensively. For example, let’s think back to the opening example of this chapter and say that you want to learn something about voters’ willingness to elect an African American president. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” Respondents were then asked to answer either yes or no. What if someone’s opinion was more complex than a simple yes or no? What if, for example, a person was willing to vote for an African American woman but not an African American man (please note we’re not suggesting that stance makes sense, but it’s one someone might want to take nonetheless)?

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Lack of depth

 

Considerations for surveys

There is immense variety within the realm of survey research methods. This variety comes both in terms of time—when or with what frequency a survey is administered—and in terms of administration—how a survey is delivered to respondents. In this section, we’ll look at what types of surveys exist when it comes to both time and administration.

Time

We’ve talked about time issues in research before. Because of the versatility of surveys in terms of time, however, it’s worth reviewing the options. In terms of time, there are two main types of surveys: cross-sectional and longitudinal. Cross-sectional surveys are administered only one time. They provide researchers a snapshot in time and offer an idea about how things are for the respondents at the specific time that the survey is administered.

An example of a cross-sectional survey comes from Aniko Kezdy and colleagues’ study (Kezdy et al., 2011) [1] of the association between religious attitudes, religious beliefs, and mental health among students in Hungary. These researchers administered a single, one-time-only, cross-sectional survey to a convenience sample of 403 high school and college students. The survey focused on how religious attitudes affect various aspects of respondents’ lives and health. From the analysis of their cross-sectional data, the researchers found that anxiety and depression were highest among those who had both strong religious beliefs and some doubts about religion.

Yet another example of cross-sectional survey research can be seen in Bateman and colleagues’ study (Bateman et al., 2011) [2] of how the perceived publicness of social networking sites influences self-disclosure among users. These researchers administered an online survey to undergraduate and graduate business students. They found that even though revealing information about oneself is viewed as key to realizing many of the benefits of social networking sites, respondents were less willing to disclose information about themselves as their perceptions of a social networking site’s publicness rose. That is, there was a negative relationship between perceived publicness of a social networking site and plans to self-disclose on the site.

 

Cross-sectional surveys can be problematic because the events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain stagnant. Many of these phenomena change over time; therefore, generalizing from a cross-sectional survey can be tricky. Perhaps you can say something about the way things were in the moment that you administered your survey, but it is difficult to know whether things remained that way for long afterward. Think, for example, about how Americans might have responded if administered a survey asking for their opinions on terrorism on September 10, 2001. Now imagine how responses to the same set of questions might differ were they administered on September 12, 2001. This is not to undermine the many important uses of cross-sectional survey research; however, researchers must be mindful that they have captured a snapshot of life as it was at the time that the cross-sectional survey was administered.

One way to overcome this sometimes-problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys enable a researcher to make observations over some extended period of time.

The first type of longitudinal survey is called a repeated cross-sectional survey. The main focus of this kind of survey is trends. Researchers conducting this kind of survey are interested in how people in a specific group change over time, but they do not need to know whether the same specific individuals are changing. Each time researchers gather data, they survey different people (so they take a different cross-section) from the identified group, because they are interested in the trends of the whole group rather than changes in specific individuals. Let’s look at an example.

The Monitoring the Future Study (http://www.monitoringthefuture.org/) describes the substance use of high school students in the United States. It is conducted annually with funding from the National Institute on Drug Abuse (NIDA). Each year, surveys are distributed to students in high schools around the country to understand how substance use and abuse in that population changes over time. The data provide insight for targeting substance abuse prevention programs toward the current issues facing the high school population, because each year the study takes a snapshot of the same issue in a new group of people from the same population.

Unlike repeated cross-sectional surveys, panel surveys require that the same people participate each time the survey is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year, for 5 years in a row. Keeping track of where respondents live, when they move, and when they die takes resources that researchers often don’t have. When researchers do have the resources to carry out a panel survey, however, the results can be quite powerful. The Youth Development Study (YDS), administered by researchers at the University of Minnesota, offers an excellent example of a panel study. You can read more about the Youth Development Study at its website: https://cla.umn.edu/sociology/graduate/collaboration-opportunities/youth-development-study.

Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003). [3] Contrary to popular beliefs about the impact of work on adolescents’ school performance and transition to adulthood, work increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people.

In this kind of study, tracking changes within the same people is what matters. Recall our earlier discussion of marital quality research and how conclusions can differ depending on whether the data come from repeated cross-sections or from panel surveys; that’s why this design decision matters!

Types of longitudinal surveys

  • Repeated cross-sectional (trend): The researcher examines changes in trends over time; the same people do not necessarily participate in the survey more than once.
  • Panel: The researcher surveys the exact same sample several times over a period of time.

Administration

Surveys vary not only in terms of when they are administered but also in terms of how they are administered. One common way to administer surveys is in the form of self-administered questionnaires. This means that a research participant is given a set of questions, in writing, to which they are asked to respond. Self-administered questionnaires can be delivered in hard copy format via mail or electronically online. We’ll consider both modes of delivery here.

Hard copy self-administered questionnaires may be delivered to participants in person or by postal mail. It is common for researchers to administer surveys in large social science classes, so perhaps you have taken a survey that was given to you in person on campus. If you are ever asked to complete a survey in a similar setting, it might be interesting to note how your perspective on the survey might be shaped by the new knowledge that you will gain about survey research methods from this chapter.

Researchers may also deliver surveys in person by going door-to-door. They may ask people to fill them out right away or arrange to return and pick up completed surveys. Though the advent of online survey tools has made door-to-door surveys less common, you can still see an occasional survey researcher at your door, especially around election time. This mode of gathering data is apparently still used by political campaign workers, at least in some areas of the country.

If you are unable to personally visit each member of your sample to deliver a survey, you might consider sending your survey through the mail. Sometimes this is the only available or the most practical option, though it is rarely ideal: it can be difficult to convince people to take the time to complete and return your survey. Imagine how much less likely you’d be to return a survey when there is no researcher waiting at your door to take it from you.

Survey researchers who deliver their surveys via postal mail often provide some advance notice to respondents about the survey to get people thinking and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done not only to remind those who have not yet completed the survey to please do so but also to thank those who have already returned the survey. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys’ return rates (Babbie, 2010). [4] Other helpful tools to increase response rate are to create an attractive and professional survey, offer monetary incentives, and provide a pre-addressed, stamped return envelope.

Earlier, we also mentioned online delivery as another way to administer a survey. This delivery mechanism is becoming increasingly common because it is easy to use, relatively cheap, and may be quicker than knocking on doors or waiting for mailed surveys to be returned. To deliver a survey online, a researcher may subscribe to a service that offers online delivery or use some delivery mechanism that is available for free. In addition to the advantages of being online, these services are great because they can provide your results in formats that are readable by data analysis programs like SPSS. This saves you (the researcher) the step of having to manually enter data into your analysis program, as you would if you administered your survey in hard copy format (and there’s always the risk of errors when entering data from paper – the less you have to handle the data itself, the better).
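To make this concrete, here is a minimal sketch in Python (standard library only) of reading a hypothetical responses.csv file exported from a survey platform; the file name and columns are illustrative and not tied to any particular service:

    import csv

    # Read a hypothetical export from an online survey tool: one row per respondent.
    with open("responses.csv", newline="", encoding="utf-8") as f:
        responses = list(csv.DictReader(f))

    print(f"Loaded {len(responses)} responses without any manual data entry.")
    if responses:
        print("Columns (survey questions):", list(responses[0].keys()))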

Many of the suggestions that were provided earlier to improve the response rate of hard copy questionnaires also apply to online questionnaires. While the incentives that can be provided online differ from those that can be given in person or by mail, online survey researchers can still offer completion incentives to their respondents. Many surveys don’t provide anything other than the satisfaction of knowing you’re contributing to research. However, some surveys have additional perks, such as a gift card code that you receive at the end, or entry into a lottery for a larger prize, like a new tech product or tickets to something fun.

Unfortunately, online surveys may not be accessible to individuals with limited, unreliable, or no access to the internet or individuals with limited computer skills. If those issues are common in your target population, online surveys may not work as well for your research study. While online surveys may be faster and cheaper than mailed surveys, mailed surveys are more likely to reach your entire sample but also more likely to be lost and not returned. The best choice of delivery mechanism depends on numerous factors, including your resources, the resources of your study participants, and the time you have available to distribute surveys and wait for responses. Understanding the characteristics of your study’s population is key to identifying the appropriate mechanism for delivering your survey.

Sometimes surveys are administered by having a researcher verbally pose questions to respondents rather than having respondents read the questions on their own. Researchers using phone or in-person surveys use an interview schedule which contains the list of questions and answer options that the researcher will read to respondents. Consistently presenting both the questions and answer options is very important with an interview schedule. By presenting each question-and-answer option in exactly the same manner to each interviewee, the researcher minimizes the potential for the ‘interviewer effect,’ which encompasses any possible changes in interviewee responses based on how or when the researcher presents question-and-answer options. Additionally, in-person surveys may be recorded, and you can typically take notes without distracting the interviewee due to the closed-ended nature of survey questions.

Interview schedules, also known as quantitative interviews, are used in both phone surveys and in-person surveys. Phone surveys are often conducted by political polling firms to understand how the electorate feels about certain candidates or policies. In both cases, researchers verbally pose questions to participants. You might be able to imagine the challenges of this method in the age of spam calls and unknown numbers. It is easy and even socially acceptable to abruptly hang up on an unwanted caller, if you answer the phone at all. Additionally, a distracted participant who is cooking dinner, tending to troublesome children, or driving may not provide accurate answers to your questions. Phone surveys make it difficult to control the environment in which a person answers your survey. Another challenge comes from the increasing number of people who only have cell phones and do not use landlines (Pew Research, n.d.). [5] Unlike landlines, cell phone numbers are portable across carriers, associated with individuals rather than households, and do not change their area codes when people move to a new geographical area.

To help maintain rigor in live quantitative interviews, whether by phone or in person, programs for computer-assisted telephone interviewing (CATI) and computer-assisted personal interviewing (CAPI) have been developed to assist quantitative survey researchers. These programs allow the interviewer to enter responses directly into a computer as they are provided, and the computer can skip ahead to the next appropriate question (especially helpful if the survey has a lot of branching based on earlier answers), saving hours of time that would otherwise be spent entering data into an analysis program by hand. This matters because quantitative interviews must be administered in such a way that the researcher asks the same question the same way each time. While questions on hard copy questionnaires may create an impression based on the way they are presented, having a person administer (read) questions introduces a slew of additional variables that might influence a respondent. Even a slight shift in emphasis on a word may bias the respondent to answer differently. Consistency is key with quantitative data collection, and human beings are not necessarily known for their consistency.
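The “skip ahead” logic these programs automate is, at its core, a set of branching rules. As a rough illustration only (not drawn from any actual CATI/CAPI product), a sketch in Python might look like this:

    # Illustrative skip logic: the next question depends on the previous answer.
    def next_question(current_question, answer):
        if current_question == "Q1_employed":
            # Only ask about hours worked if the respondent is employed.
            return "Q2_hours_worked" if answer == "yes" else "Q3_household_income"
        if current_question == "Q2_hours_worked":
            return "Q3_household_income"
        return None  # end of the interview schedule

    # The interviewer records "no" to Q1, so the program skips straight to Q3.
    print(next_question("Q1_employed", "no"))  # -> Q3_household_income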

There is also a middle-ground option between questionnaires and quantitative interviews that can be helpful in some cases: the computer-assisted self-interview (CASI). In this method, respondents answer the questions on the interviewer’s device, reading or listening to the questions directly from the machine and entering their own answers. This is helpful when the questions concern something particularly sensitive (a respondent may be embarrassed to answer honestly if they have to say it aloud to the interviewer), while an interviewer is still nearby in case there are issues and to help motivate the respondent to finish the interview (in contrast to someone completing a survey at home alone on their computer, where they might get distracted and never finish it).

Quantitative interviews broadly can help reduce a respondent’s confusion. If a respondent is unsure about the meaning of a question or answer option on a self-administered questionnaire, they probably won’t have the opportunity to get clarification from the researcher. An interview, on the other hand, gives the researcher an opportunity to clarify or explain any items that may be confusing. If a participant asks for clarification, the researcher must use pre-determined responses to make sure each quantitative interview is exactly the same as the others.

In-person surveys are conducted much like phone surveys, but the researcher must also account for non-verbal expressions and behaviors. One noteworthy benefit of in-person surveys is that they are more difficult to say “no” to because the participant is already sitting across from the researcher. Participants are less likely to decline in-person surveys and are much more likely to “delete” an emailed online survey or “hang up” during a phone survey. On the other hand, in-person surveys are much more time consuming and expensive than mailing questionnaires. Thus, quantitative researchers may opt for self-administered questionnaires over in-person surveys on the grounds that they can reach a large sample at a much lower cost than if they were to interact personally with each and every respondent.

Who responds?

Response rates and other sampling issues aren’t unique to surveys, but let’s consider them in this context because they most often come up as criticisms of survey research.

It can be very exciting to receive your first few completed questionnaires from your respondents. Hopefully you’ll receive more than a few! Once you have a handful of completed questionnaires, your feelings of initial euphoria may turn to dread. Data are fun, but they can also be overwhelming. The goal of data analysis is to condense large amounts of information into usable and understandable chunks.

It is sadly unlikely that everyone you recruit will complete your questionnaire when using survey methods; however, the hope is to receive the returned questionnaires in a completed and readable format from as many of your selected sample as possible. The number of completed questionnaires you receive divided by the number of questionnaires you distributed is your response rate. Let’s say your sample included 100 people and you sent questionnaires to each of those people. It would be wonderful if all 100 returned completed questionnaires, but that is very unlikely. If you’re lucky, perhaps 75 or so will return completed questionnaires. In this case, your response rate would be 75%.
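The arithmetic is simple enough to write out; here is a tiny sketch using the numbers from this example:

    distributed = 100   # questionnaires sent out
    returned = 75       # completed questionnaires received back
    response_rate = returned / distributed * 100
    print(f"Response rate: {response_rate:.0f}%")  # -> Response rate: 75%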

Researchers don’t always agree about what makes a good response rate. Though response rates vary, having 75% of your surveys returned would be considered good—even excellent—by most survey researchers. There has been a lot of research done on how to improve a survey’s response rate. We covered some of these previously, but suggestions include:

  • Personalizing questionnaires by addressing them to specific respondents rather than to a generic recipient such as “madam” or “sir”
  • Enhancing the questionnaire’s credibility by providing details about the study, researcher contact information, and perhaps partnering with respectable agencies such as universities, hospitals, or other relevant organizations
  • Sending out pre-questionnaire notices and post-questionnaire reminders
  • Including some token of appreciation with mailed questionnaires, even if it is small, such as a $1 bill

By itself, nonresponse doesn’t automatically bias a sample (it does waste resources, which is bad, but it doesn’t introduce bias automatically). You might see some textbooks describe a study as basically worthless if it achieves less than a 50-75% response rate. Although attaining a high survey response rate is a worthy and important goal, response rates below 50% are not uncommon in large surveys, and they are not automatically damning. Many large, and excellent, surveys achieve even lower rates; depending on the mode (telephone response rates are especially low), rates of 7% aren’t unheard of, and low rates are not new. The major concern is that a low rate of response may introduce nonresponse bias into a study’s findings. What if only those who have strong opinions about your study topic return their questionnaires? If that is the case, our findings might not represent how things really are, or at the very least, we may be limited in the claims we can make about patterns found in our data. While high return rates are certainly ideal, a body of research suggests that concern over response rates may be overblown (Langer, 2003). Several studies have shown that low response rates did not make much difference in findings or in sample representativeness (Curtin et al., 2000; Keeter et al., 2006; Merkle & Edelman, 2002). [6] [7] [8] The jury may still be out on ideal response rates and the extent to which researchers should be concerned about them. Nevertheless, certainly no harm can come from aiming for as high a response rate as possible.

Whether nonresponse biases a survey has to do with heterogeneous versus homogeneous attrition. If you started with a random sample and the nonresponse is also random (“homogeneous,” meaning not differential, “attrition,” meaning dropout), the validity of the survey is not changed. However, if a certain group of people is less likely to start or complete the survey, even after being invited to participate, then you might have dangerous heterogeneous attrition. In that case, yes, it could certainly bias your study, even if you retained 85% of your original sample. For instance, say you wanted to study what supports families needed in a given city. If everyone who completed your study (85% of those you recruited) came from one specific group (say, non-parents and parents with children over 15 years old) and the other group (parents of children 15 and under) had all dropped out, your survey is quite biased, at least with regard to parenting status! The results, although originally meant to apply to all families in the city, can’t tell you anything about parents of younger children.

How do you know what kind of attrition is happening? Careful documentation: you must keep track of the target population, the characteristics of the invited sample (those you can know), and the characteristics of the final sample so that you can investigate who dropped out of the study and who you were left with. The best thing, of course, is not to have attrition in the first place, but given that we can’t ethically force perfect response rates (you have to allow people to skip questions or drop out of studies to protect their rights as research participants), the next best thing is being smart about tracking and interpreting.
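One simple way to investigate this, assuming you recorded a relevant characteristic (such as parenting status) for both the invited sample and the final sample, is to compare the two distributions. The sketch below uses made-up numbers purely for illustration:

    from collections import Counter

    # Hypothetical parenting status for everyone invited vs. everyone who completed the survey.
    invited = ["parent_of_child_under_15"] * 40 + ["other"] * 60
    completed = ["parent_of_child_under_15"] * 5 + ["other"] * 80

    for group in ("parent_of_child_under_15", "other"):
        invited_pct = Counter(invited)[group] / len(invited) * 100
        completed_pct = Counter(completed)[group] / len(completed) * 100
        print(f"{group}: {invited_pct:.0f}% of invited vs. {completed_pct:.0f}% of completed")

    # A large gap between the two percentages signals heterogeneous (differential) attrition.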


 

References

  1. Kezdy, A., Martos, T., Boland, V., & Horvath-Szabo, K. (2011). Religious doubts and mental health in adolescence and young adulthood: The association with religious attitudes. Journal of Adolescence, 34, 39–47. 
  2. Bateman, P. J., Pike, J. C., & Butler, B. S. (2011). To disclose or not: Publicness in social networking sites. Information Technology & People, 24, 78–100. 
  3. Mortimer, J. T. (2003). Working and growing up in America. Cambridge, MA: Harvard University Press. 
  4. Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. 
  5. Pew Research. (n.d.). Sampling. Retrieved from http://www.pewresearch.org/methodology/u-s-survey-research/sampling/
  6. Curtin, R., Presser, S., & Singer, E. (2000). The effects of response rate changes on the index of consumer sentiment. Public Opinion Quarterly, 64, 413–428.
  7. Keeter, S., Kennedy, C., Dimock, M., Best, J., & Craighill, P. (2006). Gauging the impact of growing nonresponse on estimates from a national RDD telephone survey. Public Opinion Quarterly, 70, 759–779.
  8. Merkle, D. M., & Edelman, M. (2002). Nonresponse in exit polls: A comprehensive analysis. In M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 243–258). John Wiley and Sons. 

 

 

License


Understanding Research Design in the Social Science Copyright © by Utah Valley University is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.
