8 Sampling
Social science researchers come up with all sorts of interesting questions to investigate using scientific research methods. Unfortunately, researchers usually can't study entire populations because of feasibility and cost constraints. Instead, we systematically select samples from a larger group of interest to draw conclusions about the people, behaviors, or social phenomena we're interested in. Whom researchers select for their samples, and how they select them, shapes the conclusions that can be drawn from scientific research studies. Sampling is the process of selecting a subset of a population to study. This chapter focuses on key elements of sampling, types of sampling strategies, and how to use information about samples to evaluate claims made based on research findings.
Units of Analysis
The main goal of sampling is to identify a subset of a larger group from which to collect data. To do so, a researcher must first define the larger group or entity they’re interested in studying. This larger group is also called the unit of analysis. The unit of analysis refers to the entity (individuals, groups, organizations, behaviors, objects, cities, nations, etc.) that is the target of the investigation. In other words, a unit of analysis is the entity that you wish to be able to say something about at the end of your study.
In any scientific study, the research question determines the unit of analysis. For instance, if we are interested in studying people's opinions of the police, recidivism, or interest in various careers in criminal justice, then the unit of analysis must be the individual. If we want to study the characteristics of street gangs or teamwork in correctional settings, then the unit of analysis will be the group. If the research goal is to understand how courts can improve cost efficiency or case processing times, then the unit of analysis is the court system. If a researcher is trying to understand differences in incarceration rates between nations, then the unit of analysis becomes a country. Even inanimate objects can serve as units of analysis. For instance, the unit of analysis for a research project focused on understanding how guns proliferate across the United States would be the gun rather than the people who use, traffic, or sell them. Finally, if we wanted to study how knowledge transfer occurs between criminal justice agencies, then our unit of analysis would be the dyad (the combination of agencies that are sending and receiving knowledge).
Identifying the unit of analysis based on a research question can sometimes be tricky. For example, consider a study of why a certain neighborhood has a higher crime rate than surrounding neighborhoods. Contenders for the unit of analysis include crimes or people committing the crimes; but ultimately, the research question focuses on the neighborhood as the unit of analysis because the focus of the investigation is the neighborhood rather than crimes or people who commit crimes. However, the unit of analysis for a study of different types of crimes in different neighborhoods would be the crime because the focus is on the types of crimes rather than the neighborhoods. If a researcher wanted to study why people in a neighborhood engage in illegal activities, then the unit of analysis would be the individual. These examples illustrate how similar research questions may have entirely different units of analysis depending on the focus of the investigation.
To test your understanding of how to identify the unit of analysis, consider a study in which the researcher wants to examine differences in death penalty laws across states. What’s the unit of analysis? In other words, what’s the focus of the investigation? Is it the states? The laws? Individuals within the states? In this case, the unit of analysis would be the law. The research question focuses on differences in laws across states, so it does not focus on the states or the people within each state. Instead, it focuses on the laws themselves. The laws are the target of the investigation and the thing that the researcher wants to be able to say something about at the end of the study.
In sum, social science researchers might examine many potential units of analysis. Identifying the unit of analysis early on in a research study is important because it shapes the type of data a researcher should collect for their research, and who they should collect it from. If your unit of analysis is a neighborhood, you should collect data about neighborhoods rather than surveying people on how they perceive the neighborhood. If your unit of analysis is a policy or law, then you should be gathering legislative and legal documents rather than observing legislators’ day-to-day lives. Sometimes, researchers collect data from a lower level of analysis and aggregate that data to a higher level. For instance, to study teamwork in correctional settings, a researcher could survey individual actors in different correctional settings and average their teamwork scores to create a composite score for variables like cohesion and conflict related to teamwork.
Populations Versus Samples
Once a researcher has defined the unit of analysis, they can narrow their focus to identifying the population they wish to study. A population can be defined as all people, groups, or other entities with the characteristics one wishes to study. Populations in research may be rather large, such as "the American people," but they are usually more specific. For example, a study for which the population of interest is the American people will still specify which American people, such as adults over 18 who are citizens or legal residents. In the study mentioned earlier about why certain neighborhoods have higher crime rates, the unit of analysis would be the neighborhood; but rather than identify all neighborhoods as the population, the researcher would probably narrow their focus to all neighborhoods in a particular geographic area. In another example, consider the question of how and why death penalty laws differ across states. The unit of analysis would be death penalty laws, and the researcher would likely identify the population as all death penalty laws in U.S. states at a particular point in time or over a certain timeframe. Even this identification of the population narrows the focus from the overall unit of analysis.
At this point, you might wonder why researchers don't just gather data from the entire population. In reality, researchers rarely gather data from entire populations of interest. To understand why, consider the kinds of research questions that social science researchers ask. For example, when the local police department asked us to study the public opinion of the police in our city, we identified the population as all people who lived there. We never expected to collect data from every one of the thousands of residents. To do so would have taken a massive amount of time and monetary resources. Instead, we had to make hard choices about who to ask to participate in our survey. Rather than survey the entire population, we systematically chose a subset to complete the survey.
The subset of the population from which we gather data is called a sample. In the case of the policing survey, our sample included all households within a few specified neighborhoods. Both qualitative and quantitative researchers use scientific sampling techniques to identify their samples, and these techniques vary according to the approaches and goals of the research. As discussed in the rest of this chapter, some sampling strategies allow researchers to make fairly confident claims about populations much larger than their sample. Other sampling strategies allow researchers to make theoretical contributions rather than sweeping claims about large populations.
Sampling Strategies for Inductive, Qualitative Research
Researchers conducting inductive, qualitative research typically make sampling choices that enable them to deepen their understanding of the phenomenon they are studying. This section examines common sampling strategies that these researchers employ, all of which fall under nonprobability sampling techniques.
Nonprobability Sampling
Nonprobability sampling refers to sampling techniques for which the chances of any person or entity being included in the sample are unknown. Because we don’t know the likelihood of selection, we can’t know whether a sample represents a larger population. This might sound like a problem, but representing the population is not the goal with nonprobability samples. Even though nonprobability samples may not represent a larger population, researchers still use systematic scientific processes to select their samples. The next sections explain some of these sampling strategies, but first, let’s consider why a researcher might decide to use a nonprobability sample.
A researcher might choose a nonprobability sampling method when designing a research project. For example, before conducting survey research, a researcher might administer the survey to a few people who resemble the people they’re interested in studying, to work out any issues with the survey such as unclear question wording, a missing response option, or confusing ordering of questions. Researchers might also use a nonprobability sample to conduct a pilot study or exploratory research before designing a more comprehensive study, to quickly gather initial data and understanding before a more extensive evaluation. These examples show how nonprobability samples can be useful when setting up, framing, or beginning a research project.
Researchers also use nonprobability samples in full-blown research projects. These projects are usually qualitative, where the researcher’s goal is an in-depth understanding of a topic or issue. For example, evaluation researchers who aim to describe some specific small group might use nonprobability sampling techniques. Researchers conducting inductive research in which the goal is to contribute to a theoretical understanding of some phenomenon might also collect data from nonprobability samples. These researchers may seek out extreme or anomalous cases to help improve existing social theories by expanding, modifying, or poking holes in those theories.
In short, nonprobability samples serve an important purpose in social science research. They are useful for developing strong research projects and improving theories through extreme, anomalous, or other purposefully selected cases.
Types of Nonprobability Samples
Researchers use several nonprobability sampling techniques, including purposive, snowball, quota, and convenience sampling. While quota and convenience sampling strategies are occasionally used by quantitative researchers, they are typically employed in qualitative research because they are nonprobability sampling techniques.
Purposive Samples
To draw a purposive sample, a researcher begins with specific perspectives they wish to examine, and seeks out research participants or cases that meet the research goals. Researchers may use this sampling strategy to ensure their study covers a range of perspectives. For example, when we wanted to study public opinion of local police, we needed to include people who live in different locations and types of neighborhoods throughout the city. If we had only included people who lived in one neighborhood, we would have missed important details about the opinions of people who live in the neighborhoods we didn’t include in our study. To achieve this, we used a purposive sampling strategy, using information from prior theories and research to ensure that we included people from various neighborhoods who may have differing views on the police.
While purposive sampling is often used when the goal is to include participants who represent a broad range of perspectives, purposive sampling may also be used when a researcher wants to include only people who meet very narrow or specific criteria. For example, when I wanted to study community responses to sexually violent predator placements in California, I limited my study to communities in which a placement had been proposed within a specific timeframe and a community notification meeting had occurred, and I selected different types of communities (e.g., urban, rural, suburban). In this case, my goal was to find communities with specific experiences with sexually violent predator placements, rather than finding communities that had diverse experiences with sex offenders in their neighborhoods. In other words, the goal was to gain an in-depth understanding of the topic.
Snowball Samples
Qualitative researchers sometimes rely on snowball sampling techniques to identify study participants. With snowball samples, a researcher starts by identifying a few respondents that match the criteria for inclusion in the study and asks them to recommend others they know who also meet the selection criteria. In this case, a researcher might know of one or two people they’d like to include in their study, so they rely on those initial participants to help identify additional study participants. For instance, if you wanted to survey women lawyers, and you know only one or two such lawyers, you could start with them, and then ask them to recommend other women in the legal field who might be willing to talk with you. Thus, the sample builds and becomes larger as the study continues, much as a snowball builds and becomes larger as it rolls through the snow.
Snowball sampling is useful when a researcher wishes to study some stigmatized group or behavior. For example, a researcher who wants to study how transgender police officers cope with police culture would be unlikely to find many participants by posting a call for interviewees in the police station, or announcing the study during a departmental briefing. Instead, the researcher might know of a transgender police officer, interview that person, and then be referred by the first interviewee to another potential participant. Having previous participants vouch for the researcher’s trustworthiness may help new potential participants feel more comfortable being included in the study. For the same reason, researchers may also use snowball samples when they’re interested in studying hard-to-reach populations, such as people who share an unpopular opinion on an issue or belong to a group with few members.
Quota Samples
Qualitative and quantitative researchers use quota sampling, but because it is a nonprobability method, we’ll discuss it in this section. Quota samples involve the researcher segmenting the population of interest into mutually exclusive groups, and then choosing a non-random set of observations from each group to meet a predefined quota. In this type of sampling, a researcher finds potential participants by 1) identifying categories that are important to the study and for which there is likely to be some variation, 2) creating subgroups based on each category, 3) deciding how many people, documents, or whatever element happens to be the focus of the research to include from each subgroup, and 4) collecting data from that number of entities for each subgroup.
The number of entities to include in each group can be determined in a few different ways. In proportional quota sampling, the researcher tries to match the proportion of respondents in each subgroup to the proportion of that group in the population. For instance, imagine you wanted to use a sample of 100 people to understand the voting preferences of the American public. You'd first need to identify important demographic characteristics of the U.S. population (you might identify race/ethnicity as the most important for your purposes). Then, to decide how many people to include from each racial/ethnic group, you'd look at the percentages of the population in each racial and ethnic group as reported by the U.S. Census. According to the U.S. Census in 2021, 60% of the population reported their race or ethnicity as white, 18.5% reported Hispanic or Latino, 13% reported Black or African American, 6% Asian, 1.3% American Indian and Alaska Native, and 3% reported two or more races. If you were aiming for a sample of 100 people, you'd ensure that the numbers in each group match the percentages reported in the census. This means that if you were standing outside a grocery store asking people to participate in your survey, you'd have to stop collecting data once you reached the predetermined number of people in a particular category.
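The quota arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not part of any Census methodology: the group labels, the rounding rule, and the `quota_targets` helper are my assumptions.

```python
# Proportional quota sampling: convert each group's share of the
# population into a target count for a sample of 100 people.
# Percentages are the 2021 Census figures quoted in the text; note
# that they sum to slightly more than 100 because Census race and
# Hispanic-origin categories overlap.
census_shares = {
    "White": 60.0,
    "Hispanic or Latino": 18.5,
    "Black or African American": 13.0,
    "Asian": 6.0,
    "American Indian and Alaska Native": 1.3,
    "Two or more races": 3.0,
}

def quota_targets(shares, sample_size):
    """Round each group's share of the sample to a whole-person quota."""
    return {group: round(sample_size * pct / 100)
            for group, pct in shares.items()}

targets = quota_targets(census_shares, 100)
# In the grocery-store scenario, you would stop recruiting people in a
# group once its target count (e.g., 60 White respondents) is reached.
```

Nonproportional quota sampling would simply replace the computed targets with a fixed minimum (say, 50) for every subgroup.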
Nonproportional quota sampling is less restrictive, because a researcher tries to meet a minimum number of people in each subgroup, rather than meeting a proportional representation of the population. In this case, a researcher may decide to have 50 respondents from each racial/ethnic group, and stop when they reach the quota for each subgroup. A non-proportional technique can be useful in research with small or marginalized groups, because it over-samples these groups, providing more data on people whose voices may otherwise be silenced by the voices of people in proportionately larger groups.
In sum, quota sampling techniques offer the strength of helping researchers account for potentially relevant variation across study elements. However, they are neither designed nor guaranteed to yield findings that can be generalized to an entire population.
Convenience Samples
As with quota sampling, both qualitative and quantitative researchers use convenience sampling techniques. Also called accidental or opportunity samples, convenience samples involve drawing a sample from the part of the population that is close at hand, readily available, or convenient to access. To draw a convenience sample, a researcher collects data from those people or other relevant elements to which they have the most convenient access. This method, sometimes called haphazard sampling, is most useful in exploratory research. Journalists also use this technique when they need quick and easy access to people from their population of interest. If you’ve ever seen brief interviews of people on the street, you’ve probably seen a convenience sample in action.
While convenience samples offer one major benefit—convenience—we should be cautious about generalizing from research that relies on convenience samples. These types of samples exclude a large portion of the population (for example, people who don’t happen to walk down the street on which the researcher is looking for participants), and the data collected may reflect the unique characteristics of the area or group in which you’ve chosen to recruit participants, rather than representing the larger, more diverse set of people or other entities that you’re trying to study.
Table 8.1 provides a summary of the types of nonprobability samples. As explained earlier, rather than trying to represent a larger population, the overall goal of these samples is to provide insights for designing and conducting larger research projects or to build or improve theories about social phenomena.
Table 8.1 Types of Nonprobability Samples
Sample Type | In this type of sample, a researcher… |
Purposive | Seeks out elements that meet specific criteria. |
Snowball | Relies on participant referrals to recruit new participants. |
Quota | Selects cases from within several different subgroups. |
Convenience | Gathers data from whatever cases happen to be accessible. |
Sampling Strategies for Deductive, Quantitative Research
Researchers conducting deductive, quantitative studies often want to generalize their findings to larger populations. While there are certain instances when quantitative researchers rely on nonprobability samples (e.g., when doing exploratory or evaluation research), quantitative researchers tend to rely on probability sampling techniques. As we’ll discuss, the goals and techniques associated with probability samples differ from those of nonprobability samples.
Probability Sampling
Probability sampling refers to sampling techniques for which every person (or other element) has a known, nonzero chance of being selected for membership in the sample. This is important because in most cases, researchers who use probability sampling techniques want to identify a representative sample from which to collect data. A representative sample resembles the population from which it was drawn, in all the ways important for the research being conducted. If, for example, you wish to be able to say something about differences between men and women at the end of your study, you must make sure that your sample doesn't contain only women. That's a bit of an oversimplification, but the point with representativeness is that if your population varies in some way important to your study, your sample should contain the same sorts of variation.
Why might researchers care about obtaining a representative sample? Researchers who design studies using probability sampling techniques want to be able to generalize their findings to a group larger than the sample itself. This is called generalizability, and it is the main feature that distinguishes probability samples from nonprobability samples. Generalizability refers to the idea that a study's results will tell us something about a group larger than the sample from which the findings were generated. To achieve generalizability, probability sampling techniques rely on a core principle of random selection, which means they try to ensure all elements in the researcher's target population have an equal chance of being selected for inclusion in the study. We won't go in-depth into the mathematical process of random selection, except to note that researchers who use random selection techniques to draw their samples will use statistical strategies to estimate how closely the sample represents the larger population from which it was drawn.
In short, probability samples serve an important purpose in social science research. They are particularly useful for obtaining representative samples that allow for generalizing to larger populations by relying on the principle of random selection.
Types of Probability Samples
Researchers use several types of probability samples, including simple random, systematic, stratified, and cluster samples. Generally, researchers conducting deductive, quantitative studies are the most likely to use these sampling techniques.
Simple Random Samples
In a simple random sample, all possible units of the population of interest have an equal probability of being selected. While simple random samples are the most basic type of probability samples, researchers don’t often use them because of difficulties when generating a true simple random sample. To draw a simple random sample, a researcher starts with a sampling frame, a list of every member/element of the population of interest. For instance, if you wanted to survey 25 police departments in your state, you’d first develop a list of every police department in your state. This list would be your sampling frame.
Once a researcher has created their list, they number each element sequentially and use a random number table (or a set of randomly assigned numbers) to select the elements from which to collect data. One way to do this would be to enter each element into a spreadsheet and use a random number function within the spreadsheet program, to generate random numbers for each element on the list. In the example of a survey of police departments, you could list each department as a separate row in a spreadsheet, and then generate a random number to be associated with each row. Then, you would sort the list based on the assigned random number and choose the first 25 departments to survey.
Instead of random number functions within spreadsheet programs, researchers could use a random number table from other sources, such as textbooks or free online random number generators. For example, Stat Trek contains a random number generator that you can use to create a random number table of whatever size you might need. Randomizer also offers a useful random number generator.
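The spreadsheet procedure described above can also be reproduced in a few lines of Python; `random.sample` effectively assigns each row a random draw and keeps the first 25. The department names below are placeholders for a real sampling frame.

```python
import random

# Hypothetical sampling frame: a list of every police department
# in the state (here, 100 made-up department names).
sampling_frame = [f"Department {i}" for i in range(1, 101)]

random.seed(42)  # fixed seed so the draw can be reproduced

# Draw 25 departments at random, without repeats -- every department
# in the frame has an equal chance of selection.
sample = random.sample(sampling_frame, k=25)
```

The seed is set only so the example is repeatable; in an actual study you would let the generator run unseeded (or record the seed for documentation).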
Systematic Samples
Systematic sampling techniques offer the benefits of simple random sampling while being somewhat less tedious to implement. As with simple random samples, a researcher using systematic sampling must be able to produce a list of every one of their population elements. Rather than assigning random numbers to each element, researchers draw a systematic sample by ordering a sampling frame according to some criteria and selecting elements at regular intervals throughout the list. Put another way, researchers select every kth element in the list, where k indicates the selection interval (the distance between elements on the list).
To begin the selection process, a researcher needs to figure out how many elements they wish to include in their sample, and then calculate k using a formula. To illustrate this process, let's return to the example where you're interested in surveying 25 police departments in your state. First, you would find out how many police departments were in your state, and a list of those departments would be your sampling frame. For this example, we'll say there are 100 police departments in your sampling frame. To determine the selection interval (k), you would divide the total number of elements in your sampling frame by your desired sample size. In this case, the selection interval would be 4, or 100 divided by 25. Put in a more mathematical way, researchers use the formula k = N/n to calculate the selection interval. In this formula, k is the ratio of the sampling frame size (N) to the desired sample size (n).
After calculating the selection interval, researchers order their list according to some criteria that ensure variation, on some element relevant to the research question. For example, in a study of police departments, a researcher might choose to order the list based on each department’s number of employees or the population size of the area they serve. Whichever criteria a researcher chooses, it must relate to the research question. In other words, researchers must consider how and why variation in the chosen criteria is important for understanding the phenomenon of interest.
Once a researcher has developed their sampling frame, calculated the selection interval, and ordered the sampling frame, the next step is determining where to begin selecting elements for inclusion in the sample. To ensure random selection, the starting point must not automatically be the first element on the list. Instead, the researcher will choose a random number between 1 and k and begin there. In our example of selecting 25 police departments from a list of 100 departments, we calculated 4 as the selection interval. This means you would randomly select one of the first 4 departments on the list, and then choose every fourth department after that for inclusion in the sample. So, if you chose the third department, that department would be the first of the 25 departments in your sample. The seventh department would be the second department in the sample, the eleventh would be the third department, and so on until you had your sample of 25 departments.
By ordering the sampling frame and then systematically and randomly selecting elements for inclusion in the sample, systematic sampling ensures that elements are equally represented, based on the sorting criterion.
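The full procedure (calculate k, pick a random start within the first interval, then take every kth element) can be sketched as follows. The ordered frame of 100 departments is hypothetical, and the `systematic_sample` helper is an invented name for illustration.

```python
import random

def systematic_sample(frame, n):
    """Select every kth element of an ordered frame, where k = N / n,
    starting from a random position within the first interval."""
    k = len(frame) // n              # selection interval, k = N / n
    start = random.randrange(k)      # random start: position 0 .. k - 1
    return [frame[i] for i in range(start, len(frame), k)][:n]

# Hypothetical frame of 100 departments, already ordered by some
# criterion relevant to the research question (e.g., size).
departments = [f"Department {i}" for i in range(1, 101)]

random.seed(7)
sample = systematic_sample(departments, 25)  # k = 100 / 25 = 4
```

Because the start is random but the spacing is fixed, every element still has a known chance (1 in k) of being chosen, which is what keeps this a probability technique.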
Stratified Samples
In a stratified sample, a researcher divides the study population into strata, or mutually exclusive subgroups, and draws a simple random sample from each subgroup. This technique can be useful when a subgroup of interest makes up a relatively small proportion of the overall sample, and the researcher wants to include representatives from all subgroups. For example, imagine a researcher who wants to examine how people with a range of gender identities perceive their interactions with the police. Transgender people make up a smaller percentage of the population than cisgender men and women, so there’s a chance that neither simple random nor systematic sampling techniques would yield any transgender people in the sample. The same logic applies to other non-dominant gender identities, such as non-binary, agender, gender-fluid, etc. Instead, stratified sampling techniques can help ensure that the sample contains adequate numbers of people in the gender subgroups in the population.
In the previous example of selecting 25 police departments from a list of 100 departments, a researcher could start by categorizing the departments based on the population in the area they serve. The categories might include areas with large (more than 50,000 people), medium (between 10,000 and 50,000 people), and small (less than 10,000 people) populations. The researcher would then use simple random sampling to select 8 departments from two of the subgroups and 9 from the third (whichever subgroup the researcher most wants to ensure is represented), completing the sample of 25 departments. This sampling strategy would ensure that departments serving small, medium, and large populations would be equally represented in the sample, even though they're likely not equal in the larger population.
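Assuming hypothetical counts of departments in each size category, the stratified draw might look like this in Python (the strata sizes and quota split are invented for illustration):

```python
import random

# Hypothetical strata: departments grouped by population served.
strata = {
    "large":  [f"Large dept {i}" for i in range(1, 21)],    # 20 departments
    "medium": [f"Medium dept {i}" for i in range(1, 36)],   # 35 departments
    "small":  [f"Small dept {i}" for i in range(1, 46)],    # 45 departments
}
quotas = {"large": 9, "medium": 8, "small": 8}  # 9 + 8 + 8 = 25

random.seed(5)
# Draw a simple random sample from each stratum, then combine them.
sample = [dept
          for name, members in strata.items()
          for dept in random.sample(members, quotas[name])]
```

Note how each subgroup contributes a fixed number of departments regardless of its share of the frame, which is exactly what guarantees the smaller strata are represented.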
Cluster Samples
Each probability sampling technique we’ve discussed so far assumes that researchers can access a list of population elements to create a sampling frame. This is not always the case. Let’s say, for example, that you wish to conduct a study of the experiences that people with different gender identities have had with the police in your state. In the previous sampling techniques, you’d need to create a list of every person in your state, and their gender identities. Even if you could find a way to generate such a list, attempting to do so might not be the most practical use of time or resources. When this is the case, researchers turn to cluster sampling. With cluster samples, researchers divide the population into “clusters” (or small groups for sampling), randomly sample a few clusters, and then include all units within those clusters in their study.
Researchers often use cluster sampling in geographic areas. For example, if a researcher wants to study public opinion about the death penalty in a large city, they might divide the city into neighborhoods. They would then use random sampling methods to choose a few neighborhoods, including all households or people within those neighborhoods in their study. In another example, imagine you’re interested in the workplace experiences of prosecuting attorneys across the United States. While obtaining a list of all prosecutors in the country would be rather difficult, it would be much easier to create a list of all prosecutors’ offices across the country. Thus, you could draw a random sample of prosecutors’ offices (your clusters), and then include all prosecutors in the offices you’ve chosen in your sample.
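A two-stage cluster draw for the prosecutors example might be sketched like this; the office rosters are invented for illustration, and real offices would of course vary in size.

```python
import random

# Hypothetical clusters: 50 prosecutors' offices, 5 prosecutors each.
offices = {
    f"Office {i}": [f"Office {i}, prosecutor {j}" for j in range(1, 6)]
    for i in range(1, 51)
}

random.seed(9)
# Stage 1: randomly sample 10 offices (the clusters).
chosen = random.sample(list(offices), k=10)
# Stage 2: include every prosecutor within each chosen office.
sample = [person for office in chosen for person in offices[office]]
```

The key design point is that randomness enters only at the cluster stage; once an office is drawn, all of its members are in the sample, so the researcher never needs a list of every prosecutor in the country.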
Table 8.2 provides a summary of the types of probability samples. As explained earlier, the overall goal of these samples is to represent a larger population so that research findings can be more generalizable to that population.
Table 8.2 Types of Probability Samples
Sample Type | In this type of sample, a researcher… |
Simple Random | Randomly selects elements from the sampling frame. |
Systematic | Selects every kth element from the sampling frame. |
Stratified | Creates subgroups and randomly selects elements from each. |
Cluster | Randomly selects clusters and selects every element from those clusters. |
Questions to Ask About Samples
When reading the results of research studies, it's easy to focus only on findings rather than procedures. But, as the preceding discussions indicate, evaluating how a researcher selects study participants, and whom they select, is important for understanding research findings. Now that you are familiar with various sampling techniques, you can ask important questions about the findings you read, to be a more responsible research consumer.
Who Was Sampled, How, and for What Purpose?
Social science researchers on college campuses have a luxury that other researchers may not have: access to a whole bunch of (presumably) willing and able human guinea pigs (e.g., students). But that luxury comes at the cost of sample representativeness. One study of top academic journals in psychology found that over two-thirds (68%) of the studies published in those journals were based on samples drawn in the United States, and that two-thirds of the US-based studies published in the Journal of Personality and Social Psychology used samples made up entirely of American undergraduates taking psychology courses (Arnett, 2008).
These findings raise the question of what, and about whom, we learn from social scientific studies. Joseph Henrich and colleagues pointed out that behavioral scientists very commonly make sweeping claims about human nature based on samples drawn only from WEIRD (Western, educated, industrialized, rich, and democratic) societies, and often from even narrower samples, as is the case with many studies relying on samples drawn from college classrooms (Henrich, Heine, & Norenzayan, 2010). As it turns out, many findings about the nature of human behavior regarding fairness, cooperation, visual perception, trust, and other behaviors are based on studies that excluded participants from outside the United States (and sometimes excluded anyone outside the college classroom) (Begley, 2010). These points demonstrate that we must pay attention to the population on which a study is based and to the claims being made about whom the findings apply to.
A related, but slightly different, potential concern is sampling bias, which occurs when the elements selected for inclusion in a study do not represent the larger population from which they were drawn. For example, a poll conducted online by a newspaper asking for the public’s opinion about some local issue will certainly not represent the public, since those without access to computers or the Internet, those who do not read that paper’s website, and those who do not have the time or interest will not participate in the poll. In addition, even if a sample is representative in all the respects a researcher thinks are relevant, other aspects that didn’t occur to the researcher may also matter.
So how do we know when we can count on results that we read from research studies? There aren’t any magic or always-true rules we can apply, but we can keep in mind a couple of guiding points. First, while sampling methods provide guidelines for drawing scientifically valid samples, the quality of a sample should be evaluated based on the sample actually obtained. A researcher may set out to administer a survey to a representative sample by correctly employing a random selection technique, but if only a handful of people respond, the researcher will have to be careful about the claims they make about the survey findings. Second, researchers may be tempted to talk about the implications of their findings as if they apply to some group other than the population sampled. This tendency usually doesn’t come from a place of malice, but we must be attentive to how researchers discuss their findings in relation to the population they have sampled.
Finally, keep in mind that a sample that allows for comparisons of theoretically important concepts or variables is certainly better than one that does not allow for such comparisons. In a study based on a nonrepresentative sample, for example, we can learn about the strength of our social theories by comparing relevant aspects of social processes. The key is knowing the strengths of nonprobability and probability samples for answering different research questions, and ensuring that researchers’ claims match what their samples can actually support.
At their core, questions about sample quality should address who has been sampled, how they were sampled, and for what purpose they were sampled. Being able to answer those questions will help you better understand, and more responsibly read, research results.
Summary
- The unit of analysis is the larger group, individual, or entity that a researcher wants to say something about at the end of their study.
- A population is an entire group or set of entities that a researcher wants to study. By contrast, a sample is a subset of the population from which the researcher gathers data.
- Inductive, qualitative approaches to research tend to rely on nonprobability samples, which use sampling strategies such as purposive, snowball, quota, and convenience sampling.
- Deductive, quantitative approaches to research tend to rely on probability samples, which use sampling strategies such as simple random, systematic, stratified, and cluster sampling.
- Evaluating research findings requires examining sampling procedures and the quality of the samples. Answering questions such as who was sampled, how, and why can help assess the validity of claims based on the research findings.
Key Terms
Cluster | Proportional Quota Sampling | Sampling Frame |
Cluster Sample | Purposive Sample | Simple Random Sample |
Convenience Sample | Quota Sample | Snowball Sample |
Generalizability | Random Selection | Strata |
Nonprobability Sampling | Representative Sample | Stratified Sample |
Nonproportional Quota | Sample | Systematic Sample |
Population | Sampling | Unit of Analysis |
Probability Sampling | Sampling Bias |
Discussion Questions
- Explain the unit of analysis for a study of how and why prison conditions differ across U.S. states. Would the unit of analysis be different for a study of the informal groups that people in prison create to cope with life in prison? Why or why not?
- Can the same group constitute a population in one study and a sample in another? Why or why not?
- How do the goals of inductive, qualitative research match with the goals of nonprobability sampling methods?
- How do the goals of deductive, quantitative research match with the goals of probability sampling methods?
- What are some similarities and differences between purposive, snowball, quota, and convenience samples?
- What are some similarities and differences between simple random, systematic, stratified, and cluster samples?
- Explain three important things to consider about sampling procedures and samples when evaluating the implications of research findings.
- Create your own research question. Then, identify the unit of analysis, population, and what type of sample you’d use to find participants. Is the type of sample you chose a probability or nonprobability sampling method? Why did you choose that particular method over other methods in the same category (other nonprobability or probability methods)?
Works Cited in Chapter 8
Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63, 602–614.
Begley, S. (2010). What’s really human? The trouble with student guinea pigs. Retrieved from http://www.newsweek.com/2010/07/23/what-s-reallyhuman.html.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–135.