"

Glossary of Key Terms

A-

Abstract – The short paragraph at the beginning of an article that summarizes its purpose and main findings.

Accessible Population – The portion of the target population that a researcher can realistically reach or contact for their study. While the target population might be broad, the accessible population is typically narrower and more practical to reach. The sample is then selected from this accessible population.

Accuracy – The extent to which one’s coding procedures correspond to some preexisting standard.

Acquiescence Bias – The tendency of respondents to agree with, or say yes to, questions regardless of their content.

Action Research – Research that is conducted for the purpose of creating some form of social change in collaboration with stakeholders.

Aggregate Matching – When the comparison group is determined to be similar to the experimental group along important variables.

AI Hallucination – When an AI system generates false or inaccurate information that appears plausible but is not based on real data or sources. These errors often sound convincing even though they are not true, and they typically occur when the AI fills gaps in its training data or responds beyond its actual knowledge.

Anonymity – The identity of research participants is not known to researchers.

Anonymized Data – Data that does not contain personally identifying information.

Assent – The verbal or non-written agreement from individuals, typically children or those with limited decision-making capacity, to participate in research, indicating their willingness to take part even though they cannot legally provide informed consent.

Attributes – The characteristics that make up a variable.

Audit Trail – A transparent, detailed record of research decisions and steps, showing how data led to findings. It strengthens trustworthiness by making the process visible and verifiable.

Authenticity – The degree to which researchers capture the multiple perspectives and values of participants in their study and foster change across participants and systems during their analysis.

Authority – Learning by listening to what people in authority say is true.

Autoethnography – A qualitative method where the researcher reflects on their own life and identity to understand broader cultural or social contexts.

Availability Sampling – Another name for convenience sampling; selecting participants who are readily available to the researcher.

 

B-

Bar Graph – A data visualization tool that displays the values of a variable as bars, with categories along the horizontal (x) axis and values along the vertical (y) axis. Related to histograms and line charts.

Baseline Stage – A period of time before an intervention starts.

Belmont Report – A foundational document outlining basic ethical principles—respect for persons, beneficence, and justice—for conducting research on human subjects.

Beneficence – Refers to the notion that researchers are obligated to maximize potential benefits and minimize possible harms to participants. This principle entails a thorough evaluation of risks and benefits to ensure the well-being of all individuals involved in the study.

Between-Subjects Experiment – An experimental design in which each participant is exposed to only one condition. Groups must be comparable on key characteristics to control for extraneous variables and avoid confounding.

Bias – In sampling, when the elements selected for inclusion in a study do not represent the larger population from which they were drawn, whether because of the sampling method or the researcher’s own judgments.

Bivariate Analysis – Analysis of two variables together, often to examine the relationship of the variables.

Block Randomization – A method of random assignment in which all conditions appear once in random order within a block, and the sequence is repeated across blocks. Each participant is assigned to the next condition in the pre-generated sequence to ensure balanced group sizes.
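
A minimal sketch of how a block-randomized assignment sequence could be generated, using hypothetical condition labels and Python purely as an illustration (not part of the original text):

    import random

    def block_randomized_sequence(conditions, n_blocks):
        """Build a sequence in which every condition appears once,
        in a freshly shuffled order, within each block."""
        sequence = []
        for _ in range(n_blocks):
            block = list(conditions)
            random.shuffle(block)   # random order within this block
            sequence.extend(block)
        return sequence

    # Example: three hypothetical conditions, four blocks (12 assignment slots)
    print(block_randomized_sequence(["control", "treatment A", "treatment B"], 4))

Each incoming participant is then given the next condition in the pre-generated sequence, which keeps group sizes balanced.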

Books – Valuable scholarly sources, beneficial for theoretical, philosophical, and historical inquiry. They provide in-depth explanations of key concepts and definitions, help identify relevant keywords, and often include references to additional resources. Books are also helpful for understanding the scope, foundational background, and development of a topic over time.

Boolean Searching – A method of connecting keywords with operators like AND, OR, and NOT to refine search results.
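
For example, a hypothetical query such as ("adolescents" OR "teenagers") AND "social media" NOT "advertising" would retrieve sources mentioning either age term together with social media while excluding advertising-focused results.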

 

C-

CAPI (Computer-Assisted Personal Interviewing) – A method where researchers conduct in-person surveys using a computer to record responses and automate question flow.

Carryover Effect – A type of order effect in which exposure to one condition affects a participant’s behavior in subsequent conditions.

Case Study – A qualitative research approach that involves an in-depth investigation of a single, clearly defined unit (such as an individual, group, organization, event, or community) within its real-life context.

Categorical Measures – A measure with attributes that are categories.

CATI (Computer-Assisted Telephone Interviewing) – A method where researchers conduct phone surveys and enter answers directly into a computer, which helps manage question skips and reduces data entry errors.

Causality – The idea that one event, behavior, or belief will result in the occurrence of a subsequent event, behavior, or belief.

Ceiling Effect – Occurs when a measurement’s upper limit is too low, preventing accurate capture of high values. Participants at the top end are grouped together, reducing sensitivity.

Census – A data collection process in which information is gathered from every single member of the population of interest. It is rare in research due to time and resource constraints.

Classic Experimental Design – A basic true experimental design that includes sampling, random assignment, pretest, intervention, and posttest to assess the impact of a treatment.

Closed-Ended Question – A type of survey question that offers a limited set of pre-determined response options for the participant to choose from.

Cluster Sampling – A sampling approach that begins by sampling groups (or clusters) of population elements and selecting elements from within those groups.

Code – A word or short phrase that captures the essence of a portion of language-based or visual data. Codes help summarize and organize key ideas during qualitative analysis and can be applied to segments or entire data sources.

Code Sheet – The instrument an unobtrusive researcher uses to record observations.

Codebook – The document that outlines how research variables have been documented and translated into numbers for quantitative analysis.

Coding (Quantitative) – In quantitative research, assigning numbers to a behavior or characteristic to create a variable’s data.

Coding (Qualitative) – Identifying themes across qualitative data by reading transcripts.

Cognitive Biases – Predictable flaws in thinking.

Cognitive Interview – A pretesting method where participants think out loud while completing a self-report measure, allowing researchers to understand how questions are interpreted and identify areas of confusion or emotional impact.

Cognitive Interviewing – A method used during pretesting in which researchers conduct a separate study to examine how well a survey works with a specific population of interest. This process helps identify whether the questions are clear, understandable, and interpreted as intended by respondents.

Cohort Survey – Describes how people with a defining characteristic change over time.

Community-Based Participatory Research (CBPR) – A research approach that actively involves the community being studied in the research process.

Comparable Groups – Groups that are similar on key variables relevant to the research question, allowing for meaningful comparisons. Researchers employ methods such as random assignment or matching to achieve comparability and minimize bias.

Comparison Group – A group that receives “treatment as usual” instead of no treatment, used when withholding treatment would be unethical.

Complete Counterbalancing – A type of counterbalancing in which all possible orders of conditions are used, with an equal number of participants assigned to each order.
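
A brief sketch of complete counterbalancing using Python's itertools, with hypothetical condition labels (an illustration, not from the original text):

    from itertools import permutations

    conditions = ["A", "B", "C"]             # hypothetical condition labels
    orders = list(permutations(conditions))  # every possible order (3! = 6)

    # Complete counterbalancing assigns an equal number of participants to
    # each order, so the total sample should be a multiple of len(orders).
    for i, order in enumerate(orders, start=1):
        print(f"Order {i}: {' -> '.join(order)}")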

Concept Map (Mind Map) – A technique in which a topic is identified, and related topics are mapped onto the original concept or idea, allowing for a visual examination of related topics or identifying gaps in information.

Concept – A notion or image that we conjure up when we think of some cluster of related observations or ideas.

Conceptualization – In quantitative research, the process of clearly defining key concepts by translating abstract ideas into precise, workable definitions. It begins with brainstorming what a concept brings to mind and developing a definition to guide measurement and analysis.

Concurrent Triangulation – A mixed-methods approach in which quantitative and qualitative data are collected concurrently.

Concurrent Validity – The extent to which a measure correlates with another, established measure of the same concept, when both are given at the same time.

Conditions – The different levels or variations of the independent variable that are manipulated by the researcher in an experiment.

Confederate – An individual who appears to be a participant but is actually working with the researcher to influence the study or simulate a social situation.

Conferences – Offer current research and emerging ideas, often presented through talks and discussions. These can help identify active scholars in a field, but full papers are not continuously published—sometimes, only abstracts are available. If access is difficult, a librarian can assist in locating the materials.

Confidence Interval – A range of values within which the true population value is likely to fall.
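
A minimal worked sketch of a 95% confidence interval for a sample mean, using the normal approximation and made-up scores (illustrative only):

    import statistics as stats

    scores = [72, 85, 78, 90, 66, 81, 77, 88]          # hypothetical sample data
    mean = stats.mean(scores)
    se = stats.stdev(scores) / len(scores) ** 0.5      # standard error of the mean
    lower, upper = mean - 1.96 * se, mean + 1.96 * se  # ~95% CI, normal approximation
    print(f"95% CI: ({lower:.1f}, {upper:.1f})")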

Confidentiality – Identifying information about research participants is known to the researchers but is not divulged to anyone else.

Confirmability – The degree to which the findings are grounded in the data provided by participants rather than shaped by researcher bias. Demonstrated through a clear audit trail linking findings back to participant input.

Confirmation Bias – Observing and analyzing information in a way that confirms what you already think is true.

Confounding Variable – A variable that influences both the independent and dependent variables, potentially distorting the true relationship between them.

Constructs – Concepts that are not observable but can be defined based on observable characteristics.

Contamination – When members of the control group are unintentionally exposed to the intervention, often through interaction with the experimental group, potentially compromising the validity of the results.

Content Analysis – A qualitative method used to examine and interpret texts, images, media, or other content, identifying patterns, themes, and narratives.

Content Validity – Assesses whether a measure captures all the essential dimensions of a concept. A measure with high content validity fully represents the concept being studied.

Context Effect (Contrast Effect) – A carryover effect where perception or interpretation of a task in one condition is influenced by the preceding condition.

Contingency Table – A table used to show cross-tabulations of the distribution of two variables.
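
A small sketch of a contingency table built with pandas.crosstab, using invented categories (assumes the pandas library is available; illustrative only):

    import pandas as pd

    df = pd.DataFrame({
        "employment": ["employed", "employed", "unemployed", "employed", "unemployed", "unemployed"],
        "housing":    ["housed", "unhoused", "housed", "housed", "unhoused", "unhoused"],
    })
    # Cross-tabulate the two variables: rows = employment, columns = housing
    print(pd.crosstab(df["employment"], df["housing"]))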

Continuous Variables – Variables with attributes that are numbers and can be treated numerically.

Control Group – The group not exposed to the intervention, used for outcome comparison with the experimental group.

Control Variables – Variables whose effects are mathematically controlled to highlight the relationship between independent and dependent variables, establishing nonspuriousness in causal relationships.

Convenience Sampling – A nonprobability sampling method prioritizing ease and availability over representativeness by collecting data from easily accessible individuals.

Convergent Validity – Assesses if a measure produces similar results to other established measures of the same concept.

Correlation – A statistical relationship between two variables where changes in one are associated with changes in another.

Counterbalancing – Controlling order effects in within-subject designs by varying the order of conditions across participants.

Covariation – The degree to which two variables vary together.

Credibility – The accuracy and believability of findings from the participants’ perspective, often supported by member checking or peer review.

Critical Paradigm – Emphasizes power, inequality, and social change, advocating for inquiry that challenges systemic bias and promotes social transformation.

Cross-Sectional Surveys – Surveys administered at just one point in time.

Culturally Sensitive Research – Research honoring the unique cultural context and needs of a specific population or study.

 

D-

Data Dictionary (Code Book) – A document listing variable names, definitions, value meanings, levels of measurement, and other important details for each variable in a dataset.

Data Triangulation – The use of data from different people, settings, or times to verify findings.

Deception – Involves intentionally withholding or misrepresenting information about a study’s purpose or procedures. Although generally discouraged, it may be ethically permissible if fully justified, IRB-approved, poses minimal risk, includes transparency in the consent process, and is followed by thorough debriefing.

Deductive Approach – When a researcher studies existing theories and tests hypotheses emerging from those theories.

Dependability – The consistency and stability of the research process over time, involving documentation and justification of methodology changes supported by research logs or peer audits.

Dependent Variable – A variable that depends on changes in the independent variable.

Descriptive Questions – Aim to summarize or quantify characteristics of a population or phenomenon without exploring causal relationships, useful for identifying needs or resources.

Descriptive Research – Research that describes or defines a particular phenomenon.

Descriptive Statistics – Statistics describing basic qualities of data, such as central tendency or variance.

Dimensions – In social scientific measurement, distinct elements or aspects that together make up a single concept.

Direct Experience – Learning through informal observation.

Directionality – The issue of determining which variable influences the other when two variables are correlated.

Discriminant Validity – Confirms that a measure does not correlate with unrelated measures, showing it is distinct from concepts it shouldn’t be related to.

Discussion Section – Interprets the results, connects them to existing literature, discusses implications, and suggests areas for future research.

Dissemination – A planned process involving target audiences and settings where research findings are communicated, facilitating research uptake in decision-making and practice.

Double-Barreled Question – A survey question that asks about two different things at once, potentially confusing respondents.

Double-Blind Design – An experimental setup where neither participants nor researchers know who is in the control or experimental group, reducing bias.

Dunning-Kruger Effect – When unskilled individuals overestimate their ability and knowledge, and experts underestimate theirs.

 

E-

Ecological Fallacy – An error in reasoning occurring when conclusions about individuals are made based on group-level data.

Empirical Articles – Scholarly articles applying theory to real-world behavior and reporting original analysis of quantitative or qualitative data. They typically include sections such as introduction, method, results, and discussion.

Empirical Questions – Questions answerable through direct observation, measurement, or experience rather than theory, opinion, or speculation.

Epistemology – A set of assumptions about how we come to know what is real and true.

Ethical Questions – Questions rooted in moral beliefs or values, not answerable by empirical evidence alone, involving judgments about right or wrong.

Ethnography – A qualitative approach studying groups or cultures by observing and participating in their daily lives, typically over extended periods.

Evaluation Research – Research designed to assess the effects of a specific program or policy.

Evidence-Based Practice – Decision-making based on the best available evidence to assist clients.

Ex Post Facto Control Group – A nonrandomized design in which a comparison group is formed by matching individuals to participants after the intervention has already occurred; participants are not randomly assigned.

Exclusion Criteria – Characteristics that disqualify individuals from participating in a study, often mirroring inclusion criteria or factors interfering with research validity.

Exempt Review – The lowest level of IRB oversight, for minimal-risk studies often involving publicly available or de-identified data with limited human subject involvement.

Exhaustiveness – Listing all possible attributes in a measure.

Expectancy Bias – Also known as experimenter bias; occurs when a researcher’s expectations unintentionally influence their interpretation or recording of participant responses.

Expedited Review – IRB oversight level for minimal-risk studies not requiring full board review but examined by an IRB member, commonly involving existing medical records or behavioral studies.

Experiment – A controlled data collection method designed to test hypotheses.

Experimental Group – The group receiving the intervention being tested.

Experimenter Expectancy Effect – When researcher expectations unintentionally influence participant behavior or study outcomes.

Explanatory Questions – Seek to identify broad, generalizable causal relationships applicable across various contexts.

Explanatory Research – Research explaining why phenomena work as they do, addressing “why” questions.

Exploratory Questions – Broad, open-ended inquiries used to gain familiarity with a topic, uncovering patterns or relationships without specifying single causes.

Exploratory Research – Preliminary research aimed at clarifying and defining problems, generating ideas, or setting research priorities, typically without generalizable results.

External Audit – Independent review examining a study’s methods and findings to ensure conclusions are data-driven rather than researcher-biased.

External Validity – The extent to which study results can be generalized to other populations, settings, or times.

Extraneous Variables – Unintentionally studied variables that may influence experimental outcomes if uncontrolled, potentially confounding results.

 

F-

Face Validity – The extent to which a measure appears, on the surface, to accurately assess what it is intended to measure. It is based on subjective judgment and often serves as the initial step in evaluating a measure’s relevance.

Factorial Design – An experimental design involving two or more independent variables, each with two or more conditions. The number of conditions per variable is expressed in a format such as 2 × 2 × 3 to illustrate the full structure.

Fairness – A key criterion of authenticity, referring to the extent to which diverse perspectives, experiences, and viewpoints are acknowledged and seriously considered in the research process, ensuring multiple voices are represented and respected.

False Negative – When a measure does not indicate the presence of a phenomenon, even though it is actually present.

False Positive – When a measure indicates the presence of a phenomenon, even though it is actually absent.

Fatigue Effect – A carryover effect in which participants perform worse in later conditions due to tiredness, boredom, or loss of motivation.

Feasibility – The practicality of conducting a research study, considering realistic access to the target population, time, resources, and funding.

Fence-Sitter – A respondent who selects neutral or middle-of-the-road options on a survey, even when they have an opinion.

Field Notes – Written records made by researchers before, during, or after data collection to capture observations, reflections, and insights, supporting analysis.

File-Drawer Problem (Publication Bias) – A concern in meta-analysis or systematic reviews where data not supporting a hypothesis is less likely to be published.

Filter Question – A question used to determine which respondents should be asked follow-up questions based on previous responses.

Floaters – Respondents who provide a substantive answer despite not understanding the question or having no opinion.

Floor Effect – Occurs when a measurement’s lower limit is too high, preventing accurate capture of low values and limiting detection of meaningful differences.

Focus Group – A group interview method with 5–12 participants discussing a specific topic, used to gather diverse perspectives and enhance reliability through triangulation.

Focused Coding – Collapsing or narrowing codes, defining codes, and recoding transcripts using a final code list.

Frequency Distribution – A table or chart showing how responses are distributed among attributes of a given variable.

Full Board Review – The highest level of IRB oversight, required for studies posing more than minimal risk or involving vulnerable populations, involving full committee review and approval.

 

G-

Generalizability – The extent to which a study’s results can inform understanding about a group larger than the sample studied.

Generalize – Making claims about a larger population based on findings from a smaller, representative sample.

Gray Literature – Research and information published by non-commercial sources, beneficial for finding up-to-date and timely data.

Grounded Theory – A qualitative research approach aimed at developing new theories directly from real-world data through ongoing comparison and inductive reasoning.

 

H-

Hawthorne Effect – Participants alter their behavior because they are aware they are being observed.

Heterogeneous Attrition – Non-random dropout from a study occurring more frequently in certain subgroups, leading to biased results.

Historical Research – Analyzing data from primary sources related to historical events.

Homogeneous Attrition – Random dropout from a study across subgroups, not threatening the study’s validity.

Human Subject – A living individual from whom data is collected through direct interaction or by obtaining identifiable private information; often referred to as “participant.”

Hypothesis – A statement describing a researcher’s expectation regarding anticipated findings.

 

I-

Idiographic Causal Relationships – Focus on individual, subjective experiences to understand cause and effect, central to qualitative research.

Idiographic Research – Attempts exhaustive explanation or description based on participants’ subjective understandings.

In-Depth Interview – Semi-structured qualitative interview using open-ended questions to understand participants’ perspectives flexibly.

Incentives – Rewards ethically offered to encourage participation without coercion.

Inclusion Criteria – Characteristics required for eligibility to participate in a study.

Independence – A lack of relationship between two variables in quantitative analysis.

Independent Variable – The manipulated or categorized variable observed for its effect; considered the cause in a study.

Index – A composite measure combining multiple indicators to summarize a broader concept.

Indicator – Observable measure representing an abstract concept in research.

Indirect Observables – Characteristics assessed through inference or self-report, not directly measurable.

Individual Matching – Pairing participants based on similar characteristics, assigning one to the experimental group and the other to the control group.

Inductive Approach – Starting from specific observations and moving to general propositions about those experiences.

Inductive Reasoning – Reasoning from specific observations to general conclusions, common in qualitative research.

Inferential Statistics – Statistics used to test hypotheses beyond descriptive analysis.

Informed Consent – Ensuring participants understand study details before agreeing to participate.

Inputs – Resources necessary for a program’s operation.

Institutional Review Boards (IRBs) – Committees reviewing and overseeing research involving human subjects to protect their rights and welfare.

Inter-Rater Reliability – Degree of agreement among different observers assessing the same event or behavior.

Internal Consistency Reliability – Extent items within a scale consistently measure the same concept, often assessed using Cronbach’s Alpha.
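
One common formula for Cronbach's Alpha is alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical scale data (illustrative only):

    import statistics as stats

    def cronbach_alpha(items):
        """items: one list of respondent scores per scale item."""
        k = len(items)
        item_var = sum(stats.variance(item) for item in items)
        totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
        return (k / (k - 1)) * (1 - item_var / stats.variance(totals))

    # Hypothetical 3-item scale answered by 5 respondents
    items = [[4, 3, 5, 2, 4],
             [5, 3, 4, 2, 5],
             [4, 2, 5, 3, 4]]
    print(round(cronbach_alpha(items), 2))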

Internal Validity – Confidence that changes in the dependent variable are due to the independent variable and not other factors.

Interval Level – Continuous, rank-orderable level of measurement with known equal distances between attributes.

Interval Variables – Variables with continuous, ordered attributes allowing addition and subtraction, but not ratio calculations.

Interview Guide – Flexible list of topics or questions guiding qualitative interviews.

Interview Schedule – Structured list of questions and answer options for verbal surveys.

Interviews – Qualitative method using open-ended questions to explore participants’ thoughts and experiences.

Introduction – Research section presenting the topic, literature review, and study significance.

Investigator Triangulation – Using multiple researchers to analyze the same data to reduce individual bias.

J-

Journaling – The practice of recording reflections, decisions, and observations during qualitative research, supporting transparency and rigor.

Justice – Requires fair distribution of research benefits and burdens, ensuring equitable participant selection without unfairly burdening or excluding any group.

 

L-

Latent Content – The underlying meaning of surface content.

Leading Question – A question worded in a way that biases respondents toward a specific answer by implying a preferred response.

Likert Scale – A closed-ended question format asking respondents to indicate agreement or disagreement with statements, often ranging from strongly agree to strongly disagree.

Literature Review – A summary and synthesis of existing research highlighting key theories, findings, and gaps, setting the context for the current study.

Longitudinal Survey – A survey administered repeatedly to the same participants or population group over an extended period.

 

M-

Macro-level – Examining social structures and institutions.

Manifest Content – The most apparent and surface-level content in communication.

Manipulation – Systematic variation of the independent variable to observe effects on the dependent variable.

Manipulation Check – A measure confirming that the independent variable was effectively manipulated during an experiment.

Matched-Groups Design – Participants matched on relevant variables before assignment to conditions to control confounding.

Matrix – A survey format organizing related questions with identical response options in a table layout.

Mean (Average) – The sum of responses divided by the number of responses.

Measurement – Defining and assigning meaning to key concepts or phenomena, enabling systematic observation, categorization, or quantification.

Measurement Protocol – Specific procedures for administering a measure consistently and reliably.

Median – The middle value in a distribution of responses.

Mediating Variable – Explains the relationship between an independent and dependent variable.

Member Checking – Participants review researcher interpretations for accuracy and intended meaning.

Memoing – Writing reflective notes during data collection to document insights, patterns, or methodological considerations.

Meso-level – Examining interactions between groups.

Meta-Analysis – A study synthesizing data from multiple other studies to conduct a combined analysis.

Method Triangulation – Using multiple data collection methods to study the same phenomenon.

Methods Section – Describes participant selection, measurement of variables, and data analysis procedures.

Micro-level – Examining the smallest levels of interaction, typically involving individuals.

Mixed Methods – Combines quantitative and qualitative methods within one study for comprehensive understanding.

Mode – The most frequently occurring response.

Moderating Variable – Alters the strength or direction of the relationship between independent and dependent variables.

Moderator – Guides and facilitates focus group discussions, ensuring topic focus and equal participation.

Multi-dimensional Concepts – Concepts comprised of multiple elements.

Multiple Treatment Design – A design involving administration of two or more treatments or treatment levels.

Multivariate Analysis – Analysis involving two or more variables to identify patterns and relationships.

Mutual Exclusivity – The requirement that a variable’s attributes be defined so that an individual cannot simultaneously identify with two different attributes.

 

N-

Natural Experiment – A study comparing groups based on naturally occurring differences rather than random assignment, taking advantage of real-world conditions.

Nested Design – A study design using a subsample of an original sample for further study.

No-Treatment Control Condition – A group not exposed to any experimental manipulation, serving as a neutral comparison.

Nominal – Categorical measurement level with exhaustive and mutually exclusive categories that cannot be mathematically ranked.

Nomothetic Causal Relationship – Broad causal claims requiring covariation, plausibility, temporality, and nonspuriousness.

Nomothetic Research – Research providing general explanations universally applicable to all individuals.

Non-nested Design – A study design with two or more separate samples collected for different mixed-method components.

Nonequivalent Comparison Group Design – Quasi-experimental design lacking random assignment, using pre-existing groups.

Nonhuman Subject – Objects or entities studied without direct interaction, typically with fewer regulatory requirements.

Nonprobability Sampling – Sampling method where selection probability is unknown.

Nonresponse Bias – Sampling bias occurring when nonparticipants differ significantly from participants, distorting findings.

Null Hypothesis – Default assumption in statistical testing that no relationship or effect exists between studied variables.

Nuremberg Code – Ethical principles guiding human subject research, emphasizing voluntary consent and participant safety.

 

O-

Objective Truth – A single, unbiased, universally applicable truth.

Observation – Qualitative research method involving direct study of behaviors in natural settings.

Observational Terms – Directly observable and measurable concepts or characteristics.

Observations/Cases – Individual data points representing studied entities in a dataset.

One-Group Pretest-Posttest Design – Pre-experimental design measuring one group before and after intervention without control.

One-Shot Case Study Design – Pre-experimental design where a single group receives treatment without pretest or comparison group.

Ontology – Assumptions about the nature of reality.

Open Coding – Identifying initial categories or themes line-by-line during qualitative data analysis.

Open-ended Question – Survey question allowing respondents to answer in their own words.

Operationalization – Defining precisely how a concept will be measured through specific procedures or tools.

Oral Presentation – Verbal presentation of research findings at conferences.

Order Effect – A potential issue in within-subjects designs where the order of conditions influences responses.

Ordinal – Categorical measurement level with rank-orderable, exhaustive, and mutually exclusive categories.

Outcomes – Observed changes resulting from a program or intervention.

Outcomes Assessment – Evaluation focused on the outcomes of a specific program.

Outputs – Tangible results from a program’s activities.

Overgeneralization – Making broad assumptions based on limited observations.

 

P-

P-value – A statistical measure indicating the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis (no real relationship or effect) is true.
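
One intuitive way to see what a p-value represents is a permutation test: reshuffle the data many times to simulate a world where the null hypothesis is true, then count how often a difference at least as large as the observed one appears. A sketch with invented scores (illustrative only):

    import random
    import statistics as stats

    treatment = [8, 9, 7, 10, 9, 8]   # hypothetical outcome scores
    control = [6, 7, 5, 8, 6, 7]
    observed = stats.mean(treatment) - stats.mean(control)

    pooled = treatment + control
    extreme = 0
    n_perm = 10_000
    for _ in range(n_perm):
        random.shuffle(pooled)        # reshuffling erases any real group difference
        diff = stats.mean(pooled[:len(treatment)]) - stats.mean(pooled[len(treatment):])
        if abs(diff) >= abs(observed):
            extreme += 1
    print("approximate p-value:", extreme / n_perm)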

Panel Survey – A longitudinal survey method where the same individuals are surveyed repeatedly over time to observe changes.

Paradigm – A framework or perspective for viewing and understanding the human experience.

Paraphrase – Restating source information or findings in one’s own words while properly citing the original source.

Participant Observation – A qualitative data collection method where researchers observe and engage with participants, commonly used in ethnography.

Peer Debriefing – Researchers discussing their work with colleagues to identify biases, refine interpretations, and ensure rigor.

Peer Review – A formal evaluation by experts assessing the accuracy, quality, and relevance of a study before publication.

Periodicity – The occurrence of patterns at regular intervals.

Persuasive Essay – Writing that presents an argument, supported by evidence and reasoning, to convince readers of a particular viewpoint.

Phenomenology – A qualitative approach exploring individuals’ lived experiences and their interpretations of the world.

Photo Elicitation (Photovoice) – Data collection where participants respond to images or photographs to express their thoughts or feelings.

Pie Chart – A visualization tool representing proportions of attributes as segments of a circular chart.

Pilot Testing – A preliminary trial of a measurement tool using a small sample to ensure reliability and quality before full study implementation.

Placebo – A simulated or inactive treatment designed to appear real but lacking therapeutic effects.

Placebo Effect – Improvement due to participants’ belief in receiving a real treatment despite receiving an inactive one.

Plagiarism – Taking credit for another’s work through copying or failing to properly cite sources.

Plausibility – The logical sense required to claim causation between events or behaviors.

Population of Interest – The specific group targeted by a researcher for study and conclusion drawing.

Positivism – A paradigm emphasizing objective, empirical study and deductive logic to discover unbiased truths.

Poster Presentation – Visual presentations using a poster format to summarize research study components.

Postmodernism – A paradigm emphasizing subjective perspectives, challenging the possibility of universal, objective truths.

Posttest-Only Control Group Design – An experimental design measuring outcomes only after intervention, without pretesting.

Posttest – Measurement conducted after an intervention.

Practical Articles – Articles describing practical methodologies or implementations.

Practice Effect – Performance improvement in subsequent tasks due to prior experience in earlier conditions.

Practice Wisdom – Knowledge gained through practical experience, guiding social work interventions.

Pre-Experimental Design – Preliminary design exploring intervention impacts, lacking random assignment and control groups, thus more susceptible to validity threats.

Predictive Validity – The ability of a measure to accurately forecast outcomes logically related to the measurement.

Pretest – A measurement taken prior to the intervention.

Primary Data – Data collected directly by the researcher for a specific purpose.

Primary Source – Published results of original research studies.

Probability Proportionate to Size (PPS) – A sampling technique accounting for different cluster sizes, ensuring each individual has an equal selection chance.

Probability Sampling – Sampling method where each individual has a known, non-zero selection chance, enabling generalization.

Probe – Request for additional information in qualitative research.

Process Assessment – Evaluation focusing on early stages of a program to verify its intended functioning.

Program – An intervention received by clients or participants.

Prolonged Engagement – Spending extended time with participants or settings to build trust, identify patterns, and reduce biases.

Psychometric Properties – Characteristics determining the reliability and validity of a quantitative measurement tool.

Purposive Sampling – Nonprobability sampling selecting participants based on characteristics relevant to the research question.

 

Q-

Qualitative Methods – Approaches analyzing words or media to understand meaning.

Quantitative Interview – Structured verbal survey method aiming to reduce interviewer effects.

Quantitative Methods – Approaches analyzing numerical data to describe and predict social phenomena.

Quasi-Experimental Design – Research design lacking random assignment but similar to a true experiment.

Query – Search terms used to find sources in a database.

Quota Sampling – Nonprobability sampling method ensuring proportional representation from identified subgroups.

 

R-

Random Assignment – Assigning participants randomly to experimental conditions to ensure comparable groups and control extraneous variables.
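
A minimal sketch of random assignment by shuffling a participant list and splitting it in half (hypothetical IDs; illustrative only):

    import random

    participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical participants
    random.shuffle(participants)                          # chance alone decides group membership

    half = len(participants) // 2
    experimental, control = participants[:half], participants[half:]
    print("Experimental:", experimental)
    print("Control:", control)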

Random Counterbalancing – Selecting a random order of conditions for each participant to manage large numbers of experimental conditions.

Random Error – Unpredictable fluctuations in measurements not consistently skewing results in one direction.

Random Number Generator – A computational tool creating impartial random selections for sampling.

Random Sampling – Probability-based sampling where participants have an equal chance of inclusion.

Random Selection – Using randomly generated numbers to determine sample recruitment from a sampling frame.

Ratio Level – Measurement level with mutually exclusive, rank-ordered attributes, equal intervals, and a true zero point.

Recruitment – The process of informing and inviting potential study participants through aligned sampling strategies.

Recursive Design Model – Mixed-methods design combining exploratory and explanatory sequential approaches.

Reductionism – Error drawing group conclusions from individual-level data, neglecting broader contextual factors.

References – List of sources cited, enabling readers to locate original research.

Reflexivity – Critical self-reflection on researcher’s role, biases, and influence throughout the research.

Reification – Mistakenly treating abstract concepts as tangible, universally existent entities.

Reliability – Measurement consistency under the same conditions, producing stable results over time or observers.

Repeated Cross-Sectional Survey – Longitudinal survey type collecting data at multiple points from different samples within the same population.

Replication – Repeating an experiment with identical methods to verify consistency of results.

Representative Sample – Sample mirroring the important characteristics of the population studied.

Reproducibility – Extent to which coding procedures yield identical results from different coders.

Research Methods – Organized, logical approaches to knowledge acquisition based on theory and observation.

Respect for Persons – Ethical principle emphasizing voluntary, informed consent and protecting autonomy.

Response Rate – Percentage of invited participants who complete surveys, influencing generalizability.

Results Section – Study findings presented through statistics, tables, or figures without interpretation.

Retrospective Surveys – Surveys administered at a single point in time that ask respondents to report on past events or changes over time.

Roundtable Presentation – Presentations aimed at stimulating interactive discussions.

 

S-

Sample – The subset of individuals selected from the population who participate in a study.

Sampling Bias – Error occurring when some population members are systematically more likely to be sampled, producing non-representative results.

Sampling Error – The statistical difference between sample results and actual population parameters.

Sampling Frame – The complete list of individuals from which the sample is drawn.

Scale – Composite measure accounting for varying item intensity within an index.

Scatterplot – Graph charting X and Y variable values as points.

Science – A systematic method of collecting and categorizing empirical facts or truths.

Scientific Misconduct – Intentional violation of research integrity, including data fabrication, falsification, or plagiarism.

Secondary Data – Data originally collected by others, used with permission for research purposes.

Secondary Sources – Sources interpreting, discussing, and summarizing original research.

Selection Bias – Bias introduced when researchers influence assignment into experimental or control groups.

Selection Interval (k) – The interval between elements selected in systematic sampling, typically calculated by dividing the population size by the desired sample size.

Self-administered Questionnaires – Surveys completed independently by participants either physically or electronically.

Self-Selection (Selection Effects) – Bias occurring when participants choose their own groups or conditions.

Semi-structured Interviews – Interviews using open-ended questions that vary in wording or sequence.

Seminal Articles – Classic works recognized for their significant contribution and high citation count.

Sensitivity – Measure’s ability to detect small changes or differences.

Sequence – Order of methodological approaches in mixed-methods research.

Sequential Explanatory – Mixed-methods design where quantitative data collection precedes qualitative data collection.

Signposting – Writing technique clearly guiding readers through a document using headers, topic sentences, and transitions.

Simple Random Sampling – Probability sampling giving every population member an equal chance of selection.

Single-Factor Multilevel Design – Experiment with one independent variable having multiple conditions.

Single-Factor Two-Level Design – Experiment with one independent variable consisting of exactly two conditions.

Single-subjects Design – Intensive research design focusing on one individual subject.

Snowball Sampling – Nonprobability sampling where participants recruit subsequent participants from their networks.

Social Constructionism – Paradigm viewing truth as subjective, socially contextualized, and constantly evolving.

Social Desirability Bias – Participants providing socially acceptable answers rather than truthful responses.

Solomon Four-Group Design – True experimental design using four groups to evaluate treatment and testing effects, involving combinations of pretests, treatments, and posttests.

Spurious Correlation – A correlation that appears to exist between two variables but is actually due to chance or a third variable rather than a real relationship.

Stability – Consistency of coding results across different time periods.

Stakeholders – Individuals or groups with vested interests in the research process or outcomes, consulted throughout research stages.

Standardized Measurement Protocols – Consistent, written procedures for administering measures to reduce bias and error.

Static Group Comparison Design – A pre-experimental design comparing outcomes between a non-randomized treatment group and a comparison group using posttest data only.

Statistical Significance – Indicates the likelihood an observed relationship or effect is not due to chance, suggesting meaningful associations between variables.

Strata – Subgroups within a population used in stratified sampling, homogeneous internally and distinct from each other.

Stratified Sampling – Probability sampling dividing the population into subgroups (strata) and sampling separately from each.
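
A short sketch of stratified sampling, drawing the same proportion from each hypothetical stratum (illustrative only):

    import random

    # Hypothetical sampling frame grouped into strata by class standing
    strata = {
        "freshman":  [f"FR{i}" for i in range(1, 41)],
        "sophomore": [f"SO{i}" for i in range(1, 31)],
        "junior":    [f"JR{i}" for i in range(1, 21)],
        "senior":    [f"SR{i}" for i in range(1, 11)],
    }

    # Draw 10% from every stratum so each subgroup is represented
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, max(1, len(members) // 10)))
    print(sample)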

Subgroups – Distinct segments within a larger population sharing common characteristics.

Subject Pool – Individuals who have agreed to be contacted for potential participation in research.

Subjective Truth – Truth contextualized within social and cultural boundaries, varying by perspective.

Survey – Data collection method involving questions delivered online, by mail, in person, or by phone.

Synthesize – Combining ideas or identifying themes across sources, attributing original sources appropriately.

Systematic Error – Directional measurement error consistently resulting in inaccurate results.

Systematic Sampling – Probability sampling method selecting every kth element from a list following a random start.
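
A minimal sketch of systematic sampling: compute the selection interval k, pick a random start within the first interval, then take every kth element (hypothetical frame; illustrative only):

    import random

    frame = list(range(1, 1001))   # hypothetical sampling frame of 1,000 IDs
    n = 100                        # desired sample size
    k = len(frame) // n            # selection interval: every 10th element

    start = random.randint(0, k - 1)   # random starting point within the first interval
    sample = frame[start::k]           # every kth element thereafter
    print(len(sample), sample[:5])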

 

T-

Table – Condensed summary presenting key findings succinctly.

Target Population – Specific group addressed by a study, defined by shared characteristics or experiences relevant to the research.

Temporality – The requirement that causes identified by researchers precede their effects.

Tertiary Sources – Resources compiling or summarizing primary and secondary sources, useful for introductory understanding but insufficient for detailed scholarly research.

Test-Retest Reliability – Measurement tool consistency over repeated administrations under similar conditions.

Testing Effects – Changes in participant scores due to prior exposure to the measurement tool.

Theoretical Articles – Articles exploring theories or conceptual frameworks without empirical research, structured to support conceptual arguments.

Theoretical Triangulation – Employing multiple theoretical frameworks to interpret data and enhance study robustness.

Theory – A systematic set of interrelated statements explaining aspects of social life, detailing the “how” and “why” behind observed patterns.

Theory Building – Creating new theories using inductive reasoning based on empirical observations.

Theory Testing – Mathematically testing hypotheses derived from existing theories.

Thick Description – A detailed account of study context, participants, and circumstances, allowing readers to judge the relevance of findings to their own settings.

Third-variable Problem – Occurs when an observed relationship between two variables is influenced by an unmeasured third variable, rendering the relationship spurious.

Time Series Design – A quasi-experimental design using multiple observations before and after an intervention to identify patterns and lasting impacts.

Transcript – A complete, verbatim written copy of recorded interviews or focus groups, noting each spoken word and speaker.

Transferability – The applicability of research findings to other settings or groups, assessed by readers through thick descriptions provided by the researcher.

Transparency – Open disclosure regarding the use, extent, and methods of employing AI tools in research, ensuring ethical integrity and replicability.

Treatment Stage – Period during which an intervention is administered.

Trend – Visually observable pattern in data, particularly in single-subjects or small-sample designs, not always statistically testable.

Triangulation – Method enhancing credibility by verifying data through multiple sources, methods, researchers, or theoretical lenses.

True Experiment – Study design with independent and dependent variables, pretesting and posttesting, random assignment, and both experimental and control groups.

Trustworthiness – Qualitative research criterion assessing truth value, applicability, consistency, and neutrality to ensure accurate representation of participant experiences.

Typology – Measure categorizing concepts by thematic similarity.

 

U-

Unintentional Exclusion – Inadvertent omission of individuals or groups from a study due to recruitment strategies or study design, potentially causing sampling bias.

Unit of Analysis – Primary entity about which the research seeks to draw conclusions (e.g., individuals, families, communities).

Unit of Observation – Specific elements or items observed or measured to collect data, which may differ from the unit of analysis.

Univariate Analysis – Analysis involving a single variable.

Unobtrusive Research – Data collection methods that do not interfere with the subjects under study.

 

V-

Validation (Double Entry) – Process of verifying data accuracy before analysis by entering the data twice and checking the two versions against each other.

Validity – Extent to which a measurement accurately captures the intended concept, beyond mere consistency.

Variable – Numeric representation of a concept or characteristic varying across research participants.

Variable Name – Concise label assigned to dataset variables, typically one word for software compatibility.

Vulnerable Populations – Groups requiring additional protections due to increased coercion risk or limited informed consent capacity (e.g., pregnant women, prisoners, children, impaired individuals, employees, students).

 

W-

Wait-List Control Condition – A control group in which participants receive the intervention after the study concludes, allowing researchers to compare active treatment participants to those awaiting treatment.

Weighting – Statistical method adjusting the influence of individual responses based on demographic factors (e.g., sex, race, education) to correct for sampling biases and accurately represent the population.
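
A small sketch of weighting: responses from an underrepresented group receive weights above 1 so the weighted average better reflects the population (invented scores and weights; illustrative only):

    # (score, weight) pairs for five hypothetical respondents
    responses = [(4, 0.8), (5, 0.8), (3, 1.5), (2, 1.5), (4, 1.0)]

    weighted_mean = sum(s * w for s, w in responses) / sum(w for _, w in responses)
    unweighted_mean = sum(s for s, _ in responses) / len(responses)
    print(round(unweighted_mean, 2), round(weighted_mean, 2))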

Within-Subjects Experiment – Experimental design in which participants experience all conditions of the independent variable, enabling direct within-individual comparisons.

License

Understanding Research Design in the Social Science Copyright © by Utah Valley University is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.
