Appendix C3: Overall Critical Appraisal of Published Research Articles

Preliminary Overview:

  1. How relevant is the research problem to the actual practice of nursing?
  2. Was the study quantitative or qualitative?
  3. What was the underlying purpose (or purposes) of the study: Therapy/Intervention, Diagnosis/Assessment, Prognosis, Etiology/Harm, Description, or Meaning?
  4. What might be some clinical implications of this research? To what types of people and settings is the research most relevant? If the findings were accurate, how might I use the results of this study?
  5. What was the study all about? What were the main phenomena, concepts, or constructs under investigation?
  6. If the study was quantitative, what were the independent and dependent variables?
  7. Did the researcher examine relationships or patterns of association among variables or concepts? Did the report imply the possibility of a causal relationship?
  8. Were key concepts defined, both conceptually and operationally?
  9. What type of study does it appear to be, in terms of types described in this chapter—experimental or nonexperimental/observational? Grounded theory, phenomenological, or ethnographic?
  10. Did the report provide information to suggest how long the study took to complete?


Ethical Components:

  1. Was the study approved and monitored by an Institutional Review Board, Research Ethics Board, or other similar ethics review committee?
  2. Were study participants subjected to any physical harm, discomfort, or psychological distress? Did the researchers take appropriate steps to remove or prevent harm?
  3. Did the benefits to participants outweigh any potential risks or actual discomfort they experienced? Did the benefits to society outweigh the costs to participants?
  4. Was any type of coercion or undue influence used to recruit participants? Did they have the right to refuse to participate or to withdraw without penalty?
  5. Were participants deceived in any way? Were they fully aware of participating in a study, and did they understand the purpose and nature of the research?
  6. Were appropriate informed consent procedures used with participants? If not, was there a justifiable rationale?
  7. Were adequate steps taken to safeguard participants’ privacy? How was confidentiality maintained? Was a Certificate of Confidentiality obtained—and, if not, should one have been obtained?
  8. Were vulnerable groups involved in the research? If yes, were special precautions instituted because of their vulnerable status?
  9. Were any groups, such as women (or men) or minorities, omitted from the inquiry without a justifiable rationale?


Research Problems, Research Questions, and Hypotheses:

  1. What was the research problem? Was the problem statement easy to locate and was it clearly stated? Did the problem statement build a coherent and persuasive argument for the new study?
  2. Does the problem have significance for nursing?
  3. Was there a good fit between the research problem and the paradigm (and tradition) within which the research was conducted?
  4. Did the report formally present a statement of purpose, research question, and/or hypotheses? Was this information communicated clearly and concisely, and was it placed in a logical and useful location?
  5. Were purpose statements or research questions worded appropriately (e.g., were key concepts/variables identified and the population specified)?
  6. If there were no formal hypotheses, was their absence justified? Were statistical tests used in analyzing the data despite the absence of stated hypotheses?
  7. Were hypotheses (if any) properly worded—did they state a predicted relationship between two or more variables? Were they presented as research or as null hypotheses?


Literature Review (the “Introductory” portion of an article):

  1. Did the review seem thorough and up to date? Did it include major studies on the topic? Did it include recent research?
  2. Did the review rely mainly on research reports, using primary sources?
  3. Did the review critically appraise and compare key studies? Did it identify important gaps in the literature?
  4. Was the review well organized? Was the development of ideas clear?
  5. Did the review use appropriate language, suggesting the tentativeness of prior findings? Was the review objective?
  6. If the review was in the introduction for a new study, did the review support the need for the study?
  7. If the review was designed to summarize evidence for clinical practice, did it draw appropriate conclusions about practice implications?


Theoretical/Conceptual Frameworks:

  1. Did the report describe an explicit theoretical or conceptual framework for the study? If not, does the absence of a framework detract from the study’s conceptual integration?
  2. Did the report adequately describe the major features of the theory or model so that readers could understand the conceptual basis of the study?
  3. Is the theory or model appropriate for the research problem? Does the purported link between the problem and the framework seem contrived?
  4. Was the theory or model used for generating hypotheses, or was it used as an organizational or interpretive framework? Did the hypotheses (if any) naturally flow from the framework?
  5. Were concepts defined in a way that is consistent with the theory? If there was an intervention, were intervention components consistent with the theory?
  6. Did the framework guide the study methods? For example, was the appropriate research tradition used if the study was qualitative? If quantitative, do the operational definitions correspond to the conceptual definitions?
  7. Did the researcher tie the study findings back to the framework at the end of the report? Were the findings interpreted within the context of the framework?


Quantitative Research Articles:

  1. Was the design experimental, quasi-experimental, or nonexperimental? What specific design was used? Was this a cause-probing study? Given the type of question (Therapy, Prognosis, etc.), was the most rigorous possible design used?
  2. What type of comparison was called for in the research design? Was the comparison strategy effective in illuminating key relationships?
  3. If the study involved an intervention, were the intervention and control conditions adequately described? Was blinding used, and if so, who was blinded? If not, is there a good rationale for failure to use blinding?
  4. If the study was nonexperimental, why did the researcher opt not to intervene? If the study was cause-probing, which criteria for inferring causality were potentially compromised? Was a retrospective or prospective design used, and was such a design appropriate?
  5. Was the study longitudinal or cross-sectional? Was the number and timing of data collection points appropriate?
  6. What did the researcher do to control confounding participant characteristics, and were the procedures effective? What are the threats to the study’s internal validity? Did the design enable the researcher to draw causal inferences about the relationship between the independent variable and the outcome?
  7. What are the major limitations of the design used? Were these limitations acknowledged by the researcher and considered in interpreting results? What can be said about the study’s external validity?


Quantitative Sampling Plans:

  1. Was the population identified? Were eligibility criteria specified?
  2. What type of sampling design was used? Was the sampling plan one that could be expected to yield a representative sample?
  3. How many participants were in the sample? Was the sample size affected by high rates of refusals or attrition? Was the sample size large enough to support statistical conclusion validity? Was the sample size justified on the basis of a power analysis or other rationale?
  4. Were key characteristics of the sample described (e.g., mean age, percentage female)?
  5. To whom can the study results reasonably be generalized?


Quantitative Data Collection:

  1. Did the researchers use the best method of capturing study phenomena (i.e., self-reports, observation, biomarkers)?
  2. If self-report methods were used, did the researchers make good decisions about the specific methods used to solicit information (e.g., in-person interviews, Internet questionnaires, and so on)? Were composite scales used? If not, should they have been?
  3. If observational methods were used, did the report adequately describe what the observations entailed and how observations were sampled? Were risks of observational bias addressed? Were biomarkers used in the study, and was this appropriate?
  4. Did the report provide adequate information about data collection procedures (e.g., the training of the data collectors)?
  5. Did the report offer evidence of the reliability of measures? Did the evidence come from the research sample itself, or was it based on other studies? If reliability was reported, which estimation method was used? Was the reliability sufficiently high?
  6. Did the report offer evidence of the validity of the measures? If validity information was reported, which validity approach was used?
  7. If there was no reliability or validity information, what conclusion can you reach about the quality of the data in the study?


Qualitative Research Articles:

  1. Was the research tradition for the qualitative study identified? If none was identified, can one be inferred?
  2. Is the research question congruent with a specific research tradition? Are the data sources and research methods congruent with the research tradition?
  3. How well was the research design described? Are design decisions explained and justified? Does it appear that the design emerged during data collection, allowing researchers to capitalize on early information?
  4. Did the design lend itself to a thorough, in-depth examination of the focal phenomenon? Was there evidence of reflexivity? What design elements might have strengthened the study (e.g., a longitudinal perspective rather than a cross-sectional one)?
  5. Was the study undertaken with an ideological perspective? If so, is there evidence that ideological goals were achieved (e.g., was there full collaboration between researchers and participants, and did the research have the power to be transformative)?


Qualitative Sampling Plans:

  1. Was the setting appropriate for addressing the research question, and was it adequately described?
  2. What type of sampling strategy was used?
  3. Were the eligibility criteria for the study specified? How were participants recruited into the study?
  4. Given the information needs of the study—and, if applicable, its qualitative tradition—was the sampling approach effective?
  5. Was the sample size adequate and appropriate? Did the researcher indicate that saturation had been achieved? Do the findings suggest a richly textured and comprehensive set of data without any apparent “holes” or thin areas?
  6. Were key characteristics of the sample described (e.g., age, gender)? Was a rich description of participants and context provided, allowing for an assessment of the transferability of the findings?


Qualitative Data Collection:

  1. Given the research question and the characteristics of study participants, did the researcher use the best method of capturing study phenomena (i.e., self-reports, observation)? Should supplementary methods have been used to enrich the data available for analysis?
  2. If self-report methods were used, did the researcher make good decisions about the specific method used to solicit information (e.g., unstructured interviews, focus group interviews, and so on)?
  3. If a topic guide was used, did the report present examples of specific questions? Did the wording of questions encourage rich responses?
  4. Were interviews recorded and transcribed? If interviews were not recorded, what steps were taken to ensure data accuracy?
  5. If observational methods were used, did the report adequately describe what the observations entailed? What did the researcher actually observe, in what types of setting did the observations occur, and how often and over how long a period were observations made?
  6. What role did the researcher assume in terms of being an observer and a participant? Was this role appropriate?
  7. How were observational data recorded? Did the recording method maximize data quality?

Critical appraisal also includes examining the study’s statistical analyses. However, because this course does not cover statistical analysis in depth, you will need to look at overall statistical significance and clinical significance.


Interpretation of Findings, Implications, and Clinical Significance:

  1. Were all the important results discussed?
  2. Did the researchers discuss any study limitations and their possible effects on the credibility of the findings? In discussing limitations, were key threats to the study’s validity and possible biases reviewed? Did the interpretations take limitations into account?
  3. What types of evidence were offered in support of the interpretation, and was that evidence persuasive? Were results interpreted in light of findings from other studies?
  4. Did the researchers make any unjustifiable causal inferences? Were alternative explanations for the findings considered? Were the rationales for rejecting these alternatives convincing?
  5. Did the interpretation consider the precision of the results and/or the magnitude of effects?
  6. Did the researchers draw any unwarranted conclusions about the generalizability of the results?
  7. Did the researchers discuss the study’s implications for clinical practice or future nursing research? Did they make specific recommendations?
  8. If yes, are the stated implications appropriate given the study’s limitations, the magnitude of the effects, and evidence from other studies? Are there important implications that the report neglected to include?
  9. Did the researchers mention or assess clinical significance? Did they make a distinction between statistical and clinical significance?
  10. If clinical significance was examined, was it assessed in terms of group-level information (e.g., effect sizes) or individual-level results? How was clinical significance operationalized?

