
Joint Medical Program Library Resources: Evaluate What You Find

Critically Evaluating What You Find

Evaluation is about determining the quality, value, or importance of something in order to take action. It is underpinned by the systematic collection of information and evidence.

What is evidence? Things to keep in mind:

  • All research is (potentially) "evidence" and there are no "perfect" studies.
  • "Bits" of data (eg, a research study) become evidence when examined in context. Beware of the "this study changes everything" phenomenon.
  • Is there an agenda (bias)?
    It is doubtful that any study of humans is totally free of bias, whether in the study design, in the authors' pre-existing beliefs, or in the source of the research funds. What matters is how bias was controlled and how significant it was in any particular study.
  • Is qualitative research "evidence"?
    If your goal is to understand beliefs and meanings in the group with whom you are working, then qualitative studies can be important.
  • Read Criteria for evaluating evidence on public health interventions (J Epidemiol Community Health. 2002 Feb;56(2):119-27).

Who pays for science?
Most scientific research is funded by government, companies doing research and development, and non-profit entities/educational institutions. Because science is attempting to get at some "truth," the source of research funding shouldn't have a significant effect on the outcome of scientific research, right?

Read Industry sponsorship and research outcome (Cochrane Database of Systematic Reviews 2017, Issue 2. Art. No.: MR000033).

Peer review
Peer review refers to a process whereby scholarly work (ie, an article) is reviewed and critiqued by experts to ensure it meets some standards of acceptance before it is published.
Does this process make for better science?
Read Editorial peer review for improving the quality of reports of biomedical studies (Cochrane Database of Systematic Reviews 2007, Issue 2. Art. No.: MR000016).

What gets researched? What gets published? ("Publication bias"); What (or Who) gets funded? Who gets cited?
Studies that report interventions that had no effect are less likely to get published. What does this mean in terms of the state of knowledge on a topic?
Read Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias (PLoS One. 2008 Aug 28;3(8):e3081).
There is also evidence of disparities in grant funding for research:
Read Race, Ethnicity, and NIH Research Awards (Science. 2011 Aug 19;333(6045):1015-9).
as well as disparities in recruitment for trials:
Read Racial Differences in Eligibility and Enrollment in a Smoking Cessation Clinical Trial (Health Psychol. 2011 Jan; 30(1): 40–48).
Not to mention disparities in whose research gets cited:
Read Gender Disparity in Citations in High-Impact Journal Articles (JAMA Network Open 2021;4:e2114509).
Read Non-White scientists appear on fewer editorial boards, spend more time under review, and receive fewer citations (PNAS 2023, 120(13):e2215324120).
Sometimes stuff gets researched because of "scientific inertia":
Read Adequate and anticipatory research on the potential hazards of emerging technologies: a case of myopia and inertia? (J Epidemiol Community Health 2014;68:890-895).
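The publication-bias point above can be illustrated with a small simulation (the numbers here are illustrative assumptions, not estimates from any real literature): if journals publish only statistically significant positive results, the published record overstates the true effect.

```python
import random
from statistics import mean

random.seed(0)  # reproducible illustration

TRUE_EFFECT = 0.2   # the real (modest) effect size -- an assumed value
STD_ERROR = 0.15    # assumed per-study standard error
CRITICAL = 1.96     # two-sided 5% significance threshold

# Each "study" yields an effect estimate = truth + sampling noise.
estimates = [random.gauss(TRUE_EFFECT, STD_ERROR) for _ in range(1000)]

# Journals "publish" only the statistically significant positive results.
published = [e for e in estimates if e / STD_ERROR > CRITICAL]

print(f"mean of all studies:       {mean(estimates):.3f}")
print(f"mean of published studies: {mean(published):.3f}")
```

Because only the estimates that cleared the significance threshold survive, the mean of the "published" studies is pulled well above the true effect, even though every individual study was conducted honestly.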

Is it race? or is it racism?
Race is a sociological construct, yet most articles describing racial disparities ascribe them to race, not to racism.
Read NIH must confront the use of race in science (Science 2020;369(6509):1313-1314).
See Critically Appraising for Antiracism: Identifying racial bias in published research: A guide and tool to help evaluate research literature for racism.

Oops! I made a mistake (or ... was it cheating?)
Occasionally, researchers make mistakes, and sometimes those mistakes affect the conclusions of a published article. Articles may be retracted if the mistake is significant. This is a formal process where the author or journal publishes a statement outlining the error. Sometimes, however, retraction is the result of fraud, plagiarism, or other bad acts.
Read Retraction Watch.

Opinion or fact?
Do the conclusions of the article follow the evidence that's presented? Are opinions or notions posited as facts?
Search "As is well known..." in Google Scholar.
Read A Propaganda Index for Reviewing Problem Framing in Articles and Manuscripts: An Exploratory Study (PLoS One. 2011;6(5):e19516).

CV boosting: Does this study add to the body of knowledge, or is it just something the author is doing to add to an ever-growing list of publications?
(In)significance of a single study: Science is incremental. Beware of any study that's proclaimed to be a "breakthrough."
Read Evidence-Based Public Health (RC Brownson. New York; Oxford: Oxford University Press, 2011).

What's the question?
Compare: "Our intervention worked toward fixing Problem X" vs. "The best intervention(s) for fixing Problem X is/are:..."

What to consider when looking at survey or estimated data:

    Adapted from information on the UCSF Family Health Outcomes Project web site
  • Look at sample sizes and survey response rates - representative of your population? Enough responses to be valid?
  • Who was surveyed? Is the sample representative of the population being compared to? Does it include the group you are interested in? Is the sample "WEIRD"?
  • Were the survey respondents from heterogeneous groups? Do the survey questions have a similar meaning to members of different groups?
  • How was the survey conducted? By telephone? Most US households only have cell phones (PDF). Was the sample randomly selected or a targeted group?
  • What assumptions and methods were used for extrapolating the data?
  • Look at definitions of characteristics - Does this match your own definitions?
  • When was the data collected?
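One quick arithmetic check on sample size is the margin of error for an estimated proportion. A minimal sketch, assuming simple random sampling and a 95% confidence level (complex survey designs need design-effect corrections, so treat this as a best case):

```python
import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a proportion under simple random sampling.

    proportion=0.5 gives the worst case (widest interval);
    z=1.96 corresponds to a 95% confidence level.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A survey of 1,000 respondents carries roughly a +/-3 percentage
# point margin of error at 95% confidence.
print(round(margin_of_error(1000), 3))  # prints 0.031
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the sample merely halves the margin of error, which is why very precise claims from modest samples deserve scrutiny.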

Reliability and validity:
(Adapted from Chapter 3, Conducting research literature reviews: from the Internet to paper, by Arlene Fink; Sage, 2010).
Reliable data collection is relatively free from "measurement error":

  • Is the survey written at a reading level too high for the people completing it?
  • Is the device used to measure, say, elapsed time in an experiment accurate?

Validity refers to how well a measure assesses what it claims to measure:

  • If the survey is supposed to measure "quality of life," how is that concept defined?
  • How accurately can this animal study of drug metabolism be extrapolated to humans?
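Reliability of a multi-item survey scale is often summarized with Cronbach's alpha, a common (if imperfect) internal-consistency statistic. A minimal sketch of the standard formula:

```python
from statistics import pvariance

def cronbach_alpha(item_scores: list[list[float]]) -> float:
    """Cronbach's alpha: internal-consistency reliability of a multi-item scale.

    item_scores holds one inner list per item, with one score per
    respondent (respondents in the same order for every item).
    """
    k = len(item_scores)
    item_var_sum = sum(pvariance(item) for item in item_scores)
    total_scores = [sum(scores) for scores in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(total_scores))

# Three items that track each other closely -> alpha near 1 (consistent scale).
items = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6]]
print(round(cronbach_alpha(items), 2))  # prints 1.0
```

A high alpha means the items vary together, which supports reliability; it says nothing about validity, since a scale can be perfectly consistent while measuring the wrong concept.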

Some Relevant Books

Always Remember . . .

image of Magritte painting