PH 210J: MCAH Journal Club: Evaluate What You Find

Critically Evaluating What You Find

Evaluation is about determining the quality, value, or importance of something in order to take action. It is underpinned by the systematic collection of information and evidence.

What is evidence? Things to keep in mind:

  • All research is (potentially) "evidence" and there are no "perfect" studies.
  • Is there an agenda (bias)?
    It is doubtful that any study of humans is entirely free of bias, whether in the study design, the authors' pre-existing beliefs, or the source of the research funds. What matters is how methodological bias was controlled and how much any remaining bias affects the conclusions of a particular study.
  • Is qualitative research "evidence"?
    If your goal is to understand beliefs and meanings in the group with whom you are working, then qualitative studies can be important.
    Read: Criteria for evaluating evidence on public health interventions (J Epidemiol Community Health. 2002 Feb;56(2):119-27).

Who pays for science?
Most scientific research is funded by government, companies doing research and development, and non-profit entities. Because science is attempting to get at some "truth," the source of research funding shouldn't have a significant effect on the outcome of scientific research, right?

Is it race? or is it racism?
Race is a sociological construct, yet most articles describing racial disparities ascribe them to race, not to racism.

Peer review
Peer review refers to a process whereby scholarly work (i.e., an article) is reviewed and critiqued by experts to ensure it meets certain standards of acceptance before it is published.
Does this process make for better science?

What gets researched? What gets published ("publication bias")? What (or who) gets funded?
Studies that report interventions that had no effect are less likely to get published. What does this mean in terms of the state of knowledge on a topic?

There is also evidence of disparities in grant funding for research, as well as disparities in recruitment for trials. And sometimes topics get researched simply because of "scientific inertia."

Oops! I made a mistake (or... was it cheating?)
Occasionally, researchers make mistakes, and sometimes those mistakes affect the conclusions of a published article. Articles may be retracted if the mistake is significant. This is a formal process where the author or journal publishes a statement outlining the error. Sometimes, however, retraction is the result of fraud, plagiarism, or other bad acts.

Opinion or fact?
Do the conclusions of the article follow the evidence that's presented? Are opinions or notions posited as facts?

CV boosting: Does this study add to the body of knowledge, or is it just something the author did to pad their list of publications?
(In)significance of a single study: Science is incremental. Beware of any study that's proclaimed to be a "breakthrough."

What to Consider When Looking at Survey or Estimated Data

  • Look at sample sizes and survey response rates
    • Representative of your population?
    • Enough responses to be valid? (see the sketch after this list)
  • Who was surveyed?
    • Representative of the population being compared to?
    • Include group you are interested in?
    • Were the survey respondents from heterogeneous groups?
    • Do the survey questions mean the same things to members of different groups?
  • How was the survey conducted?
  • What assumptions and methods were used for extrapolating the data?
    • Is there any bias?
    • Is the method appropriate?
  • Look at definitions of characteristics:
    • Does this match your own definitions?
  • When was the data collected?
    • How old is too old?
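
As a sketch of the sample-size question, the hypothetical numbers below show two quick checks: the survey's response rate and a rough 95% margin of error for an observed proportion. The counts, the proportion, and the simple-random-sample formula are all assumptions made for illustration; real surveys usually require design effects and weighting.

```python
# Two quick checks on hypothetical survey numbers: response rate and a
# rough 95% margin of error for an observed proportion.
import math

invited = 2_000    # surveys sent out (hypothetical)
completed = 640    # surveys returned (hypothetical)
p_hat = 0.55       # observed proportion answering "yes" to some item
z = 1.96           # z value for a 95% confidence level

response_rate = completed / invited

# Margin of error for a simple random sample; this ignores design effects,
# weighting, and non-response bias, all of which matter in real surveys.
margin_of_error = z * math.sqrt(p_hat * (1 - p_hat) / completed)

print(f"Response rate:   {response_rate:.1%}")        # 32.0%
print(f"Margin of error: +/-{margin_of_error:.1%}")   # about +/-3.9%
```

Note that a tight margin of error does not rescue a low response rate: if the 68% who did not respond differ systematically from those who did, the estimate can be biased no matter how narrow the confidence interval looks.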

How is race/ethnicity reported in the studies you read?

  • Who identified race/ethnicity of respondents/participants?
  • Does the language in the article impart bias?
  • Is race acknowledged as a social construct?
  • Are differences reported as associated with "race" or "racism"?
  • Are participants' identities disaggregated?

Reliability and validity

Reliable data collection is relatively free from "measurement error" (see the simulation sketch below).

  • Is the survey written at a reading level too high for the people completing it?
  • Is the device used to measure elapsed time in an experiment accurate?

Validity refers to how well a measure assesses what it claims to measure.

  • If the survey is supposed to measure "quality of life," how is that concept defined?
  • How accurately can this animal study of drug metabolism be extrapolated to humans?

Adapted from Chapter 3 of Conducting Research Literature Reviews: From the Internet to Paper, by Arlene Fink, 2010.
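
To make the reliability idea concrete, here is a toy simulation (an illustration, not from Fink's chapter): each hypothetical subject has a "true" score, every measurement adds random error, and the test-retest correlation shrinks as the error grows.

```python
# Toy simulation: test-retest reliability falls as measurement error grows.
# All numbers are invented for illustration.
import math
import random

random.seed(1)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 500 hypothetical subjects, each with a "true" score.
true_scores = [random.gauss(50, 10) for _ in range(500)]

for error_sd in (1, 5, 15):  # small, moderate, large measurement error
    test = [t + random.gauss(0, error_sd) for t in true_scores]
    retest = [t + random.gauss(0, error_sd) for t in true_scores]
    # In theory r is about 100 / (100 + error_sd**2): ~0.99, 0.80, 0.31.
    print(f"error SD {error_sd:>2}: test-retest r = {pearson_r(test, retest):.2f}")
```

An inaccurate timing device, as in the reliability questions above, acts like a larger error_sd here: the data it produces agree less with themselves on repetition.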

Journal Impact Measures

The journal impact factor measures how often the articles in a journal are cited, averaged over a two-year window. It is used as a proxy measure of the quality of a journal. If the impact factor of a journal is 5, then on average, articles in that journal receive about five citations within the first two years after publication.
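
As a sketch, the standard two-year calculation looks like this; the year and the counts are hypothetical, chosen to match the example above:

\[
\mathrm{JIF}_{2024} = \frac{\text{citations received in 2024 by items published in 2022 and 2023}}{\text{citable items published in 2022 and 2023}} = \frac{1100}{220} = 5.0
\]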

In any discussion of journal, article, or author metrics, it is imperative to remember Goodhart's law:
"When a measure becomes a target, it ceases to be a good measure."

Journal Citation Reports: Find impact factors. (Note: journal websites generally include the impact factor.)

Scopus CiteScore metrics (click "Sources"): an alternative to the JIF.

You may wish to read this brief article on the Journal Impact Factor:
Smart P. Is the impact factor the only game in town? Ann R Coll Surg Engl. 2015;97(6):405-8.

PLoS, a top-tier open access suite of journals, says this: "PLOS does not consider Impact Factor to be a reliable or useful metric to assess the performance of individual articles. PLOS supports DORA – the San Francisco Declaration on Research Assessment – and does not promote our journal Impact Factors."

In addition, citation counts themselves are not necessarily a good metric of importance; see How citation distortions create unfounded authority: analysis of a citation network. Greenberg SA. BMJ. 2009 Jul 20;339:b2680. doi: 10.1136/bmj.b2680.

Finally, one could argue that journal impact factor manipulation is itself a predatory journal trait.