Optometry Residents: Evaluating Resources

Critically Evaluating What You Find

Evaluation is about determining the quality, value, or importance of something in order to take action. It is underpinned by the systematic collection of information and evidence.

What is evidence? Things to keep in mind:

  • All research is (potentially) "evidence" and there are no "perfect" studies.
  • Is there an agenda (bias)?
    Any study of humans is unlikely to be free of bias, whether in the study design, in the authors' pre-existing beliefs, or in the source of the research funds. What matters is how bias was controlled in the methodology and how significant any remaining bias is in a particular study.
  • Is qualitative research "evidence"?
    If your goal is to understand beliefs and meanings in the group with whom you are working, then qualitative studies can be important.
    Read Criteria for evaluating evidence on public health interventions (J Epidemiol Community Health. 2002 Feb;56(2):119-27).

Things to consider:

  • The question being addressed: What kind of research gets funded?
  • Publication bias: Research that shows no effect tends not to get published.
  • Conflict of interest, author affiliation, source(s) of funding: Does the researcher (or the funder) have a vested interest in the outcome? Many authors do not disclose industry payments.
  • Documentation and assumptions: Are all stated "facts" referenced?
  • Significance of a single study: Science is an incremental process; one study rarely "changes everything."

Who pays for science?
Most scientific research is funded by government, companies doing research and development, and non-profit entities. Because science is attempting to get at some "truth," the source of research funding shouldn't have a significant effect on the outcome of scientific research, right?

Read Industry sponsorship and research outcome (Cochrane Database Syst Rev. 2017 Feb 16;2:MR000033).
Read Food Politics, a blog by Marion Nestle that often addresses issues of industry-sponsored research.
This article discusses the "manufactured uncertainty" created by industry groups that sponsor research and publishing on chemicals.

Is it race? or is it racism?
Race is a sociological construct, yet most articles describing racial disparities ascribe them to race, not to racism.
Read NIH must confront the use of race in science (Science 2020;369(6509):1313-1314).
See Critically Appraising for Antiracism: Identifying racial bias in published research: A guide and tool to help evaluate research literature for racism.

Peer review
Peer review refers to the process whereby scholarly work (i.e., an article) is reviewed and critiqued by experts to ensure it meets accepted standards before it is published.
Does this process make for better science?
Read Editorial peer review for improving the quality of reports of biomedical studies (Cochrane Database Syst Rev. 2007 Apr 18;(2):MR000016).

Reliability and validity
Reliable data collection is relatively free from "measurement error":
  • Is the survey written at a reading level too high for the people completing it?
  • If I measure something today, then measure it again tomorrow using the same scale, will it vary? Why?
Validity refers to how well a measure assesses what it claims to measure:
  • If the survey is supposed to measure "quality of life," how is that concept defined? Is it measurable?
(Adapted from Chapter 3, Conducting research literature reviews: from the Internet to paper, by Arlene Fink; Sage.)
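
One standard way to formalize "measurement error" is the classical test theory model (a sketch, not from Fink's chapter; it assumes the error E is uncorrelated with the true score T):

\[ X = T + E, \qquad \text{reliability} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(X)} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(T) + \operatorname{Var}(E)} \]

Here X is the observed score (what the survey actually records) and T is the true value being measured. A perfectly reliable measure has Var(E) = 0, giving a reliability of 1; the day-to-day variation in the example above is Var(E) showing up.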

What to consider when looking at survey or estimated data:

  • Look at sample sizes and survey response rates - Are they representative of your population? Were there enough responses to be valid? (See the sketch after this list.)
  • Who was surveyed? - Is the sample representative of the population being compared to? Does it include the group you are interested in? Is the sample "WEIRD"?
  • Were the survey respondents from heterogeneous groups? Do the survey questions have a similar meaning to members of different groups?
  • How was the survey conducted? Via telephone? - Many people only have cell phones. Was it a random selection or a targeted group?
  • What assumptions and methods were used for extrapolating the data?
  • Look at definitions of characteristics - Do these match your own definitions?
  • When was the data collected?
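
A rough guide to "enough responses" (standard survey arithmetic, not from this guide; it assumes simple random sampling, with a proportion p estimated from n completed responses):

\[ \text{response rate} = \frac{\text{completed surveys}}{\text{eligible people sampled}}, \qquad \text{margin of error (95\%)} \approx 1.96\,\sqrt{\frac{p(1-p)}{n}} \]

For example, n = 1,000 completions with p = 0.5 gives about ±3.1 percentage points. Note that the margin of error captures only random sampling variation; it says nothing about bias from who chose not to respond.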

Journal Impact Measures

The journal impact factor (JIF) is a citation average: the number of citations received in a given year by the items a journal published in the two preceding years, divided by the number of citable items it published in those years. It is used as a proxy measure of the quality of a journal. If the impact factor of a journal is 5, then on average, articles in that journal receive about five citations within the first two years after publication.
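
As a sketch of the arithmetic (the standard two-year calculation; the year and counts below are illustrative, not from this guide):

\[ \mathrm{JIF}_{2023} = \frac{\text{citations received in 2023 by items published in 2021-2022}}{\text{citable items published in 2021-2022}} \]

So a journal whose 100 citable items from 2021-2022 drew 500 citations in 2023 would have a 2023 impact factor of 500/100 = 5.0.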

In any discussion of journal, article, or author metrics, it is imperative to remember Goodhart's law:
"When a measure becomes a target, it ceases to be a good measure."

Journal Citation Reports: Find impact factors (note: journal websites generally include the impact factor).

Scopus CiteScore metrics: Click "Sources" - An alternative to the JIF.

You may wish to read this brief article on the Journal Impact Factor:
Is the impact factor the only game in town? Smart P. Ann R Coll Surg Engl. 2015;97(6):405-8.

PLOS, a top-tier suite of open access journals, says this: "PLOS does not consider Impact Factor to be a reliable or useful metric to assess the performance of individual articles. PLOS supports DORA – the San Francisco Declaration on Research Assessment – and does not promote our journal Impact Factors."

In addition, citation counts themselves are not necessarily a good metric of importance; see How citation distortions create unfounded authority: analysis of a citation network. Greenberg SA. BMJ. 2009 Jul 20;339:b2680. doi: 10.1136/bmj.b2680.

Finally, one could argue that journal impact factor manipulation is itself a predatory journal trait.

What is a "predatory journal"? How do I find out if a journal I want to read or publish in is "predatory"?

"Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices." (Source).

A 2020 systematic review of checklists for determining whether a journal is predatory found that none was optimal. The authors recommended looking for a checklist that:

  1. Provides a threshold value for criteria to assess potential predatory journals, e.g. if the journal contains these three checklist items then we recommend avoiding submission;
  2. Has been developed using rigorous evidence, i.e. empirical evidence that is described or referenced in the publication.

They noted that only one of the 93 checklists assessed fulfilled both criteria.

Be aware! Rather than relying on lists or checklists, check whether a journal is listed in DOAJ (Directory of Open Access Journals); if it is, the journal is less likely to be problematic because it has been vetted. Similarly, check whether a journal is a member of COPE (Committee on Publication Ethics), whose members must follow COPE’s publication ethics (the COPE Core Practices).

You may wish to review the Principles of Transparency and Best Practice in Scholarly Publishing from the World Association of Medical Editors.

See also the report, Combatting Predatory Academic Journals and Conferences, from the InterAcademy Partnership.

Also of interest may be the Retraction Watch Hijacked Journals Checker.

And, please also be aware of the Institutionalized Racism of Scholarly Publishing:

  • Non-Western and/or non-English journals are hugely underrepresented in our current scholarly indexes;
  • The scholarly publishing infrastructure demands that journals be open access and published in English to be noticed, yet non-Western journals may be labeled as predatory while they struggle to meet those same demands.

Evidence Pyramid

The "Evidence Pyramid" is a graphic representation of strength of evidence of various publication types. A typical evidence pyramid looks like this:

[Image: a typical evidence pyramid]

However, recently a modified evidence pyramid has been proposed, which looks like this:

[Image: the modified evidence pyramid from the BMJ article]

The proposed new evidence-based medicine pyramid. (A) The traditional pyramid. (B) Revising the pyramid: (1) lines separating the study designs become wavy, (2) systematic reviews are ‘chopped off’ the pyramid. (C) The revised pyramid: systematic reviews are a lens through which evidence is viewed. (From Murad MH, Asi N, Alsawas M, et al. New evidence pyramid. BMJ Evidence-Based Medicine 2016;21:125-127.)

Evaluation Criteria

When you encounter any kind of source, consider:

  1. Authority - Who is the author? What is their point of view?
  2. Purpose - Why was the source created? Who is the intended audience?
  3. Publication & format - Where was it published? In what medium?
  4. Relevance - How is it relevant to your research? What is its scope?
  5. Date of publication - When was it written? Has it been updated?
  6. Documentation - Did they cite their sources? Who did they cite?

Evaluating Information Worksheet (Google Doc)

Use these questions to help decide whether a source is a good fit for your research project.

Criteria and guidance

Authority

  • Who is the author? What is their point of view?

Purpose

  • Why was this source created?
    • Does it have an economic value for the author or publisher? 
    • Is it an educational resource? Persuasive?
      • What (research) questions does it attempt to answer?
      • Does it strive to be objective?
    • Does it fill any other personal, professional, or societal needs?
  • Who funded the research?
  • Who is the intended audience?
    • Is it for scholars?
    • Is it for a general audience?

Publication & format

  • Where was it published?
  • Was it published in a scholarly publication, such as an academic journal?
    • Who was the publisher? Was it a university press?
    • Was it formally peer-reviewed?
  • Does the publication have a particular editorial position?
    • Is it generally thought to be a conservative or progressive outlet?
    • Is the publication sponsored by any other companies or organizations? Do the sponsors have particular biases?
  • Were there any apparent barriers to publication?
    • Was it self-published?
    • Were there outside editors or reviewers?
  • Where, geographically, was it originally published, and in what language?
  • In what medium?
    • Was it published online or in print? Both?
    • Is it a blog post? A YouTube video? A TV episode? An article from a print magazine?
      • What does the medium tell you about the intended audience? 
      • What does the medium tell you about the purpose of the piece?

Relevance

  • How is it relevant to your research?
    • Does it analyze the primary sources that you're researching?
    • Does it cover the authors or individuals that you're researching, but different primary texts?
    • Can you apply the authors' frameworks of analysis to your own research?
  • What is the scope of coverage?
    • Is it a general overview or an in-depth analysis?
    • Does the scope match your own information needs?
    • Is the time period and geographic region relevant to your research?

Date of Publication 

  • When was the source first published?
  • What version or edition of the source are you consulting?
    • Are there differences in editions, such as new introductions or footnotes?
    • If the publication is online, when was it last updated?
  • What has changed in your field of study since the publication date? 
  • Are there any published reviews, responses or rebuttals?

Documentation 

  • Did they cite their sources?
    • If not, do you have any other means to verify the reliability of their claims?
  • Who do they cite?
    • Is the author affiliated with any of the authors they're citing?
    • Are the cited authors part of a particular academic movement or school of thought?
  • Look closely at the quotations and paraphrases from other sources:
    • Did they appropriately represent the context of their cited sources?
    • Did they ignore any important elements from their cited sources?
    • Are they cherry-picking facts to support their own arguments?
    • Did they appropriately cite ideas that were not their own?