Evaluation is about determining the quality, value, or importance of something in order to take action. It is underpinned by the systematic collection of information and evidence.
What is evidence? Things to keep in mind:
Who pays for science?
Most scientific research is funded by government, companies doing research and development, and non-profit entities. Because science is attempting to get at some "truth," the source of research funding shouldn't have a significant effect on the outcome of scientific research, right?
Read Industry sponsorship and research outcome (Cochrane Database Syst Rev. 2017 Feb 16;2:MR000033).
Read Food Politics, a blog by Marion Nestle that often addresses the issues of industry sponsored research.
This article discusses the "manufactured uncertainty" created by industry groups that sponsor research and publishing on chemicals.
Is it race? Or is it racism?
Race is a sociological construct, yet most articles describing racial disparities ascribe them to race, not to racism.
Read NIH must confront the use of race in science (Science 2020;369(6509):1313-1314).
See Critically Appraising for Antiracism: Identifying racial bias in published research: A guide and tool to help evaluate research literature for racism.
Peer review refers to a process whereby scholarly work (ie, an article) is reviewed and critiqued by experts to ensure it meets some standards of acceptance before it is published.
Does this process make for better science?
Read Editorial peer review for improving the quality of reports of biomedical studies (Cochrane Database Syst Rev. 2007 Apr 18;(2):MR000016).
Reliability and validity
Reliable data collection: relatively free from "measurement error":
Is the survey written at a reading level too high for the people completing it?
If I measure something today, then measure it again tomorrow using the same scale, will it vary? Why?
Validity refers to how well a measure assesses what it claims to measure:
If the survey is supposed to measure "quality of life," how is that concept defined? Is it measurable?
(Adapted from Chapter 3, Conducting Research Literature Reviews: From the Internet to Paper, by Arlene Fink; Sage.)
What to consider when looking at survey or estimated data:
The journal impact factor is the average number of citations received in a given year by the articles a journal published during the preceding two years. It is used as a proxy measure of the quality of a journal. If a journal's impact factor is 5, then on average, its articles from the previous two years were each cited about five times in the current year.
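The ratio behind the impact factor can be sketched in a few lines of code. This is an illustrative sketch only; the journal and all numbers are invented for demonstration and do not reflect real citation data:

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Compute a journal impact factor for a given year.

    Example: the 2024 impact factor is the number of 2024 citations
    to items the journal published in 2022-2023, divided by the
    number of citable items the journal published in 2022-2023.
    """
    return citations_to_prior_two_years / citable_items_prior_two_years


# A hypothetical journal that published 200 citable items in 2022-2023,
# which together received 1000 citations during 2024:
print(impact_factor(1000, 200))  # -> 5.0
```

Note that this is an average over a skewed distribution: a handful of highly cited articles can raise the ratio for a journal whose typical article is rarely cited, which is one reason the impact factor is a poor guide to the quality of any individual article.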
In any discussion of journal, article, or author metrics, it is imperative to remember Goodhart's law:
"When a measure becomes a target, it ceases to be a good measure."
Journal Citation Reports: Find impact factors (Note: Journal websites generally will include the impact factor).
Scopus CiteScore metrics: Click "Sources" - An alternative to the JIF.
You may wish to read this brief article on the Journal Impact Factor:
Is the impact factor the only game in town? P. Smart. Ann R Coll Surg Engl. 2015;97(6):405-8.
PLoS, a top-tier open access suite of journals, says this: "PLOS does not consider Impact Factor to be a reliable or useful metric to assess the performance of individual articles. PLOS supports DORA – the San Francisco Declaration on Research Assessment – and does not promote our journal Impact Factors."
In addition, citation counts themselves are not necessarily a good metric of importance; see How citation distortions create unfounded authority: analysis of a citation network. Greenberg SA. BMJ. 2009 Jul 20;339:b2680. doi: 10.1136/bmj.b2680.
Finally, one could argue that journal impact factor manipulation is itself a predatory journal trait.
Predatory journals often lack an appropriate peer-review process and frequently are not indexed, yet authors are required to pay an article processing charge. The lack of quality control, the inability to effectively disseminate research, and the lack of transparency compromise the trustworthiness of articles published in these journals.
A 2020 systematic review of checklists for identifying predatory journals found that no checklist was optimal. The authors recommended looking for a checklist that:
They noted that only one of the 93 checklists assessed fulfilled the above criteria.
Be awake and aware! Rather than relying on lists or checklists, check if a journal is listed in DOAJ (Directory of Open Access Journals); if it is, the journal is less likely to be problematic because it has been vetted. Similarly, check if a journal is a member of COPE (Committee on Publication Ethics), where it must follow COPE’s publication ethics (COPE Core Practices).
You may wish to review the Principles of Transparency and Best Practice in Scholarly Publishing from the World Association of Medical Editors.
See also the report, Combatting Predatory Academic Journals and Conferences, from the InterAcademy Partnership.
Also of interest may be the Retraction Watch Hijacked Journals Checker.
And, please also be aware of the Institutionalized Racism of Scholarly Publishing:
The "Evidence Pyramid" is a graphic representation of strength of evidence of various publication types. A typical evidence pyramid looks like this:
However, recently a modified evidence pyramid has been proposed, which looks like this:
The proposed new evidence-based medicine pyramid. (A) The traditional pyramid. (B) Revising the pyramid: (1) lines separating the study designs become wavy, (2) systematic reviews are ‘chopped off’ the pyramid. (C) The revised pyramid: systematic reviews are a lens through which evidence is viewed. (From Murad MH, Asi N, Alsawas M, et al. New evidence pyramid. BMJ Evidence-Based Medicine 2016;21:125-127.)
When you encounter any kind of source, consider:
Evaluating Information Worksheet (Google Doc)
Use these questions to help decide whether a source is a good fit for your research project.
Publication & format
Date of Publication