Evaluation is about determining the quality, value, or importance of something in order to take action. It is underpinned by the systematic collection of information and evidence.
What is evidence? Things to keep in mind:
Who pays for science?
Most scientific research is funded by government, companies doing research and development, and non-profit entities. Because science is attempting to get at some "truth," the source of research funding shouldn't have a significant effect on the outcome of scientific research, right?
Read Industry sponsorship and research outcome (Cochrane Database Syst Rev. 2017 Feb 16;2:MR000033).
Read Food Politics, a blog by Marion Nestle that often addresses the issues of industry sponsored research.
Is it race? Or is it racism?
Race is a sociological construct, yet most articles describing racial disparities ascribe them to race, not to racism.
Read NIH must confront the use of race in science (Science 2020;369(6509):1313-1314).
Peer review refers to a process whereby scholarly work (ie, an article) is reviewed and critiqued by experts to ensure it meets some standards of acceptance before it is published.
Does this process make for better science?
Read Editorial peer review for improving the quality of reports of biomedical studies (Cochrane Database Syst Rev. 2007 Apr 18;(2):MR000016).
What gets researched? What gets published ("publication bias")? What (or who) gets funded?
Studies that report interventions that had no effect are less likely to get published. What does this mean in terms of the state of knowledge on a topic?
Read Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias (PLoS One. 2008 Aug 28;3(8):e3081).
There is also evidence of disparities in grant funding for research:
Read Race, Ethnicity, and NIH Research Awards (Science. 2011 Aug 19;333(6045):1015-9).
as well as disparities in recruitment for trials:
Read Racial Differences in Eligibility and Enrollment in a Smoking Cessation Clinical Trial (Health Psychol. 2011 Jan; 30(1): 40–48).
Sometimes stuff gets researched because of "scientific inertia":
Read Adequate and anticipatory research on the potential hazards of emerging technologies: a case of myopia and inertia? (J Epidemiol Community Health 2014;68:890-895).
Oops! I made a mistake (or... was it cheating?)
Occasionally, researchers make mistakes, and sometimes those mistakes affect the conclusions of a published article. Articles may be retracted if the mistake is significant. This is a formal process where the author or journal publishes a statement outlining the error. Sometimes, however, retraction is the result of fraud, plagiarism, or other bad acts.
Read Retraction Watch.
Opinion or fact?
Do the conclusions of the article follow the evidence that's presented? Are opinions or notions posited as facts?
Search "As is well known..." in Google Scholar.
Read A Propaganda Index for Reviewing Problem Framing in Articles and Manuscripts: An Exploratory Study (PLoS One. 2011;6(5):e19516).
CV boosting: Does this study add to the body of knowledge, or is it just something the author is doing to add to their list of publications?
(In)significance of a single study: Science is incremental. Beware of any study that's proclaimed to be a "breakthrough."
Read Evidence-based public health (RC Brownson. New York; Oxford: Oxford University Press, 2011).
Reliability refers to consistency of measurement: data collection that is relatively free from "measurement error."
Validity refers to how well a measure assesses what it claims to measure.
Adapted from Chapter 3, Conducting research literature reviews: from the Internet to paper, by Arlene Fink, 2010.
The journal impact factor is a ratio: the number of citations received in a given year by a journal's articles published in the previous two years, divided by the number of citable items the journal published in those two years. It is used as a proxy measure of the quality of a journal. If a journal's impact factor is 5, then articles published in that journal over the previous two years were, on average, each cited about five times in the current year.
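The two-year impact factor described above can be sketched as a simple ratio; the journal name and citation counts below are hypothetical illustrations, not real data:

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year journal impact factor: citations received this year to
    articles the journal published in the previous two years, divided by
    the number of citable items it published in those two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: its 2022-2023 articles (50 citable items)
# received 250 citations during 2024.
jif = impact_factor(citations_this_year=250, citable_items_prev_two_years=50)
print(jif)  # 5.0
```

Note that the ratio is an average over all of a journal's articles: a handful of highly cited papers can inflate it, which is one reason (discussed below) that it is a poor measure of any individual article.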
In any discussion of journal, article, or author metrics, it is imperative to remember Goodhart's law:
"When a measure becomes a target, it ceases to be a good measure"
» Journal Citation Reports: Find impact factors (Note: Journal websites generally will include the impact factor)
» Scopus CiteScore metrics: Click "Sources" - An alternative to the JIF
You may wish to read this brief article on the Journal Impact Factor:
Is the impact factor the only game in town? P. Smart. Ann R Coll Surg Engl. 2015;97(6):405-8.
PLoS, a top-tier open access suite of journals, says this: "PLOS does not consider Impact Factor to be a reliable or useful metric to assess the performance of individual articles. PLOS supports DORA – the San Francisco Declaration on Research Assessment – and does not promote our journal Impact Factors".
In addition, citation counts themselves are not necessarily a good metric of importance; see How citation distortions create unfounded authority: analysis of a citation network. Greenberg SA. BMJ. 2009 Jul 20;339:b2680. doi: 10.1136/bmj.b2680.