
Resources at the UC Berkeley Library for STEM Education & Outreach: Critical Evaluation

How to access books, journals/articles, industry information, evidence-based practice resources, government information, community information, and more

Critically Evaluating What You Find Part 1

What is evidence?
All research is (potentially) "evidence" and there are no "perfect" studies. 
Critically evaluating what you read will help you unearth any biases or methodological shortcomings that may be present.

Is there an agenda (bias)?
It's doubtful that any study of humans is free of bias, whether in the study design or in the authors' pre-existing beliefs. What matters is how bias was controlled in the methodology and how significant it is in any particular study.

Things to consider:

  • The question being addressed: What kind of research gets funded?
  • Publication bias: Research that shows no effect tends not to get published
  • Conflict of interest, author affiliation, source(s) of funding: Does the researcher (or the funder) have a vested interest in the outcome?
  • Documentation and assumptions: Are all stated "facts" referenced?
  • Peer review: Is the article peer-reviewed? Does it matter?
  • Authority: Does the researcher have the knowledge to work in this area?
  • Significance of a single study: Science is an incremental process; one study rarely "changes everything"

Who pays for science? Does it matter? (There is evidence that it does.)
Research may be funded by:

  • Government
  • Industry/trade groups
  • Private foundations/associations
  • etc.

This article (PDF) discusses the "manufactured uncertainty" created by industry groups that sponsor research and publishing on chemicals.

Is qualitative research "evidence"?
If your goal is to understand beliefs and meanings in the group with whom you are working, then qualitative studies can be important.

Critically Evaluating What You Find Part 2

Reliability and validity

Reliable data collection is relatively free from "measurement error":

  • Is the survey written at a reading level too high for the people completing it? 
  • If I measure something today, then measure it again tomorrow using the same scale, will it vary? Why? 
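
To make the second point concrete, here is a minimal Python sketch (all scores are hypothetical) that treats test-retest reliability as the correlation between two administrations of the same scale. High correlation suggests the measure is stable over time; any threshold you apply is a rule of thumb, not a fixed standard.

```python
# Test-retest reliability sketch: the same (hypothetical) 5-point scale
# given to the same eight respondents on two consecutive days.
from math import sqrt

day1 = [4, 3, 5, 2, 4, 3, 5, 4]  # hypothetical scores, day 1
day2 = [4, 2, 5, 3, 4, 3, 4, 4]  # same respondents, day 2

def pearson(x, y):
    """Pearson correlation between two score lists of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Values near 1 indicate little measurement error between administrations.
print(f"Test-retest correlation: {pearson(day1, day2):.2f}")
```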

Validity refers to how well a measure assesses what it claims to measure: 

  • If the survey is supposed to measure "quality of life," how is that concept defined? Is it measurable? 

(Adapted from Chapter 3 of Conducting Research Literature Reviews: From the Internet to Paper, by Arlene Fink; Sage, 2010.)
Extensive discussions of reliability and validity are available in several texts, such as Textbook of Psychiatric Epidemiology (3rd ed.; M. Tsuang et al.; Wiley, 2011; see chapters 5 and 7).

What to consider when looking at survey or estimated data:

  • Look at sample sizes and survey response rates: Is the sample representative of your population? Are there enough responses for valid estimates? (A rough sizing sketch follows this list.)
  • Who was surveyed? Is the sample representative of the population being compared to? Does it include the group you are interested in?
  • Were the survey respondents from heterogeneous groups? Do the survey questions have a similar meaning to members of different groups?
  • How was the survey conducted? By telephone? (Many people only have cell phones.) Was the sample randomly selected, or was it a targeted group?
  • What assumptions and methods were used for extrapolating the data?
  • Look at the definitions of characteristics: Do they match your own definitions?
  • When was the data collected?
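
As a rough aid for the first item in this list, the Python sketch below computes the 95% margin of error for an estimated proportion under simple random sampling. The sample sizes are hypothetical, and real surveys with low response rates or non-random designs can be far less precise than this formula suggests.

```python
# Margin-of-error sketch: how much an estimated proportion could be off
# by chance alone, assuming simple random sampling (often optimistic).
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from n responses."""
    return z * sqrt(p * (1 - p) / n)

for n in (50, 400, 1500):  # hypothetical numbers of survey responses
    moe = margin_of_error(p=0.5, n=n)  # p = 0.5 is the widest (worst) case
    print(f"n = {n:>4}: estimate could be off by +/- {moe:.1%}")
```

For example, 50 responses leave roughly a 14-point margin of error, while 1,500 narrow it to about 2.5 points; whether that counts as "enough responses to be valid" depends on how precise your comparison needs to be.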