When we hear a piece of information that surprises us, we often react by saying, “Where’d you hear that?” It’s a good question, and one we should ask more often, because some sources are better – sometimes much better – than others. In this lesson, students will learn to distinguish between credible and not-so-credible types of sources. They’ll explore the biases of different sources and develop tools for detecting bias. In their effort to get to facts that are as objective as possible, students will examine the differences between primary and secondary sources, check the track records of different sources, and practice looking for broad consensus from a range of disinterested experts.
In this lesson, students will:
- Learn to distinguish between expert and non-expert statements and to understand the limits of expert authority.
- Examine the role of bias among experts.
- Learn the difference between primary and secondary sources, explore the track records of various sources, and realize the importance of consensus among disinterested sources.
The internet has brought the average person into contact with a volume of information that would have been nearly unimaginable to anyone outside a large university just a generation ago. We can now stay informed about a bewildering variety of events, from the mundane (today’s weather or the score of last night’s Yankees game) to the tawdry (hours of footage of Anna Nicole Smith) to the tragic (shootings at Virginia Tech) to the profound (videos showing the development of a human embryo). But the vast quantity of information available is not without a price. For every Abu Ghraib photo uncovered, one can find a crank extolling his “proof” that the destruction of the World Trade Center was a Jewish plot. A search for Martin Luther King Jr. may turn up “Letter from Birmingham Jail” – or a white supremacist home page. Consumers of information no longer have to work to find information; the difficulty now lies in finding good information.
For students, evaluating sources is often one of the most difficult aspects of any research project. It requires making judgments about a source’s credibility, which in turn means asking essential questions:
- Is the source objective?
- Is the source presenting straight facts?
- Are the facts being filtered through another author’s analysis?
- If so, is that author objective?
- Are the source’s conclusions in line with those of most other experts in the field?
- Have we verified those conclusions by assessing the facts ourselves?
Many uses of expert authority are, of course, perfectly appropriate. When my physics teacher tells me that the speed of light is 299,792,458 meters per second, I am probably justified in accepting that figure as accurate. Similarly, I can cite the Encyclopedia Britannica as reasonably good evidence that the philosopher John Stuart Mill was born on May 20, 1806.
On the other hand, citing Albert Einstein’s reservations about nuclear weapons as a reason for believing that all nuclear weapons should be banned is not particularly credible. Einstein’s knowledge of nuclear physics does not automatically translate into any particular expertise in morality or in public policy. This is a perfect example of a fallacy that logicians have dubbed an “inappropriate appeal to authority”: relying on the word of an expert in a subject area where that expert has no authority. (See the AnnenbergClassroom.org lesson plan Monty Python and the Quest for the Perfect Fallacy for a more complete treatment.) Similarly, a mother of five children isn’t automatically more of an expert on childbirth than, say, a male obstetrician. By the same token, a Vietnam vet who served a single tour of duty as an infantryman is probably less qualified to discuss national security than is Hillary Clinton.
In this lesson, we will look at a number of useful tools for distinguishing between appropriate and inappropriate uses of authority.
Make enough copies of all the handouts for each student, except Student Handouts #2 and #3. Make only enough of each of those for half the class. Distribute Student Handout #1 at the beginning of Exercise #1. Distribute Student Handouts #2, #3 and #4 at the beginning of Exercise #2 (making sure that half the class gets #2 and half gets #3), and pass out Student Handout #5 toward the end of that exercise, as described below. Distribute Student Handout #6 at the beginning of Exercise #3.
- Video, “Agent Shoots Himself in Foot.”
- Student handout #1: Chart for Exercise #1.
- Student handout #2: Rimland, “Vaccinations: the Overlooked Factors.”
- Student handout #3: Offit, “Vaccines and Autism.”
- Student handout #4: Chart for Exercise #2.
- Student handout #5: CDC, “Measles, Mumps and Rubella (MMR) Vaccine and Autism Fact Sheet.”
- Student handout #6: FactCheck.org’s Guide to Testing Evidence.
Exercise #1 – Who Counts as an Expert?
To the teacher: Many people claim expertise in a variety of subjects; sadly, those claims are often exaggerated. Indeed, it is fairly commonplace to find experts in one particular field who attempt to claim expertise in completely unrelated fields. Students often have trouble determining what counts as real expertise on a subject: Too often, glittery titles substitute for actual substantive authority. In this exercise, students will discuss the nature of expertise.
Show the class the video of a classroom demonstration of firearm safety.
Obviously, the irony of a police officer shooting himself in the foot just as he boasts of being the only person in the room qualified to handle a handgun is amusing. But it also demonstrates an important lesson: Not everyone who claims to be an expert really is an expert.
Exercise: If you haven’t done so already, distribute copies of student handout #1 to each student. Divide the class into groups of 4 to 6 students each. The handout lists six different occupations. Ask the students to research what each occupation actually does. (Note: This task will require either internet or library access.) Students should record their findings on the handout.
Using their findings as a guide, students should then discuss areas of expertise for each profession. Ask students to construct a list of topics on which a person from each profession could reasonably be considered an expert. Ask the students to report their findings back to the class.
Exercise #2 – Bias, Bias Everywhere
To the teacher: Not all experts are created equal. Merely possessing the right set of credentials is not a guarantee of good information. A stockbroker with an Ivy League MBA is probably a good source of financial advice – unless that stockbroker is attempting to sell us stock in a company that he owns. Then we have to be slightly more suspicious. In this exercise, students will look at some experts with agendas.
Experts are human, too, which means that, sometimes, they come complete with biases and personal agendas. An example: Dr. James P. Grigson, a Texas psychiatrist, testified in a number of capital (i.e., death penalty) cases in the 1980s. In Texas, a jury may recommend a capital sentence only if it believes that the defendant, if released, would probably commit violent crimes in the future. Dr. Grigson was paid by Texas prosecutors each time he testified in a capital case, and his testimony was predictable: The defendant would probably go on to commit violent crimes. In all, Grigson testified in 111 capital cases over an 18-year period. All but nine of those resulted in executions.
Dr. Grigson was what is known in the legal profession as a “hired gun,” or an expert who can be counted on to deliver a particular opinion. In most of those cases, it would have been relatively easy to find a psychiatrist who would take the opposite view. In fact, it isn’t hard to find an expert to back up just about any opinion one could want.
Exercise: Distribute Student Handout #2 to half the class and Student Handout #3 to the other half. Distribute copies of Student Handout #4 to each student. Divide the class into small groups of 4 to 6 students each. Make sure that each group has some students with Student Handout #2 and some with Student Handout #3. Ask students to read whichever handout they’ve been given (#2 or #3). Then have them outline the basic facts about vaccines and autism on the chart on Student Handout #4. Students should then discuss the basic facts with one another. Questions to consider:
- To what extent do Dr. Rimland and Dr. Offit agree about the link between vaccines and autism?
- How do Rimland and Offit differ on the facts?
- Why do you think that these experts might disagree?
Next, ask students to research the experts who have authored the discussions on Student Handouts #2 and #3. (Note: This part of the activity will require internet access.) Questions to keep in mind:
- What is your expert’s background? Does he have any personal agenda? (Hint: Try searching for Offit and Merck; also, be sure to look at Rimland’s personal history.)
- Who sponsored the expert? (Hint: Start by looking up the essay itself.)
- Who funds that organization? (Hint: The “About Us” page often has helpful information.)
- Does the organization seem to have any particular agenda?
Finally, distribute copies of Student Handout #5. Ask the students to compare their expert source to the findings of the CDC.
- Do the two agree about the basic facts?
- Does the fact that your expert agrees (or disagrees) with the CDC mean that you should (or shouldn’t) rely upon that expert’s testimony?
- Does it mean they are (or aren’t) right?
- Does it mean they are (or aren’t) biased?
- What does the exercise tell you about relying upon experts?
Have the students report their findings back to the class.
Exercise #3 – Finding Objectivity
To the teacher: Some experts may be biased, but that doesn’t mean that they can’t still be useful. Indeed, a great many experts do hold particular opinions, but also cite very good reasons for holding those opinions. The fact that an expert holds a particular view does not mean that s/he cannot be trusted; it means only that we cannot rely on just that expert’s word. In this exercise, students will look at different sources, determine what, if any, biases those sources may have, and then examine resources for verifying expert opinions.
The point of Exercise #2 is not to show that experts’ opinions are invalid, but to show that we should not get in the habit of taking experts at their word. A bit of skepticism is entirely healthy – even when it comes to expert opinion. Whenever possible, we should test evidence. The following tests, adapted from the book “unSpun,” are used as rules of thumb by researchers and writers at FactCheck.org:
- Is the source highly regarded and widely accepted? There are a number of long-standing organizations we know we can count on for reliable, unbiased information. For instance, for job statistics, the Bureau of Labor Statistics is every economist’s basic source.
- Is the source an advocate? Claims made by political parties, candidates, lobbying groups, salesmen and other advocates may be true but are usually self-serving and as a result may be misleading; they require special scrutiny. Always compare their information with other sources.
- What is the source’s track record? Look for previous experience.
- What method is used? Good research will employ methods that are commonly accepted in the discipline. Many studies will have to rely on estimates; good studies will minimize those estimates and will, to the extent possible, draw on large, random samples of information in a uniform way.
- Does the source “show its work”? Good researchers always explain how they arrived at their conclusions.
- Is the sample random? News organizations and Web sites are fond of conducting “unscientific” polls. Viewers or visitors are asked to express a preference, and the results are reported. This is just a marketing method designed to draw interest; the results are utterly meaningless because the sample is self-selected, not random. Some such polls have been intentionally rigged.
- Is there a control group? Good scientific procedure requires a “control” to provide a valid basis for comparison. For example, in tests of new drugs one group gets a placebo, with no active ingredients, to provide a point of comparison with the group that gets the actual drug.
- Does the source have the requisite skill? A trained epidemiologist should be trusted more than a newspaper headline writer to evaluate whether a cluster of cancer cases was caused by something in the water or was just a statistical fluke.
- Have the results been replicated or contradicted? Sometimes one study tells a story that isn’t backed up by later research. Have the results been repeated in similar studies? Do other researchers agree, or do they come up with contrary findings?
(Note: These guidelines are reprinted as Student Handout #6.)
Exercise: If you have not already done so, distribute copies of Student Handout #6 to each student. Then divide the class into teams of 4 to 6 students each. Assign one of the following organizations to each team of students:
- American Enterprise Institute
- Brookings Institution
- Cato Institute
- Center for Responsive Politics
- Citizens for Tax Justice
- Media Matters
- National Taxpayers Union
Have each team research their assigned entity. Students should determine (1) what, if any, leanings their assigned group has, and (2) how reliable the research from their particular group really is. Some questions to consider:
- Who works for the group?
- What are their backgrounds?
- Who is on the board of directors and what are their backgrounds?
- Who funds the group in question?
- How frequently are scholars from the group quoted? (A Nexis search of major newspapers might be helpful here.)
- In what contexts are scholars from the group quoted?
- What sort of reputation does the group have?
After students have completed their analysis, have them look up their group in Annenberg Classroom’s Critical Thinking Resources. Ask the students to compare their analysis to Annenberg Classroom’s. Do they agree or disagree? Have students report their findings back to the class.
About the Author
Joe Miller received his Ph.D. in philosophy from the University of Virginia. He is a former staff writer at FactCheck.org, a project of the University of Pennsylvania’s Annenberg Public Policy Center. Before joining FactCheck, he served as an assistant professor of philosophy at West Point and at the University of North Carolina at Pembroke, where he taught logic, critical thinking, ethics and political theory. The winner of an Outstanding Teacher award at UNC-Pembroke and an Outstanding Graduate Teaching Assistant award at the University of Virginia, Joe has more than 10 years of experience developing curricula. He is a member of the American Philosophical Association and the Association for Political Theory.
- English Language Arts Standards
- Information Literacy Standards
- National Educational Technology Standards
- National Mathematics Standards
- National Social Studies Standards
- Social Responsibility