Do You Know How to Evaluate Truth Claims?
Here’s how to watch out for “baloney” in arguments.
The media is flooding us with misinformation. How do we manage it?
It is helpful to understand what rational argumentation looks like. Here is the Toulmin model (Toulmin, Rieke, & Janik, 1984):
A claim is made > supporting facts (grounds) are given > a warrant connects the claim to the facts > backing (theory or empirical evidence) for the warrant is presented > qualifiers limit the claim > alternative explanations (rebuttals) are considered.
Science offers a method for testing claims (hypotheses). It is helpful to remember the principles of experimental science (e.g., behavioral science):
- Hypotheses or claims are subject to falsifiability.
- Claims must be empirically verifiable.
- Simpler theories are preferred (the principle of parsimony).
- Scientists use probabilistic reasoning (conclusions are expressed as probabilities, not certainties).
- Scientists look for generalizability (depends on method and sample).
- Scientists look for converging evidence (a single study is not enough).
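The last two principles, probabilistic reasoning and converging evidence, can be illustrated with a short simulation. This is a minimal sketch; the effect size, noise level, and study counts are invented for illustration:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.5  # hypothetical "real" effect we are trying to estimate

def run_study(n=30):
    """Simulate one noisy study: the mean of n observations of the effect."""
    return statistics.mean(TRUE_EFFECT + random.gauss(0, 1) for _ in range(n))

# A single study's estimate can land well away from the truth...
single = run_study()

# ...but pooling many independent studies converges on the true effect,
# which is why scientists look for converging evidence.
pooled = statistics.mean(run_study() for _ in range(50))

print(f"one study:  {single:.2f}")
print(f"50 studies: {pooled:.2f}")
```

The pooled estimate lands near the true effect even when any single study misses it, which is the statistical core of "a single study is not enough."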
In his book The Demon-Haunted World: Science as a Candle in the Dark (1996), astronomer and science communicator Carl Sagan provided some tools. First, he described the uniqueness of science in helping us discern truth:
“Science is different from many another human enterprise—not, of course, in its practitioners being influenced by the culture they grew up in, nor in sometimes being right and sometimes wrong (which are common to every human activity), but in its passion for framing testable hypotheses, in its search for definitive experiments that confirm or deny ideas, in the vigor of its substantive debate, and in its willingness to abandon ideas that have been found wanting.
If we were not aware of our own limitations, though, if we were not seeking further data, if we were unwilling to perform controlled experiments, if we did not respect the evidence, we would have very little leverage in our quest for the truth. Through opportunism and timidity we might then be buffeted by every ideological breeze, with nothing of lasting value to hang on to.” (p. 263)
Sagan distinguished science from pseudoscience:
“Perhaps the sharpest distinction between science and pseudoscience is that science has a far keener appreciation of human imperfections and fallibility than does pseudoscience (or “inerrant” revelation). If we resolutely refuse to acknowledge where we are liable to fall into error, then we can confidently expect that error—even serious error, profound mistakes—will be our companion forever. But if we are capable of a little courageous self-assessment, whatever rueful reflections they may engender, our chances improve enormously.” (p. 21)
Sagan acknowledged the difficulty of finding truths:
“Finding the occasional straw of truth awash in a great ocean of confusion and bamboozle requires vigilance, dedication, and courage. But if we don’t practice these tough habits of thought, we cannot hope to solve the truly serious problems that face us—and we risk becoming a nation of suckers, a world of suckers, up for grabs by the next charlatan who saunters along.” (p. 39)
Sagan was very concerned about how easily people are fooled if they don’t have a toolkit for detecting “baloney.” So he provided several lists of advice and cautions that are particularly relevant in this era of false information being spread far and wide as money-making clickbait.
Evaluating the Validity of a Truth Claim
- Wherever possible there must be independent confirmation of the “facts.”
- Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
- In science, there are no authorities, just experts.
- Try not to get overly attached to one hypothesis just because it’s yours. Think of reasons to reject it. If you don’t, others will.
- Instead, set up more than one hypothesis and think how each could be disproved. Always consider: what if the opposite of my position were true?
- Quantify. If you can measure something, you are better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations and truth is harder to find.
- If there’s a chain of argument, every link in the chain must work (including the premise), not just most of them.
- When you have a choice, choose the simplest hypothesis (“Occam’s razor”).
- Always ask whether the hypothesis can be, at least in principle, falsified.
- Propositions that are untestable, unfalsifiable are not worth much.
Sagan also offered lessons in how to avoid falling for baloney, calling it “the fine art of baloney detection.” I’ve had students assess their favorite news outlets or feeds using the following list.
What to Watch For: Fallacies of Logic and Rhetoric That Do Not Address the Argument:
- Attacking the arguer and not the argument (called ad hominem, Latin for “to the man”). Just because an argument is made by someone you may not like does not mean the argument itself is flawed.
- Appeal to authority. Just because a famous person said so does not make it so.
- Argument from adverse consequences that might result. This is an appeal to emotions, not a legitimate rejection of a reasoned argument.
- Appeal to ignorance—the claim that whatever has not been proved false must be true, and vice versa. Absence of evidence is not evidence of absence.
- Special pleading, often to rescue a proposition in deep rhetorical trouble (e.g., “God moves in mysterious ways”).
- Begging the question (assuming your conclusion is true).
- Observational selection (confirmation bias): counting the hits and ignoring the misses; finding what you are looking for while disregarding contrary evidence.
- Hasty generalization (generalizing from a few cases).
- Inconsistency (e.g., to say that the decline of life expectancy in the former Soviet Union is due to failure of communism without a comparable statement that the high infant mortality rate in the U.S. is due to the failure of capitalism).
- Non sequitur: a statement irrelevant to the argument (e.g., “Our nation will prevail because God is great”).
- Post hoc, ergo propter hoc (Latin for “after this, therefore because of this”): because Y happened after X, X must have caused Y.
- Meaningless question (e.g., “What happens when an irresistible force meets an immovable object?”).
- False dilemma (excluded middle or false dichotomy): it’s either one extreme position or the other.
- Slippery slope, related to excluded middle.
- Confusion of correlation and causation. Correlation is not causation, for two main reasons: (1) the third-variable problem (two things that occur together may both be caused by another variable); (2) the directionality problem (it is unclear which variable causes the other).
- Straw man, caricaturing a position to make it easier to attack.
- Suppressed evidence/half-truths.
- Weasel words: “Some people say…”; being non-specific so that later you can deny anything implied.
We can add:
- Whataboutism: accusing the arguer of hypocrisy to distract attention from the issue at hand (used routinely in the Soviet Union and Russia).
Watch Out for Your Own Reasoning Fallacies
Human reasoning is derailed by several weaknesses, and those who want to manipulate decision-making exploit them.
- Susceptibility to vividness (e.g., testimonials by people who have supposedly used the product; shocking pictures you remember)
- Susceptibility to availability (availability heuristic): what pops into your mind first, based on repeated exposure, feels most likely. For example, when Americans think of a criminal they often picture someone with dark skin, even though the most significant criminal damage (e.g., financial fraud) is done by white-collar criminals who are by and large privileged white people. Because most local news focuses on physical crimes, usually committed by disadvantaged people, media exposure has primed Americans to imagine a dark-skinned person.
- Illusory correlation (things that occur together must be causally related)
- Gambler’s fallacy (a streak of bad luck must be balanced by a streak of good luck)
- Conjunction fallacy (judging a specific, stereotype-matching scenario as more probable than the general case that contains it)
- Attribution error (assuming your behavior is affected by circumstance but the behavior of others is chosen based on personality)
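The gambler’s fallacy in the list above can be checked directly by simulation: fair coin flips have no memory, so a streak of heads does not make tails “due.” This is a minimal sketch with invented parameters:

```python
import random

random.seed(42)

# One million fair coin flips (True = heads).
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# Collect the outcome that follows every run of 5 heads in a row.
after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i-5:i])]

rate = sum(after_streak) / len(after_streak)
print(f"P(heads after 5 heads in a row) ~ {rate:.3f}")
```

The observed rate stays near 0.5: the coin does not compensate for past streaks, which is exactly what the fallacy denies.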
In conducting research, there are a number of cautions researchers watch out for:
- Selection bias (favoring the familiar; what you are looking for becomes salient, so the sample is not representative)
Researchers also have to be careful in drawing conclusions because of the following:
- Multiple causation (most things have multiple causes—hard for humans to conceive)
- Sample size (too small a sample is not a reliable source for drawing conclusions)
- Illusory correlation (assuming variables that are found together are related)
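The sample-size caution can be made concrete with a quick simulation: sample means from tiny samples scatter far more widely than those from large ones. The population parameters below are invented for the sketch:

```python
import random
import statistics

random.seed(7)

# Hypothetical population of test scores (mean ~100, SD ~15).
population = [random.gauss(100, 15) for _ in range(100_000)]

def scatter_of_sample_means(sample_size, trials=200):
    """How widely do sample means vary at a given sample size?"""
    means = [statistics.mean(random.sample(population, sample_size))
             for _ in range(trials)]
    return statistics.stdev(means)

small = scatter_of_sample_means(5)    # tiny samples: estimates swing widely
large = scatter_of_sample_means(500)  # large samples: estimates cluster tightly
print(f"n=5:   sample means scatter by ~{small:.1f} points")
print(f"n=500: sample means scatter by ~{large:.1f} points")
```

A conclusion drawn from five cases can easily be off by several points in either direction, while the large-sample estimates barely move; this is why too small a sample is not a reliable basis for conclusions.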
Sagan, C. (1996). The Demon-Haunted World: Science as a Candle in the Dark. New York: Penguin.
Toulmin, S., Rieke, R., & Janik, A. (1984). An Introduction to Reasoning (2nd ed.). New York: Macmillan.