CHAPTER 7

7.1 Fallacies of Support

With fallacious reasoning, the premises only appear to support the conclusion. When you look closely at a fallacious argument, you can see how the premises fail to offer support.

When reasoning, it is essential to reach conclusions based on adequate evidence; otherwise, our views are unsubstantiated. The better the evidence, the more credible our claims are, and the more likely they are to be true. Fallacies can lead us to accept conclusions that are not adequately supported and may be false. Let us learn some of the most common ways this can happen.

Begging the Question

One of the most common fallacies is called begging the question, also known as petitio principii. This fallacy occurs when someone gives reasoning that assumes a major point at issue; it assumes a particular answer to the question with which we are concerned. In other words, the premises of the argument claim something that someone probably would not agree with if he or she did not already accept the conclusion. Take a look at the following argument:

Abortion is wrong because a fetus has a right to live.

There is nothing wrong with this argument as an expression of a person’s belief. The question is whether it will persuade anyone who does not already agree with the conclusion. The premise of this argument assumes the major point at issue. If fetuses have a right to live, then it would follow almost automatically that abortion is wrong. However, those who do not accept the conclusion probably do not accept this premise (perhaps they do not believe that the embryo is developed enough to have human rights yet). It is therefore unlikely that they will be persuaded by the argument. To improve the argument, it would be necessary to give good reasons why a fetus has a right to life, reasons that would be persuasive to people on the other side of the argument.

For more clarity about this problem, take a look at these similar arguments:

Capital punishment is wrong because all humans have a right to live.

Eating meat is wrong because animals have a right to live.

These arguments are nearly identical, yet they reach different conclusions about what types of killing are wrong because of different assumptions about who has the right to live. Each, however, is just as unlikely to persuade people with a different view. In order to be persuasive, it is best to give an argument that does not rest on controversial views that are merely assumed to be true. It is not always easy to create non-question-begging arguments, but such is the challenge for those who would like to have a strong chance of persuading those with differing views.

Here are examples on both sides of a different question:

Joe: I know that God exists because it says so in the Bible.

Doug: God doesn’t exist because nothing supernatural is real.

Do you think that either argument will persuade someone on the other side? Someone who does not believe in God probably does not accept the Bible to be completely true, so this reasoning will not make the person change his or her mind. The other argument does the same thing by simply ruling out the possibility that anything could exist other than physical matter. Someone who believes in God will probably not accept this premise.

Both arguments, of course, will probably sound very good to someone who shares the speaker’s point of view, but they will not sound persuasive at all to those who do not. Committing the fallacy of begging the question can be compared to “preaching to the choir” because the only people who will accept the premise are those who already agree with the conclusion.

Circular Reasoning

An extreme form of begging the question is called circular reasoning. In circular reasoning, a premise is identical, or virtually identical, to the conclusion.

Here is an example:

Mike: Capital punishment is wrong.

Sally: Why is it wrong?

Mike: Because it is!

Mike’s reasoning here seems to be, “Capital punishment is wrong. Therefore, capital punishment is wrong.” The premise and conclusion are the same. The reasoning is technically logically valid because there is no way for the premise to be true and the conclusion false—since they are the same—but this argument will never persuade anyone, because no one will accept the premise without already agreeing with the conclusion.

As mentioned, circular reasoning can be considered an extreme form of begging the question. For another example, suppose the conversation between Joe and Doug went a little further. Suppose each questioned the other about how they knew that the premise was true:

Joe: I know that the Bible is true because it says so right here in the Bible, in 2 Timothy 3:16.

Doug: I know that there is nothing supernatural because everything has a purely natural explanation.

Here both seem to reason in a circular manner: Joe says that the Bible is true because it says so, which assumes that it is true. On the other side, to say that everything has a purely natural explanation is the same thing as to say that there is nothing supernatural, so the premise is synonymous with the conclusion. If either party hopes to persuade the other to accept his position, then he should offer premises that the other is likely to find persuasive, not simply another version of the conclusion.

Moral of the Story: Begging the Question and Circular Reasoning

To demonstrate the truth of a conclusion, it is not enough to simply assume that it is true; we should give evidence that has a reasonable chance of being persuasive to people on the other side of the argument. The way to avoid begging the question and circular reasoning is to think for a minute about whether someone with a different point of view is likely to accept the premises you offer. If not, strive to modify your argument so that it has premises that are more likely to be accepted by parties on the other side of the debate.

Hasty Generalizations and Biased Samples

Some inductive arguments make generalizations about certain groups, but in a hasty generalization, the sample size is inadequate.

Chapter 5 demonstrated that we can reason from a premise about a sample population to a conclusion about a larger population that includes the sample. Here is a simple example:

Every crow I have ever seen has been black.

Therefore, all crows are black.

This is known as making an inductive generalization; you are making a generalization about all crows based on the crows you have seen. However, if you have seen only a small number of crows, then this inductive argument is weak because the sample of crows was not large enough. A hasty generalization is an inductive generalization in which the sample size is too small. The person has generalized too quickly, without adequate support.

Notice that stereotypes are often based on hasty generalizations. For example, sometimes people see a person of a different demographic driving poorly and, based on only one example, draw a conclusion about the whole demographic. As Chapter 8 will discuss, such generalizations can act as obstacles to critical thinking and have led to many erroneous and hurtful views (see also http://www.sciencedaily.com/releases/2010/08/100810122210.htm for a discussion of the long-term effects of stereotyping).

The Acquisition of Biases

Social psychologist Mahzarin Banaji explains that prejudice is something that is learned, rather than something with which we are born. In other words, we are prone to making hasty generalizations simply because we make observations about the world around us. The good news, according to Banaji, is that we can change our thinking.

Critical Thinking Questions

What are some ways that the human mind works that naturally lead it to make false generalizations?

What are the positive and negative aspects of the manner in which the human mind functions?

How does perception relate to prejudice? How can humans work to overcome the natural functions of their minds?

Not all inductive generalizations are bad, however. A common form of inductive generalization is a poll. When someone takes a poll, he or she samples a group to draw a conclusion about a larger group. Here is an example:

We sampled 1,000 people, and 70% said they will vote for candidate A.

Therefore, candidate A will win.

Here, the sample size is relatively large, so it may supply strong evidence for the conclusion. Recall Chapter 5’s discussion of assessing the strength of statistical arguments that use samples. That chapter discussed how an inductive generalization could be affected if a sample population is not truly random. For example, what if all of the people polled were in the same county? The results of the poll might then be skewed toward one candidate or the other based on who lives in that county. If, in a generalization, the sample population is not truly representative of the whole population, then the argument uses a biased sample (recall the Chapter 5 discussion of Gallup’s polling techniques and see this chapter’s A Closer Look: Biased Samples in History for a historical example of how even well-intentioned polling can go wrong).
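To get a feel for why a sample of 1,000 is large enough for a poll like this, consider the standard 95% margin of error for a simple random sample. The following sketch is only an illustration (the function name and numbers are ours, not the chapter’s), and the formula assumes the sample really is random; a biased sample introduces error that no sample size can fix.

```python
import math

# 95% margin of error for a proportion from a simple random sample,
# using the standard normal approximation (z = 1.96).
def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# The poll above: 1,000 people sampled, 70% favor candidate A.
print(f"n = 1000: +/- {margin_of_error(0.70, 1000):.1%}")  # about +/- 2.8%

# A hasty generalization: the same result from only 10 people.
print(f"n = 10:   +/- {margin_of_error(0.70, 10):.1%}")    # about +/- 28.4%
```

With 1,000 respondents, the 70% estimate is precise to within a few percentage points; with only 10 respondents, the same 70% result tells us almost nothing. That is the quantitative difference between a reasonable inductive generalization and a hasty one.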

Slanted polling questions represent just one method of creating deliberately biased samples; another method is called cherry picking. Cherry picking involves a deliberate selection of data to support only one side of an issue. If there is evidence on both sides of a controversial question and you focus only on evidence supporting one side, then you are manipulating the data by ignoring the evidence that does not support the conclusion you desire.

For example, suppose an infomercial gives many examples of people who used a certain product and had amazing results and therefore suggests that you will probably get great results, too. Even if those people are telling the truth, it is very possible that many more people did not have good results. The advertisers will, of course, include in the commercial only the people who had the best results. This can be seen as cherry picking, because the viewer of the commercial does not get to see all of the people who felt that the product was a waste of money.

A Closer Look: Biased Samples in History

In 1936 the largest poll ever taken (10 million questionnaires) showed that Alf Landon would soundly defeat Franklin D. Roosevelt in the presidential election. The results were quite the opposite, however. What went wrong? The answer, it turns out, was that the names and addresses used to send out the questionnaires were taken from lists of automobile owners, phone subscribers, and country club members (DeTurk, n.d.). Therefore, the questionnaires tended to go to wealthier people, who were more likely to vote for Landon.

Typically, finding a representative sample means selecting a sample randomly from within the whole population. However, as this example shows, it is sometimes difficult to make certain that there is no source of bias within one’s sampling method. In fact, it is difficult to get inductive generalizations just right: we must have a sufficiently large sample, and it must be truly representative of the whole population. There is a complex science of polling and analyzing the data to predict things like election results; a more in-depth discussion of this topic can be found in Chapter 5.

Appeal to Ignorance and Shifting the Burden of Proof

Sometimes we lack adequate evidence that a claim is true or false; in such situations it would seem wise to be cautious and search for further evidence. Sometimes, however, people take the lack of proof on one side to constitute a proof of the other side. This type of reasoning is known as the appeal to ignorance; it consists of arguing either that a claim is true because it has not been proved to be false or that a claim is false because it has not been proved to be true.

Here is a common example on both sides of another issue:

UFO investigator: “You can’t prove that space aliens haven’t visited Earth, so they probably have.”

Skeptic: “We haven’t yet verified the existence of space aliens, so they must not exist.”

Both the believer and the skeptic in these examples mistakenly take a failure to prove one side to constitute a demonstration of the truth of the other side. It is sometimes said that the absence of evidence is not evidence of absence. However, there are some exceptions in which such inferences are justified. Take a look at the following example:

John: There are no elephants in this room.

Cindy: How do you know?

John: Because I do not see any.

In this case the argument may be legitimate. If there were an elephant in the room, one would probably notice. Another example might be in a medical test in which the presence of an antibody would trigger a certain reaction in the lab. The absence of that reaction is then taken to demonstrate that the antibody is not present. For such reasoning to work, we need to have good reason to believe that if the antibody were present, then the reaction would be observed.

However, for that type of reasoning to work in the case of space aliens, the believer would have to demonstrate that if there were none, then we would be able to prove that. Likewise, the skeptic’s argument would require that if there were space aliens, then we would have been able to verify it. Such a statement is likely to be true for the case of an elephant, but it is not likely to be the case for space aliens, so the appeal to ignorance in those examples is fallacious.

The appeal to ignorance fallacy is closely related to the fallacy of shifting the burden of proof, in which those who have the responsibility of demonstrating the truth of their claims (the so-called burden of proof) simply point out the failure of the other side to prove the opposing position. People who do this have not met the burden of proof but have merely acted as though the other side has the burden instead. Here are two examples of an appeal to ignorance that seem to shift the burden of proof:

Power company: “This new style of nuclear power plant has not been proved to be unsafe; therefore, its construction should be approved.” (It would seem that, when it comes to high degrees of risk, the burden of proof would be on the power plant’s side to show that the proposed plants are safe.)

Prosecuting attorney: “The defense has failed to demonstrate that their client was not at the scene of the crime. Therefore, we must put this criminal in jail.” (This prosecutor seems to assume that it is the duty of the defense to demonstrate the innocence of its client, when it is actually the prosecution’s responsibility to show that the accused is guilty beyond reasonable doubt.)

It is not always easy to determine who has the burden of proof. However, here are some reasonable questions to ask when it comes to making such a determination:

Which side is trying to change the status quo? One person trying to get another person to change views will usually have the burden of proof; otherwise, the other person will not be persuaded to change.

Which side’s position involves greater risk? A company that designs parachutes or power plants, for example, would be expected to demonstrate the safety of the design.

Is there a rule that determines the burden of proof in this context? For example, the American legal system requires that, in criminal cases, the prosecution prove its case “beyond reasonable doubt.” Debates often put the burden of proof on the affirmative position.

Generally speaking, we should arrive at conclusions based on good evidence for that conclusion, not based on an absence of evidence to the contrary. An exception to this rule is the case of negative tests: cases in which if the claim P is true, then result Q would very likely be observed. In these cases, if the result Q is not observed, then we may infer that P is unlikely to be true. In general, when one side has the burden of proof, it should be met; simply shifting the burden to the other side is a sneaky and fallacious move.
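The logic of a negative test can be made precise with a quick application of Bayes’ theorem. The numbers in the sketch below are assumptions chosen purely for illustration; the point is that the inference from “Q was not observed” to “P is false” is only as strong as the claim that Q would very likely be observed if P were true.

```python
# A minimal sketch of "negative test" reasoning via Bayes' theorem.
# All probabilities here are illustrative assumptions.

prior_p = 0.5           # initial credence that claim P is true
q_given_p = 0.99        # if P is true, result Q is almost always observed
q_given_not_p = 0.20    # if P is false, Q is observed far less often

# Probability of NOT observing Q under each hypothesis
not_q_given_p = 1 - q_given_p           # 0.01
not_q_given_not_p = 1 - q_given_not_p   # 0.80

# Bayes' theorem: P(P | Q not observed)
evidence = not_q_given_p * prior_p + not_q_given_not_p * (1 - prior_p)
posterior_p = (not_q_given_p * prior_p) / evidence

print(f"P(P | Q not observed) = {posterior_p:.3f}")  # about 0.012
```

With these numbers, failing to observe Q drops the probability of P from 50% to about 1%, so here the absence of evidence really is evidence of absence. For the elephant, something like the 0.99 detection probability is plausible; for space aliens, there is no reason to think detection would be anywhere near that reliable, which is why the appeal to ignorance fails in that case.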

Appeal to Inadequate Authority

An appeal to authority is the reasoning that a claim is true because an authority figure said so. Some people are inclined to think that all appeals to authority are fallacious; however, that is not the case. Appeals to authority can be quite legitimate if the person cited actually is an authority on the matter. However, if the person cited is not in fact an authority on the subject at hand, then it is an appeal to inadequate authority.

Suppose, for example, that a famous guitar player appears in an advertisement endorsing a product that has nothing to do with music. If the guitar player were stating his position on the best guitar to purchase, we might be inclined to follow his advice, as he would be a legitimate authority. In this case, however, he is an inadequate authority.

To see why appeals to authority in general are necessary, try to imagine how you would do in college if you did not listen to your teachers, textbooks, or any other sources of information. In order to learn, it is essential that we listen to appropriate authorities. However, many sources are unreliable, misleading, or even downright deceptive. It is therefore necessary to learn to distinguish reliable sources of authority from unreliable sources. How do we know which is which? Here are some good questions to ask when considering whether to trust a given source or authority:

Is this the kind of topic that can be settled by an appeal to authority?

Is there much agreement among authorities about this issue?

Is this person or source an actual authority on the subject matter in question?

Can this authority be trusted to be honest in this context?

Am I understanding or interpreting this authority correctly?

If the answer to all of these is “yes,” then it may be a legitimate appeal to authority; if the answer to any of them is “no,” then it may be fallacious. Here are some examples of how appeals to authority can fail at each of these questions:

Is this the kind of topic that can be settled by an appeal to authority?

Student: “Capitalism is wrong; Karl Marx said so.” (The morality of capitalism may not be an issue that authority alone can resolve. We should look at reasons on both sides to determine where the best arguments are.)

Is there much agreement among authorities about this issue?

Student: “Abortion is wrong. My philosophy teacher said so.” (Philosophers do carefully consider arguments about abortion, but there is no consensus among them about this topic; there are good philosophers on both sides of the issue. Furthermore, this might not be the type of question that can be settled by an appeal to authority. One should listen to the best arguments on each side of such issues rather than simply trying to appeal to an authority.)

Is this person or source an actual authority on the subject matter in question?

Voter: “Global warming is real. My congressperson said so.” (A politician may not be an actual authority on the matter, since politicians often choose positions based on likely voting behavior and who donates to their campaigns. A climatologist is more likely to be a more reliable and informed source in this field.)

Can this authority be trusted to be honest in this context?

Juror: “I know that the accused is innocent because he said he didn’t do it.” (A person or entity who has a stake in a matter is called an interested party. A defendant is definitely an interested party. It would be better to have a witness who is a neutral party.)

Am I understanding or interpreting this authority correctly?

Christian: “War is always wrong because the Bible states, ‘Thou shalt not kill.’” (This one is a matter of interpretation. What does this scripture really mean? In this sort of case, the interpretation of the source is the most important issue.)

Finally, here is an example of a legitimate appeal to authority:

“Martin Van Buren was a Democrat; it says so in the encyclopedia.” (It is hard to think of why an encyclopedia—other than possibly an openly editable resource such as Wikipedia—would lie or be wrong about an easily verifiable fact.)

It may still be hard to be certain about many issues even after listening to authorities. In such cases the best approach is to listen to and carefully evaluate the reasoning of many experts in the field, to determine to what degree there is consensus, and to listen to the best arguments for each position. If we do so, we are less prone to being misled by our own biases and the biases of interested parties.

False Dilemma

An argument presents a false dilemma, sometimes called a false dichotomy, when it makes it sound as though there were only two options when in fact there are more than just those two options. People are often prone to thinking of things in black-and-white terms, but this type of thinking can oversimplify complex matters. Here are two simple examples:

Wife to husband: “Now that we’ve agreed to get a dog, should it be a poodle or a Chihuahua?” (Perhaps the husband would rather get a Great Dane.)

Online survey: “Are you a Republican or a Democrat?” (This ignores many other options like Libertarian, Green, Independent, and so on. If you are in one of those other parties, how should you answer?)

Such examples can be quite manipulative, which is why this can be such a problematic fallacy. Take a look at the following examples:

Partygoer: “What is it going to be? Are you going to go drink with us, or are you going to be a loser?” (This seems to imply that there are no other options, like not drinking and still being cool.)

Activist: “You can either come to our protest or you can continue to support the abuse we are protesting.” (This assumes that if you are not protesting, you do not support the cause and in fact support the other side. Perhaps you believe there are better ways to change the system.)

Though the fallacy is called a dilemma, implying two options, the same thing can happen with more than two options—for example, if someone implies that there are only five options when there are in fact other options as well.

False Cause

Analyzing the Evidence

We must take care to avoid committing the false cause fallacy when looking at evidence. Sometimes, there is an alternative explanation or interpretation, and our inference may have no causal basis.

Critical Thinking Questions

What are some questions that we can ask to help us examine evidence?

How can a critical thinker analyze evidence to determine its truth or falsity?

The assumption that because two things are related, one of them is the cause of the other is called the fallacy of false cause. It is traditionally called post hoc ergo propter hoc (often simply post hoc), which is Latin for “after this, therefore because of this.” Clearly, not everything that happens after something else was caused by it. Take this example:

John is playing the basketball shooting game of H-O-R-S-E and tries a very difficult shot. Right before the shot someone coughs, and the ball goes in. The next time John is going to shoot, he asks that person to cough. (John seems to be assuming that the cough played some causal role in the ball going in. That seems unlikely.)

Here is a slightly more subtle example:

John is taller than Sally, and John won the election, so it must have been because he was taller. (In this case, he was taller first and then won the election, so the speaker assumes that is the reason. It is conceivable that his height was a factor, but that does not follow merely because he won; we would need more evidence to infer that was the reason.)

Large-scale correlations might be more complex, but they can commit the same fallacy. Suppose that two things, A and B, correlate highly with each other, as in this example:

The number of police cars in a city correlates highly with the amount of crime in a city. Therefore, police cars cause crime.

It does not necessarily follow that A, the number of police cars, causes B, crime. Another possibility is that B causes A; the amount of crime causes the higher number of police cars. Another option is that a third thing is causing both A and B; in this case the city’s population might be causing both. It is also possible that in some cases the correlation has no causal basis.
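The police car example can also be made concrete with a small simulation. The following sketch is hypothetical (the numbers are invented), but it shows how a third thing, city population, can drive both variables and produce a strong correlation even though neither one causes the other.

```python
import random

random.seed(0)

# Hypothetical cities: population is the common cause (the "third thing").
populations = [random.uniform(10_000, 1_000_000) for _ in range(200)]

# Police cars and crimes each depend on population, plus unrelated noise.
# Crucially, neither variable has any effect on the other.
police_cars = [p / 5_000 + random.gauss(0, 10) for p in populations]
crimes = [p / 1_000 + random.gauss(0, 50) for p in populations]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

print(f"r = {correlation(police_cars, crimes):.2f}")  # close to 1.0
```

The simulation never lets police cars influence crime or vice versa, yet the correlation comes out nearly perfect. The correlation alone cannot tell us which explanation is correct (A causes B, B causes A, a third thing causes both, or mere coincidence); settling that requires further evidence.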

7.2 Fallacies of Relevance

We have seen examples in which the premises are unfounded or do not provide adequate support for the conclusion. In extreme cases the premises are barely even relevant to the truth of the conclusion, yet somehow people draw those inferences anyway. This section will take a look at some examples of common inferences based on premises that are barely relevant to the truth of the conclusion.

Red Herring and Non Sequitur

A red herring fallacy is a deliberate attempt to distract the listener from the question at hand. It has been suggested that the phrase’s origins stem from the practice of testing hunting dogs’ skills by dragging a rotting fish across their path, thus attempting to divert the dogs from the track of the animal they are supposed to find. The best dogs could remain on the correct path despite the temptation to follow the stronger scent of the dead fish (deLaplante, 2009). When it comes to reasoning, someone who uses a red herring is attempting to steer the listener away from the path that leads to the truth of the conclusion.

Here are two examples:

Political campaigner: “This candidate is far better than the others. The flag tie he is wearing represents the greatest country on Earth. Let me tell you about the great country he represents. . . .” (The campaigner seems to be trying to get the voter to associate love for America with that particular candidate, but presumably all of the candidates love their country. In this case patriotism is the red herring; the real issue we should be addressing is which candidate’s policies would be better for the country.)

Debater in an argument about animal rights: “How can you say that animals have rights? There are humans suffering all around the world. For example, there are human beings starving in Africa; don’t you care about them?” (There may indeed be terrible issues with human suffering, but the existence of human suffering does not address the question of whether animals have rights as well. This line of thinking appears to distract from the question at hand.)

An extreme case in which someone argues in an irrelevant manner is called a non sequitur, meaning that the conclusion does not follow from the premises.

Football player: “I didn’t come to practice because I was worried about the game this week; that other team is too good!” (Logically, the talent of the other team would seem to give the player all the more reason to go to practice.)

One student to another: “I wouldn’t take that class. I took it and had a terrible time. Don’t you remember: That semester, my dog died, and I had a car accident. It was terrible.” (These events are irrelevant to the quality of the class, so this inference is unwarranted.)

Whereas a red herring seems to take the conversation to a new topic in an effort to distract people from the real question, a non sequitur may stay on topic but simply make a terrible inference—one in which the conclusion is entirely unjustified by the premises given.

Appeal to Emotion

The appeal to emotion is a fallacy in which someone argues for a point based on emotion rather than on reason. As noted in Chapter 1, people make decisions based on emotion all the time, yet emotion is unreliable as a guide. Many philosophers throughout history thought that emotion was a major distraction from living a life guided by reason. The ancient Greek philosopher Plato, for example, compared emotion and other desires to a beast that tries to lead mankind in several different directions at once (Plato, 360 BCE). The solution to this problem, Plato reasons, is to allow reason, not emotion, to be in charge of our thinking and decision making. Consider the following examples of overreliance on emotion:

Impulsive husband: “Honey, let’s buy this luxury car. Think of how awesome it would be to drive it around. Plus, it would really impress my ex-coworkers.” (This might feel like the fun choice at the time, but what about when they cannot afford it in a few years?)

Columnist: “Capital punishment should be legal. If someone broke into your house and killed your family, wouldn’t you want him dead?” (You perhaps would want him dead, but that alone does not settle the issue. There are many other issues worth considering, including the issue of innocent people accidentally getting on death row, racism in the system, and so on.)

This is not to say that emotion is never relevant to a decision. The fun of driving a car is one factor (among many) in one’s choice of a car, and the emotions of the victim’s family are one consideration (out of many) in whether capital punishment should be allowed. However, we must not allow that emotion to override rational consideration of the best evidence for and against a decision.

One specific type of appeal to emotion tries to get someone to change his or her position only because of the sad state of an individual affected. This is known as the appeal to pity.

Student: “Professor, you need to change my grade; otherwise, I will lose my scholarship.” (The professor might feel bad, but to base grades on that would be unjust to other students.)

Salesman: “You should buy this car from me because if I don’t get this commission, I will lose my job!” (Whether or not this car is a good purchase is not settled by which salesperson needs the commission most. This salesman appears to play on the buyer’s sense of guilt.)

A cartoon of a marketing meeting. The strategy displayed on a chart is: “If you don’t buy our products we’ll surely become very sad and probably start crying.” There is a quote at the bottom of the cartoon that reads, “Hey! We’ve never tried a pity strategy before.”

With an appeal to pity, it is important to recognize when the appeal is fallacious versus genuine. Telling possible consumers you will cry if they do not purchase your product is most likely a fallacious appeal to pity.

As with other types of appeal to emotion, there are cases in which a decision based on pity is not fallacious. For example, a speaker may speak truthfully about terrible conditions of children in the aftermath of a natural disaster or about the plight of homeless animals. This may cause listeners to pity the children or animals, but if this is pity for those who are actually suffering, then it may provide a legitimate motive to help. The fallacious use of the appeal to pity occurs when the pity is not (or should not be) relevant to the decision at hand or is used manipulatively.

Another specific type of appeal to emotion is the appeal to fear. The appeal to fear is a fallacy that tries to get someone to agree to something out of fear when it is contrary to a rational assessment of the evidence.

Mom: “You shouldn’t swim in the ocean; there could be sharks.” (The odds of being bitten by a shark are much smaller than the odds of being struck by lightning [International Wildlife Museum, n.d.]. However, the fear of sharks tends to produce a strong aversion.)

Dad: “Don’t go to that country; there is a lot of crime there.” (Here you should ask: How high is the crime rate? Where am I going within that country? Is it much more dangerous than my own country? How important is it to go there? Can I act so that I am safe there?)

Political ad: “If we elect that candidate, then the economy will collapse.” (Generally, all candidates claim that their policies will be better for the economy. This statement seems to use fear in order to change votes.)

This is not to say that fear cannot be rational. If, in fact, many dangerous sharks have been seen recently in a given area, then it might be wise to go somewhere else. However, a fallacy is committed if the fears are exaggerated—as they often are—or if one allows the emotion of fear to make the decision rather than a careful assessment of the evidence.

The appeal to fear has been used throughout history. Many wars, for example, have been promoted by playing on people’s fears of an outside group or of the imagined consequences of nonaction.

Politician: “We have to go to war with that country; otherwise its influence will destroy our civilization.” (There may or may not be good rational arguments for the war, but getting citizens to support it out of exaggerated fears is to commit the appeal to fear fallacy.)

Sometimes, a person using the appeal to fear personally threatens the listener if she or he does not agree. This fallacy is known as the appeal to force. The threat can be direct:

Boss: “If you don’t agree with me, then you are fired.”

Or the threat can be implied:

Mob boss: “I’d sure like to see you come around to our way of seeing things. It was a real shame what happened to the last guy who disagreed with us.”

Either way, the listener is being coerced into believing something rather than rationally persuaded that it is true. A statement of consequences, however, may not constitute an appeal to force fallacy, as in the following example:

Boss: “If you don’t finish that report by Friday, then you will be fired.” (This example may be harsh, but it might not be fallacious because the boss is not asking you to accept something as true just to avoid consequences, even if it is contrary to evidence. This boss is simply giving you the information you need: the report must be done on time if you want to keep your job.)

It may be less clear if the consequences are imposed by a large or nebulous group:

Campaign manager: “If you don’t come around to the party line on this issue, then you will not make it through the primary.” (This gives the candidate a strong incentive to accept his or her party’s position on the issue; however, is the manager threatening force or just stating the facts? It could be that the implied force comes from the voters themselves.)

It is sometimes hard to maintain integrity in life when there are so many forces giving us all kinds of incentives to conform to popular or lucrative positions. Understanding this fallacy can be an important step in recognizing when those influences are being applied.

When it comes to appeals to emotions in general, it is good to be aware of our emotions, but we should not allow them to be in charge of our decision making. We should carefully and rationally consider the evidence in order to make the best decisions. We should also not let those competing forces distract us from trusting only the best and most rational assessment of the evidence.

Appeal to Popular Opinion

The appeal to popular opinion fallacy, also known as the appeal to popularity fallacy, bandwagon fallacy, or mob appeal fallacy, occurs when one accepts a point of view because that is what most people think. The reasoning pattern looks like this:

“Almost everyone thinks that X is true. Therefore, X must be true.”

The appeal to popular opinion fallacy can be harmless, like when you see a movie because all your friends said it was great, but other times it can have negative consequences, such as bullying or discriminating against others.

The error in this reasoning seems obvious: Just because many people believe something does not make it true. After all, many people used to believe that the sun moved around the earth, that women should not vote, and that slavery was morally acceptable. While these are all examples of past erroneous beliefs, the appeal to popular opinion fallacy remains more common than we often realize. People tend to default to the dominant views of their respective cultures, and it takes guts to voice a different opinion from what is normally accepted. Because people with uncommon views are often scorned and because people strongly want to fit in to their culture, our beliefs tend not to be as autonomous as one might imagine.

The philosopher Immanuel Kant discussed the great struggle to learn to think for ourselves. He defined enlightenment as the ability to use one’s own understanding without oversight from others (Kant, 1784). However, extricating ourselves from bandwagon thinking is harder than one might think. Consider these examples of popular opinions that might seem appealing:

Patriot: “America is the best country in the world; everyone here knows it.” (To evaluate this claim objectively, we would need a definition of best and relevant data about all of the countries in the world.)

Animal eater: “It would be wrong to kill a dog to eat it, but killing a pig for food is fine. Why? Because that’s what everyone does.” (Can one logically justify this distinction? It seems simply to be based on a majority opinion in one’s culture.)

Business manager: “This business practice is the right way to do it; it is what everyone is doing.” (This type of thinking can stifle innovation or even justify violations of ethics.)

General formula: “Doing thing of type X is perfectly fine; it is common and legal.” (You could fill in all kinds of things for X whose ethics people seem to take for granted without thinking about them. Have you ever questioned the ethics of what is “normal”?)

It is also interesting to note that the “truths” of one culture are often different from the “truths” of another. This may not be because truth is relative but because people in each culture are committing the bandwagon fallacy rather than thinking independently. Do you think that we hold many false beliefs today just because a majority of people also believe them? It is possible that much of the so-called common sense of today could someday come to be seen as once popular myths.

It is often wise to listen to the wisdom of others, including majority opinions. However, just because the majority of people think and act a certain way does not mean that it is right or that it is the only way to do things; we should learn to think independently and rationally when deciding what things are right and true and best.

Appeal to Tradition

Closely related to the appeal to popular opinion is the appeal to tradition, which involves believing in something or doing something simply because that is what people have always believed and done. One can see that this reasoning is fallacious because people have believed and done false and terrible things for millennia. It is not always easy, however, to undo these thought patterns. For example, people tried to justify slavery for centuries based partly on the reasoning that it had always been done and was therefore “right” and “natural.” Some traditions may not be quite as harmful. Religious traditions, for example, are often considered to be valuable to people’s identity and collective sense of meaning. In seeking to avoid the fallacy, therefore, it is not always easy to distinguish which things from history are worth keeping. Here is an example:

“This country got where it is today because generations of stay-at-home mothers taught their children the importance of love, hard work, and respect for their elders. Women should stay at home with their kids.” (Is this a tradition that is worth keeping or is it a form of social discrimination?)

The fallacy would be to assume that something is acceptable simply because it is a tradition. We should be open to rational evaluation of whether a tradition is acceptable or whether it is time to change. For example, in response to proposals of social change, some will argue:

“If people start changing aspect X of society, then our nation will be ruined.” (People have used such reasoning against virtually every form of positive social change.)

You may be realizing that sometimes whether a piece of reasoning is fallacious can be a controversial question. Sometimes traditions are good; however, we should not assume that something is right just because it is a tradition. To justify rejecting a proposed change, there would need to be more evidence that the change would be bad than evidence that it would be good. As with appeals to popularity, it is important to reason carefully and independently about what is best, despite the biases of our culture.

Ad Hominem and Poisoning the Well

Ad hominem is Latin for “to the person.” One commits the ad hominem fallacy when one rejects or dismisses a person’s reasoning because of who is saying it. Here are some examples:

“Who cares what Natalie Portman says about science? She’s just an actress.” (Despite being an actress, Natalie Portman has relevant background.)

“Global warming is not real; climate change activists drive cars and live in houses with big carbon footprints.” (Whether the advocates are good personal exemplars is independent of whether the problem is real or whether their arguments are sound.)

“I refuse to listen to the arguments about the merits of home birth from a man.” (A man may not personally know the ordeal of childbirth, but that does not mean that a man cannot effectively reason about the issue.)

It is not always a fallacy to point out who is making a claim. A person’s credentials are often relevant to that person’s credibility as an authority, as we discussed earlier with the appeal to authority. However, a person’s personal traits do not refute that person’s reasoning. The difference, then, is whether one rejects or ignores that person’s views or reasoning due to those traits. To simply assume that someone’s opinion has no merit based on who said it is to commit the fallacy; to question whether or not we should trust someone as an authority may not be.

This next example commits the ad hominem fallacy:

“I wouldn’t listen to his views about crime in America; he is an ex-convict.” (This statement is fallacious because it ignores the person’s reasoning. Ex-convicts sometimes know a lot about problems that lead to crime.)

This example, however, may not commit the fallacy:

“I wouldn’t trust his claims about lung cancer; he works for the tobacco industry.” (This simply calls into question the credibility of the person due to a source of bias.)

One specific type of ad hominem reasons that someone’s claim should not be listened to if he or she does not live up to that claim. It is called the tu quoque (Latin for “you too”). Here is an example:

“Don’t listen to his claims that smoking is bad; he smokes!” (Even if the person is a hypocrite, that does not mean his claims are false.)

Another type of fallacy commits the ad hominem in advance. It is called poisoning the well: when someone attempts to discredit a person’s credibility ahead of time, so that all those who are listening will automatically reject whatever the person says.

“The next speaker is going to tell you all kinds of things about giving money to his charity, but keep in mind that he is just out to line his pockets with your money.” (This may unfairly color everyone’s perceptions of what the speaker says.)

To ignore arguments because of their source is often lazy reasoning. A logical thinker neither rejects nor blindly accepts whatever someone says, but carefully evaluates the quality of the reasoning used on both sides. We should evaluate the truth or falsity of people’s claims on the merits of the claims themselves and based on the quality of the reasoning for them.

7.3 Fallacies of Clarity

Another category of fallacies consists of arguments that depend on an unclear use of words; they are called fallacies of clarity. Problems with clarity often result from words in our language that are vague (imprecise in meaning, with so-called gray areas) or ambiguous (having more than one meaning). Fallacies of clarity can also result from misunderstanding or misrepresenting others’ arguments.

The Slippery Slope

The slippery slope fallacy occurs when someone reasons, without adequate justification, that doing one thing will inevitably lead to a whole chain of other things, ultimately resulting in intolerable consequences; therefore, the person reasons, we should not do that first thing.

It is perfectly appropriate to object to a policy that will truly have bad consequences. A slippery slope fallacy, however, merely assumes that a chain of events will follow, leading to a terrible outcome, when such a chain is far from inevitable. Such assumptions cause people to reject the policy out of fear rather than out of actual rational justification.

Here is an example:

Student: “Why can’t I keep my hamster in my dorm room?”

Administrator: “Because if we let you keep your hamster, then other students will want to bring their snakes, and others will bring their dogs, and others will bring their horses, and it will become a zoo around here!” (There may be good reasons not to allow hamsters in dorm rooms—allergies, droppings, and so on—but the idea that it will inevitably lead to allowing all kinds of other large, uncaged animals seems to be unjustified.)

As with many fallacies, however, there are times when similar reasoning may actually be good reasoning. For example, an alcoholic may reason as follows:

“I can’t have a beer because if I do, then it will lead to more beers, which will lead to whiskey, which will lead to me getting in all kinds of trouble.”

For an alcoholic, this may be perfectly good reasoning. Based on past experience, one may know that one action leads inevitably to another. One way to test if an argument commits a slippery slope fallacy, as opposed to merely raising legitimate questions about the difficulty of drawing a line, is to ask whether it would be possible to draw a line that would stop the slippery slope from continuing. What do you think about the following examples?

“We can’t legalize marijuana because if we do, then we will have to legalize cocaine and then heroin and then crack, and everyone will be a druggie before you know it!”

“If you try to ban pornography, then you will have to make the distinction between pornography and art, and that will open the door to all kinds of censorship.”

Some examples may present genuine questions as to where to draw a line; others may represent slippery slope fallacies. The question is whether those consequences are likely to follow from the initial change.

As some examples show, the difficulty of drawing precise lines is sometimes relevant to important political questions. For example, in the abortion debate, there is a very important question about at what point a developing embryo becomes a human being with rights. Some say that it should be at conception; some say at birth. The Supreme Court, in its famous Roe v. Wade decision (1973), chose the point of viability—the point at which a fetus could survive outside the womb; the decision remains controversial today.

True, it is difficult to decide exactly where the line should be drawn, but the failure to draw one at all can lead to slippery slope problems. To reason that we should not make any distinctions because it is hard to draw the line is like reasoning that there should be no speed limit because it is difficult to decide exactly when fast driving becomes unsafe. The trick is to find good reasons why a line should be drawn in one place rather than another.

Another example is in the same-sex marriage debate. Some feel that if same-sex marriage were to be universally legalized, then all kinds of other types of objectionable marriage will become legal as well. Therefore, they argue, we must not legalize it. This would appear to commit the slippery slope fallacy, because there are ways that gay marriage laws could be written without leading to other objectionable types of marriages becoming legal.

Moral of the Story: The Slippery Slope

It can be difficult to draw sharp boundaries and create clear definitions, but we must not allow this difficulty to prevent us from making the best and most useful distinctions we can. Policy decisions, for example, should be judged with careful reasoning, making the best distinctions we can, not by the mere application of slippery slope reasoning.

Equivocations

Equivocation is a fallacy based on ambiguity. An ambiguous term is a word that has more than one meaning. For example, fast can mean “going without food,” or it can mean “rapid.” Some ambiguities are used for humor, as in the joke, “How many therapists does it take to change a lightbulb? Just one, but the lightbulb has to really want to change!” This, of course, is a pun on two meanings of change. However, when ambiguity is used in reasoning, it often creates an equivocation, in which an ambiguous word is used with one meaning at one point in an argument and another meaning at another point in the argument in a misleading way. Take the following argument:

Mark plays tennis.

Mark is poor.

Therefore, Mark is a poor tennis player.

If the conclusion meant that Mark is poor and a tennis player, then this would be a logically valid argument. However, the conclusion actually seems to mean that he is bad at playing tennis, which does not follow from the fact that he is poor. This argument seems to switch the meaning of the word poor between the premises and the conclusion. As another example, consider the following exchange:

Person A: “I broke my leg; I need a doctor!”

Person B: “I am a doctor.”

Person A: “Can you help me with my leg?”

Person B: “I have a PhD in sociology; what do I know about medicine?”

Can you identify the equivocation? Person B seemed to reason as follows:

I have a PhD in sociology.

Therefore, I am a doctor.

Although this reasoning is right in some sense, it does not follow that person B is the type of doctor that person A needs. The word doctor is being used ambiguously.

Here is another example:

Officer: Have you been drinking at all tonight?

Driver: Yes.

Officer: Then you are under arrest.

Driver: But I only had a soda!

Clearly, the officer came to a false conclusion because he and the driver meant different things by drinking. See A Closer Look: Philosophical Equivocations for more examples.

It is very important when reasoning (or critiquing reasoning) that we are consistent and clear about our meanings when we use words. A subtle switch in meanings within an argument can be highly misleading and can mean that arguments that initially appear to be valid may actually be invalid once we correctly understand the terms involved.

A Closer Look: Philosophical Equivocations

In real life, equivocations are not always so obvious. The philosopher John Stuart Mill, for example, attempted to demonstrate his moral theory, known as utilitarianism, by arguing that, if the only thing that people desire is pleasure, then pleasure is the only thing that is desirable (Mill, 1879). Many philosophers think that Mill is equivocating between two different meanings of desirable. One interpretation means “able to be desired,” which he uses in the premise. The other interpretation is “thing that is good or should be desired,” which he uses in the conclusion. His argument would therefore be invalid, based on a subtle shift in meaning.

Another historical example is one of the most famous philosophical arguments of all time. The philosopher Saint Anselm long ago presented an argument for the existence of God based on the idea that the word God means the greatest conceivable thing and that a thing must exist to be greatest (Anselm, n.d.). His argument may be simplified as follows:

God means the greatest conceivable thing.

A thing that exists is greater than one that does not.

We can conceive of God existing.

Therefore, God must exist.

Though this is still an influential argument for the existence of God, some think it commits a subtle equivocation in its application of the word great. The question is whether it is talking about the greatness of the concept or the greatness of the thing. The first premise seems to take it to be about the greatness of the concept. The second premise, however, seems to depend on talking about the thing itself (actual existence does not change the greatness of the concept). If this analysis is right, then the word greatest has different meanings in the first two premises, and the argument may commit an equivocation. In that case the argument that appears to be valid may in fact be subtly invalid.

The Straw Man

Have you ever heard your views misrepresented? Most of us have. Whether it is our religion, our political views, or our recreational preferences, we have probably heard someone make our opinions sound worse than they are. If so, then you know that can be a very frustrating experience.

A cartoon showing two men. One man says, “I don’t want to go to a bar tonight.” The other man responds, “You don’t believe in having fun, do you?”

Concept by Christopher Foster | Illustration by Steve Zmina

Misrepresenting the views of the other side through a straw man fallacy can be frustrating and will fail to advance the issue.

The straw man fallacy is an attack on a person’s position based on a (deliberate or otherwise) misrepresentation of his or her actual views. The straw man fallacy is so named because it is like beating up a scarecrow (a straw man) rather than defeating a real person (or the real argument). The straw man fallacy can be pernicious; it is hard for any debate to progress if the differing sides are not even fairly represented. We can hope to refute or improve on a view only once we have understood and represented it correctly.

If you have listened to people arguing about politics, there is a good chance that you have heard statements like the following:

Democrat: “Republicans don’t care about poor people.”

Republican: “Democrats want the government to control everything.”

These characterizations do not accurately represent the aims of either side. One way to tell whether this is a fair representation is to determine whether someone with that view would agree with the characterization of their view. People may sometimes think that if they make the other side sound dumb, their own views will sound smart and convincing by comparison. However, this approach is likely to backfire. If our audience is wise enough to know that the other party’s position is more sophisticated than was expressed, then it actually sounds unfair, or even dishonest, to misrepresent their views.

It is much harder to refute a statement that reflects the complexity of someone’s actual views. Can you imagine if politically partisan people spoke in a fairer manner?

Democrat: “Republicans believe that an unrestrained free market incentivizes innovation and efficiency, thereby improving the economy.”

Republican: “Democrats believe that in a country with as much wealth as ours, it would be immoral to allow the poorest among us to go without life’s basic needs, including food, shelter, and health care.”

That would be a much more honest world; it would also be more intellectually responsible, but it would not be as easy to make other people sound dumb. Here are more—possibly familiar—examples of straw man fallacies, used by those on opposing sides of a given issue:

Environmentalist: “Corporations and politicians want to destroy the earth. Therefore, we should pass this law to stop them.” (Perhaps the corporations and politicians believe that corporate practices are not as destructive as some imply or that the progress of industry is necessary for the country’s growth.)

Developer: “Environmentalists don’t believe in growth and want to destroy the economy. Therefore, you should not oppose this power plant.” (Perhaps environmentalists believe that the economy can thrive while shifting to more eco-friendly sources.)

Young Earth creationist: “Evolutionists think that monkeys turned into people! Monkeys don’t turn into people, so their theory should be rejected.” (Proponents of evolution would state that there was a common ancestor millions of years ago. Genetic changes occurred very gradually over thousands and thousands of generations, leading to eventual species differences.)

Atheist: “Christians don’t believe in science. They think that Adam and Eve rode around on dinosaurs! Therefore, you should not take their views seriously.” (Many Christians find their religion to be compatible with science or have nonliteral interpretations of biblical creation.)

Closely related to the straw man fallacy is the appeal to ridicule, in which one simply seeks to make fun of another person’s view rather than actually refute it. Here are some examples:

“Vegans are idiots who live only on salad. Hooray for bacon!” (Actually, vegans are frequently intelligent people who object to the confinement of animals for food.)

“People with those political opinions are Nazis!” (Comparisons to Nazis in politics are generally clichéd, exaggerated, and disrespectful to the actual victims of the Holocaust. See Chapter 8 for a discussion of the fallacy reductio ad Hitlerum.)

In an academic or any other context, it is essential that we learn not to commit the straw man fallacy. If you are arguing against a point of view, it is necessary first to demonstrate that you have accurately understood it. Only then have you demonstrated that you are qualified to discuss its truthfulness. Furthermore, the attempt to ridicule others’ views is rationally counterproductive; it does not advance the discussion and seeks only to mock other people. (See Everyday Logic: Love and Logic for how you can avoid the straw man fallacy and the appeal to ridicule.)

When we seek to defend our own views, the intellectually responsible thing to do is to understand opposing viewpoints as fully as possible and to represent them fairly before we give the reasons for our own disagreement. The same applies in philosophy and other academic topics. If someone wants to pontificate about a topic without having understood what has already been done in that field, then that person simply sounds naive. To be intellectually responsible, we have to make sure to correctly understand what has been done in the field before we begin to formulate our own contribution.

Everyday Logic: Love and Logic

When it comes to real-life disagreements, people can become very upset—even aggressive. This is an understandable reaction, particularly if the disagreement concerns positions we think are wrong or perspectives that challenge our worldview. However, this kind of emotional reaction can lead to judgments about what the other side may believe—judgments that are not based on a full and sophisticated understanding of what is actually believed and why. This pattern can be the genesis of much of the hostility we see surrounding controversial topics. It can also lead to common fallacies such as the straw man and appeal to ridicule, which are two of the most pernicious and hurtful fallacies of them all.

Logic can help provide a remedy to these types of problems. Logic in its fullest sense is not just about creating arguments to prove our positions right—and certainly not just about proving others wrong. It is about learning to discover truth while avoiding error, which is a goal all participants can share. Therefore, there need not be any losers in this quest.

If we stop short of a full appreciation of others’ perspectives, then we are blocked from a full understanding of the topic at hand. One of the most important marks of a sophisticated thinker is the appreciation of the best reasoning on all sides of each issue.

We must therefore resist the common temptation to think of people with opposing positions as “stupid” or “evil.” Those kinds of judgments are generally unfair and unkind. Instead we should seek to expand our own points of view and remove any animosity. Here are some places to begin:

We can read what smart people have written to represent their own views about the topic, including reading top scholarly articles explaining different points of view.

We can really listen with intelligence, openness, and empathy to people who feel certain ways about the topic without seeking to refute or minimize them.

We can seek to put ourselves “in their shoes” with sensitivity and compassion.

We can speak in ways that reflect civility and mutual understanding.

It will take time and openness, but eventually it is possible to appreciate more fully a much wider variety of perspectives on life’s questions.

Furthermore, once we learn to fairly represent opposing points of view, we may not find those views to be as crazy as we once thought. Even the groups that we initially think of as the strangest actually have good reasons for their beliefs. We may or may not come to agree, but only in learning to appreciate why these groups have such beliefs can we truly say that we understand their views. The process and effort to do so can make us more civil, more mature, more sophisticated, more intelligent, and more kind.

Fallacy of Accident

The fallacy of accident consists of applying a general rule to cases in which it is not properly applied. Often, a general rule is true in most cases, but people who commit this fallacy talk as though it were always true and apply it to cases that could easily be considered to be exceptions.

Some may find the name of this fallacy confusing. It is called the fallacy of accident because someone committing this fallacy confuses the “essential” meaning of a statement with its nonessential, or “accidental,” meaning. It is sometimes alternatively called dicto simpliciter, meaning “unqualified generalization” (Fallacy Files, n.d.). Here are some examples:

“Of course ostriches must be able to fly. They are birds, and birds fly.” (There clearly are exceptions to that general rule, and ostriches are among them.)

“If you skip class, then you should get detention. Therefore, because you skipped class in order to save someone from a burning building, you should get detention.” (This may be an extreme case, but it shows how a misapplication of a general rule can go astray.)

“Jean Valjean should go to prison because he broke the law.” (This example, from the novel Les Misérables, involves a man serving many years in prison for stealing bread to feed his starving family. In this case the law against stealing perhaps should not be applied as harshly when there are such extenuating circumstances.)

The last example raises the issue of sentencing. One area in which the fallacy of accident can occur in real life is with extreme sentencing for some crimes. In such cases, though an action may meet the technical definition of a type of crime under the law, it may be far from the type of case that legislators had in mind when the sentencing guidelines were created. This is one reason that some argue for the elimination of mandatory minimum sentencing.

Another example in which the fallacy of accident can occur is in the debate surrounding euthanasia, the practice of intentionally ending a person’s life to relieve her or him of long-term suffering from a terminal illness. Here is an argument against it:

It is wrong to intentionally kill an innocent human being.

Committing euthanasia is intentionally killing an innocent human being.

Therefore, euthanasia is wrong.

The moral premise here is generally true; however, when we think of the rule “It is wrong to intentionally kill an innocent human being,” what we typically have in mind is one person willfully killing another without justification. In the case of euthanasia, we have a person willingly terminating his or her own life with a strong justification. Whatever one’s feelings about euthanasia, the issue is not settled by simply applying the general rule that it is wrong to kill a human being. To use that rule seems to oversimplify the issue in a way that misses the subtleties of this specific case. An argument that properly addresses the issue will appeal to a moral principle that makes sense when applied to the specific issues that pertain to the case of euthanasia itself.

It is difficult to make general rules that do not have exceptions. Therefore, when specific troubling cases come up, we should not simply assume the rule is perfect but rather consider the merits of each case in light of the overall purpose for which we have the rule.

Fallacies of Composition and Division

Two closely related fallacies come from confusing the whole with its parts. The fallacy of composition occurs when one reasons that a whole group must have a certain property because its parts do. Here is an example:

Because the citizens of that country are rich, it follows that the country is rich. (This may not be the case at all; what if the government has outspent its revenue?)

You should be able to see why this one reaches an incorrect conclusion:

“If I stand up at a baseball game, then I will be able to see better. Therefore, if everyone stands up at the baseball game, then everyone will be able to see better.”

This statement seems to make the same mistake as the baseball example:

If the government would just give everyone more money, then everyone would be wealthier. (Actually, giving away money to all would probably reduce the value of the nation’s currency.)

A similar fallacy, known as the fallacy of division, does the opposite. Namely, it draws conclusions about members of a population based on characteristics of the whole. Examples might include the following:

That country is wealthy; therefore, its citizens must be wealthy. (This one may not follow at all; the citizens could be much poorer than the country as a whole.)

That team is the best; the players on the team must be the best in the league. (Although the ability of the team has a lot to do with the skills of the players, there are also reasons, including coaching and teamwork, why a team might outperform the average talent of its roster.)

These types of fallacies can lead to stereotyping as well, in which people arrive at erroneous conclusions about a group because of (often fallacious) generalizations about its members. Conversely, people often make assumptions about individuals because of (often fallacious) views about the whole group. We should be careful when reasoning about populations, lest we commit such harmful fallacies.

CHAPTER 8

8.1 Obstacles to Critical Thinking: The Self

In order to improve our ability to think critically and logically, we must first be aware that we ourselves are already highly prone to thinking uncritically and irrationally. Some of the reasons for this come from outside of us. Part of thinking well is the ability to focus on the actual claim being made and the reasons being put forth to support that claim. However, this is often difficult because so many other things might be going on, such as watching a TV commercial, listening to a political candidate, or talking with a friend, coworker, or boss. However, sometimes our tendency to be uncritical comes from our own beliefs—which we often accept without question—as well as inherent biases, or prejudices. For example, we might be affected by the status or other features of the person advancing the argument: Perhaps a claim made by your boss will be regarded as true, whereas the same claim made by a coworker might be subject to more critical scrutiny. Our positive biases lure us into favoring the views of certain people, whereas our negative biases often cause us to reject the views of others. Unfortunately, these responses are often automatic and unconscious but nevertheless leave us vulnerable to manipulation and deceit. Some people have been known to win arguments simply because they speak more loudly or assert their position with more force than their opponents. It is thus important to learn to recognize any such biases. This section will examine what are known as stereotypes and cognitive biases.

Stereotypes

Some stereotypes, like the notion that doctors have poor penmanship, are relatively harmless and frequently joked about. Consider how many stereotypes like this one you encounter each day.

A stereotype is a judgment about a person or thing based solely on the person or thing being a member of a group or of a certain type. Everybody stereotypes at times. Think about a time when you may have been judged—or in which you may have judged someone else—based only on gender, race, class, religion, language (including accent), clothes, height, weight, hair color, or some other attribute. Generalizations based on a person’s attributes often become the basis of stereotypes. Examples include the stereotypes that men are more violent than women, blonds are not smart, tall people are more popular, or fat people are lazy. (See A Closer Look: A Common Stereotype for another example.) Note that stereotypes can be positive or negative, harmless or harmful. Some stereotypes form the basis for jokes; others are used to justify oppression, abuse, or mistreatment. Even if a stereotype is not used explicitly to harm others, it is better to avoid drawing conclusions in this way, particularly if the evidence that supports such characterizations is partial, incomplete, or in some other way inadequate.

If you are thinking to yourself that you are already sensitive to the negative effects of stereotyping and thus guard your speech and actions carefully, you may want to think again. The fact is that we all accept certain stereotypes without even noticing. Our culture, upbringing, past experiences, and a myriad of other influences shape certain biases in our belief systems; these become so entrenched that we do not stop to question them. It is quite shocking to realize that we can be prone to prejudice. After all, most of us do not use stereotypes consciously—for example, we do not always think such things as “that person is of ethnicity X, so she must really like Y.” This is why stereotyping is both common and difficult to guard against. One of the advantages of logical reasoning is that it helps us develop the habits of thinking before acting and of questioning beliefs before accepting them. Is the conclusion I am drawing about this person justified? Is there good evidence? How specific is my characterization of this person? Is that characterization fair? Would I approve of someone drawing conclusions about me on the same amount of evidence? These are questions we should ask ourselves whenever we make a judgment about someone.

How Preexisting Beliefs Distort Logical Reasoning

Experiments demonstrate that preexisting beliefs can distort logical reasoning. People are much more skeptical about information they do not want to believe than information they do. A challenge, however logical, often only entrenches people in their own position.

It can be even more difficult to avoid using stereotypes when some of these generalizations turn out to be accurate or have some support. For instance, it is rare to find an overweight marathon runner, a tall jockey, or a short basketball player. Thus, one might hear that a person is a professional jockey and reasonably conclude that this person is very short; after all, the conditions of the job tend to rule out tall individuals. But, of course, there are exceptions: a 5’10” jockey (Richard Hughes), a 430-pound marathon runner (Kelly Gneiting), and a 5’3” NBA player (Muggsy Bogues). Stereotypes allow us to make quick judgments on little evidence, which can be important when our safety is involved and we have no way of getting information quickly. Although there may well be situations in which we have to make some generalizations, we need to be prepared to abandon such generalizations if good evidence to the contrary emerges.

Regardless, although stereotypes may be useful in some circumstances, in most cases they lead to hasty generalizations or harmful and misguided judgments. The reason for this is that stereotypes are founded on limited information. Accordingly, stereotypes are frequently based on extremely weak inductive arguments or on fallacies such as hasty generalization.

A Closer Look: A Common Stereotype

One damaging yet common stereotype is that women are not good (or not as good as men, at least) at mathematics. This stereotype can become a self-fulfilling prophecy: If young girls hear this stereotype often enough, they may begin to think that it is true and, in response, take fewer and less difficult math classes. This results in fewer women in math-related careers, which in turn fuels the stereotype further. Similarly, if a teacher is convinced that such a stereotype is true, he or she may be less likely to encourage female students to take more math courses, again leading to underrepresentation in the field. This stereotype may have prevented many women from being as successful at mathematics as they might have been otherwise.

This trend is not exclusive to mathematics. Numerous studies show that women are underrepresented in science and engineering as well, with fewer women receiving doctorates in those fields. Furthermore, there are disparities between men and women in science and engineering in pay, in rates of applying for grants, in the size of the grants applied for, in success in receiving funding, and in being named in patents. The consistency of these results in both the United States and Europe and the number of studies that have come to the same conclusions suggest that there may be systematic bias at play, leading to fewer women being attracted to science and engineering careers and lower rewards for those who do enter those careers. For more details on these results, see http://genderedinnovations.stanford.edu/institutions/disparities.html.

Cognitive Biases

Errors in critical thinking can also stem from certain misperceptions of reality—what psychologists, economists, and others call cognitive biases, thus distinguishing them from the more harmful kind of prejudicial and bigoted judgments to which stereotypes can lead. A cognitive bias is a psychological tendency to filter information through our own subjective beliefs, preferences, or aversions, a tendency that may lead us to accept poor reasoning. Biases are related to the fallacies we discussed in Chapter 7. But whereas a fallacy is an error in an argument, a bias is a general tendency that people have. Biases may lead us to commit fallacies, but they also color our perception of evidence in broader ways. By understanding our tendencies to accept poor reasoning, we can try to compensate for our biases.

Let us begin by considering a simple example that shows one kind of cognitive bias that can prevent us from drawing a correct conclusion. Imagine you have flipped a fair coin 10 times, and each time it has come up heads. The odds of 10 consecutive heads occurring are 1 in 1,024—not very good. What do you think the odds are of the 11th coin flip in this sequence being heads again? Does it seem to you that after 10 heads in a row, the odds are much better that the 11th coin toss will turn up tails?

Reasoning that past random events (the first 10 coin tosses, in our example) will affect a future random event (the 11th coin toss) is known as the gambler’s fallacy. We are prone to accepting the gambler’s fallacy because we have a cognitive bias. We expect the odds to work out over the long run, so we think that events that lead to them working out are more likely than events that do not. We expect that over time the number of heads and tails will be approximately equal. This expectation is the bias that frequently leads to the gambler’s fallacy. So, many people reason that since 10 tosses have come up heads, the odds of it happening an 11th time are very small. But, of course, the odds of any individual coin toss coming up heads are 1 in 2.
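For readers who want to verify this arithmetic directly, here is a minimal simulation sketch in Python. It is only an illustration; the seed and the number of trials are arbitrary choices, not anything from the text. It counts how often 10 heads in a row occur and, among those streaks, how often the 11th flip is also heads.

import random

# A small simulation of the coin-flip arithmetic described above.
# The seed and number of trials are arbitrary, illustrative choices.
random.seed(42)

trials = 1_000_000
streaks = 0      # sequences whose first 10 flips all came up heads
heads_after = 0  # among those, sequences whose 11th flip was also heads

for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(11)]
    if all(flips[:10]):
        streaks += 1
        if flips[10]:
            heads_after += 1

# Expect roughly trials / 1,024 streaks and a ratio near 0.5.
print(f"10-head streaks: {streaks} out of {trials:,} sequences")
print(f"P(heads on flip 11, given 10 heads so far) ≈ {heads_after / streaks:.2f}")

The second number hovers around 0.5: the coin has no memory, no matter how striking the streak that preceded the 11th flip.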

Many people have lost quite a lot of money by committing the gambler’s fallacy—by overlooking this cognitive bias in their reasoning—and many people have, naturally, profited from others making this mistake. Often the mistake is to think that an unusually long string of unlikely outcomes is more likely to be followed by similar outcomes—that lucky outcomes come in groups or “lucky streaks.” If you are gambling, whether with dice, coins, or a roulette wheel, the odds are the same for any individual play. To convince yourself that a roulette wheel is “hot” or that someone is on a “streak” is to succumb to this cognitive bias and can lead not just to mistakes in reasoning but also to the loss of a lot of money.

Biased thinking leads to some of the same errors as stereotypical thinking—including arriving at the wrong conclusion by misinterpreting the assumptions that lead to the support for that conclusion. Unlike stereotypical thinking, however, biased thinking often involves common, broad tendencies that are difficult to avoid, even when we are aware of them. (See A Closer Look: Economic Choice: Rational Expectations Versus Cognitive Bias for an example.) Researchers have identified many cognitive biases, and more are identified every year. It would be impossible to compile a comprehensive list of all the ways in which our perceptions and judgments may be biased. From the standpoint of critical thinking, it is more important to be aware that we always face bias; hence, we should have a healthy dose of skepticism when considering even our own points of view. By examining a handful of biases—confirmation bias, probability neglect, selection bias, status quo bias, and the bandwagon effect—we can be more aware of the biases we all are prone to and begin to work on compensating for them in our thinking.

A Closer Look: Economic Choice: Rational Expectations Versus Cognitive Bias

Psychologist and Nobel laureate Daniel Kahneman has dedicated his research to examining how people arrive at decisions under conditions of uncertainty. Mainstream economists generally believe that people can set aside their biases and correctly estimate the probabilities of various outcomes. This is known as rational expectations theory: People are able to use reason to make correct predictions.

Kahneman, however, discovered that this is not true. His classic article “Judgments of and by Representativeness,” written with his longtime research partner Amos Tversky and republished in the 1982 collection Judgment Under Uncertainty: Heuristics and Biases, describes a number of cognitive biases that show a systematic departure from rational behavior. The findings by Kahneman and Tversky are supported by data gathered from surveys that contained questions such as the following:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and participated in antinuclear demonstrations. Which two of the following five alternatives are most probable?

Linda is a bank teller.

Linda wears glasses.

Linda enjoys skiing.

Linda is a bank teller and active in the feminist movement.

Linda runs a local bookstore.

How would you answer this question? How do you think most survey respondents answered this question? Rank these five claims in order of probability and then turn to the end of the chapter for the answer to the problem.

Kahneman found that most respondents ranked these incorrectly. If people are not able to make correct predictions about probability, then one of the assumptions of rational expectations theory is wrong. This in turn can lead to inaccurate economic predictions. For example, the price we are willing to pay for a product is partially based on how long we think the product is likely to last. We are willing to pay more for a product that we think will last longer than for one we think will not last as long. If we cannot accurately estimate probabilities, then our purchasing decisions may be less rational than economic theory supposes they are.
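Without giving away the ranking, one general rule of probability is worth keeping in mind as you answer: a conjunction of two claims can never be more probable than either claim on its own. For any claims A and B,

P(A and B) = P(A) × P(B given A) ≤ P(A)

since every possible situation in which both A and B are true is also a situation in which A alone is true. Adding a second condition can only keep a probability the same or lower it.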

Probability Neglect

Although the probability of one’s home being destroyed by a natural disaster is relatively low, many people choose to ignore the statistical probability and pay monthly insurance premiums to protect themselves.

Probability neglect is, in a certain way, the reverse of the gambler’s fallacy. With probability neglect, people simply ignore the actual statistical probabilities of an event and treat each event as equally likely. Thus, someone might argue that wearing a seat belt is not a good idea, because someone was once trapped in a burning car by a seat belt. Such a person might go on to say that since a seat belt can save your life or result in your death, there is no reason to wear one. This person is ignoring the fact that the probability of a seat belt saving one’s life is much higher than the probability of the seat belt causing one’s death.

In the seat belt case, the probability of an unlikely event is overemphasized. But in many cases it is also possible to underestimate the odds of an unlikely event occurring. We may think such an event is miraculous, when in fact a simpler explanation is available. When there are many opportunities for an unlikely event to occur, the odds can actually be in favor of it occurring sometimes. The lottery provides a good illustration here. The odds of winning a lottery are extremely small, about 1 in 175 million for Powerball (Missouri Lottery, 2014). Accordingly, we often advise our loved ones to avoid being lured by the potential of a big win in light of such odds. Yet precisely because so many tickets are sold, there is a decent chance that somebody will win, despite each ticket’s extremely small chance. So it is no miracle when someone wins.
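A rough back-of-the-envelope calculation shows how a tiny per-ticket chance can still make a winner likely. The sketch below uses the 1-in-175-million odds cited above; the number of tickets sold is a hypothetical figure chosen purely for illustration, and the tickets are treated as independent random picks for simplicity.

# Sketch of the "somebody wins" arithmetic.
# p_win comes from the Powerball odds cited in the text;
# tickets_sold is a hypothetical figure for one drawing.
p_win = 1 / 175_000_000
tickets_sold = 200_000_000

# If every ticket is an independent random pick, the chance that
# no ticket at all wins is (1 - p_win) multiplied by itself once
# per ticket sold.
p_nobody_wins = (1 - p_win) ** tickets_sold
print(f"P(at least one winner) ≈ {1 - p_nobody_wins:.2f}")  # roughly 0.68

With 200 million independent tickets, the chance that somebody wins is about two in three, even though each individual ticket is almost certain to lose.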

Or suppose that you happen to be thinking about a friend and feel a sudden concern for her welfare. Later you learn that just as you were worrying about your friend, she was going through a terrible time. Does this demonstrate a psychic connection between you and your friend? What are the chances that your worry was not connected to your friend’s distress? If we consider how many times people worry about friends and how many times people go through difficulties, then we can more easily see that the chances are high that we will think about friends while they are having difficulties. In other words, we overlook the high probability that our thinking about our loved ones will coincide with their experiencing problems. It would actually be more surprising if it never happened. No psychic connection is needed to explain this.
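The same arithmetic can be sketched for the “psychic” worry. All of the numbers below are made up for illustration: suppose you worry about a friend on 50 days a year, and she has a genuinely bad day on 30 days a year.

# Sketch of the coincidence arithmetic; both figures are hypothetical.
worry_days = 50        # days per year on which you worry about a friend
p_bad_day = 30 / 365   # chance that any given day is a bad one for her

# Chance that at least one worry lands on one of her bad days,
# treating days as independent for simplicity.
p_no_overlap = (1 - p_bad_day) ** worry_days
print(f"P(at least one eerie coincidence per year) ≈ {1 - p_no_overlap:.2f}")

Under these made-up but modest assumptions, the chance of at least one “eerie” coincidence in a given year is about 99%. As the text says, it would be more surprising if it never happened.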

Moral of the Story: Gambler’s Fallacy and Probability Neglect

When your reasoning depends on how likely something is, be extra careful. It is very easy to hugely overestimate or underestimate how likely something is.

Confirmation Bias

People tend to look for information that confirms what they already believe—or alternatively, dismiss or discount information that conflicts with what they already believe. As noted in Chapter 5, this is called confirmation bias. For example, consider the case of a journalist who personally opposes the death penalty and, in writing an article about the effectiveness of the death penalty, interviews more people who oppose it than who favor it. Another example, which is also discussed in Chapter 5, is our own tendency to turn to friends—people likely to share our worldview—to validate our values and opinions.

The easy access to information on the Internet has made confirmation bias both worse and easier to overcome. On the downside, it has become increasingly easy to find news sources with which we agree. No matter where you stand on an issue, it is easy to find a news outlet that agrees with you or a forum of like-minded people. Since we all tend to feel more comfortable around like-minded people, we tend to overemphasize the importance and quality of information we get from such places.

On the upside, it has also become easier to find information sources that disagree with our views. Overcoming confirmation bias requires looking at both sides of an issue fairly and equally. That does not mean looking at the different sides as having equal justification. Rather, it means making sure that you know the arguments on both sides of an issue and that you apply the same level of logical analysis to each. If we take the time and energy to explore sources that disagree with our position, we will better understand what issues are at stake and what the limitations of our own position may be. Even if we do not change our mind, we will at least know where to strengthen our argument and be equipped to anticipate contrary arguments. In this way our viewpoint becomes the result of solid reasoning rather than just something we believe because we do not know the arguments against it. The philosopher John Stuart Mill (1869/2011) had this to say:

He who knows only his own side of the case, knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side; if he does not so much as know what they are, he has no ground for preferring either opinion. (p. 67)

Overcoming confirmation bias may require actually being a little harsher—or so it might seem—on your own side of an issue than on the opposing side. Since we all have a natural tendency to accept arguments with whose conclusion we agree and reject those with whose conclusions we disagree, a truly fair analysis may seem like we are being harder on our own side than the other. Conversely, we need to be extra careful not to misrepresent the arguments with which we disagree. (This would be committing the straw man fallacy discussed in Chapter 7.) As you learn more about logic and critical thinking and practice the techniques and principles you are learning, you will be in a better position to be sure that you are treating both sides equally.

Moral of the Story: Confirmation Bias

Always remember that you are more likely to accept arguments with conclusions you already believe and that you are likely to overestimate just how common your own beliefs are. Take time to study both sides of an issue before coming to a conclusion, and try to be a little extra critical of arguments on your own side.

Selection Bias

Selection bias is introduced by not having a representative sampling of the group being studied. If the group is not chosen correctly, the results may be skewed by the sample being biased. For instance, if one surveyed women’s attitudes toward work and questioned only women in well-paid managerial jobs, many other women—who might not be as satisfied with their salaries or work conditions—would not be represented.
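A small simulation can make this concrete. In the sketch below, every number is made up for illustration: satisfaction scores differ between a small managerial group and a larger nonmanagerial group, and a survey that questions only managers overstates the average for the whole population.

import random
from statistics import mean

# Sketch of selection bias with hypothetical satisfaction scores (0-10).
random.seed(7)
managers = [random.gauss(8.0, 1.0) for _ in range(1_000)]
non_managers = [random.gauss(5.5, 1.5) for _ in range(9_000)]
population = managers + non_managers

biased_survey = random.sample(managers, 200)    # questions only managers
fair_survey = random.sample(population, 200)    # samples everyone

print(f"True population average: {mean(population):.2f}")
print(f"Managers-only survey:    {mean(biased_survey):.2f}")  # skews high
print(f"Representative survey:   {mean(fair_survey):.2f}")

The managers-only survey reports an average near 8, while the population average is closer to 5.75. Nothing about the arithmetic is dishonest; the distortion comes entirely from how the sample was selected.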

Earlier in the section, we discussed how confirmation bias is evident when we justify our views by relying only on people or sources with similar views. This is also an example of selection bias. As noted in Chapter 5, it is unlikely that your circle of friends is reflective of the broader population; their opinions would not represent the opinions of all, or even most. If many people around you accept your opinions, you may even think that your opinions do not need much support and that they are more widespread than they actually are. Just as with confirmation bias, overcoming selection bias would mean ensuring you considered the opinions of a range of people, including those who disagree with you.

Although confirmation bias can be seen as a form of selection bias, selection bias is broader. For example, Londa Schiebinger (2003) discusses the fact that women have been underrepresented in medical research and considers some possible ways of improving the situation. Because this underrepresentation affects the representativeness of samples we see and is not simply a preference for samples with which we agree, it is a case of selection bias rather than confirmation bias. We all prefer to do things in the easiest way possible. In general it is easier to reason from whatever samples are handiest rather than go to the extra effort to ensure that our samples are representative or that our evidence comes from the broadest scope of sources possible. When we simply gather our sample from whatever is convenient, we are falling prey to selection bias.

Status Quo Bias

Status quo bias is the tendency to prefer that things remain as they are or have been recently rather than changing them. It may be exhibited when an individual argues that something is fine just as it is, based on the observation that it has always been a certain way without observable problems. For example, some have argued that saying the Pledge of Allegiance should remain a school tradition, since reciting it has not caused problems in the past. Although the Pledge of Allegiance remains a controversial and undecided issue in some circles, one can see how status quo bias can be problematic when it is used to rationalize the oppression of others. For example, in 1900, a (male) voter in the United States might have said, “Women never voted before, and we did just fine”—would it follow, then, that women should not be allowed to vote?

As you may have guessed, the status quo bias is closely connected to the appeal to tradition fallacy. Someone who commits that fallacy may do so because of a bias toward the status quo. But this bias also affects how we view the world more broadly. We have a strong tendency to simply accept the way things are without questioning them, simply because that way is familiar to us. Our days are filled with routine and ordinary practices that we continue not because we have compared them to other ways of doing things, but simply because that is how we have been doing them. One need not make an argument in order to exhibit a status quo bias. A status quo bias is often increased by selection and confirmation biases. We think that a policy has not created problems in the past because we have not heard of any problems. Even if we do hear of a problem, our tendency will often be to dismiss it as unimportant. Biases often work together to create a greater effect than they would individually.

The Bandwagon Effect

The bandwagon effect (also referred to as “herd mentality” or “groupthink”) is our tendency to go along with what we see others doing and believing. This is commonly seen with investment trends and real estate “bubbles”—often with disastrous results. A video on YouTube may become widely seen just because a lot of people have seen it; others reason that it must be good if it has been seen by so many people. This is not a logical argument, any more than a book’s status alone as a best seller signifies anything about its quality; it simply has sold a lot of copies. Yet when a belief is widely held, we have a strong tendency to think that it must have something going for it. However, when we stop to think about it, we can see that this is not always the case. Few of us would think that the fact that many people were in favor of slavery is any indication at all that slavery is a good thing. Nonetheless, we often take beliefs to be obvious simply because they are widespread.

At the same time, it is important to point out that the bandwagon effect is not always problematic. For example, if people flock to restaurant A while restaurant B remains largely empty, then it is reasonable to think that restaurant B is not attractive to most people. We can then choose to find out why for ourselves or choose to try the most favored restaurant first. Likewise, we often rely on Internet reviews of vendors, products, physicians, hairstylists, and even churches in order to get an idea of people’s opinions. This does not mean that what most people like will be the same as what we will like, but it gives us a starting point of information. If we receive bad news from a physician, then it is reasonable to seek more opinions. If two out of three doctors recommend a particular treatment, then it would be foolish to completely ignore the majority opinion, although we would likely need to take additional factors into account in deciding the course of treatment. Problems arise when we give too much importance to popular or majority opinion rather than researching the issue ourselves.

An interesting recent case that illustrates both the prevalence and significant effects of biases is the change in public opinion regarding same-sex marriage. From 1996 to 2014 public opinion shifted from only 27% of people approving of same-sex marriage to 55% approval (Gallup, 2014). This is a large and rapid shift in opinion on a topic that touches on a central part of many people’s lives. The shift is especially notable because it does not seem to be the result of newly discovered evidence. A more likely explanation is that biases played a significant role in the shift. Perhaps people in 1996 suffered heavily from a status quo bias, which would have led them to reject marriages that were not like what they were used to. Perhaps many people in 2014 suffered from the bandwagon effect, accepting same-sex marriages largely because they perceived people around them as being more accepting of them. Perhaps selection biases led people in 1996 to undervalue how the inability to marry affected same-sex couples and their children. As same-sex couples became more vocal and visible in society, this information would be more readily available to people, thus leading to a change in opinion. We do not know just why there has been such a shift in public opinion on this issue, but it seems likely that biases played a significant role. (See Everyday Logic: Cognitive Biases in Our Own Lives for some thoughts on how to address pervasive cognitive biases.)

Everyday Logic: Cognitive Biases in Our Own Lives

If you were raised in the United States, it is likely that you believe it is one of the greatest countries on earth, perhaps even the greatest country. It is also likely that you are opposed to communism and think that it is evil and oppressive. Yet not all people who hold these beliefs have done the research necessary to support them. Think about the kind of arguments and research it would take to convince you of the opposite of what you now believe in these cases. If your belief is not based on that same kind and amount of evidence, then it is likely due, at least in part, to some form of bias.

We cannot eliminate every source of bias in every one of our beliefs. Neither can we take the time and energy necessary to fully and dispassionately justify every one of our beliefs. Whether it is worth doing so in a particular case depends on what is at stake. Believing that your country is the greatest has some positive benefits: It makes your life more pleasant, builds community spirit, and gives you optimism for the future. In this sense it is not really a bad thing to believe, even if you cannot fully support it. The problem arises when such a belief becomes the basis for arguments about other issues. Suppose someone suggests a policy change and uses data from other countries to support it. If your response is that your country, the greatest country on earth, does not do things that way—and if that belief is based largely on bias, rather than evidence—then that bias may have a real and negative effect.

As mentioned, there are many more of these cognitive biases that have been the focus of a great deal of research by psychologists, political scientists, economists, and others who study the phenomenon of decision making. Here again, logic can play an important role in eliminating or at least decreasing the effects of cognitive biases by helping us to examine the claims, the arguments, and whether the support for the claims is provided in a solid, objective way. Logic is an extremely helpful tool to help us step back, take a deep breath, and see if the claims we have made—and the reasoning we have provided to support those claims—are justifiable or are instead subject to the kinds of mistakes introduced by stereotypes and cognitive bias.

8.2 Obstacles to Critical Thinking: Rhetorical Devices

Today we are bombarded by information, but 100 years ago, unless one lived in a big city, information was available from just a few sources: books, magazines, and a local newspaper or two. With the development of cable TV, satellite radio, and, of course, the Internet, there is very little difficulty finding information on virtually any topic; indeed, some might say that there is so much information that we have trouble finding the information we need and knowing that the information we do find is credible. Much of this information is accompanied by advertising; one may sign in to a social media website one day, click on an advertisement for a product, and discover that everywhere one goes on the Internet there are ads for similar products, as well as messages about such products showing up in one’s e-mail. All of these communications are designed to persuade us of something. A research article attempts to persuade us to accept its conclusion. An advertisement attempts to persuade us to buy a product. In both cases we are being asked to accept certain claims and to base our beliefs and behavior on those claims. We should exercise caution in deciding whether to do so.

The fundamental point to keep in mind here is that when we encounter a claim and wish to evaluate it, it is important to focus on the evidence, the reasons, and the argument that support this claim. As this section will discuss, however, there are a variety of ways to state a claim that may make it seem either more or less reasonable than it actually is. These are all techniques that seek to prevent us from focusing on what we should be emphasizing—whether there are good reasons to accept or reject a claim. Instead, they encourage us to react emotionally or irrationally, often based on pleasant (or unpleasant) memories and associations, rather than evaluate the claim on its own merits. If someone can convince you to buy laundry soap simply because of the way it is described, then there is no need to worry about the actual quality of the laundry soap. In the same way, if someone wants to convince you to vote a certain way or adopt a particular viewpoint simply because of the way the candidate or viewpoint is described, there is no need to focus on the actual merits of the case. Hence, we need to be conscious of these methods, especially since they do not occur just in commercial and political contexts. When we have a conversation—and particularly a disagreement—with someone, how that other person employs language may also appeal to things that prevent us from focusing on reasons, evidence, and the actual argument. We also need to be aware of our own tendency to employ these techniques so that our own attempts at persuasion can remain focused on the merits of the case rather than on getting someone to accept our position on irrelevant grounds.

As discussed in Chapter 1, rhetoric is the study and art of effective persuasion. Although rhetoric typically includes some focus on logic as a persuasive technique, many effective persuasive techniques have little to do with logic and critical thinking. Rhetoric employs techniques known as rhetorical devices, and this section lists some of the best known and most popular. It is important to note that rhetorical devices are not necessarily contrary to logic—rhetoric is the art of persuasion, not of irrationality or duplicity—but many of these techniques are used to get us to focus on something other than the quality of the reasons and arguments offered. There are many others, and new ones are always being developed; in marketing, for example, once discerning consumers become aware of one such technique, it may become ineffective. As long as we are aware that how language is used can have an enormous impact on how effective a message is, we will be better prepared when that language is used in an illegitimate way, persuading us to do or believe something when we should not be persuaded.

Weasel Words

Weasel words, or weaselers, are terms used to qualify a claim to make it easier to accept and more difficult to reject by introducing some degree of probability or “watering down” the claim without really changing its significance. If someone were to say, “All philosophers are brilliant,” that would be fairly easy to disprove; you need only find a single philosopher who is not brilliant to do so. But if someone were to say, “Most philosophers are brilliant,” or “A lot of philosophers are brilliant,” or “Philosophers generally seem to be brilliant,” the claim is not as easy to disprove.

Weasel words such as probably, most, seems, in a sense, up to, and many others have this same effect—they make a claim sound better than it may actually be and make it more difficult to show that the claim is false. If a particular over-the-counter pain reliever promises to relieve pain for “up to 12 hours,” that claim is actually true even if it only works for 20 minutes; “up to 12 hours” includes that 20 minutes. But the force of the claim tends to suggest that the relief will be for all 12 hours, until one looks more closely at what precisely is being claimed. It should not be surprising that such terms are frequently encountered in advertising. After all, would you be more likely to buy a toothpaste that “may prevent cavities” or one that “may or may not prevent cavities,” even though the two claims are really the same? Weasel words, used effectively, can make a person (or a company) sound as if an important claim is being put forth. However, on closer inspection, we may find that the claim is weak because of the use of a weaseler.

On the other hand, you should not dismiss a claim simply because it contains words that could be used as weaselers. There are legitimate uses of such terms. Chapter 9 will discuss the use of qualifiers (sometimes known as guarding terms) that are legitimately used to make claims more accurate. Qualifiers like sometimes, usually, and possibly weaken claims so that they are more likely to be true. We generally call such terms “weasel words” only when they are being used in ways that are sneaky or deceptive.

Scientific studies often employ what may appear as weasel words when describing their results, yet they frequently use them in a legitimate way. For example, a medical report may note that a certain substance may cause cancer in some populations. However, such a report is likely just being cautious (though accurate). When determining whether a claim uses weaselers in a manipulative sense, it is important to examine the claim and its justification to determine whether the guarding terms are being used in a way that is legitimate or deceptive.

Euphemisms and Dysphemisms

Euphemisms and dysphemisms are both common ways of providing “spin” to information—that is, presenting the information in either a positive or negative light, respectively.

A euphemism is a term that makes something sound more positive than it might be otherwise. We are all familiar with some of these, and many are standard expressions. We might say that a person has passed on, passed away, or gone to a better place, rather than just say the person died. The actual facts in the situation do not change, of course, but it may be easier to hear of a loved one’s death if a euphemism is used.

For someone getting bad news, euphemisms make perfect sense in order to deliver that news in as painless a way as possible. In other settings, however, a euphemism may be used not to be sensitive but to avoid describing a harsh reality. Politicians are particularly adept at employing euphemisms. Would a potential voter prefer to hear about higher taxes or about “revenue enhancement”? Either one may cost the taxpayer $500 over the next year, but politicians are well aware of the fact that voters do not like to hear about taxes going up. A euphemism can help take the sting out of that message. If you want someone to accept what you are saying, euphemisms can be very helpful by putting your message in a more positive context. It is also worth noting that some terms that might have once been considered euphemisms were eventually adopted as both more sensitive and more accurate terms. For example, terms such as differently abled and disabled may have started out as euphemisms but are now considered simply appropriate descriptions of those with physical or mental disabilities.

Dysphemisms are the opposite of euphemisms; these are descriptions used to put something in a more negative light. Often such words are used to shock, offend, drive home a point, or get someone’s attention. We can see how a dysphemism might be used to help support a certain agenda. Imagine a politician discussing two groups of insurgents in different foreign countries. Both groups use violence, kidnapping, bank robbery, and other similar means to make their political points. But one group is friendly to the politician’s own views, so the politician refers to the group’s members as “freedom fighters.” The members of the other group, whom the politician abhors, are described as “terrorists.” These two groups are doing the same kinds of things, but the euphemism (“freedom fighter”) makes one group look very different from the group labeled with the dysphemism (“terrorist”). One might also consider some of the dysphemisms that are used to refer to lawyers. Who would you trust more, an “attorney at law” or an “ambulance chaser”?

Table 8.1 provides more examples of euphemisms and dysphemisms.

Table 8.1: Euphemisms versus dysphemisms

Euphemism            Neutral word                       Dysphemism
Lady of the night    Prostitute                         Hooker
Voluptuous           Overweight                         Fat
Pre-owned car        Used car                           Clunker
Estate tax           Inheritance tax                    Death tax
Collectible          Old                                Junk
Salvage depot        Wrecked and loose parts merchant   Junkyard

Moral of the Story: Euphemisms and Dysphemisms

Pay attention to how things are worded. A careful choice of wording can lead you to accept or reject a position when there really is not enough evidence to justify doing so.

Proof Surrogates

A proof surrogate is used to provide some degree of authority to a claim without actually offering any genuine support. Frequently, one will hear “studies show” that something is the case, even in everyday conversation. Without the actual studies, we are not really told any additional information, but the claim sounds stronger. Thus, if someone says, “People in Slovakia live longer because they eat a lot of yogurt,” you may or may not accept that claim at face value. You may, after all, not really know the life expectancy of those in Slovakia or their rate of yogurt consumption. But if, instead, someone were to say, “Studies show that people in Slovakia live longer because they eat a lot of yogurt,” somehow that claim sounds more authoritative. Of course, without providing the actual studies, the second claim is no different from the first, but this is a surprisingly powerful and effective technique. Even if it is true that there are studies backing the claim, without access to the studies, we really are in no position to assess the level of support they provide. We do not know whether the studies were well done, whether they are unduly biased, or whether their conclusions have been interpreted properly.

The trick in this case is not so much what is being claimed; rather, it is in not believing a claim simply because it is preceded by “experts agree,” “studies show,” or “most people say.” To challenge this kind of statement is straightforward. If one is suspicious, one can always ask, “What studies?” or “Which experts?” A person who asserts something based on such a proof surrogate and who cannot actually provide any support beyond an empty rhetorical reference to “experts” and “studies” will generally be quickly exposed as not having much information to back up his or her claim. When you hear a claim that sounds odd or unlikely to be true based on what you already know, you should be careful to follow up on any potential proof surrogates. Many hoaxes use proof surrogates to lend an air of credibility to their claims. As hoaxes become more sophisticated, it is always a good idea to check on claims that seem unlikely, outrageous, or otherwise suspicious.

Moral of the Story: Proof Surrogates

Be prepared to provide sources whenever you advance an appeal to experts, studies, or popular opinion. If a claim seems odd or unlikely, check the sources; do not assume that what has been claimed is actually true or that the proof surrogates correspond to actual sources.

Hyperbole

Hyperbole is really just another term for exaggeration. Consider these typical uses of hyperbole: A student e-mails her teacher to say she is “deathly ill” when she may actually just have a bad cold. A teenager accuses his parents of being dictators when he is asked to clean his room. A grandfather tells stories of his youth that involve walking 15 miles to school every morning in 3 feet of snow. A teacher says, “If I have to grade one more paper, it’s going to kill me.” Such hyperboles often do not do any harm. After all, if someone tells you that Bill Gates has “more money than God,” it is unlikely that you take that to mean much more than Bill Gates having a great deal of money.

But in some contexts, particularly political contexts, hyperbole can be used to make a point sound more plausible than it might be otherwise, and to immediately accept such a claim can be risky. As always, one should look at the reasons, evidence, and argument in evaluating such a claim. For instance, someone arguing against gun control might insist that those who favor it “want to take away all of our guns.” This is hyperbole if those who advocate gun control do not, in fact, want to take away all guns. Similarly, someone claiming that those who advocate using coal for electricity “don’t care if children can breathe” is using hyperbole. Most advocates of coal-fired electricity do, in fact, care if children can breathe. Even if you are not fooled into actually believing such claims, the use of hyperbole indeed fuels our feelings of outrage (see A Closer Look: Hyperbole and Godwin’s Law for a classic example). Since political speech is aimed at getting us to do something (for example, vote), this outrage may be enough to achieve those aims, even if we realize there is some hyperbole going on. As you may have guessed, hyperbole is often used in the straw man fallacy (see Chapter 7).

Hyperbole can sometimes be difficult to identify. After all, if one claims that Pablo Picasso is the greatest painter of the 20th century or that Michael Jordan is the greatest basketball player of all time, are those examples of hyperbole or defensible claims? In any case one cannot simply assert such propositions without being prepared to defend them with evidence and argument.

A Closer Look: Hyperbole and Godwin’s Law

In 1990 Michael Godwin, after observing many conversations on the Internet devolve into name-calling, looked a bit more closely at the specifics of how this occurred. He came up with the idea that “as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one” (Godwin, 1994, para. 8). In other words, the longer a debate continues, the more likely a comparison will be made between one’s opponent and Hitler or the Nazis. This became known as “Godwin’s Law” and is a good example of hyperbole. Internet disagreements are certainly common enough, but it is unjustified to compare an opponent to Hitler merely because you disagree.

When one compares someone to Hitler, it is often a case of the informal fallacy reductio ad Hitlerum, in which the mere fact that someone is in some way comparable to Hitler is used as a reason for thinking he is wrong or his claim is mistaken. In some circles it is taken as a rule that the first person to compare an opponent to Hitler (or the Nazis) loses the Internet debate (or in other versions that the debate is effectively over). It is rare to see a comparison to Hitler that furthers a discussion; getting to that stage indicates that one really has nothing productive to add. Note that many of these conversations can quickly devolve into the use of fallacious slippery slope claims, since characterizing people, actions, or measures in such an exaggerated manner can easily result in more illogical comparisons.

Innuendo and Paralipsis

Often a point can be made—particularly about one’s opponent in a debate or confrontation—without directly stating it. It can be implied by what is actually said, which is generally known as innuendo. Or it can be emphasized by noting how something is not said, which is a common-enough strategy but referred to with an uncommon word, paralipsis.

In films, books, TV shows, and elsewhere, there are often salacious or off-color jokes made using innuendo. In Alfred Hitchcock’s film To Catch a Thief, the lead couple goes on a picnic, and the woman (played by Grace Kelly) reaches into the picnic basket and asks the man (played by Cary Grant), “Do you want a leg or a breast?” This was relatively shocking in 1955, but since then it has become an acceptable sexual innuendo in popular culture.

Innuendos can also be used to make points that are less risqué. For instance, if a teacher tells a student that her paper is extremely well typed, that might be an innuendo suggesting that the paper is not particularly good. After all, if the best compliment a teacher can offer is about the appearance of the paper, that might imply that the content of the paper is mediocre. One roommate might observe about another that he seems to be “extremely up-to-date” on TV shows; that might, again, be an innuendo suggesting that he is watching too much TV and could be spending his time more productively. For that matter, a simple gesture, such as looking at your watch (or looking at your wrist even if there is no watch there), can suggest, without saying anything, that time is being wasted or that a person is late. These are all pretty familiar. We simply need to be aware of situations in which a claim is implied by innuendo. If such a claim is being suggested and we wish to challenge it, then we need to look beyond the implications and instead at what explicit support exists.

Paralipsis is a technique that one can use to emphasize something by indicating that it will not be mentioned. This is frequently seen in political campaigns, in which a candidate might say, “I would never discuss my opponent’s frequent indiscretions and scandals.” By saying this, the politician of course ensures that the audience hears about both the indiscretions and the scandals, while he or she can still respond that they were never mentioned. Frequently, this technique is introduced by such phrases as “It goes without saying,” “I need not mention,” or “I need not remind you,” after which what goes without saying is said, what does not need mentioning is mentioned, and one is immediately reminded of precisely that of which one does not need to be reminded. In this way some characteristic or feature of one’s opponent—often negative—is introduced without having to state it directly.

One of the most famous uses of innuendo and paralipsis in literature is Mark Antony’s speech in Shakespeare’s Julius Caesar, in which Antony eulogizes his assassinated friend and ruler. Antony is allowed to speak after promising Marcus Brutus, one of the conspirators against Caesar, that he will not speak ill of the conspirators. Antony’s speech thus begins by appearing to justify the actions of Brutus and the other conspirators. In the course of his speech, however, Antony uses various rhetorical techniques to paint Caesar in such a positive light that he succeeds in convincing his Roman audience to feel rage against the conspirators and to support him, rather than Brutus, as Caesar’s successor.

While reading this speech, see if you can find some of the rhetorical techniques we have examined here, in particular Antony’s use of innuendo and paralipsis. Why do you think these techniques might be effective on Antony’s audience?

Friends, Romans, countrymen, lend me your ears;
I come to bury Caesar, not to praise him.
The evil that men do lives after them;
The good is oft interred with their bones;
So let it be with Caesar. The noble Brutus
Hath told you Caesar was ambitious:
If it were so, it was a grievous fault,
And grievously hath Caesar answer’d it.
Here, under leave of Brutus and the rest—
For Brutus is an honourable man;
So are they all, all honourable men—
Come I to speak in Caesar’s funeral.
He was my friend, faithful and just to me:
But Brutus says he was ambitious;
And Brutus is an honourable man.
He hath brought many captives home to Rome
Whose ransoms did the general coffers fill:
Did this in Caesar seem ambitious?
When that the poor have cried, Caesar hath wept:
Ambition should be made of sterner stuff:
Yet Brutus says he was ambitious;
And Brutus is an honourable man.
You all did see that on the Lupercal
I thrice presented him a kingly crown,
Which he did thrice refuse: was this ambition?
Yet Brutus says he was ambitious;
And, sure, he is an honourable man.
I speak not to disprove what Brutus spoke,
But here I am to speak what I do know.
You all did love him once, not without cause:
What cause withholds you then, to mourn for him?
O judgment! thou art fled to brutish beasts,
And men have lost their reason. Bear with me;
My heart is in the coffin there with Caesar,
And I must pause till it come back to me. (Shakespeare, 1599, 3.2.1617–1651)

Moral of the Story: Innuendo and Paralipsis

Implying something rather than stating it clearly can soften the presentation of a claim. However, it can also be used to make a claim seem more plausible than it really is. Try to set out the claims clearly before accepting them.

8.3 The Media and Mediated Information

As should be evident by now, rhetorical devices and other persuasive techniques can be found almost anywhere. However, it is perhaps easiest to see persuasion in action in what is known broadly and collectively as “the media”—the sources of mass communication that include television, newspapers, radio, and the Internet.

If we consider how the media function, we can see the origins of the word media itself (the singular is medium). There are, of course, ourselves: the viewers, the readers, or the listeners. There is also the world that we want to understand. To a certain extent, a person can learn a great deal by observation—looking, listening, and so forth—to discover how that world is and how it functions. But a great deal of information is not available to us this way; we rely on the media to provide it. Thus, the media are intermediaries between ourselves and our world; the media mediate between us and the world we seek to understand.

With that role comes the risk that the information provided is distorted. Any media source has to make decisions about how to present information. After all, if we want to hear about what took place in a 3-hour meeting, we probably do not want to watch, read, or listen to 3 hours of content. Rather, we want the relevant information summarized accurately, in perhaps just a few sentences or a short video. Thus, media sources have to make numerous decisions about what to emphasize, what to omit, how to frame the information, what the relevant background is, how others might interpret it, and so on. These decisions can be difficult to make, and good writers and editors recognize that with each decision, one risks distorting the information. While there may not be a perfectly unbiased media source, some work extremely hard to present an objective perspective. Others, on the other hand, seem intent on using bias and spin to increase their viewership. As should be clear from our earlier discussion, this phenomenon is not limited to the media alone: Any source, whether it be a politician or your next-door neighbor, is mediating the information you receive.

The last section focused primarily on how language can be used to persuade a listener or an opponent of some claim—not based on the actual evidence or in terms of an argument, but on how that claim is stated. However, it is not just language that can be used and misused in this fashion. Images, or the combination of images and words, can also be manipulated to send a message, and we should be just as aware of this possibility as we are of rhetorical techniques.

Manipulating Images

Even before computer software made altering photos relatively easy, dictators were known to try to alter the historical record by changing photographs. One classic example involved the so-called Gang of Four in the People’s Republic of China in 1976. These four influential Chinese Communist Party officials, having fallen out of favor with the political leadership in China, were simply removed from a widely circulated picture of the 1976 memorial service for party founder Mao Zedong. (To see a copy of this photo, visit Scientific American’s slideshow on photo tampering: http://www.scientificamerican.com/slideshow/photo-tampering-throughout-history/#2.)

This sort of photo manipulation might seem quite bold and blatant, but photo manipulation can also be more subtle. As technology has developed, it has become difficult to discern that an image has been changed at all, let alone to determine what the changes were. Most of us are familiar with the controversy surrounding airbrushed models and the messages about body image that such images convey. But these same techniques can be used to color our perceptions of other things as well. After O. J. Simpson’s controversial 1994 arrest on murder charges, the same police photograph of Simpson appeared on the cover of Time magazine and the cover of Newsweek magazine. The two magazines ran the same picture, but the darker cast of the Time cover seemed, in the view of many critics, to make Simpson look scarier and more menacing. (See the two covers side by side and read about what happened when that Time cover hit the newsstands at http://blogcritics.org/ojs-last-run-a-tale-of.)

Although it can be difficult to be aware of these kinds of alterations, the point is a general one: When information—whether words or images—looks suspicious, or an image might be considered to be just a bit too “convenient” to support a claim, one should investigate further. Are there other sources for the same picture? Are there ways of discovering that the image has been altered or even faked? As usual, the best we can do is be aware of the possibility, and when in doubt, see if we can critically examine the image (or information) to determine if it is genuine or not.

Moral of the Story: Manipulating Images

A picture may be worth a thousand words, but those words can still be lies. When in doubt, verify that the picture has not been altered or faked.

Advertising

Thinking About Advertising

Various photographers, as well as a museum curator and an advertising creative director, offer thoughts on and criticisms of advertising techniques.

Critical Thinking Questions

What are some techniques that advertisers use to promote their products?

What techniques did the artists interviewed in this clip use to get the audience thinking about the effects of advertising?

Think of some of today's most influential advertisements or brands. Is there any way to prevent oneself from being influenced by advertising? Is it important to insulate oneself from advertising?

Most of us are not surprised that advertising presents information in a way that tries to persuade the consumer to purchase a good or service; that is, after all, the purpose of advertising. However, being aware of advertisers’ goals helps us maintain a critical perspective. Indeed, there are well-worn sayings to remind us to regard advertising with a bit of skepticism: caveat emptor (“let the buyer beware”), “if it sounds too good to be true, it probably is,” and “always read the fine print.”

We have already seen some of the techniques that advertisers use to convince viewers to buy a product, such as proof surrogates and weaselers: Recall the commercial about toothpaste that “may prevent cavities.” The Chapter 7 discussion of fallacies outlines many of the other ways people try to convince us (illegitimately) of something. In this section, we will look at a few more specific examples of how marketers pair images with rhetorical devices and fallacies to override our critical thinking skills.

One advertising technique is to pair positive images with the product, which can cause viewers to associate the product with positive feelings. Generally, most of us are not conscious of this—which is exactly what marketing professionals desire (see A Closer Look: Does Advertising Work?). If you have a positive response to a product because of what you associate with it, presumably you are more likely to buy it. For example, perhaps a certain beer is consumed by the world’s most interesting man; should we be drinking this beer too if we want to be more interesting ourselves? Of course, most marketing campaigns are considerably more subtle. Indeed, as viewers become more aware of advertising techniques, advertisers develop better techniques.

A Closer Look: Does Advertising Work?

Many people claim that advertising does not affect them. The obvious question is why marketers spend $70 billion a year on television ads alone if they are so ineffective. Perhaps the marketers know something that those who deny the influence of advertising do not.

As market research analyst Nigel Hollis (2011) has observed:

Contrary to many people’s beliefs, advertising does influence them. But advertising’s influence is subtle. Strident calls to action are easily discounted and rejected because they are obvious. But engaging and memorable ads slip ideas past our defenses and seed memories that influence our behavior. You may not think advertising influences you. But marketers do. (para. 13)

In a well-known experiment by psychologists Daniel Simons and Christopher Chabris, subjects were asked to watch a video of people in black T-shirts and white T-shirts passing a basketball. They were asked to count the number of passes between team members and afterward were asked if anything out of the ordinary took place. While most participants were able to accurately count the passes, roughly half failed to notice that a person in a gorilla suit walked through the scene—they were too busy counting passes.

Likewise, most of us are not devoting our full attention to advertising when we are watching TV or flipping through a magazine. We are thus more likely to remain unconscious of the persuasive techniques being used to influence our opinions and buying habits. As University of Calgary psychology professor Julie Sedivy (2011) wrote:

In the scientific work on persuasion, there’s a well-known result that, while not quite as funny as the Simons and Chabris study, is very similar to the invisible gorilla effect: it’s the finding that people are often apt to ignore the difference between strong and weak arguments in forming attitudes or choosing how to behave. (para. 7)

Many commercials attempt to present weak arguments by means of images intended to offer the support that premises would otherwise provide.

Suppose that you saw a Coke commercial featuring polar bears. Maybe you liked the polar bears, and these images remained in your memory. Later, at the store, you suddenly realize that you are thirsty, and then you find yourself leaving with a six-pack of Coke. Indeed, some commercials do not offer any clear claims at all.

Consider the Got Milk? advertising campaign, often mentioned as one of the most effective campaigns in recent history (Bowman, 2012). Is there a claim in the pictured ad? If so, is the claim persuasive? Why or why not? If there is not a claim here, is this ad still effective? What message do you think the ad seeks to convey? How does professional basketball player Chris Bosh help convey that message? Does the ad make you less likely or more likely to buy milk?

If a marketer can leave you with an unconscious association between something pleasant and a particular product, you may be more likely to buy that product, regardless of whether there is a claim or a suggested argument involved.

These associations are not meant to be taken literally or examined critically. Marketers do not expect viewers to drop everything and run to the store to buy their touted brand of cigarettes, much less interview doctors or demand access to a cited survey or study. Associations are meant to be stored in one’s brain in an unconscious way, so that those who have these associations will have a more positive view of the product. Maybe, just maybe, someone who sees a cigarette ad—especially one who sees it repeatedly but not in a conscious, critical way—will buy that brand of cigarettes in the future.

Other Types of Mediated Information

What other kinds of information should we consider with at least a slightly critical perspective? As mentioned earlier, all information that is provided to us through the media is, as the name indicates, mediated. Almost all of this information will be condensed and packaged so that it can be consumed, and thus we have to be on guard against distortion that could make it misleading or even false. So it is not just toothpaste and beer commercials that we must thoughtfully evaluate, but also political speeches, reports on sporting events, and celebrity news—any information that is transmitted to us through others. Of course, if the local hockey game is reported with simply the score, that may not be terribly controversial, and we are unlikely to cast much doubt on that report. But what about a politician’s speech or reports on an environmental disaster or economic data? How that material is presented can help determine what we think about it.

For instance, consider a standard sort of economic report, the federal government’s monthly report on jobs. The government reports that 200,000 new jobs were created and that the unemployment rate dropped to 7.5%. That seems to be relatively uncontroversial; we have a number reported, and that’s that. But it might be presented in different ways that can affect how this report is interpreted. Compare these two reports providing the same data:

The government announced today that only 200,000 new jobs were created in the last month, barely keeping up with the numbers of those entering the workforce; the unemployment rate dropped only slightly, to 7.5%. Analysts indicate that this is more evidence of a sluggish economy.

The government announced today that at least 200,000 new jobs were created in the last month, a dramatic increase over recent months and outpacing the number of those entering the workforce. The unemployment rate continues to drop and reached 7.5%. Analysts indicate that this shows the economy is picking up steam.

Both reports state the same basic facts. But the way these facts are presented changes whether the numbers indicate a positive or negative result: “only” versus “at least,” “barely keeping up” versus “a dramatic increase,” and the analysts selected to comment. This example indicates the way language can help frame even a specific, “objective” number in two different ways, with different interpretations or “spin.” If this can be done with such a specific number, it becomes clear how more complex or nuanced information can be presented with similar results. A senator putting forth a bill, a president announcing a new military strategy, and a governor proposing a tax plan are all presenting information that has to be interpreted. The more complex that information, the greater the possibility that distortions will be introduced by how that information is presented.
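
It can also help to check the arithmetic behind such reports for ourselves. The sketch below uses hypothetical figures of our own (the sample reports give only the 200,000 jobs and the 7.5% rate) to show why the same month can honestly be described as “barely keeping up with” or as “outpacing” workforce growth:

```python
# Hypothetical figures (ours, not the report's) showing how one jobs number
# can support either framing. Unemployment rate = unemployed / labor force.
labor_force = 155_000_000
unemployed = 11_780_000          # starting rate: about 7.6%

jobs_created = 200_000           # the figure from the sample reports
new_entrants = 150_000           # assumed monthly labor-force growth

unemployed -= jobs_created - new_entrants
labor_force += new_entrants

rate = unemployed / labor_force
print(f"New unemployment rate: {rate:.2%}")  # ~7.56%: a real but slight drop
```

On these assumptions, 200,000 new jobs move the rate by well under a tenth of a point, so “dropped only slightly” and “continues to drop” are both literally true; the spin lies entirely in the framing.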

We can thus see that critical evaluation of the information we are given is an active process that takes some energy. If we simply passively take in that information without evaluating it, we may end up understanding it incorrectly or inadequately, or even be asked to believe two contradictory things. (If one report states that undocumented immigration is increasing, while another report at the same time claims that undocumented immigration is decreasing, these cannot both be true.) We may not want to work quite that hard when we hear a meteorologist tell us what tomorrow’s weather is going to be like; we may already have certain suspicions about how accurate such forecasts are anyway. But if you hear of a public policy or a political platform that affects you directly or indirectly, you may want to spend that extra energy listening (or reading) carefully, critically, and with at least a bit of skepticism. The following are some questions to ask when critically evaluating information:

What is being stated? That is, what facts are involved?

Is the information accurately stated, with the relevant context provided?

Does the language used slant or bias the way the information is presented?

What is omitted?

Are the implications that are stated reasonable to draw from the information provided?

8.4 Evaluating the Source: Whom to Believe

It is important to evaluate not only how information is presented but also who is presenting it. This points to an important implication behind all of our discussions about arguments and persuasive techniques: Many people or organizations have an agenda or some purpose when they are sending a message. Of course, a person’s intent might be fairly innocuous, but it also might not be. Knowing some background information about a source, particularly any biases the source may have, will help you judge how believable the source is.

Evaluating the source is always an important part of evaluating information, but it becomes an even more critical task when you want to use this source to support one of your own arguments. This section highlights three factors that you might consider: the source’s reputation and authorship, accuracy and currency, and purpose and potential bias.

Reputation and Authorship

Think about something surprising you read on the Internet. Should you believe it? Should you pass on the information? You’re probably aware that not everything you read is true or should be taken seriously. Considering a source’s reputation and authorship can help you assess its reliability.

A source’s reliability comes from having procedures in place that ensure that the information it produces is accurate. Over time, this leads to its having a strong history of accuracy. Sources that have a strong reputation for being reliable are more likely to carry weight with the intended audience for your argument. On the other hand, citing sources that are known for being inaccurate or heavily biased will weaken your argument.

Longevity also plays a role in a source’s reputation. Sources that have a long history will have developed a reputation as either reliable or unreliable. Of course things can change, but a source that has a long history of providing relatively accurate information should be seen as more trustworthy than a source with a very short history, or one with a long history of providing inaccurate information. If a source often provides poor information, that will become known over time. This does not mean that the source will go out of business; some sources make their money precisely because they are sensationalist. You have probably seen some of these sources with incredible headlines as you went through the checkout line at a grocery store. Nonetheless, sources that have been around a long time, and which depend on being credible in order to survive, are generally likely to have reliable information. For example, mainstream newspapers that have been around for 50 or 100 years are likely to be relatively reliable in reporting information. If they were not generally reliable, they probably would not still be in business.

Another point to consider is whether a source has methods in place to ensure the accuracy of the information it presents. For example, most academic journals employ a process of peer review. Before a paper is published, it is reviewed by other experts in the field to ensure that the information presented is credible. The review is generally anonymous, so the reviewers do not know who wrote what they are reviewing, and the author does not know who the reviewers are. This anonymity encourages reviewers to be honest and objective in their reviews. Peer review is one of the strongest methods for ensuring that information is of high quality. The use of peer review allows even new academic journals to be credible.

However, peer review is very time consuming, so it is not widely used outside of academic journals. Instead, editorial review may be used when a publication seeks to be not only accurate but timely. In editorial review, one or more editors review information before it is published. The editor makes sure that the information is believable and may ask the author for clarification or further research if needed. Editorial review can be quicker than peer review, but it does have limitations. First, editors are expensive, and better editors are more expensive. Small organizations are unlikely to have adequate editorial staff to fully ensure the accuracy of what they publish. Second, editorial review is subject to the biases of the editor doing the review. Finally, reviewing for information quality is only one of the jobs of an editor. Just because a source has an editor does not mean that the editor is solely focused on information quality rather than other issues.

The best sources have solid review methods and longevity in their favor. For example, Cambridge University Press was founded in 1534 and employs careful editing and review processes. It is unlikely that a long-standing university or commercial press would publish a manuscript that either contained an enormous number of factual errors or promoted an especially bizarre viewpoint. Of course, this does not mean that such sources are infallible: Mistakes can still be made, and misleading or inaccurate information published. However, these sources are unwilling to risk their reputations and thus will subject themselves and their materials to fact-checking, peer review, and other methods of scrutiny. Any mistakes that are made are much more likely to receive a great deal of attention in the media, which, in turn, helps to alert us to the mistake. Indeed, credibility may be the most valuable commodity a publisher possesses: It may be the most difficult to obtain and the easiest to damage or lose.

When we talk about a source’s authorship, we are discussing who produced the information in question. This is particularly important in the age of the Internet (see A Closer Look: Using Wikipedia). A media source presents information that is originally produced by individuals or by another organization. For example, a newspaper may present a story that is written by a specific reporter or one that is carried by a wire service such as the Associated Press or United Press International. Knowing who authored the story can help us assess how credible it is. If the source is authored by an organization, then we should consider the reputation of that organization. If the author is an individual or group of individuals, then we should consider whether they are trustworthy on the subject they are writing about. Being trustworthy on a subject amounts to knowing about the subject and being relatively unbiased about it. The next section will deal with bias. For now, let us consider the issue of the author’s knowledge.

What reason is there to think that an author knows the subject he or she writes or talks about? If the author has credentials and experience in the subject matter, this is a good sign. Someone with a PhD in biology and years of experience working in the field is generally a good source for information about biology. However, this same person may not be credible on the issue of tax policy. As another example, let us say you find a book or article about the history of the American Civil War, and the author is an air-conditioner repairman. The author’s profession alone does not necessarily mean that you cannot trust the information, but it does suggest that you should look further to find reasons to do so. It is possible for someone to have extensive expertise in an area that is primarily a hobby. But if he or she is an expert, then you should be able to find evidence of that expertise. For example, if an author is widely cited by people who do have credentials, this may indicate that he or she is considered an expert. This is another good time to look at the reputation of the publisher. If the work is published by a reputable publisher, this is far better than if it is simply posted on the author’s own blog. It can be difficult to assess an author’s level of expertise, but it is essential to do so before simply accepting what he or she has to say.

Authorship does not necessarily have to refer to a person: Publications and websites can also have credentials. For example, many journals are associated with universities. A journal that is not so associated is not necessarily unreliable, but those that are must uphold not only their own reputation but also the reputation of the affiliated university. That the university allows the affiliation can be seen as an endorsement, and the journal benefits somewhat from the university’s own credibility. When it comes to websites, you should be aware of what domain hosts the web page. Internet sources will have a suffix indicating the top-level domain; for instance, .edu indicates an academic source, whereas .gov indicates a governmental source. These domains tend to be more reliable than others, though there is still the potential for an individual to publish unverified information on a personal page within them. Websites in .com, .net, and .org require you to look closely at the credibility of the source, since anyone can purchase a website within these domains.
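
For readers comfortable with a little scripting, this suffix check can even be automated as a first pass. The sketch below is a hypothetical illustration of ours (the helper function and example URLs are not from the text), and it applies only the crude domain heuristic; reputation and authorship still have to be evaluated by a person.

```python
# A minimal sketch of the top-level-domain heuristic described above.
from urllib.parse import urlparse

def top_level_domain(url: str) -> str:
    """Return the final label of a URL's hostname, e.g. 'edu' or 'com'."""
    host = urlparse(url).hostname or ""
    return host.rsplit(".", 1)[-1]

for url in ("https://www.example.edu/article", "https://blog.example.com/post"):
    tld = top_level_domain(url)
    # .edu and .gov pages tend to be more reliable, but a personal page
    # within those domains can still carry unverified information.
    if tld in ("edu", "gov"):
        print(f"{url}: .{tld} site; still watch for personal pages")
    else:
        print(f"{url}: .{tld} site; look closely at the source's credibility")
```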

A Closer Look: Using Wikipedia

Questions about authorship become all the more important when we consider that many of us begin our research on the Internet. Anyone can publish whatever he or she pleases on the Internet. The online encyclopedia Wikipedia is a case in point: Most articles in Wikipedia can be edited by anyone, whether that person is an expert or not. Because the content is largely uncontrolled, many schools and professionals regard it as a wholly unreliable source. Others recommend using it but with caution, while still others do not object to its use to begin one’s research but would strongly object to using it as an authoritative source itself. Typically, universities do not consider Wikipedia an acceptable source because university papers must indicate the original sources of their information, and Wikipedia is not an original source. If you do use Wikipedia or a similar website in your research on a topic, it is up to you to follow up on the referenced sources and verify that they exist and agree with the article. Knowing the source of a claim is not enough to settle whether the claim is true, but it is a good first step.

Accuracy and Currency

One of the advantages of the Internet is the free flow of information. However, with no controls in place, the most absurd rumors can circulate, and before people even realize they have been misled, a rumor may have been accepted by many as the truth. A popular saying states that a lie gets halfway around the world before the truth has a chance to get its pants on. Though this is true both online and off, it has never been truer than in the age of the Internet. Thus, it is critical that you examine whether a source is accurate and up-to-date, or current, before you accept what it says as truth or as a worthwhile argument.

The value of accuracy is fairly self-explanatory. However, determining whether information is accurate can be trickier. Here are some tips: First, consider whether or not the information provided can be verified in print sources. If a substantial claim is put forth, one should be able to find various sources that confirm that claim. Second, investigate whether the source provides further references or a bibliography. Are these sources themselves credible? If a source cannot back up claims with references, this can be a warning that some of the material is conjectural, or even made up. Third, read carefully for misspellings, obvious errors, or embarrassing production values. Numerous minor errors often indicate that the source has not been vetted by others.

Though accuracy is paramount in evaluating a source, the currency of even accurate information can determine whether a source should be used. Information that is out-of-date is simply less useful and may lead to inaccuracy. Information that was the best available 30 years ago may have been discredited or substantially changed with the discovery of new information. Check the publication date of any source that has one. Look to see that the source references recent research and events. For instance, if an article about the American Civil Rights Movement does not include any information after 1966, this is an indication that its information is at least out-of-date; if it is a recent article, it may be an indication that the author has not stayed current on the issues involved. Note that what is considered out-of-date in one discipline may be considered current in another discipline. An algebra book from the 1950s is much more likely to be adequately up-to-date than a global communications book from the same era.

Interested Parties

The goal of an advertisement is to get you to purchase the advertiser's product. That's its motive. Does this make advertisements biased? Do you believe there can be unbiased advertisements?

We return to a point made at the start of this section: that many people and organizations have a purpose when sending a message. Thus, we must evaluate sources for any potential bias. An interested party is one that has a stake in the outcome of certain decisions, such as those made by legislatures and courts; the term includes anyone who has anything to gain from our believing something to be true. Often an interested party has an economic stake in how an issue is perceived in the media and thus, indirectly, by the public. Interested parties can be large, such as political entities and organizations. Interested parties can also be individual players, such as a sales associate who is paid on commission and thus is unlikely to be objective about the product he is selling. Or consider a political leader who supports a bill that will bring money to her home state and who knows that such a bill is likely to gain her votes when she comes up for reelection. She is an interested party because she stands to gain if the bill passes.

Nonetheless, having a personal interest in an outcome is not necessarily a problem. The important thing to keep in mind is the legitimacy of the argument presented by interested parties. When evaluating an argument, you must find out whether the source is an interested party. If so, does the individual or the group represented have a particular stake in the issue? You might hesitate to take “The Association of Cotton Growers” as an objective, disinterested source on whether the government should provide tax subsidies to cotton farmers. Or, if you are reading about climate change, you might consider whether the information is provided by a petroleum company or a solar energy company. If a source quotes an expert on whether a particular weapon should be developed by the military, it might make a difference if the expert works, has worked, or will work for the company that makes the weapon or a competitor of that company.

It is often difficult to determine whether a source is undermined by being an interested party. But if economic, political, or any other power-related connections can be established, or are suspected, then this demands heightened scrutiny when critically examining the arguments and information provided by such a source.

No source is perfectly reliable; all sources, thus, deserve some degree of critical scrutiny. Naturally, those sources that have, over many years, developed a reputation for journalistic integrity may need much less scrutiny than a website that appears to have been constructed in the last few weeks. As in all evaluation of information, one should always try to identify the facts involved and determine if the claims being made are plausible and if the reasoning that supports those claims is persuasive.