Module 1: How Psychologists Think

The Unit 1 introduction that you just read lists many topics that psychologists are interested in. You may have been surprised to discover such a wide range of topics. Part of the reason people tend to have such a limited view of psychology is that their exposure to psychologists is often limited. We tend to hear only about psychologists who provide professional services for people, such as therapy or counseling. Of course, many people with education in psychology are involved in these activities. More, however, are devoted to other activities. In fact, the vast majority of people who have degrees in psychology (undergraduate and graduate) devote their careers to goals other than providing therapy or counseling.

The characteristic that psychologists (individuals who hold a doctoral degree in psychology) really have in common, along with anyone else who has at least a college-level exposure to the discipline, is an understanding of the essential role of science and research and a commitment to the objective evaluation of ideas about human behavior and mental processes.

This module is divided into three sections. It begins by introducing you to the characteristics of a scientific discipline and explaining how they apply to psychology. The second section, acknowledging that much of what you will hear about psychology in your everyday life will come from the popular media (TV, magazines, internet, social media, and so on), gives you advice about how to begin to evaluate the psychological claims that you might come across. The final section outlines some key ways that people mentally distort the world when they fail to take a more scientific view.

  • 1.1 Understanding the Science of Psychology
  • 1.2 Watching Out for Errors and Biases in Reasoning
  • 1.3 Thinking Like a Psychologist About Psychological Information

READING WITH A PURPOSE

Remember and Understand

By reading and studying Module 1, you should be able to remember and describe:

  1. Difference between beliefs and knowledge (1.1)
  2. History of how psychology came to be considered a science (1.1)
  3. Five key properties of scientific observations (1.1)
  4. Operational definitions (1.1)
  5. Six types of reasoning errors that people typically make: statistical reasoning errors, attribution errors, overconfidence errors, hindsight bias, confirmation bias, false consensus (1.2)
  6. Seven tips for evaluating psychological information (1.3)

Apply

By reading and thinking about how the concepts in Module 1 apply to real life, and practicing, you should be able to:

  1. Begin thinking like a scientist in everyday life (1.1)
  2. Generate simple examples of operational definitions (1.1)
  3. Recognize examples of reasoning errors in your life and correct them (1.2)
  4. Use the seven tips to evaluate psychological claims (note that this is also an Evaluate goal) (1.3)

Analyze, Evaluate, and Create

By reading and thinking about Module 1, participating in classroom activities, and completing out-of-class assignments, you should be able to:

  1. Determine whether a particular subject or discipline is scientific or not (1.1)
  2. Outline how you would change a non-scientific observation into a scientific one (1.1)
  3. Separate flawed reasoning based on overconfidence errors from solid reasoning in an individual’s argument (1.2)
  4. Articulate a set of reasons why a particular psychological claim might not be trustworthy (1.3)

1.1 Understanding the Science of Psychology

Activate
  • Is each of the following a science or not? Chemistry. History. Biology. Psychology. Physics. What distinguishes the sciences from the non-sciences in this list? What does it mean for a discipline to be scientific?
  • Why do you think people care whether or not psychology is a science? Do you care whether psychology is a science or not? Why or why not?

Every day, we attempt to achieve the same goals that psychologists do. We see someone do something, and we try to explain why. For example, imagine that you encounter your best friend in the hall outside class, and he ignores you. Very likely, you would try to explain this behavior. Did he not see you? Is he angry with you? Is something troubling him? You might not stop at the question stage, however. Most people will answer the question and have high confidence that their answer, their explanation of the behavior, is correct. Psychologists do something different, though. They replace everyday observations and explanations with scientific ones. Science is nothing more than a method of gaining knowledge about the physical world. But it is a highly valued method.

Think of what it means to know something, as opposed to just believing it. Many children in the United States grow up believing in Santa Claus, the Easter Bunny, or the Tooth Fairy. As they get older they discover the many contradictions and inconsistencies that accompany belief in these characters—for example, “How does Santa get into our house? We don’t have a fireplace.” Eventually, as they realize that the beliefs are not justified, that the characters were invented to disguise gift-giving by their parents, the children discover an inescapable fact: Believing something to be true does not make it true. We are not saying that beliefs are wrong. We are saying that in order to know something, the belief must be justified. If you are approaching a railroad crossing in your car, you would much rather know that you will beat the oncoming train than simply believe it.

So, you can think about knowledge as correct, justified belief (although philosophers argue that the concept of knowledge is more complicated than that). Science has emerged as the most important method of providing the justification for belief, bringing it closer to knowledge. A scientist believes something to be true because it has been supported by evidence, evidence produced under tightly controlled conditions designed to allow the scientist to draw valid conclusions.

Throughout this book you will encounter many explanations of psychological phenomena. We frequently use real-life examples to illustrate these phenomena. You should always remember, however, that psychologists base their explanations not on casual everyday observation but on careful scientific research.

science: A set of methods intended to justify people’s beliefs by producing evidence under tightly controlled conditions. A full definition of science also includes its five key properties: empirical, repeatable, self-correcting, reliant on rigorous observation, and objective.

The Importance of Science to Psychology

If you have the opportunity, take a look at some other general or introductory psychology textbooks. Many of them make a big deal out of the assertion that psychology is scientific. (If you do not have the opportunity, take our word for it; they do.) You might wonder: why does it matter whether psychology is scientific or not?

Think of all of the classes you have taken in high school and college. How many of them began with a statement that the discipline you were about to study is a science? Of course, many disciplines are not sciences (for example, English, history, and foreign languages). What about biology, chemistry, or physics, though? Why doesn’t a chemistry textbook explain that chemistry is a science in its first chapter? The answer is probably obvious; it is because everyone knows that chemistry is a science. Aha, now we are on to something. The reason that psychology textbooks have to explain the link with science is that not everyone knows that psychology is a science (Lilienfeld, 2012). Unfortunately, that seems to include other scientists. As a consequence, psychology sometimes seems as if it is “fighting for respect” among the scientific disciplines (Stanovich, 2019).

Over the past few centuries, science has emerged as the most important and most widely respected way of discovering truths about the physical world—in other words, of turning belief into knowledge. Even in the 18th century, scientific ideals were held up as the model for many disciplines. Unfortunately, Immanuel Kant (2004/1786), an influential 18th-century philosopher, had asserted that a scientific psychology was impossible. Given the respect with which scientific disciplines were treated, the implication may have been that psychology was not “good enough” to be a science.

It is interesting to note, however, that many of the scholars who were interested in psychological concepts during the 18th and 19th centuries had a scientific background. To give one quick example, Hermann von Helmholtz, who in 1852 proposed a theory of color vision that is still accepted by psychologists today, was a physicist. (sec 10.1) Also, it seemed reasonable to ask: if other complex systems—for example, the universe—could be studied scientifically, why not the human mind?

Still, when psychology emerged as a legitimate discipline, it had to struggle to establish itself as a science. One reason that the German researcher Wilhelm Wundt is credited with being the first psychologist is that he worked so hard at establishing psychology as a science throughout Europe (Hunt, 2007).

Five Key Properties of Science

It is not just the word of scientists or other authorities that gives science its special power to justify people’s beliefs. Rather, it is the characteristics of scientific inquiry itself that make it so effective. It has five key properties:

  • Science is empirical.
  • Science is repeatable.
  • Science is self-correcting.
  • Science relies on rigorous observation.
  • Science strives to be objective.

As you read about these properties, try to imagine ways that you can apply them to your own attempts to understand the world. You will find that with practice, you can apply a more scientific approach to your everyday thinking (and as you will see soon, that is a good thing).

Science Is Empirical

Empirical means “derived from experience.” Simply put, science proceeds as scientists “experience” the world and make observations in it. The other kind of potential observation is an inside-the-head one: observation of one’s own consciousness and thought processes. This second technique, known as introspection, was very important in the early history of psychology. For example, you can imagine lying on a beach and relaxing and then report how that thought makes you feel. You might report it to be a very effective way of helping you relax, but because you did not have to leave your own head, so to speak, your report is not an empirical observation.

It is probably fair to say that empirical observation is the most fundamental principle of science. These experience-based, public observations are what allow the remaining four characteristics of science to be achieved.

empirical: Derived from experience. Empirical observations are the fundamental basis of science.

Science Is Repeatable 

If you were to conduct a scientific research project, you would seek to publish an article about your research in a scientific journal. One of the sections of that article, called Methods, would lay out in great detail how you conducted your study. If future researchers want to repeat your study, all they would have to do is pick up your article and follow your methods like a recipe. This process, repeating a research study, is called replication.

Well, that sounds boring and useless, you might think. How do science and psychology progress if researchers spend their time repeating someone else’s study? First, replication is precisely what creates the third key property of science, the capacity for self-correction (see below). Second, relatively few studies are simple repetitions of previous studies (although that is changing somewhat; see Module 4). Instead, a replication typically repeats some key aspects of an earlier study while introducing a new wrinkle. To give you a simple example, a replication of a study done on learning in preschool children might examine the same phenomenon in children throughout the primary grades. It could show that the way preschool children learn also applies to children of other ages.

replication: The process of repeating a scientific research study. Replication applies both to methods and the results of a study.

Science Is Self-Correcting

We suggested above that replication is what allows science to be self-correcting. Let us explain. Self-correcting means, roughly, that evidence based on good research tends to accumulate, while information based on bad research tends to fade away, forgotten.

Suppose you are watching the evening news fifteen years from now. A vaguely familiar person is being interviewed about her amazing new psychological discovery. As she is describing how her research has thrown into question everything we previously thought was true about human behavior and mental processes, you suddenly realize that you know this person. She was the person who goofed off in, rarely showed up to, and most likely failed the General Psychology class you took together back in college. “No way,” you think to yourself as she describes how the practical applications of her research finding will make her a multimillionaire. “She must have made a mistake when she did her study.” Quite simply, you do not believe that she got the correct results.

As someone who understands the science of psychology, you have a way to check up on her. Find her journal article, repeat the methods, and see if you can replicate her results. If you do, your results are another point in her favor, as an independent researcher has produced additional evidence for her findings. If you get different results, you have generated an official scientific controversy. Now a third researcher has to come along and replicate the study. The new replication may agree with you, or it may agree with your rival. Then, another researcher has to come along. And so on. Over time, the evidence will start to pile up on one side. Most of the researchers will obtain results that agree with one another, and the few that do not will be forgotten.

Here are two real-life examples of this scenario. Neither is from psychology, but it is important for you to realize that scientific principles do not depend on the subject matter. As long as you adhere to the principles, you are a scientist.

First, in 1989, a team of scientists claimed that they had achieved something called cold fusion, a nuclear reaction previously thought to be impossible. Observers noted that the results of these experiments, if verified, could be harnessed to solve the world’s energy supply problems (Energy Research Advisory Board, USDOE, 1989). Researchers across the world could not believe that this difficult problem, with such important potential for the human race, had finally been solved. Many tried to replicate these results in their own labs. The vast majority was unable to do so, and the original research was forgotten.

The second example is from biology. In 1997, a team of researchers again claimed that they had achieved what had previously been thought impossible. They were able to clone a higher mammal, a sheep; they named her Dolly. Doubting researchers across the world attempted to replicate these results, and this time, they were successful. Since the cloning of Dolly, researchers have cloned other sheep, as well as cats, deer, dogs, horses, mules, oxen, rabbits, rats, and rhesus monkeys (NHGRI, 2017). It is now commonly accepted scientific knowledge that cloning of higher mammals is possible.

And because this is a psychology textbook, let us conclude with a more relevant example. In 1929, the electroencephalogram (EEG) was invented by Hans Berger. He placed electrodes on a person’s scalp and was able to amplify and therefore measure the electrical signals coming from the brain. Skeptical researchers did not believe that Berger was actually measuring brain signals; some even produced similar-looking signals from a bowl of quivering gelatin. But over the next several years, a funny thing happened. Numerous researchers were able to reproduce these EEG signals, and the technique was eventually accepted as genuine (Luck, 2014). Interestingly, EEG is still in use today as a key method of measuring brain activity.

Of course, it can take many years for enough evidence to accumulate on one side of a controversy in order to draw a firm conclusion. This lengthy time frame makes it very frustrating to be a consumer of scientific information. We may learn through media reports, for example, that a study found a particular diet to be safe and effective. Soon after, another study is reported that contradicts the first. What is happening is that we are hearing about the individual pieces that compose the scientific controversy while it is still in progress.

Science Relies on Rigorous Observation

Earlier, we said that scientific evidence was produced under tightly controlled conditions designed to allow the scientist to draw valid conclusions. The conditions under which scientific observations are made are laid out by specific research methods. These methods are essentially the rules for making scientific observations. (see Module 2)

For example, you might be interested in discovering whether caffeine improves exam performance. To do this, you would probably select a research method called an experiment. There are entire courses that teach the details (that is, the rules) about this method and explain why it would be the method you should choose. The important point here is that scientists learn about phenomena by carefully controlling, recording, and analyzing their empirical observations.

Science Strives to Be Objective

You should be aware of two related but distinct senses of objective. First, scientists strive to be personally objective; they try not to let their personal beliefs influence their research. Second, the observations that scientists make must be objective, meaning that different observers would observe the same thing. For example, if a research participant answers a question on a survey by choosing a number on a 5-point scale, different observers would be able to agree on which number was chosen.

It can be very difficult to make objective observations. Imagine sending different observers out to watch a group of children and count how many aggressive acts they commit. As you might guess, the different observers might come back with very different reports. One source of difficulty can be the personal background and beliefs of the individual observers. Perhaps one observer believes that boys are more aggressive, so he watches them more carefully than he watches girls.

Another source of difficulty when trying to make objective observations is a lack of clarity about precisely what is being observed. In order to make observations more objective, researchers use operational definitions. Operational definitions specify exactly how a concept will be measured in the research study. For example, an operational definition for aggressiveness could be a checklist of behaviors that observers might see in the children they are watching: hitting, punching, kicking another child, using profanity toward another child, directing a threat toward another child, and so on. The goal is to come up with a list of behaviors that are a reasonable reflection of aggressiveness and that different observers can consistently recognize as aggressiveness. An operational definition like this gives observers a way to know what to count as an aggressive behavior so they can compare apples to apples.
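To make this concrete, here is a minimal sketch in Python of how an operational definition turns “aggressiveness” into something different observers can count and compare. The behavior checklist and the observation records below are invented purely for illustration; they are not from an actual study.

```python
# A hypothetical operational definition of "aggressiveness": a checklist of
# specific, observable behaviors that count as aggressive acts.
AGGRESSIVE_ACTS = {"hit", "punch", "kick", "profanity", "threat"}

def count_aggressive_acts(observations):
    """Count how many recorded behaviors fall on the checklist."""
    return sum(1 for behavior in observations if behavior in AGGRESSIVE_ACTS)

# Two observers watch the same child and record what they see (made-up data).
observer_a = ["hit", "share toy", "threat", "kick", "laugh"]
observer_b = ["hit", "share toy", "threat", "kick", "hug"]

# Because both observers apply the same operational definition, their totals
# can be compared directly (here both count 3 aggressive acts).
print(count_aggressive_acts(observer_a))  # 3
print(count_aggressive_acts(observer_b))  # 3
```

The point of the sketch is simply that a shared, concrete checklist lets independent observers arrive at counts that can be meaningfully compared, which is exactly what objective observation requires.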

The Role of Peer Review in Science

Scientific research uses a technique called peer review to help ensure that the features of good science are contained in any specific research project. Here is how it works. If you want to have a report of your research study published in a scientific journal, it will be reviewed by a small group (often three) of experts in the research area. These experts, the peers, will evaluate your article, offering comments and suggestions to improve the scientific strength of the project and the report. Publication decisions are based on the recommendations of the peer reviewers. As a result of peer review, a great many articles are rejected, and nearly all others are required to make significant revisions before they can be published. Peer review, then, is the basic mechanism that we use for quality control throughout the scientific disciplines. We should point out that peer review is certainly not perfect. Low-quality studies can slip through, and high-quality studies may occasionally be rejected by a powerful but biased reviewer. It is, however, the best procedure we have available for maintaining the level of scientific rigor in published research.

operational definition: A definition of a concept that specifies how it will be measured in a research project.
peer review: The process through which prospective scientific research articles are evaluated by a group of experts in the field.

Debrief

  • Think about some non-scientific disciplines, such as history, philosophy, and the humanities. Can you imagine how they might be made scientific?
  • Would it be a good idea or a bad idea to make non-scientific disciplines more scientific?

1.2 Watching Out for Errors and Biases in Reasoning

Activate

  • In your opinion, what type of people are the worst drivers? How did you form this opinion?
  • Do more people in the US die from falling or from fires? How sure are you that you are correct?

Human beings, probably on a daily basis, try to accomplish the same goals as scientists. When we witness some event, rather than simply being passive observers, we often try to explain why it happened. Specifically, in the case of psychology, we see someone engage in a behavior and then try to explain it and the mental processes underlying it. For example, if we see someone running down the hall at school and yelling, we might wonder, “Why did he do that? Is he being chased? Is he celebrating because he just finished his finals?” We would have to call this very common human activity of searching for explanations naïve, or intuitive, psychology, however, because it takes place without the benefit of scientifically gathered evidence. Other disciplines are similar; for example, researchers have discovered that people generate their own explanations for physical phenomena without relying on formal physics principles (as you might guess, this is sometimes called naïve, or intuitive, physics).

Why should we care about intuitive reasoning (about psychology and the physical world)? Well, psychologists who study reasoning and thinking have discovered an important fact about it: We make many predictable sorts of errors when we try to draw conclusions about our everyday observations without thinking scientifically. And, from our selfish perspective, it is a good thing, too. After all, if your explanations about human behavior and mental processes were all correct before you took this class, psychology educators would be out of a job. In other words, if naïve psychology were always correct, there would be no need for scientific psychology.

In the following sections, we will outline a few important biases and errors. First, however, let’s talk about what we mean by biases (we will assume you know what we mean by errors). A bias is a specific tendency, a consistent way of thinking, seeing, believing, or acting. One important source of bias is one’s personal experiences and background. So now you might realize that when we spoke earlier about scientists’ need to ignore their personal backgrounds and make objective observations, we were in fact talking about the need to move beyond their biases. We distinguish between error and bias because an error, by definition, is always wrong. A bias in some specific situations might lead to a correct conclusion. For example, professors who have a bias that students are dishonest may be very successful at identifying cheaters in their classes. This can make it very difficult for people to discover that their biases might be incorrect (see also the confirmation bias below). The key idea is that if a bias is applied consistently, eventually it will lead to an error.

So with that in mind, here are a few important types of biases and errors in reasoning:

Statistical reasoning errors. There are many situations in which we try to make some judgment about the frequency or likelihood of something. For example, if we see a man running down the hall at school, we might need to judge how likely it is that he is being chased or fleeing some catastrophe. This is essentially what statisticians do, but, unlike naïve psychologists, they base their conclusions about likelihood on much more data and on the laws of probability. Statistical reasoning errors are poor judgments about likelihoods. Largely because we do not have the time or ability to calculate probabilities in our heads, we use shortcuts when trying to judge likelihood, which leads to many important errors. (sec 6.2)

Attribution errors. We also tend to make errors in the types of explanations that we come up with for people’s behavior—in short, attribution errors. For example, many people are very likely to explain someone’s behavior by attributing it to internal causes—that is, something about the person’s disposition or personality. (sec 18.1) So, observing someone running through the halls yelling, we are more likely to assume that he is a rude and obnoxious person and less likely to assume that some situational factor, such as an emergency, is responsible.

Overconfidence errors. Making matters worse, we have a set of biases that lead us to think that we are correct more often than we actually are. Individually, each bias is quite a dangerous overconfidence error. Together, they combine to make us overconfident of our ability to explain and know things without relying on scientific research. And we can be very overconfident. In one study, research participants judged which of two kinds of events was more deadly (for example, do more people in the US die from fires or falls?) and how likely their judgments were to be correct. When they said that there was a million-to-one chance against being wrong, they were actually wrong 10% of the time (Fischhoff, Slovic, & Lichtenstein, 1977). (For the record: according to the Centers for Disease Control, 38,707 people died from falls and 6,196 people died from fires in the US in 2018.) There is little doubt that people are rewarded for confidence, and even for overconfidence. For example, research participants judge that experts are more believable when the experts are more confident; interestingly, they even overestimate how often overconfident experts are correct (Price and Stone, 2003; Brodsky, Griffin, and Cramer, 2010).
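To see what that kind of miscalibration looks like numerically, here is an illustrative Python sketch. The counts are hypothetical, chosen only to echo the pattern described above; they are not data from the cited study.

```python
# "A million to one I'm wrong" implies an extremely small error rate.
stated_odds_against_error = 1_000_000
implied_error_rate = 1 / (stated_odds_against_error + 1)   # roughly 0.000001

# Hypothetical track record: 1,000 such judgments, 100 of them wrong (10%).
judgments_made = 1000
judgments_wrong = 100
observed_error_rate = judgments_wrong / judgments_made

# Overconfidence shows up as an observed error rate far above the implied one.
print(f"implied error rate:  {implied_error_rate:.7f}")
print(f"observed error rate: {observed_error_rate:.2f}")
print(f"errors were about {observed_error_rate / implied_error_rate:,.0f} times more common than claimed")
```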

So, if you see a man running down the hall, not only is there a pretty good likelihood that you will make the wrong judgment about him, there is also a pretty good likelihood that you will be nearly sure that your wrong judgment is correct. Some of the specific biases that lead to overconfidence are the hindsight bias, confirmation bias, and false consensus effect:

Hindsight bias. Once an event has happened, it seems to have been inevitable, and people misremember and believe that they could have predicted the event (Fischhoff, 1982; Lilienfeld, 2012). This has been called the hindsight bias, or the “I knew it all along” bias. For example, on many autumn Monday mornings, football fans across the US engage in what is known as “Monday morning quarterbacking.” Fans complain about the interception that the quarterback for their favorite team threw: “It was obvious that the cornerback was going to blitz; why didn’t he just throw the ball out of bounds?” But the event was not inevitable; it could not have been predicted, and had the fans been questioned before the interception actually occurred, they would not have “known it all along.” And you need not be a sports fan to fall for the hindsight bias. One study tested participants ranging from 3 to 95 years old; the bias was common in all of the age groups (Bernstein et al., 2011). Another demonstrated the bias among Japanese and Korean participants (Yama et al., 2010). You should realize that the hindsight bias also works powerfully to make people believe that much research is unnecessary. When told that researchers have made some discovery, many people’s response is “I knew that; who needed to do research to find that out?” When people find themselves thinking, “I knew that already!” as a result of the hindsight bias, they often turn out to be overconfident about their beliefs as well.

Confirmation bias. We once asked a few friends what type of people are the worst drivers. The answers we received included teenage boys, people over 80, 20-something women with cell phones, moms in minivans, and older men wearing hats. Interestingly, several people were absolutely sure that they were right. Yet, it is impossible that they were all right. Only one group of drivers can be the worst. The strength of our friends’ beliefs results from something called the confirmation bias (Ross and Anderson, 1982). People have a tendency to notice information that confirms what they already believe. It works this way: At some point you may have picked up the belief that older men wearing hats are the worst drivers (one friend heard it on a radio show). Now, every time you see an example that confirms that belief—for example, a 70-year-old man in a bowler straddling two lanes while driving 15 miles per hour under the speed limit—you make a mental note of it. “Oh, there is another old man in a hat. They should not be allowed to drive!” The flip side of the confirmation bias is that we fail to notice information that disconfirms our belief. So, we might not pay attention to the 18-year-old in the Mustang who crossed the yellow line and narrowly missed a truck while trying to pass the older man in the hat. The confirmation bias is very common in many different situations (Nickerson, 1998). For example, people suffering from insomnia may incorrectly recall that they sleep less than they actually do, in part because of the confirmation bias (Harvey and Tang, 2012). The confirmation bias is a particularly dangerous one because it often directly leads us to draw the wrong conclusion while simultaneously increasing our confidence in that wrong conclusion.

By the way, according to the National Highway Traffic Safety Administration, males between 16 and 20 years old have the highest rate of involvement in automobile accidents. Females in the same age group are in second place. Sorry, there were no data on older men in hats.

False consensus. The famous developmental psychologist Jean Piaget proposed that young children have difficulty taking someone else’s point of view; he called it egocentrism (Module 16). But the characteristics that Piaget described do not apply to children only. We can find many examples of adults who fail to take other people’s point of view. False consensus, the tendency to overestimate the extent to which other people agree with us, is an important example of this failure (Pronin, Puccio, and Ross, 2002). In essence, we tend to think our point of view is more common than it actually is, failing to consider that other people might not see things the same way. In 2003, we asked approximately 100 General Psychology students to rate their degree of support for the U.S. war with Iraq, which was then near its peak. Then we asked them to estimate how many of their fellow students gave the same rating—that is, how many agreed with them. Ninety percent of the students believed that more people agreed with them than actually did, a very strong false consensus effect (Gray, 2003). Again, this error contributes to our overconfidence and to our belief that research is not necessary. It is all too tempting to believe that we have learned the truth about the whole world by observing ourselves and our small part of the world. Research is important because it helps us find out objectively how common or uncommon our personal beliefs may be.
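Here is a minimal Python sketch, with invented data, of how a classroom demonstration like this could be scored: each person’s estimate of agreement is compared with the actual level of agreement among the other respondents. The ratings and estimates below are hypothetical.

```python
# Made-up data: each person's position and their guess at the share of
# classmates who agree with them.
ratings   = ["support", "oppose", "support", "oppose", "oppose"]
estimates = [0.80, 0.70, 0.75, 0.90, 0.55]

def actual_share_agreeing(person_index):
    """Proportion of the *other* respondents who gave the same rating."""
    same = sum(1 for i, r in enumerate(ratings)
               if i != person_index and r == ratings[person_index])
    return same / (len(ratings) - 1)

# A false consensus effect appears when most people's estimates exceed
# the actual level of agreement.
overestimators = sum(1 for i, guess in enumerate(estimates)
                     if guess > actual_share_agreeing(i))
print(f"{overestimators} of {len(ratings)} people overestimated how many others agreed with them")
```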

attribution error: Mistaken conclusion that someone’s behavior is a result of personality only and not any possible environmental reasons.
confirmation bias: The tendency to notice and pay attention to information that confirms your prior beliefs and to ignore information that disconfirms them.
false consensus: The tendency to overestimate the degree to which other people agree with us.
hindsight bias: The mistaken belief that some event or explanation is something that you already knew or that you foresaw.
naïve (or intuitive) psychology: The search for explanations about human behavior and mental processes without the benefit of scientifically gathered evidence.
overconfidence error: A general tendency for people to be more confident in their judgments than they should be. It results from several specific biases, including hindsight bias, confirmation bias, and false consensus.
statistical reasoning error: The error of judging probabilities or likelihoods without collecting sufficient data.

 

Debrief

  • Try to think of an example from your life in which you or someone you know might have committed each of the errors described in this section: statistical reasoning error, attribution error, overconfidence error, hindsight bias, confirmation bias, false consensus.

1.3 Thinking Like a Psychologist About Psychological Information

Activate

  • Have you ever read a self-help book? If so, did you follow the advice in the book, and did it help?
  • Have you ever found yourself in a discussion in which someone says, “I read somewhere that ___,” where the blank is filled with some claim about psychology (human behavior and mental processes), such as “men and women solve problems differently” or “most people are right-brained.” How did you respond to the statement?

Unless you major in psychology, this might be the only psychology class you ever take. Even if you wind up taking one or two additional classes, your most significant lifetime exposure to psychological information will be as a casual user of the information. Even psychology majors who end up earning advanced degrees will be bombarded with psychological information from the popular media and other non-academic sources—newspaper and magazine reports, or Facebook posts that summarize some new finding, commercial websites touting some remarkable relationship-saving communication strategy, psychological claims made by “experts” on television and YouTube, claims made by friends and acquaintances during conversations, and so on. So whatever you may decide to do as a student of psychology, it is important that you learn how to make sense of these claims and to evaluate them.

The basic principles of scientific thinking and time-tested research methods and statistical techniques will help you sort out the good from the bad, the sense from the nonsense. This section focuses on some critical thinking skills (sec 7.1) that will help you overcome problems you will face when you are exposed to psychological information and research in everyday life. As an added bonus, many of the tips in this section can also be applied to help you evaluate media reports of claims and research from other disciplines or even advertising and political campaigns.

Often, the only way to draw valid conclusions about some claim will be for you to enlist the thinking skills that you acquire through your education in science; remember, the whole purpose of science is to provide justification for belief. So you would need to locate scientific journal articles, read them carefully, and compare the articles to one another and to the claims from the popular media.

As you might guess, this can be an enormous undertaking, one that could be a full-time career, so even psychologists with advanced degrees do not often do all of this work. How can you decide when you should go to the trouble? You should judge how important it is for you to not be misled about each individual claim. For example, if you are currently having serious difficulty in a romantic relationship, you may want to determine whether the relationship-saving claims from someone’s website are supported by scientific research before you follow the advice (we know we would).

Another strategy is to use the suggestions from this section as a set of warning flags during your initial encounter with the psychological information. If the popular claims that you are evaluating do not pass the tests suggested by the following seven tips, you should be very cautious. It might be time for you to take a deep breath and begin wading through the scientific literature to find more authoritative information.

Tip #1. Be aware of your pre-conceived ideas

If you think about the confirmation bias from Section 1.2 for a minute, you might realize something important about it. If we go through life typically paying attention only to information that confirms what we already believe, it might be remarkably difficult to change our minds. Indeed, researchers have demonstrated that this is exactly what happens. It is called belief perseverance, and it is very common. People sometimes even refuse to change their minds when their beliefs are proven completely wrong (Anderson, 2008; Ross, Lepper, and Hubbard, 1975). As you might realize, the ability to critically challenge your own beliefs is one of the most important thinking skills you can develop. The reason is simple; no one is always right.

One of the greatest dangers we face when evaluating psychological claims, then, is that we tend to be very uncritical about the information that we already believe. Many people have very little interest in, and devote very little effort to, proving themselves wrong (Browne & Keeley, 2009). If we happen to be wrong, though, we will never find out. If your goal is to find the truth, sometimes you have to admit that your pre-conceived ideas were wrong.

belief perseverance: The tendency to hold onto beliefs even in the face of contradictory evidence.

Tip #2. Who is the source?

Although an advanced degree in psychology from a reputable university is certainly not a guarantee that a claim will be correct, the lack of such a degree can be a cause for caution. A person who makes psychological claims should be qualified to make those claims. Dr. Laura Schlessinger, for example, is the author of several bestselling books that dispense psychological information, as well as the host of a national call-in radio advice program. She bills herself as America’s #1 Relationship Talk Show Host. One problem: Her Ph.D. is in physiology (read that carefully; it didn’t say psychology). Although Dr. Laura, as she calls herself, has a certificate in Marriage, Family, and Child Counseling, it is the Ph.D. that qualifies someone to refer to herself as “Dr.” It seems a bit misleading to dispense psychological information as a “Dr.” in physiology.

How about Dr. John Gray, the author of the successful Mars and Venus books? According to his website, MarsVenus.com, the original book in the series, Men Are from Mars, Women Are from Venus, has sold more than 15 million copies, and Dr. Gray is the “best-selling relationship author of all time.” John Gray does indeed have a Ph.D. in psychology, so he may appear qualified. His degree, however, is from Columbia Pacific University, a school that was ordered by the state of California in 1999 to cease operations because it had been granting Ph.D. degrees to people for very low-quality work (Hamson, 1996, 2000).

Organizations can also sometimes deceive us about their true origins and purpose. Have you ever heard of the American Academy of Pediatrics? According to their website, it is a large group of pediatricians (established in 1930) with a national organization and 59 individual chapters throughout the US (and 7 in Canada). Among other activities, the Academy shares with the public medical consensus opinions about various topics intended to improve the health of children (AAP, 2020). Well, how about the American College of Pediatricians? According to their website they, too, are an organization of pediatricians (and other healthcare professionals). It was established in 2002 and now has members across the US and in other countries. (ACPEDS, 2020). If you are at a computer, please take a few minutes right now to Google ACPEDS. (Go ahead, you have time; you are almost finished with this section.) Don’t go to their website, but look at some of the other results that Google gave you. Did you find the one that states that the Southern Poverty Law Center has labeled the American College of Pediatricians a hate group? Others refer to it as a fringe group of pediatricians with an obvious ideological bias. Now, keep in mind, we are not saying that this quick Google exercise has definitely unmasked this group as a fraud. But we certainly have quite a bit more to think about before we automatically accept their information.

When you are faced with the problem of trying to figure out if an individual or group is legitimate, do what fact-checkers do. Do not simply read the “About” section of a website. Do an independent investigation of the person’s (or organization’s) background, experience, or credentials. You don’t have to hire a private detective; just do a bit of a Google search. Use Wikipedia (tell your professor we said it was ok in this case). All we are trying to do is get a sense for someone’s background and whether or not they are associated with any controversies. An informal search like this will work quite well for those purposes.

Tip #3. What is the purpose of the information?

This one might seem obvious, and sometimes it is. When the first thing you see on a website is a Buy Here button, you know that they want to sell you something. Sometimes it is not exactly obvious, though. A common persuasion technique is to disguise an attempt to persuade as information (Levine, 2020). For example, financial advisors who work on commission often try to sell annuities or other financial products by sponsoring free educational seminars or by publishing “informational booklets” about financial products in general. Other common hidden purposes include political agendas (see ACPEDS above for a possible example) and obtaining personal information about users for marketing purposes (cough-cough, Facebook).

Tip #4. Is it based on research?

If you learn nothing else from this course, we’d like you to learn this next point. No, wait. On second thought, there are a great many things we would like you to learn from this course. Among those, we would like to emphasize this next critically important nugget. There is only one reason you are allowed to say that something is true in psychology. And that reason is that someone did the research. Not just one someone, but lots of someones. You see, that is what it means to be a scientific discipline. We cannot rely on casual observation or opinion, even expert opinion. We must only draw conclusions when they are warranted by careful research conducted by a number of different researchers.

And this certainly applies to the psychological information to which you are exposed on a regular basis outside of the confines of this course. Consider self-help, for example. Self-help is an enormous industry. (Just for fun, we just Googled self-help; it returned 4,150,000,000 results in 0.75 seconds.) There are many excellent self-help resources. Unfortunately, however, there are also many that are, well, “not excellent.” The fact that a book has been published, for example, says only that the publisher believes that it will sell; sadly, it says nothing about the quality of the information.

How do you tell the good from the poor resources, then? The task involves several of the tips we have given in this section and more. Pay attention to the qualifications of the author. Look for signs that the author is oversimplifying (Tip #5 below) or relying on persuasion tricks (Tip #7).

Most importantly, does the resource have a good grounding in scientific research? Is there a section somewhere prominent that lists the studies cited in the resource? Are the studies from scientific psychological journals, such as The Journal of Personality and Social Psychology and Health Psychology? Is the underlying research described in the resource itself? Have the authors conducted any of the research themselves? If the answer to most or all of these research questions is no, we would be very cautious about accepting the claims in the resource.

Tip #5. Beware of oversimplifications

Descriptions of psychological concepts intended for the public must simplify (for that matter, so, too, must undergraduate textbooks). If they presented the information in as much detail as one typically finds in a scientific journal, very few people would ever pay attention, even if they could understand the information (scientific journal articles are notoriously difficult to read because they are typically written for an audience of Ph.D.-level psychologists). So simplification is acceptable, often even necessary. But oversimplification is simplification that goes too far and ends up distorting or misrepresenting the original information.

An expert can usually recognize oversimplification, but how can a non-expert? It would seem that you would need to know the complicated version of the information in order to know if it is being distorted when simplified. To help you recognize oversimplification, you should get in the habit of looking for the common clues that it is occurring. For one thing, there are very few absolutes (none, never, always) in psychology. (What if we had said there are no absolutes in psychology? Would you have believed us?) If someone tells you that something is always true, it is a good bet that they are oversimplifying.

Also, be very careful when people make sweeping generalizations that seemingly apply to everyone. These are called overgeneralizations, incorrectly concluding that some fact or research finding true of one group is automatically true of a larger or different group. For example, a headline on an internet site we first encountered a few years ago trumpeted “This Food Makes Men Aggressive,” a statement that certainly sounds like a sweeping generalization. Clicking on the link, we discovered that the title of the article was “This Food Can Make Men Aggressive,” a bit of a hedge from the headline, but still an oversimplified overgeneralization of the research, as you will see. The article cautioned readers about the potential dangers of eating soy burgers because research had discovered a link between soy and aggression. The actual research, published in the journal Hormones and Behavior, reported that monkeys that were fed a diet high in soy isoflavones (125 mg daily) were more aggressive than monkeys fed no isoflavones (Simon et al., 2004). A typical soy burger has 7 mg of soy isoflavones (Heneman et al., 2007). So unless you are eating 18 of them per day and are as small as an average monkey, we recommend waiting before throwing away that package of soy burgers in the freezer. We call this specific kind of oversimplification the headline effect, distorting some research results by creating a very short headline-like summary. Keep in mind, the headline effect does not happen every time someone uses a short summary; it only comes into play when that headline-like summary distorts or hides some important aspects of the larger story.
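The arithmetic behind that “18 of them per day” point is simple enough to check yourself; here it is as a quick Python sketch using the dosage figures cited above.

```python
# Quick check of the soy-burger arithmetic from the paragraph above.
isoflavones_in_study_mg = 125   # daily dose fed to the monkeys (Simon et al., 2004)
isoflavones_per_burger_mg = 7   # typical soy burger (Heneman et al., 2007)

burgers_needed_per_day = isoflavones_in_study_mg / isoflavones_per_burger_mg
print(round(burgers_needed_per_day))   # about 18 soy burgers a day
```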

Another clue that someone may be oversimplifying is when an explanation uses a very firm either/or approach; this type of oversimplification is called creating a false dichotomy or false choice. For example, the popular media might report that some new research has uncovered a genetic component for personality, implying that environment, therefore, has no influence on personality. In other words, personality is presented as the product of either nature or nurture. As you will learn throughout this book, though, psychological phenomena appear to be a combination of biological and environmental influences. Few things in psychology have only one cause or explanation. Any report that emphasizes one explanation to the exclusion of everything else is probably oversimplifying.

headline effect: A type of oversimplification in which research results are distorted through the creation of a very short, newspaper-style headline summary.
false dichotomy or false choice: A type of oversimplification in which potential explanations are presented as a strict either/or possibility. As a result, a phenomenon is incorrectly explained as resulting from one cause to the exclusion of all others.
overgeneralization: A type of oversimplification in which some fact or research finding true of one small group is incorrectly generalized to a larger or different group.
oversimplification: Simplification that goes too far and ends up distorting or misrepresenting the original information.

Tip #6. Beware of distortions of the research process

Controversies

Science is, by nature, uncertain. For very long periods of time, some theory or claim may be quite controversial, with large groups of psychologists standing on both sides of the issue. For example, at a presentation in which the speaker asked the audience—about 125 college and high school psychology instructors—whether personality was determined more by a person’s genes or by the environment, the instructors split nearly 50-50. Obviously, these psychologists did not all agree.

When psychological claims are presented in the media, however, the reports often skip the disclaimer that not all psychologists agree with the claim, or that the results of a specific study represent a snapshot: one piece of information at one point in time. An honest and complete view acknowledges that progress in psychological knowledge is more of a back-and-forth process than a straight line and that individual results must be put into the context of all of the research that preceded them (science is self-correcting over time). (sec 1.1)

More dramatically, the media can sometimes present the opinions of a very small number of scientists (sometimes as few as one) as if they represent the existence of a legitimate scientific controversy. For example, in April 2020, during the height of the COVID-19 quarantine in the United States, two urgent care doctors, Dan Erickson and Artin Massihi, recorded a series of videos on YouTube that asserted (among other things) that the shelter-in-place orders had little to no effect on the spread of the coronavirus. Somehow, they and the millions of supporters of their videos seemed to think that the doctors’ conclusions were more valid than the conclusions of the World Health Organization, the Centers for Disease Control and Prevention, and highly credentialed epidemiologists throughout the world. We like to think of this as the myth of two equal sides. Although it is true that there are usually two sides to a story, it does not mean that the two sides are equally good. Two doctors who treat patients in an urgent care facility on one side do not form an equal counterweight to the entire scientific discipline of epidemiology and the most prestigious health organizations in the world. The very strong scientific consensus is that sheltering in place led to a very substantial slowing of the spread of the virus, probably saving millions of lives by largely preventing the overloading of hospitals. In this case, as sometimes happens, supporters began to treat the doctors as if they were some kind of martyrs, shunned and censored because they dared to challenge the accepted wisdom, as if they were some modern Galileo (who, you may recall, was persecuted by the establishment when he proposed that the earth revolved around the sun). But as science historian Michael Shermer (2002) has pointed out, you do not get to be Galileo simply by being shunned by establishment science; you must also be correct.

Spurious correlations

In Module 2, you will learn an important point: just because two things are associated does not mean that one caused the other. Oh, what the heck. We might as well introduce you to the point here. Then we can review it in Module 2 in the context of interpreting statistics. Researchers who want to determine that there is a cause-and-effect relationship between two variables have the ideal research design available to them: the experiment. The experimental research design allows causal conclusions precisely because the researcher manipulates one variable and measures the other variable while holding other variables constant to rule out alternative explanations. A different type of research design is correlational, in which a researcher measures the correlation, or association, of two variables. For example, students who have higher grades in high school tend to get higher grades in college (and vice versa: low-performing students in high school tend to get lower grades in college). This relationship, or association, is useful for prediction: for example, if you want to predict who will do well in college, focus first on the students who did well in high school. But, as you will see in some detail in Module 2, this does not mean that doing well in high school causes students to do well in college. Unfortunately, people sometimes make that exact sort of claim. They speak as if the association between two variables automatically means that they are causally related. For example, in 2010, a Los Angeles Times headline trumpeted, “Proximity to freeways increases autism risk, study finds,” even though the research was based on a correlation. In fact, the researchers themselves cautioned against drawing that causal conclusion, but that did not stop the headline (Roan, 2010). (We hope this sounds familiar; if not, see the headline effect above.)
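If you want to see what a correlation is (and what it is not), here is a minimal Python sketch with invented GPA numbers. A positive correlation coefficient supports prediction, but nothing in the calculation tells us anything about cause and effect.

```python
# Hypothetical high school and college GPAs (invented for illustration).
from math import sqrt

hs_gpa      = [2.0, 2.5, 3.0, 3.4, 3.8, 4.0]
college_gpa = [1.8, 2.6, 2.7, 3.1, 3.5, 3.9]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(hs_gpa, college_gpa)
print(f"r = {r:.2f}")  # strongly positive: useful for prediction...
# ...but the correlation alone cannot show that high school grades *cause*
# college grades; a third variable (study habits, motivation) could drive both.
```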

Actually, it turns out that this tip really includes most of Module 2, too. Seriously. If you want to be able to evaluate claims about research, you have to know quite a bit about different research strategies, when they are appropriate, their strengths and limitations, etc. You also need to know a bit about the use and misuse of statistics. Module 2 will give you a good solid foundation to be able to recognize some of the common distortions that can occur, so the continuation of this tip really is “most of Module 2.”

Tip #7. Beware of persuasion tricks

Do you believe that advertising does not affect you? If you said yes, well, that is exactly what they want you to believe. You might think we are joking, but we are not. In 2019, Facebook earned $70 billion (with a b) from advertising (this was even after their well-publicized mishaps handling our private information became public). This is an extraordinary amount of money. How extraordinary? Stack $1 bills on top of each other. One inch of dollar bills would be about $230. Seventy billion one-dollar bills would form a stack that reaches over 4,700 MILES into the air. And Facebook is only one company. There is a simple reason why companies earn thousands of miles of money from advertising. It works. And the less aware you are of these persuasion effects, the more likely you are to fall for them (Wegener & Petty, 1997). So, sorry to have to do this to you, but you really should read the section on Persuasion to get a more complete understanding of how these techniques are used on (or against) us (Module 21). To get you interested, let us consider one of those techniques here: testimonials.
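If you want to check the dollar-stack arithmetic yourself, here is a quick Python sketch using the figures given above.

```python
# Verifying the dollar-stack arithmetic from the paragraph above.
ad_revenue_dollars = 70_000_000_000   # Facebook's 2019 advertising revenue
bills_per_inch = 230                  # roughly $230 in a one-inch stack of $1 bills

stack_inches = ad_revenue_dollars / bills_per_inch
stack_miles = stack_inches / 12 / 5280
print(round(stack_miles))             # roughly 4,800 miles
```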

A testimonial is a report on the quality or effectiveness of some treatment, book, or product by an actual user. For example, diet ads often use testimonials to demonstrate the effectiveness of their plan. It is a persuasive technique because presumably the person giving the testimonial is someone just like you or me, an unpaid consumer who happened to be so impressed by the performance of the product that they just could not help but thank the company.

Some testimonials are clearly contrived. But even honest testimonials present a problem: Each testimonial is useful for describing the experiences of one single person only. The individual in question might not be all that typical or all that much like you. Buried in the small print, diet ads may tell you that the “results are not typical.” Just because one person lost 75 pounds by eating only the crusts of white bread does not mean that everyone, or even most people, will see similar benefits. In terms of the principles of science, a testimonial is not based on rigorous observations, and one person’s experience might not be repeatable (Schick and Vaughn, 1999). (See Section 1.1 above.)

By the way, if you are paying close attention, you might realize that this sounds awfully similar to overgeneralization. They are assuming that just because something is true of one person, it is true of everyone else. If you did notice that, congratulations. You should probably get extra credit.

testimonial: A user’s report on the effectiveness of some treatment or product.

Debrief

Have the tips in this section led you to reconsider whether some psychological claim that you believed is true or not? If so, what was the claim, and which tips led you to doubt it?
License


Introduction to Psychology, 2nd Edition Copyright © 2021 by Ken Gray; Elizabeth Arnott-Hill; and Or'Shaundra Benson is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
