Module 14: Biopsychology: Bringing Human Nature into Focus

Scientific psychology is less than 150 years old. Although scientists had been interested in studying the brain before the discipline of psychology got started, it is safe to say that inquiries into the structure and function of the brain were in their infancy. After all, the human brain is widely considered the most complex biological organ in the universe. We are probably still in the infancy of learning about the brain.

The very early days of biopsychology yielded overly simplistic and sometimes completely wrong-headed ideas. On the other hand, a few ideas were so good that it is astounding that researchers were able to come up with them without the methods of inquiry that we have available today. Only a small portion of that early research, though, is still remembered and admired.

It wasn’t that the early researchers were poor scientists. You should not be surprised to find that early discoveries about the biological bases of psychology were not always correct. Until the advent of advanced brain imaging techniques, such as PET and fMRI (see Module 11), brain researchers had to make a lot of guesses. When you think about it, these early researchers were like the rest of us when we are having trouble seeing. Perhaps it is too dark to see, the objects we are trying to see are too far away, or we have poor eyesight. We may often be able to get along despite these limitations. For example, while driving at night, we may very well be able to figure out what a blurry sign says before we can actually see the words on it. Sometimes, though, we make mistakes. When we tire of making too many mistakes, we may try to augment our natural observation abilities; we can turn on a spotlight, buy a pair of binoculars, or get fitted for eyeglasses. The scientific version of a new pair of glasses is a more advanced technology for doing research. PET and fMRI have helped to bring otherwise blurry images of the brain into focus.

The need for “a new pair of glasses” in science is not always obvious, however, because our exposure to science in everyday life does not reveal the bumps, turns, and missteps that occur along the way. Nearly all research in a scientific field is destined to be forgotten because it is overly simplistic or just plain wrong. Before the science majors among you decide to change to business or art history because of this messy truth, however, you should realize that scientific progress depends on researchers making small improvements over previous research. Although individual studies may turn out to have been too simplistic in the way they explained some phenomenon, those earlier studies were essential. New, improved research would perhaps not be attempted if old research had not already been done. Thus, scientific progress is incremental.

Another important fact about science to keep in mind is that progress is not continuous. Many people think of science as a steady series of groundbreaking discoveries, each of which greatly advances the field. Although the broad trend may look that way, when you look at the day-by-day history of science, you find that only a small minority of discoveries turn out to be the blockbusters we hear about in the news. Quite often, new theories and ideas turn out to be flat-out wrong. When that happens, the best-case outcome is that researchers merely go off on a tangent; at worst, the whole field is set back. As you come to understand the development of the biological perspective in psychology, you will see both the incremental progress and the wrong turns.

Of course, it’s easy for us today to look at the research that turned out to be simplistic or misleading and criticize it as crackpot science. We are, however, falling victim to hindsight bias, which was introduced in Module 1. After the good ideas turn out to be good, and the bad ideas turn out to be bad, it seems obvious in hindsight that they would do so. But in reality, it is not so obvious. Some brain scientists who got things amazingly right ended up on the wrong track about something else. Consider Paul Broca, who discovered that the seat of spoken language production is in the left frontal lobe—a significant early discovery that has stood the test of time. But Broca was also a proponent of craniometry, using skull size and shape to categorize people’s race, intelligence, morality, and other characteristics (Carroll, 2003). For example, Broca believed that women are less intelligent than men because their brains are smaller. Of course, we say, craniometry was a terrible idea that was motivated by people’s personal prejudices. But what do you think about the idea that people’s brain size, adjusted for body size, is related to intelligence? Is this an obviously good or bad idea? In reality, this question has been controversial within the scientific community for 150 years. Some researchers have found a positive correlation between brain size (adjusted for body size) and intelligence (Posthuma et al., 2002; Rushton & Ankney, 1996). Others have found no correlation (Schoenemann et al., 2000). Recent meta-analyses have indicated that there is a small positive correlation between brain size (adjusted for body size) and intelligence, much smaller than some researchers had found, but not quite zero (Pietschnig et al., 2015; Woodley of Menie et al., 2016). Fifteen years from now, assuming these results hold, they will seem to have been obviously right and the earlier, conflicting findings obviously wrong. That is how the hindsight bias works.
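For a sense of scale, the meta-analytic correlations in this literature tend to be reported in the neighborhood of r ≈ .2 (take that figure as illustrative of “small” rather than as a precise value). Squaring a correlation gives the proportion of variation it accounts for:

\[ r \approx 0.2 \quad\Rightarrow\quad r^{2} \approx 0.04 \]

In other words, even a correlation of that size would mean that brain size (adjusted for body size) accounts for only about 4 percent of the differences among people in measured intelligence.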

So, with the benefit of hindsight, what were some of those great ideas that revolutionized our thinking about biopsychology and some of those poor ideas that hijacked the field for a time? We will look in this module at discoveries about the structure of the neuron and about localization of brain functions, thus hitting a couple of the major topics of the modules in this unit. You will see that developments in research technology were sometimes a key to making these discoveries, just as a new pair of glasses might help us improve our game or our grades in a rather dramatic way. Other discoveries were made despite the researchers’ severely limited research techniques. We will also examine a couple of recent detours in brain research that are currently being reexamined. At the end of the module, we will address the extent to which “nature” and “nurture” affect the neural system and the role of evolutionary psychology, a new theoretical tool, in this debate. There is a current controversy in the field regarding whether evolutionary psychology is a new pair of glasses or the wrong prescription altogether.


craniometry: the discredited practice of using a person’s skull size and shape to determine their race, intelligence, morality, and other characteristics


Discovering the Structure of Neurons

As Module 9 related, the early biopsychology researchers had only very crude methods available to them. For example, they could examine individual cases of people who had suffered brain damage, they could open the skulls of dead people, or they could experiment on the nervous systems of non-human animals. Microscopes were nowhere near as powerful as those available today, and methods of examining a functioning brain, such as PET and fMRI, were not even the stuff of science fiction. Researchers relied on their ability to make ingenious inferences from observations using their limited methods.

For example, in 1850, Hermann von Helmholtz reported his discovery of the speed of neural transmission, a problem that had previously seemed unsolvable (R.I. Watson, 1979). Helmholtz made his discovery by applying an electric current to a neuron in a dissected frog’s leg. The electric current generated a neural impulse, which made its way through the neuron. Then, the signal was sent to the leg’s calf muscle, causing it to contract. When the calf muscle moved, it lifted a small weight and broke the contact in the electricity generator, thus stopping the current. The time from the onset of the current to its cessation told Helmholtz how long the neural impulse took to travel. By stimulating the nerve at points at different distances from the muscle and comparing these times, he could calculate the speed of transmission.
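To see how travel times become a speed, here is a back-of-the-envelope version of the logic, with purely illustrative numbers rather than Helmholtz’s actual measurements: if stimulating the nerve at a point 50 millimeters farther from the muscle delays the contraction by about 1.8 milliseconds, then

\[ v = \frac{\Delta d}{\Delta t} \approx \frac{0.050\ \text{m}}{0.0018\ \text{s}} \approx 28\ \text{m/s} \]

which is in the same general range as the value Helmholtz reported for frog nerve, on the order of 30 meters per second.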

Advances in research techniques helped refine researchers’ focus and led to important new discoveries. In the late 1800’s, Camillo Golgi developed a method of staining neurons so that they could be seen under a microscope. Santiago Ramón y Cajal was able to use this staining method to show that neurons remain separate cells rather than forming a continuous network.

In 1906, Charles Sherrington built on Cajal’s and Helmholtz’s findings by describing how neural communication through the synapse differs from the type of signaling that occurs inside the neuron. Sherrington, too, made his discovery by the ingenious inference method. He compared the speed of neural transmission within a single neuron to the speed of transmission over an equal distance when multiple neurons were involved. Because multiple-neuron transmission was slower, Sherrington inferred that a different kind of transmission takes place between neurons; he postulated a space between neurons and called the area a synapse. We now know that the synapse is where the chemical signaling involving neurotransmitters occurs. Note that between Helmholtz’s and Sherrington’s discoveries, 56 years passed. During that period a great deal of research was conducted, some of which built on the previous research, some of which wound up being a dead end, most of which ended up forgotten.
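Sherrington’s reasoning can be summarized in a simple equation (the symbols and numbers here are a sketch of the logic, not his actual data). If a signal crosses a distance d through a chain of n neurons, the total time is the conduction time within the neurons plus an extra delay at each junction:

\[ t_{\text{total}} = \frac{d}{v} + n \cdot t_{\text{synapse}} \quad\Rightarrow\quad t_{\text{synapse}} = \frac{t_{\text{total}} - d/v}{n} \]

Because the observed total time was longer than conduction alone would predict, the leftover time divided by the number of junctions gives an estimate of the delay at each one; that synaptic delay is now known to be on the order of a millisecond.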

Localizing Brain Functions

In a classic Bugs Bunny episode, Bugs dresses up as a “mind reader” and offers to read the bumps on his co-star’s head. When his victim, a gambler in search of a lucky rabbit’s foot, protests that he doesn’t have any bumps, Bugs obliges, giving him some by tapping on his head with a hammer. Many people probably laughed at the joke without realizing that it was a reference to phrenology, the analysis of people’s traits and abilities by examining the bumps on the skull. Literally a subject of ridicule, phrenology actually got something right. Franz Gall, the developer of phrenology in the early 1800’s, guessed correctly that different brain areas were responsible for different functions. Unfortunately for him and his place in history, Gall also guessed, this time incorrectly, that those different functions were reflected in different sizes of brain areas, which then caused the skull to bulge from the pressure of larger sections. Phrenology captured the imagination of many people throughout the 19th century (it also led directly to craniometry); it even has proponents today, despite the complete lack of scientific evidence supporting it (Carroll, 2003). Although phrenology was a major detour from scientific progress, it did prompt a long and continuing line of research to find out which areas of the brain govern which functions. Once again, the evolution of research technology, especially recently, has helped researchers refine their focus.

Broca’s area. Paul Broca, despite his belief in craniometry, is credited with the first solid discovery of the function of a specific brain area. Broca made his discovery, in the mid-1800s, by examining case studies of patients who had lost the ability to speak. (See Broca, 1861, for a description of his most famous patient.) After the patients died, Broca found damage in the middle section of the left frontal lobe. That region is now known as Broca’s area.


phrenology: the discredited belief that people’s traits and abilities could be determined by examining bumps on their skulls.

Broca’s area: an area in the left frontal lobe that plays a very important role in producing speech.


Cortex. In the 1950s, neurosurgeon Wilder Penfield electrically stimulated patients’ brains during surgery for epilepsy. Because he made a serious attempt to discover the functions of brain areas prior to cutting into them, he is responsible for some very important advances in our knowledge of the brain. Specifically, Penfield is still admired today as the neuroscientist who mapped the primary motor cortex and sensory cortex. He showed that different sections of the cortex control different parts of the body.

Penfield also stimulated other parts of his patients’ brains and was able to get the patients to report images, which he interpreted to be memories. Even today, some people believe Penfield’s original conclusion that memories are recorded permanently in specific neurons in the cortex (Penfield, 1955; Penfield & Perot, 1963). Daniel Schacter (1996), on the other hand, has pointed out that Penfield was able to get these “memories” from only a very small number of his patients, and the reports are suspect. For example, some patients reported events that clearly had not happened. Schacter suggests that these reactions to brain stimulation are more reasonably interpreted as hallucinations than memories.

Today, we do believe that specific brain areas are involved in memory, but they are thought to be involved as processing sites, not storage sites. Module 9 explains that a key processing site for working memory is in the prefrontal cortex, and a key processing site for storing memories is in the hippocampus. New tools for studying brain activity have allowed this refinement of Penfield’s ideas. When you think about it, Penfield’s experiments had a remarkable outcome: using a single procedure, he was able to make one of the most important discoveries as well as one of the most famous errors in mapping brain functions.

Hippocampus. The discovery of the hippocampus’s role in memory is a good example of the way scientific progress occurs as we refine our focus and discover complexities about brain areas. Probably the first breakthrough in our knowledge came from the most famous case study of memory research, that of a patient known by the initials H.M., a man whose temporal lobes were damaged by surgery that attempted to cure his epilepsy. Several specific brain parts were removed, including both hippocampi. H.M.’s seizures were reduced (but not eliminated), but he suffered several minor deficits as a result of the surgery—and one major one. He lost his memory. Not his total memory, however. He was able to remember events from long before the surgery but lost his memory of most of the 11 years immediately preceding it. In addition, he lost the ability to transfer new information into long-term memory.

On the basis of H.M.’s case, researchers began to believe that the hippocampus helps us to store new memories into long-term memory and to make those memories permanent. Other research (some on H.M.) helped to sort out what kinds of memories are involved. For example, many case studies of brain-damaged patients and research with normal people and non-human animals have suggested that the hippocampus supports the storage of explicit memory (for facts and episodes) but not implicit memory (for skills) (Schacter & Tulving, 1994; Squire & Knowlton, 2000); see the Unit 2 Window for more on this research. Recent research has even recorded changes in individual neurons of the hippocampus of monkeys as they learn new explicit memory associations (Wirth et al., 2003).

Other researchers have discovered that the hippocampus appears especially important for spatial memories. For example, the taxi driver study mentioned in Module 9 (Maguire et al., 2000) used MRI brain scans to show that the taxi drivers had especially large hippocampi. The more we discover about the hippocampus, the more we realize that it is an extremely complex brain area, involved in many different, but certainly not all, kinds of memories.

Corpus callosum. Our left and right hemispheres are not mirror images of each other, as section 9.2 explains. Each is somewhat specialized, better equipped to handle certain functions. For example, in most people, the left hemisphere is more adept at speech production and word comprehension. The left hemisphere also does a better job of seeing details in visual scenes, and it is better at arithmetic. The right seems to beat the left in understanding the emotional content of language, seeing overall patterns in visual scenes, and processing spatial information, as in geometry. The two brain hemispheres are ordinarily joined by the massive corpus callosum. Some of our important discoveries about the differences between the left and right hemispheres come from case studies involving people whose corpus callosum has been severed. In some cases of severe epilepsy, in which seizures travel from one side of the brain to the other, the only successful treatment has been this dramatic surgery, which leaves the patients with a “split-brain.” These patients appear completely normal, but their two half-brains function independently.

Through research with these split-brain patients, Roger Sperry and his colleagues were able to demonstrate that the left hemisphere has much better ability to handle language than the right (Gazzaniga, 1967). They made this discovery by flashing words or pictures to the left visual field or the right visual field. Input to the left visual field goes to the brain’s right hemisphere, and vice versa. Split-brain patients could say a word that was flashed to the right visual field but could not say a word flashed to the left visual field (because the left hemisphere could “talk” while the right could not). They could, however, indicate with their left hand—which is controlled by the right hemisphere—that they recognized the word, perhaps by picking up an object that the word named (Nebes, 1974). In people with an intact corpus callosum, information that reaches one side of the brain is almost instantly transmitted to the other side, so you would certainly not be able to observe these different functions of the left and right hemispheres in casual observation.

Frontal lobe. As indicated by the case of H.M., mistakes about the functions of brain areas have sometimes had disastrous consequences. Sometimes, these mistakes resulted from researchers’ failure to make serious efforts to determine the effects of a surgery or other treatment before understanding the functions of the brain areas involved. For example, throughout the 1940’s and 1950’s, some 40,000 patients in the United States suffering from psychological disorders were given prefrontal lobotomies, a surgery in which the frontal lobes are separated from the rest of the brain. As amazing as it sounds, the lobotomy was tried on humans because it had been successful at calming a single chimpanzee on which the procedure was performed (Pinel, 2003). Supporters of lobotomies believed that the procedure calmed patients without serious side effects. Although lobotomies did tend to calm the patients, they also left them with very serious side effects, including loss of morality, emotional unresponsiveness, and an inability to plan. Today, we think of the prefrontal cortex as the major brain area for integrating input from many other parts of the brain so that we can perform our most complex mental activities, such as planning and reasoning.

The case of H.M. and the large-scale tragedy of prefrontal lobotomies remind us that discoveries about the localization of brain functions have not been academic exercises. Some real people who suffered very serious consequences have contributed to what we know today.


prefrontal lobotomy: a surgery in which the frontal lobes are separated from the rest of the brain; the surgery was performed during the 1940’s and 1950’s in the US to try to calm psychiatric patients.


Getting Back on Track with a New Focus

Throughout this Module we have highlighted some important discoveries and some bad mistakes along the path to learning about biopsychology. It is important that you realize that missing a turn and going down a wrong track is not simply something of historical interest. Our knowledge of the brain is currently undergoing a radical change because researchers now realize that they have gotten some facts completely wrong for many years. They have been able to see those mistakes mainly because advanced techniques, such as PET and fMRI technology, give them unprecedented means of examining the living brain while it is working. Thus, we are in the process of getting back on track from a number of detours and setbacks.

Two important recent discoveries of wrong turns are described here. But how do we know whether these current hot topics in brain research represent true progress or just new detours? The answer is, we don’t. Only in hindsight can we judge with confidence whether a development was a progression, digression, or regression. In the meantime, we must critically evaluate both sides of every scientific debate.

Mistake #1: The brain makes no new neurons after early childhood. Most of you have undoubtedly been told that neurons, once killed, can never come back. Perhaps you first heard this “fact” as a teenager, in the assertion that drinking alcohol kills brain cells. It is true that dead brain cells do not come back to life. Researchers also believed, however, that the brain does not generate any new brain cells after early childhood, so the dead brain cells could not ever be replaced. Brain researchers’ disbelief in the possibility of neurogenesis, as it is called, has severely hampered scientific progress over the past 40 years (Gage & Van Praag, 2002).

Isolated researchers through the years did find evidence of new neuron formation in birds and in mice and rats, but it was not until 1998 that a persuasive demonstration of neurogenesis in humans was provided. Eriksson and colleagues (1998) found that some cancer patients generated new neurons in the hippocampus. The fact that the process occurs in the hippocampus suggests that neurogenesis is important for memory (Alam et al., 2008; Gage and Van Praag, 2002).

Our developing knowledge about neurogenesis has been spurred by cutting-edge research technologies. Early on, researchers relied on the electron microscope; more recently, they have used techniques such as growing neurons in culture and tracing specific genetic markers associated with new neuron formation. Brain-imaging techniques cannot observe neurogenesis directly, but they can reveal areas with more or fewer neurons than expected, which is often assumed to be a result of the rate of neurogenesis (Shelene, 2003).

Currently, brain researchers believe that neurogenesis in the adult human brain is a daily phenomenon. The basic process is that the brain produces immature cells called stem cells. These are “general purpose” cells that can develop into any specific type of neuron. The stem cells can move to different parts of the brain as they become specialized into particular types of neurons. Researchers are currently trying to figure out just what neurogenesis accomplishes for our brains.


neurogenesis: the creation of new neurons in the nervous system

stem cells: general purpose, immature cells that have the capacity to develop into any specific type of neuron


Mistake #2: Glia are really only glue. For many years, researchers believed that glia play a relatively minor supporting role in the brain, despite their outnumbering neurons approximately ten to one. They likened glia to Elmer’s Glue, believing that they did little more than hold the brain together (the word glia means glue). Researchers have long known, however, that glia contain glycogen, which is how sugar is stored in the body for energy release. Thus, glia are the storage houses for the brain’s fuel. Also, as Module 11 relates, the substance myelin, which surrounds many axons, comes from glia.

The traditional belief was that these support functions were the only functions of glia. Researchers have discovered, however, that glia also participate in the neural transmission process (Magistretti & Ransom, 2002; Volterra, Magistretti, & Haydon, 2002). Again, advanced techniques gave researchers the tools to change their focus and make these new discoveries. For example, Yuan and Ganetzky (1999) demonstrated that glia communicate with neurons by tracing a specific kind of protein produced by glia that found its way to axons.

One important research methodology is ablation, in which researchers use a variety of techniques to remove the cells of interest from the adult brain of an animal and then observe the consequent changes in behavior. For example, using this method, researchers have discovered that some glial cells function as an extension of the immune system in the brain (Jakel and Dimou, 2016).

Other researchers have demonstrated that glia form synapses with neurons in the brain. These synapses were first discovered in the hippocampus but have since been found in many other areas as well (Sun and Dietrich, 2013). Neuroscientists still do not know what the purposes of these synapses are, perhaps because they have an odd property: neurons form synapses that send their signal to the glia, but the glia do not have synapses of their own for sending the signal on beyond that.

Glia have many additional functions, some quite well understood, others still a mystery (Jakel and Dimou, 2016). That is quite an interesting story for cells that we used to think of as the brain’s Elmer’s Glue.


glia: cells located throughout the brain; they store glycogen, the fuel that the brain uses, participate in neural communication, form myelin, and function as part of the brain’s immune system


Examining the Nature-Nurture Debate Through Evolutionary Psychology: Is It a Leap Forward or a Wrong Turn?

Aggressiveness, anxiety, intelligence, happiness, depression, shyness, loneliness, obesity, and many other traits tend to run in families. Many people observe these correspondences and assert that they prove that the traits or behaviors in question are a consequence of heredity, or nature. Others are equally convinced that these observations validate their belief that the traits or behaviors are a consequence of environment, or nurture. They are both wrong. Or rather they are both right. When you observe the similarities among members of the same family, you are very likely witnessing the influences of both nature and nurture.

The nature-nurture controversy has a long history. As Module 4 describes, the debate about the influence of nature versus nurture began in philosophy, as typified by the writings of René Descartes on the side of nature and John Locke on the side of nurture. As psychology became scientific, questions about the roles of nature and nurture began to be examined empirically. As is often the case when there is a controversy between two somewhat extreme positions, the truth is somewhere in the middle. As mentioned in Module 10, behavior geneticists have discovered heritabilities for many psychological characteristics to be around 0.5; that is, about half of the variation across a group is explained by genetic differences. In essence, all human behavior and mental processes are likely a complex interaction between heredity and the environment, nature and nurture. Hence, it no longer makes sense to talk of nature versus nurture. The question is how nature and nurture interact.
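In behavior-genetic terms, heritability is simply the proportion of the total variation in a trait, within a particular group, that is attributable to genetic differences:

\[ h^{2} = \frac{V_{\text{genetic}}}{V_{\text{total}}} \]

A heritability of about 0.5 therefore means that roughly half of the person-to-person variation is associated with genetic differences, with the rest reflecting environmental and other influences. Note that this describes variation across a group; it says nothing about how much of any single person’s trait is “genetic.”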

Evolutionary psychology is one of the newer influences on the nature and nurture debate. Its rise has given some biopsychologists a new theoretical tool for examining why human behavior and mental processes are what they are. To use the “focusing” analogy, they believe that evolutionary psychology is like the Hubble Space Telescope, a remarkably powerful tool that can provide us with both a grand picture of the universe and very fine details.

Evolutionary psychologists essentially offer two answers to the question of how nature and nurture interact. First, they interact through the process of natural selection. The environment provided the adaptive problems that shaped our human ancestors. Those who survived the challenges of the environment were able to pass their genes on to their offspring; in a sense, nurture shaped nature. This is the grand picture. Second, nature and nurture interact on a smaller scale in all of us, providing us with the fine details. Our human nature, brought to us through many generations of natural selection, leaves us with many predispositions but no guarantees. For example, as Steven Pinker (1994) points out, human language ability is an instinct; it is part of our nature. Every human is born with a predisposition to learn language. The specific language you learn, however, is the one to which you are exposed, and this is the influence of nurture.

As we mentioned in Module 10, the evolutionary view of psychology is still controversial. Some psychologists and biologists believe that evolutionary psychology is obscuring the facts, not helping us focus. Supporters of evolutionary psychology contend that we are on the verge of a great unification of knowledge, what Edward O. Wilson (1999) has called consilience; they believe that evolutionary psychology offers an overarching explanation for human behavior and mental processes and a bridge between biology and psychology. Critics have two main arguments against evolutionary psychology. Some claim that evolutionary psychology distorts our understanding of human behavior and mental processes by trying to explain them all as evolutionary adaptations, and thus seeks to make psychology unnecessary. Others claim that evolutionary psychology is unscientific and thus built on a shaky foundation.

Steven Pinker (2002), a well-known proponent of evolutionary psychology, has tried to address the first criticism. He contends that critics have incorrectly labeled evolutionary psychology “deterministic” and “reductionistic.” As we have already seen, genes do not determine behavior; they predispose it, and applying evolution to psychology does not change that point. No one seriously denies that we can deviate from at least some of our genetic blueprints in response to our environment. Pinker also notes that evolutionary psychology does not seek to explain psychology out of existence (the “reductionism” charge). Rather, it seeks to fit psychology firmly within the hierarchy of sciences of living organisms. In other words, evolutionary psychology seeks to provide explanations for psychological phenomena that are consistent with everything we know from evolutionary biology. In this way, evolutionary psychology seeks only to become a new perspective, a new way of looking at human psychology (see Module 3).

David Buss (2007) has defended evolutionary psychology from the charge of being unscientific by pointing out that it is unlikely that anyone will overturn Darwin’s theory of evolution by natural and sexual selection. Further, one cannot plausibly deny that human beings are biological entities like any other and thus subject to Darwin’s theory. There is no reason to expect that behavioral tendencies would be exempt from selection. Thus, it seems very likely that some kind of evolutionary psychology should apply to humans. Buss argues that we should be debating specific evolutionary hypotheses. But we are, in fact, still debating whether evolutionary psychology should even exist (Smith, 2019; see Module 10 for details).

As you may have guessed, the jury is still out on the question of whether evolutionary psychology offers the potential for a valid new set of explanations. We will, at times, do as David Buss recommends and evaluate its specific claims rather than debate its basic existence. Evolutionary psychology may simply be a new tool, a new pair of glasses with which we might be able to bring human nature into better focus. Like advanced brain-imaging techniques, evolutionary psychology provides us with a different view. The brain-imaging techniques have done wonders for helping us see how the brain is organized; evolutionary psychology may offer us the opportunity to discover why it is organized that way. Seen through the lens of evolutionary psychology, our behavior and mental processes become consistent with the most important idea in all of biology, namely natural selection. We may someday decide it is the wrong lens, but for now, at least, it is offering us a fascinating and useful view of the nature and nurture of psychology.

 

Sometimes You Can Keep Your Old Glasses Too

Note what we said above about the emergence of interdisciplinary approaches. Just because a new perspective or a new tool has been introduced, it does not necessarily mean that the old methods of discovery get abandoned. First, new approaches do not typically take over a field immediately. There are a great many researchers in the field at any given time who are attached to the techniques and perspectives with which they are comfortable. Many are senior researchers in psychology who feel too invested in their approach to learn a new way of doing things. So the new and the old approaches coexist for a while, as researchers slowly migrate over, or as researchers who use the old approaches retire and are replaced by users of the new tools.

Something very different can happen, too. Many people in their 50’s need reading glasses because their close-up vision starts to deteriorate. But what if they already had glasses for far-away vision? They certainly do not throw those away when they start using the reading glasses. Both the old and the new glasses stick around because each offers its own unique and useful contribution to the user’s vision.

EEG and fMRI offer a great example of this “getting new glasses, but keeping the old ones, too” scenario. As you have seen a couple of times already, EEG was developed around 1930. For several decades, it was the only way to “see” regular brain activity. The view was in many ways blurry, though. Measuring electrical activity inside the brain through a small number of sensors (sometimes 19 or even fewer) placed on the scalp is not a great way to see where brain activity is taking place. This is where fMRI is quite good, however. fMRI has a spatial resolution of about 1 millimeter; in other words, an fMRI image can show the location of brain activity to within about 1 millimeter. The temporal resolution of fMRI is not particularly good, though. So fMRI is very good at showing where brain activity is occurring, but not very good at showing when it is occurring. And this is where EEG shines. The technique that EEG researchers use is called event-related potential (ERP). In ERP research, a participant is presented with some stimulus (the event), and a positive or negative electrical charge (the potential) is observed a short time later. The temporal resolution is excellent, as little as 1 millisecond under optimal conditions (Luck, 2014). So if we want to know both where and when brain activity occurs, we will need the results of both fMRI (the new glasses) and EEG/ERP (the old glasses).
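One practical detail of ERP work is that the response to a single stimulus is tiny compared with the ongoing EEG, so researchers average many stimulus-locked trials to make it visible. The short simulation below is only a sketch of that averaging idea; the sampling rate, trial count, and waveform are made-up illustrative values, not a real EEG analysis pipeline.

```python
# Minimal illustration of ERP-style averaging: a consistently timed response
# hidden in noisy single trials becomes visible once many stimulus-locked
# trials are averaged. All numbers are illustrative, not real EEG values.
import numpy as np

rng = np.random.default_rng(seed=0)
n_trials, n_samples = 200, 600           # 600 samples at 1000 Hz = a 600 ms epoch
t = np.arange(n_samples)                 # time in milliseconds (1 sample = 1 ms)

# The "true" event-related potential: a deflection peaking near 300 ms
true_erp = 5.0 * np.exp(-((t - 300) ** 2) / (2 * 40.0 ** 2))   # microvolts

# Each single trial is the true ERP buried in much larger background activity
trials = true_erp + rng.normal(scale=10.0, size=(n_trials, n_samples))

average = trials.mean(axis=0)            # average across stimulus-locked trials
peak_latency_ms = int(t[np.argmax(average)])

print(f"Recovered peak latency: about {peak_latency_ms} ms (true peak at 300 ms)")
```

The individual trials look like noise, but the average recovers the deflection and its timing to within a few milliseconds, which is exactly the millisecond-scale temporal resolution described above.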


event-related potential (ERP): the brain scanning technique used with EEG, in which a stimulus is presented and a corresponding electrical charge is detected a short time later.

spatial resolution: the accuracy level of location information from a brain scanning technique.

temporal resolution: the accuracy level of timing information from a brain scanning technique.

