Duke Alumni Magazine

Scientists in Duke's new Center for Cognitive Neuroscience are uncovering surprising clues about how the human brain enables us to understand language, pay attention, grasp numbers, and store emotion-laden memories.

Mindreading: Meredith, left, is “capped” by postdoctoral fellow Kaan to measure brain-wave responses.

Tamara Swaab and Edith Kaan are about to read my “mind.” Kaan gently fits an elastic, electrode-studded cap down over my scalp and fastens it with a chinstrap, also taping electrodes to my face.

Through a hole in each electrode, Swaab, an assistant professor of psychology, carefully squirts a bit of conductive gel onto my skin to ensure a good electrical contact. Such contact between electrode and scalp is essential, they tell me, if the infinitesimal signals from my brain are to be detected accurately by the bank of amplifiers and the computer that crowd the small room. They also caution me not to move my eyes, blink, yawn, or talk during the testing, since even the interference from fidgeting facial muscles would swamp the faint neural signals.
  Kaan, a postdoctoral fellow, installs me in a comfortable chair in the small testing room that is padded with waffled, sound-absorbent foam, and which I am told is shielded from radio-frequency interference. There, I am instructed to sit motionless during the test, only my brain sparking away on the sentence recognition that is my task. Those sentences will appear word-by-word on the computer screen before me, which is covered with cardboard except for a small word-sized rectangle.

    My job is easy. Just press the button Kaan places in my hand when I detect something wrong with a sentence. But Swaab and Kaan’s job is incredibly hard—analyzing immense masses of data from dozens of subjects for clues about how the brain processes language. I tense, my thumb poised to jab at the button the instant I see a wrong sentence. My writer’s pride is at stake, since I’m supposed to have an eye, or rather a brain, for sentence errors. The sentences begin to flash onto the screen.
  “Kevin put the pill in his mouth and Mario the money in his wallet.” (No press.)
  “Alice looked at the raincoat beside the umbrella that were rather old.” (Press!)
  “Most rich people have a very nice villa with a swimming pool.” (No press.)
  “The dentist checked the address beside the tooth that he was going to extract.” (Press!)
  “The woman behind the counter recommends the map that are very detailed.” (Press!)

Next, the two scientists ask me to listen to a series of tones, most of them identical, pressing the button to flag the occasional ones that are higher than the rest. It’s a control task, they explain, to distinguish my language perception from my brain’s general ability to perceive sound.
  Once I’m finished, Swaab and Kaan show me the jagged traces of my brain waves during the language tests, pointing to telltale peaks that marked the very instant I recognized a sentence’s syntactic or grammatical error.
  They had eavesdropped on my brain by recording my brain waves, and then computer-averaging the signals to extract “event-related potentials” (ERPs) to the words. This analysis allows them to pinpoint in time with amazing accuracy when an event happens in the brain. If the scientists had wanted to know where brain activity was occurring, they could have had me carry out my sentence-recognizing while inserted into one of Duke’s powerful “functional Magnetic Resonance Imaging” (fMRI) machines in Duke Medical Center’s new Brain Imaging and Analysis Center. These machines use powerful-but-harmless magnetic pulses to map the brain, pinpointing active brain regions.
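The “computer-averaging” behind ERPs is simple in principle: a response that is time-locked to each word survives averaging, while random background noise cancels out. Here is a minimal illustrative sketch in Python with invented numbers (the signal shape, trial counts, and noise levels are assumptions, not the lab’s actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: 200 trials of a 600-sample EEG epoch.
# Each trial contains the same small evoked response, buried in
# random noise several times larger than the response itself.
n_trials, n_samples = 200, 600
t = np.arange(n_samples)
evoked = 5.0 * np.exp(-((t - 400) ** 2) / (2 * 30.0 ** 2))  # peak near sample 400

trials = evoked + rng.normal(0.0, 10.0, size=(n_trials, n_samples))

# Averaging the time-locked trials cancels the random noise
# (roughly by 1/sqrt(n_trials)) but preserves the response that
# occurs at the same latency on every trial: the ERP.
erp = trials.mean(axis=0)

print(int(np.argmax(erp)))  # the peak latency re-emerges near sample 400
```

This is why ERP studies need “immense masses of data from dozens of subjects”: the single-trial signal is far too faint to see, and only averaging many time-locked trials makes the telltale peaks visible.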
  These two techniques constitute the high-tech foundation for research by the cadre of young scientists recruited to the new Center for Cognitive Neuroscience (CCN) to tackle what until recently has been considered an impossible ambition—understanding how the hundred billion or so neurons in the human brain somehow produce the mental abilities that constitute our mind. Until now, says the center’s director, Ron Mangun, these abilities—language, memory, attention, consciousness, and emotion—were mysterious components of the “black box” that is the brain. That black box had been probed from two different directions, says Mangun. Using behavioral experiments, cognitive psychologists developed overall theories about the mechanisms in the black-box brain; and neurobiologists had disassembled the black box to tease apart the finest details of the brain’s wiring. Now, he insists, it’s possible to bridge the intellectual gulf between the two approaches.
  “We’ve come through thirty years of growth in knowledge about the brain, but our knowledge is still in its infancy,” says Mangun. “Just as the cognitive psychologist’s theories stop at the black box, it’s not enough for the neurobiologists to stop at saying, ‘Well, we know that neurons are connected and they squirt out chemicals and they communicate electrically and they form circuits and they form systems, and somehow that produces behavior.’ There’s no reason to wait another thirty years before we start asking interesting questions about the mental life of the human mind and how it’s really organized. So, cognitive neuroscience meetings are places where the psychologists and the neurobiologists can come together to convey their part of the story and to join in developing new cognitive-neurobiological models of the mind.”
  The new center represents just such an arena, where faculty from both campus and medical center—neurobiologists, neurologists, psychologists, philosophers, engineers, and computer scientists—can find common intellectual ground on which they can cultivate a new understanding of the mind. And mapping this common ground—as translators, educators, collaborators, experimentalists, and theoreticians—are the young scientists whom Mangun has recruited, and whom he has dubbed Duke’s “Mind Trust.”
  Mangun says Duke’s initiative in cognitive neuroscience and cognitive neuro-imaging is the largest ongoing program of development in cognitive neuroscience in the country. “And it’s putting Duke on the map in that area, along with Harvard, Princeton, M.I.T., and Caltech.” Mangun himself exemplifies this handpicked cadre, having been lured in 1999 from the University of California at Davis, where he headed the psychology department’s Perception and Cognition Area.
  The scientific mysteries that the faculty are tackling illustrate the potential for cognitive neuroscientists to explain the mind, as well as the daunting research challenges they face. They seek to understand how the human brain enables us to understand language, pay attention, grasp numbers, and store emotion-laden memories.
  Swaab, in whose laboratory I suffered the assaults of ungrammatical sentences, explores how the brain understands language. Besides testing normal subjects, she uses ERP to eavesdrop on the brains of aphasic patients, whose brain damage may be so severe that it prevents them from understanding or using language normally. Those studies are suggesting a very different underlying handicap. “Traditionally, most researchers in aphasia have thought of aphasic people as having lost the linguistic information, or the representations responsible for understanding meaning or structure of sentences,” she says. Cognitive neuroscientists have uncovered evidence that such patients may retain some linguistic ability, but that this understanding is somehow “imprisoned” in a malfunctioning brain. The challenge, says Swaab, is somehow penetrating the walls of that prison. “A major problem in testing aphasic patients is asking them to perform a task, because one of their problems is that they have difficulty understanding language. That made me think of another way of testing them: If you can’t ask the patient, you can ask their brain what they still understand of normal language.”
  So Swaab and Kaan were using ERP to “ask my brain” what I was understanding when I sat in that small room watching those sentences flash past. Such studies are leading Swaab to believe that subtle problems in processing language information may be the root of language comprehension problems in aphasic patients.
  “We as normal language-users usually don’t think about it, but language is actually a very complex but also a very rapid process,” she says. Her studies aim to distinguish the meaning-related elements of understanding language from those that deal with processing. “We’d like to see whether these aphasic patients have lost the relevant information, or maybe there is a problem in the processes that access this information or use it in real time.”
Just as Swaab had set me the task of recognizing sentences, Elizabeth Brannon has set children, monkeys, and even pigeons the task of perceiving numbers, with startling results. Much to the surprise of anybody who struggled through math in school, she has found that numerical thinking appears to be built into our brains through the pressure of evolution.


    In 1998, while at Columbia University, Brannon and her colleagues reported that two rhesus monkeys named Rosencrantz and MacDuff showed that they could compare groups of objects—up to nine—and figure out which group had fewer. The study, which involved teaching the monkeys to use a computer touch screen to select images of groups of objects in numerical order, convinced Brannon that numerical thinking was built into the brain.
  “You can imagine that a monkey chased up a tree and surrounded by a group of wild dogs would need to keep track of where they were and how many were there,” says Brannon. “So, if some of the dogs left, they would need to know if one were still lurking behind a bush.”
  What’s more, when Brannon tested humans using the same system, she found that their reaction times in judging the numerical pictures were very similar to the monkeys’, suggesting that both species use a common and ancient “math mechanism.”
  At Duke, Brannon has launched the Cognitive Development Laboratory to study numerical thinking in children. With her “fun-and-games” approach to studying two-year-olds, she and her colleagues have come up with some seriously fascinating findings. In her experiments, she shows a child two trays holding various-sized boxes and asks the child to pick the tray with the greater number of boxes. A correct choice wins the child brightly colored stickers.
  “Previous studies of two-year-olds have shown that they don’t understand the meaning of the number words or how to count,” says Brannon. “Their performance was not as impressive as the monkeys’. But over a large number of trials we found that, when first shown that the larger number always contained the stickers, they reliably chose the tray with the larger number.” Now, Brannon is tracing numerical thinking farther back in development by presenting infants with a given quantity of objects and, after accustoming them to that number of objects, changing the quantity. By measuring how long the infants stare at the new quantity, she can determine whether they are recognizing a difference in number. “We’re generally finding they are looking longer at the new number,” she says. “So, that does suggest an innate understanding of quantity.”
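The logic of this looking-time measure can be sketched in a few lines. The numbers and the dishabituation criterion below are invented purely for illustration; labs set their own thresholds:

```python
# Hypothetical looking times (seconds) across habituation trials,
# in which the same quantity of objects is shown repeatedly.
habituation = [12.0, 9.5, 7.0, 5.5, 4.0, 3.8]
test_novel = 9.0  # looking time when a new quantity is shown

# One illustrative criterion (an assumption, not a published standard):
# take the average of the last few habituation trials as a baseline,
# and treat markedly renewed looking at the new quantity as evidence
# the infant noticed the change in number.
baseline = sum(habituation[-3:]) / 3
dishabituated = test_novel > 1.5 * baseline

print(round(baseline, 2), dishabituated)  # → 4.43 True
```

Looking declines as the familiar quantity grows boring; the rebound at the novel quantity is what suggests the infant registered that the number changed.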
  In some of her latest work, Brannon has traced numerical ability farther back in evolution, discovering that pigeons can apparently subtract. In her experiments, she presented pigeons with keys to peck to get a food reward after a number of light flashes. One key always yielded food after fewer flashes than the other; and of course, the hungry pigeons preferred the key that required them to wait for fewer flashes. Next, Brannon changed the experiment’s rules so that the pigeons had to do arithmetic to decide which key would yield a reward after fewer flashes. The pigeons quickly adjusted. “We don’t know exactly how the pigeons are doing it, but they’re solving the task in a way that’s hard to explain other than arguing that they’re subtracting.”

Ultimately, Brannon and her colleagues hope to compare animal and human numerical thinking in such detail that they can understand whether the same process occurs throughout the animal kingdom and where in the brain it is taking place.
    Assistant Professor of Psychology Kevin LaBar is exploring the brain not just as an information processor but as an “emotion processor”—one that stores the indelible memory of that first kiss or that terrifying car accident very differently from emotionally neutral memories. LaBar’s research seeks to understand how the brain’s machinery stores such emotion-laden memories, which certainly “feel” different from other memories.
  “Memories for emotional kinds of events go through a slightly different neural system than memories for neutral events,” explains LaBar. “And the details of that system have not been worked out. Most people who study memory typically use more mundane kinds of stimuli in their experiments, such as line drawings of objects. I’m more interested in understanding how we remember things that are personally relevant, emotionally arousing.”
  In his efforts to bring emotion into the laboratory, LaBar conducts experiments in which he presents subjects with emotionally arousing stimuli and studies how their brains process the resulting memories. Of course, he can’t terrify humans in his experiments, so instead he mildly annoys them. In one kind of experiment, he conditions subjects to associate a light flash with an unpleasant noise. Then, he scans a subject’s brain using fMRI to study how memory-related structures in the brain activate in response to the light. LaBar is trying to understand how the almond-shaped brain region called the amygdala seems to process emotional memories. Nestled next to the hippocampus, where memories are processed, the amygdala seems to filter memories, somehow rendering emotional ones more visceral.
  LaBar’s studies of people with brain lesions in their amygdala reveal that they are incapable of establishing such emotional memory responses. Those studies suggest that the damaged amygdala is incapable of triggering the memory circuitry to establish an emotional memory. LaBar plans to combine the timing information of ERP and the spatial information of fMRI to study normal and amygdala-damaged subjects. He aims to zero in on the parts of the amygdala involved in establishing emotional memory and how they work with each other and with the rest of the brain’s memory-storage machinery.
  While some emotional memory is normal, and even critical to survival—such as a baby’s learning to avoid a hot stove—other emotional memories can ruin lives. LaBar foresees that basic understanding of emotional memories could lead to therapies to control the corrosive effects of memories related to such problems as phobias and post-traumatic stress disorder. “The challenge will be to learn to target a therapy to treat a person’s particular problem,” he says. “Some people might generalize their fear to other stimuli. So, we may need to look at generalization of fears. Other people might be retrieving the fear in an inappropriate context. You’re talking about contextual control of the fear memory.”
  Besides understanding that emotion clearly colors memory, psychologists have long known that memory itself is no monolithic process. Says newly arrived researcher Roberto Cabeza, “It’s clear that memory is not a single function, but rather there are several different memory systems in the same way that there are different anatomical systems for, let’s say, vision and hearing. It’s been known that some types of brain lesions impair or destroy one type of memory without affecting other forms. People with a lesion in their medial temporal lobes will not remember that they had ridden a bicycle before, but still they can ride a bicycle.” 
  Cabeza is focusing his studies on “episodic memory,” which is memory for an experience. In his experiments, he first creates an episodic memory in his subjects by exposing them to lists of words or pictures. Then, as their brains are being scanned in an fMRI machine, he asks them to remember a word or picture, and he maps which brain regions are active. He is especially interested in the little-understood changes in memory with aging.
  “As we all know, as we age we start having trouble remembering things we have to do, where we parked our car, and so forth,” says Cabeza. Researchers have established that the brain changes with aging. The volume of brain tissue declines, neurons die, and neurotransmitter systems are affected. “But surprisingly, we know really little about the relations between brain changes and cognitive changes. In the past, there was no clear way to study these relationships directly. However, now with fMRI, we can actually get a picture of the brains of both young and elderly people as they perform a cognitive task, so we can explore the differences in detail.”
  For Marty Woldorff, the process of paying attention offers a fascinating realm of research. The brain is extraordinarily adept at focusing attention, says Woldorff, an associate research professor of psychology. As a prime example, he cites the “cocktail-party effect,” by which a person can focus on a specific conversation in a noisy room full of equally loud conversations. Woldorff’s experiments seek to understand how the brain manages to pay such adept attention by placing subjects in a controlled laboratory version of a cocktail party. 
  In one such study, he and his colleagues played different tones into each ear of subjects while recording their brains’ electrical signals, along with a related technique called “event-related magnetic fields” that yields a better spatial fix on brain activity. The subjects were instructed to pay attention to one ear or the other, listening for “target tones” of a different frequency. These experiments revealed that the brain controls attention to sounds by “early selection” of a sound that occurs an instant—20 thousandths of a second, to be precise—after the sound enters the ear. This selection can begin even before the brain’s higher processing centers have a chance to begin analyzing the sound.
  Woldorff has also performed similar attentional studies on the visual system, asking subjects to fix their gaze on a center point, but to “pay attention to” spots appearing in the left or right visual fields. These studies, too, have revealed that the brain exercises early control over attention, before any visual processing even takes place. While such studies are nibbling away at the mysteries of attention, they still have far to go, he says. For example, there’s the mystery of how the brain somehow turns down, but not off, its attention to sounds or images outside its focus. It’s an ability the evolutionary pressure of survival would encourage. “If the brain couldn’t variably adjust its attention, there could be a saber-toothed tiger growling nearby and you’d miss it because you were talking to your caveman friend about the Neolithic equivalent of yesterday’s football game.”
  In CCN director Mangun’s research, he and his colleagues have taken a key step toward understanding attention by creating the first maps of the brain regions active in high-level “executive” control of attention. According to Mangun, basic understanding of attentional control could provide insights into the attentional problems of Attention Deficit Hyperactivity Disorder (ADHD), schizophrenia, and stroke. If scientists understood the neural basis of attention, he says, they could develop ways to measure the effectiveness of drugs to improve attentional functioning. “Before we can understand how such patients are different in their attentional control, we must know how the process functions normally,” he says.
  In a study reported last year, Mangun and his colleagues used fMRI to map the brains of subjects asked to watch a video screen, shifting their attention from one spot on the screen to another, while keeping their eyes fixated on a spot in the middle.
  Analyzing fMRI brain maps of a multitude of such trials, the scientists pinpointed areas in the brain’s cortex that invariably showed activity during the attentional tasks. The new maps were superior to those from previous studies, says Mangun, because those studies had not distinguished between the initial act of orienting attention and the subsequent cognitive processing that happens when a person is deciding what to pay attention to or ignore.
  “We wanted to distinguish between the neural networks that activate when you initially tell someone to pay attention to something, from those involved in processing what happens as a result,” he says. “It’s similar to the distinction in the brain’s motor system between what happens when a person decides to reach out for an object and the subsequent neural signals to activate muscle contraction to actually reach out.”
  Mangun emphasizes that the new findings represent only the beginning of efforts to pinpoint the attentional control brain regions. More powerful fMRI techniques will help map the active regions at higher resolution, like distinguishing finer and finer objects in satellite images. The researchers also plan to combine fMRI mapping with ERP recording to map the mechanisms of attention in both space and time in the brain.
  “Now, we can distinguish the brain regions that are active, but we need to understand in detail which ones are active first, second, and third,” says Mangun. “Our objective is to distinguish the different mental operations involved, ultimately to understand the detailed computational process of attention.” 
  While researchers dubbed the 1990s the Decade of the Brain, Mangun says the first decade of the new millennium should be dubbed the Decade of the Mind. With hard work and scientific ingenuity, he believes, the end of this decade will see researchers worldwide arrive at a fundamental understanding of the amazing mental processes that enable you to effortlessly perform such mental feats as reading this story.
