Jonathan Haidt begins his book The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom by discussing the ways that the human mind conflicts with itself—that our conscious thoughts don’t always align with what we feel. He uses the metaphor of a man riding an elephant to illustrate this point.
The fourth division concerns the ways our thoughts can be deliberate and controlled but also instinctive and automatic:
In the 1990s, while I was developing the elephant/rider metaphor for myself, the field of social psychology was coming to a similar view of the mind. After its long infatuation with information processing models and computer metaphors, psychologists began to realize that there are really two processing systems at work in the mind at all times: controlled processes and automatic processes.
Suppose you volunteered to be a subject in the following experiment. First, the experimenter hands you some word problems and tells you to come and get her when you are finished. The word problems are easy: Just unscramble sets of five words and make sentences using four of them. For example, “they her bother see usually” becomes either “they usually see her” or “they usually bother her.” A few minutes later, when you have finished the test, you go out to the hallway as instructed. The experimenter is there, but she’s engaged in conversation with someone and isn’t making eye contact with you. What do you suppose you’ll do? Well, if half the sentences you unscrambled contained words related to rudeness (such as bother, brazen, aggressively), you will probably interrupt the experimenter within a minute or two to say, “Hey, I’m finished. What should I do now?” But if you unscrambled sentences in which the rude words were swapped with words related to politeness (“they her respect see usually”), the odds are you’ll just sit there meekly and wait until the experimenter acknowledges you—ten minutes from now.
Likewise, exposure to words related to the elderly makes people walk more slowly; words related to professors make people smarter at the game of Trivial Pursuit; and words related to soccer hooligans make people dumber. And these effects don’t even depend on your consciously reading the words; the same effects can occur when the words are presented subliminally, that is, flashed on a screen for just a few hundredths of a second, too fast for your conscious mind to register them. But some part of the mind does see the words, and it sets in motion behaviors that psychologists can measure.
According to John Bargh, the pioneer in this research, these experiments show that most mental processes happen automatically, without the need for conscious attention or control. Most automatic processes are completely unconscious, although some of them show a part of themselves to consciousness; for example, we are aware of the “stream of consciousness” that seems to flow on by, following its own rules of association, without any feeling of effort or direction from the self. Bargh contrasts automatic processes with controlled processes, the kind of thinking that takes some effort, that proceeds in steps and that always plays out on the center stage of consciousness. For example, at what time would you need to leave your house to catch a 6:26 flight to London? That’s something you have to think about consciously, first choosing a means of transport to the airport and then considering rush-hour traffic, weather, and the strictness of the shoe police at the airport. You can’t depart on a hunch. But if you drive to the airport, almost everything you do on the way will be automatic: breathing, blinking, shifting in your seat, daydreaming, keeping enough distance between you and the car in front of you, even scowling and cursing slower drivers.
Controlled processing is limited—we can consciously think about only one thing at a time—but automatic processes run in parallel and can handle many tasks at once. If the mind performs hundreds of operations each second, all but one of them must be handled automatically. So what is the relationship between controlled and automatic processing? Is controlled processing the wise boss, king, or CEO handling the most important questions and setting policy with foresight for the dumber automatic processes to carry out? No, that would bring us right back to the Promethean script and divine reason. To dispel the Promethean script once and for all, it will help to go back in time and look at why we have these two processes, why we have a small rider and a large elephant.
When the first clumps of neurons were forming the first brains more than 600 million years ago, these clumps must have conferred some advantage on the organisms that had them, because brains have proliferated ever since. Brains are adaptive because they integrate information from various parts of the animal’s body to respond quickly and automatically to threats and opportunities in the environment. By the time we reach 3 million years ago, the Earth was full of animals with extraordinarily sophisticated automatic abilities, among them birds that could navigate by star positions, ants that could cooperate to fight wars and run fungus farms, and several species of hominids that had begun to make tools. Many of these creatures possessed systems of communication, but none of them had developed language.
Controlled processing requires language. You can have bits and pieces of thought through images, but to plan something complex, to weigh the pros and cons of different paths, or to analyze the causes of past successes and failures, you need words. Nobody knows how long ago human beings developed language, but most estimates range from as recently as around 40,000 years ago, the time of cave paintings and other artifacts that reveal unmistakably modern human minds, to as long ago as around 2 million years ago, the time of the first big increase in hominid brain size. Whichever end of that range you favor, language, reasoning, and conscious planning arrived in the most recent eye-blink of evolution. They are like new software, Rider version 1.0. The language parts work well, but there are still a lot of bugs in the reasoning and planning programs. Automatic processes, on the other hand, have been through thousands of product cycles and are nearly perfect. This difference in maturity between automatic and controlled processes helps explain why we have inexpensive computers that can solve logic, math, and chess problems better than any human beings can (most of us struggle with these tasks), but none of our robots, no matter how costly, can walk through the woods as well as the average six-year-old child (our perceptual and motor systems are superb).
Evolution never looks ahead. It can’t plan the best way to travel from point A to point B. Instead, small changes to existing forms arise (by genetic mutation), and spread within a population to the extent that they help organisms respond more effectively to current conditions. When language evolved, the human brain was not reengineered to hand over the reins of power to the rider (conscious verbal thinking). Things were already working pretty well, and linguistic ability spread to the extent that it helped the elephant do something important in a better way. The rider evolved to serve the elephant. But whatever its origin, once we had it, language was a powerful tool that could be used in new ways, and evolution then selected those individuals who got the best use out of it.
One use of language is that it partially freed humans from “stimulus control.” Behaviorists such as B.F. Skinner were able to explain much of the behavior of animals as a set of connections between stimuli and responses. Some of these connections are innate, such as when the sight or smell of an animal’s natural food triggers hunger and eating. Other connections are learned, as demonstrated by Ivan Pavlov’s dogs, who salivated at the sound of a bell that had earlier announced the arrival of food. The behaviorists saw animals as slaves to their environments and learning histories who blindly respond to the reward properties of whatever they encounter. The behaviorists thought that people were no different from other animals. In this view, St. Paul’s lament could be restated as: “My flesh is under stimulus control.” It is no accident that we find the carnal pleasures so rewarding. Our brains, like rat brains, are wired so that food and sex give us little bursts of dopamine, the neurotransmitter that is the brain’s way of making us enjoy the activities that are good for the survival of our genes. Plato’s “bad” horse plays an important role in pulling us toward these things, which helped our ancestors survive and succeed in becoming our ancestors.
But the behaviorists were not exactly right about people. The controlled system allows people to think about long-term goals and thereby escape the tyranny of the here-and-now, the automatic triggering of temptation by the sight of tempting objects. People can imagine alternatives that are not visually present; they can weigh long-term health risks against present pleasures, and they can learn in conversation about which choices will bring success and prestige. Unfortunately, the behaviorists were not entirely wrong about people, either. For although the controlled system does not conform to behaviorist principles, it also has relatively little power to cause behavior. The automatic system was shaped by natural selection to trigger quick and reliable action, and it includes parts of the brain that make us feel pleasure and pain (such as the orbitofrontal cortex) and that trigger survival-related motivations (such as the hypothalamus). The automatic system has its finger on the dopamine release button. The controlled system, in contrast, is better seen as an advisor. It’s a rider placed on the elephant’s back to help the elephant make better choices. The rider can see farther into the future, and the rider can learn valuable information by talking to other riders or by reading maps, but the rider cannot order the elephant around against its will. I believe the Scottish philosopher David Hume was closer to the truth than was Plato when he said, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.”
In sum, the rider is an advisor or servant, not a king, president, or charioteer with a firm grip on the reins. The rider is Gazzaniga’s interpreter module; it is conscious, controlled thought. The elephant, in contrast, is everything else. The elephant includes the gut feelings, visceral reactions, emotions, and intuitions that comprise much of the automatic system. The elephant and the rider each have their own intelligence, and when they work together well they enable the unique brilliance of human beings. But they don’t always work together well.