The Third Division of the Mind: New vs. Old

Jonathan Haidt begins his book The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom by discussing the ways that the human mind conflicts with itself—that our conscious thoughts don’t always align with what we feel. He uses the metaphor of a man riding an elephant to illustrate this point.

The third division is the way the brain was built—older, primitive systems beneath the parts that control higher reasoning and thought:

If you live in a relatively new suburban house, your home was probably built in less than a year, and its rooms were laid out by an architect who tried to make them fulfill people’s needs. The houses on my street, however, were all built around 1900, and since then they have expanded out into their backyards. Porches were extended, then enclosed, then turned into kitchens. Extra bedrooms were built above these extensions, then bathrooms were tacked on to these new rooms. The brain in vertebrates has similarly expanded, but in a forward direction. The brain started off with just three rooms, or clumps of neurons: a hindbrain (connected to the spinal column), a midbrain, and a forebrain (connected to the sensory organs at the front of the animal). Over time, as more complex bodies and behaviors evolved, the brain kept building out the front, away from the spinal column, expanding the forebrain more than any other part. The forebrain of the earliest mammals developed a new outer shell, which included the hypothalamus (specialized to coordinate basic drives and motivations), the hippocampus (specialized for memory), and the amygdala (specialized for emotional learning and responding). These structures are sometimes referred to as the limbic system (from Latin limbus, “border” or “margin”) because they wrap around the rest of the brain, forming a border.

As mammals grew in size and diversified in behavior (after the dinosaurs became extinct), the remodeling continued. In the more social mammals, particularly among primates, a new layer of neural tissue developed and spread to surround the old limbic system. This neocortex (Latin for “new covering”) is the gray matter characteristic of human brains. The front portion of the neocortex is particularly interesting, for parts of it do not appear to be dedicated to specific tasks (such as moving a finger or processing sound). Instead, it is available to make new associations and to engage in thinking, planning, and decision making—mental processes that can free an organism from responding only to an immediate situation.

This growth of the frontal cortex seems like a promising explanation for the divisions we experience in our minds. Perhaps the frontal cortex is the seat of reason: It is Plato’s charioteer; it is St. Paul’s Spirit. And it has taken over control, though not perfectly, from the more primitive limbic system—Plato’s bad horse, St. Paul’s flesh. We can call this explanation the Promethean script of human evolution, after the character in Greek mythology who stole fire from the gods and gave it to humans. In this script, our ancestors were mere animals governed by the primitive emotions and drives of the limbic system until they received the divine gift of reason, installed in the newly expanded neocortex.

The Promethean script is pleasing in that it neatly raises us above all other animals, justifying our superiority by our rationality. At the same time, it captures our sense that we are not yet gods—that the fire of rationality is somehow new to us, and we have not yet fully mastered it. The Promethean script also fits well with some important early findings about the roles of the limbic system and the frontal cortex. For example, when some regions of the hypothalamus are stimulated directly with a small electric current, rats, cats, and other mammals can be made gluttonous, ferocious, or hypersexual, suggesting that the limbic system underlies many of our basic animal instincts. Conversely, when people suffer damage to the frontal cortex, they sometimes show an increase in sexual and aggressive behavior because the frontal cortex plays an important role in suppressing or inhibiting behavioral impulses.

There was recently such a case at the University of Virginia’s hospital. A schoolteacher in his forties had, fairly suddenly, begun to visit prostitutes, surf child pornography Web sites, and proposition young girls. He was soon arrested and convicted of child molestation. The day before his sentencing, he went to the hospital emergency room because he had a pounding headache and was experiencing a constant urge to rape his landlady. (His wife had thrown him out of the house months earlier.) Even while he was talking to the doctor, he asked passing nurses to sleep with him. A brain scan found that an enormous tumor in his frontal cortex was squeezing everything else, preventing the frontal cortex from doing its job of inhibiting inappropriate behavior and thinking about consequences. (Who in their right mind would put on such a show the day before his sentencing?) When the tumor was removed, the hypersexuality vanished. Moreover, when the tumor grew back the following year, the symptoms returned; and when the tumor was removed again, the symptoms disappeared again.

There is, however, a flaw in the Promethean script: It assumes that reason was installed in the frontal cortex but that emotion stayed behind in the limbic system. In fact, the frontal cortex enabled a great expansion of emotionality in humans. The lower third of the prefrontal cortex is called the orbitofrontal cortex because it is the part of the brain just above the eyes (orbit is the Latin term for the eye socket). This region of the cortex has grown especially large in humans and other primates and is one of the most consistently active areas of the brain during emotional reactions. The orbitofrontal cortex plays a central role when you size up the reward and punishment possibilities of a situation; the neurons in this part of the cortex fire wildly when there is an immediate possibility of pleasure or pain, gain or loss. When you feel yourself drawn to a meal, a landscape, or an attractive person, or repelled by a dead animal, a bad song, or a blind date, your orbitofrontal cortex is working hard to give you an emotional feeling of wanting to approach or to get away. The orbitofrontal cortex therefore appears to be a better candidate for the id, or for St. Paul’s flesh, than for the superego or the Spirit.

The importance of the orbitofrontal cortex for emotion has been further demonstrated by research on brain damage. The neurologist Antonio Damasio has studied people who, because of a stroke, tumor, or blow to the head, have lost various parts of their frontal cortex. In the 1990s, Damasio found that when certain parts of the orbitofrontal cortex are damaged, patients lose most of their emotional lives. They report that when they ought to feel emotion, they feel nothing, and studies of their automatic reactions (such as those used in lie detector tests) confirm that they lack the normal flashes of bodily reaction that the rest of us experience when observing scenes of horror or beauty. Yet their reasoning and logical abilities are intact. They perform normally on tests of intelligence and knowledge of social rules and moral principles.

So what happens when these people go out into the world? Now that they are free of the distractions of emotion, do they become hyperlogical, able to see through the haze of feelings that blinds the rest of us to the path of perfect rationality? Just the opposite. They find themselves unable to make simple decisions or to set goals, and their lives fall apart. When they look out at the world and think, “What should I do now?” they see dozens of choices but lack immediate internal feelings of like or dislike. They must examine the pros and cons of every choice with their reasoning, but in the absence of feeling they see little reason to pick one or the other. When the rest of us look out at the world, our emotional brains have instantly and automatically appraised the possibilities. One possibility usually jumps out at us as the obvious best one. We need only use reason to weigh the pros and cons when two or three possibilities seem equally good.

Human rationality depends critically on sophisticated emotionality. It is only because our emotional brains work so well that our reasoning can work at all. Plato’s image of reason as a charioteer controlling the dumb beasts of passion may overstate not only the wisdom but also the power of the charioteer. The metaphor of a rider on an elephant fits Damasio’s findings more closely: Reason and emotion must both work together to create intelligent behavior, but emotion (a major part of the elephant) does most of the work. When the neocortex came along, it made the rider possible, but it made the elephant much smarter, too.

The Second Division: Left vs. Right

The second is the often oversimplified division between the left and right hemispheres of the brain:

A second division was discovered by accident in the 1960s when a surgeon began cutting people’s brains in half. The surgeon, Joe Bogen, had a good reason for doing this: He was trying to help people whose lives were destroyed by frequent and massive epileptic seizures. The human brain has two separate hemispheres joined by a large bundle of nerves, the corpus callosum. Seizures always begin at one spot in the brain and spread to the surrounding brain tissue. If a seizure crosses over the corpus callosum, it can spread to the entire brain, causing the person to lose consciousness, fall down, and writhe uncontrollably. Just as a military leader might blow up a bridge to prevent an enemy from crossing it, Bogen wanted to sever the corpus callosum to prevent the seizures from spreading.

At first glance this was an insane tactic. The corpus callosum is the largest single bundle of nerves in the entire body, so it must be doing something important. Indeed it is: It allows the two halves of the brain to communicate and coordinate their activity. Yet research on animals found that, within a few weeks of surgery, the animals were pretty much back to normal. So Bogen took the chance with human patients, and it worked. The intensity of the seizures was greatly reduced.

But was there really no loss of ability? To find out, the surgical team brought in a young psychologist, Michael Gazzaniga, whose job was to look for the after-effects of this “split-brain” surgery. Gazzaniga took advantage of the fact that the brain divides its processing of the world into its two hemispheres—left and right. The left hemisphere takes in information from the right half of the world (that is, it receives nerve transmissions from the right arm and leg, the right ear, and the left half of each retina, which receives light from the right half of the visual field) and sends out commands to move the limbs on the right side of the body. The right hemisphere is in this respect the left’s mirror image, taking in information from the left half of the world and controlling movement on the left side of the body. Nobody knows why the signals cross over in this way in all vertebrates; they just do. But in other respects, the two hemispheres are specialized for different tasks. The left hemisphere is specialized for language processing and analytical tasks. In visual tasks, it is better at noticing details. The right hemisphere is better at processing patterns in space, including that all-important pattern, the face. (This is the origin of popular and simplified ideas about artists being “right-brained” and scientists being “left-brained”).

Gazzaniga used the brain’s division of labor to present information to each half of the brain separately. He asked patients to stare at a spot on a screen, and then flashed a word or a picture of an object to the right of the spot, or just to the left, so quickly that there was not enough time for the patient to move her gaze. If a picture of a hat was flashed just to the right of the spot, the image would register on the left half of each retina (after the image had passed through the cornea and been inverted), which then sent its neural information back to the visual processing areas in the left hemisphere. Gazzaniga would then ask, “What did you see?” Because the left hemisphere has full language capabilities, the patient would quickly and easily say, “A hat.” If the image of the hat was flashed to the left of the spot, however, the image was sent back only to the right hemisphere, which does not control speech. When Gazzaniga asked, “What did you see?”, the patient, responding from the left hemisphere, said, “Nothing.” But when Gazzaniga asked the patient to use her left hand to point to the correct image on a card showing several images, she would point to the hat. Although the right hemisphere had indeed seen the hat, it did not report verbally on what it had seen because it did not have access to the language centers in the left hemisphere. It was as if a separate intelligence was trapped in the right hemisphere, its only output device the left hand.

When Gazzaniga flashed different pictures to the two hemispheres, things grew weirder. On one occasion he flashed a picture of a chicken claw on the right, and a picture of a house and a car covered in snow on the left. The patient was then shown an array of pictures and asked to point to the one that “goes with” what he had seen. The patient’s right hand pointed to a picture of a chicken (which went with the chicken claw the left hemisphere had seen), but the left hand pointed to a picture of a shovel (which went with the snow scene presented to the right hemisphere). When the patient was asked to explain his two responses, he did not say, “I have no idea why my left hand is pointing to a shovel; it must be something you showed my right brain.” Instead, the left hemisphere instantly made up a plausible story. The patient said, without any hesitation, “Oh, that’s easy. The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed.”

This finding, that people will readily fabricate reasons to explain their own behavior, is called “confabulation.” Confabulation is so frequent in work with split-brain patients and other people suffering brain damage that Gazzaniga refers to the language centers on the left side of the brain as the interpreter module, whose job is to give a running commentary on whatever the self is doing, even though the interpreter module has no access to the real causes or motives of the self’s behavior. For example, if the word “walk” is flashed to the right hemisphere, the patient might stand up and walk away. When asked why he is getting up, he might say, “I’m going to get a Coke.” The interpreter module is good at making up explanations, but not at knowing that it has done so.

Science has made even stranger discoveries. In some split-brain patients, or in others who have suffered damage to the corpus callosum, the right hemisphere seems to be actively fighting with the left hemisphere in a condition known as alien hand syndrome. In these cases, one hand, usually the left, acts of its own accord and seems to have its own agenda. The alien hand may pick up a ringing phone, but then refuse to pass the phone to the other hand or bring it up to an ear. The hand rejects choices the person has just made, for example, by putting back on the rack a shirt that the other hand has just picked out. It grabs the wrist of the other hand and tries to stop it from executing the person’s conscious plans. Sometimes, the alien hand actually reaches for the person’s own neck and tries to strangle him.

These dramatic splits of the mind are caused by rare splits of the brain. Normal people are not split-brained. Yet the split-brain studies were important in psychology because they showed in such an eerie way that the mind is a confederation of modules capable of working independently and even, sometimes, at cross-purposes. Split-brain studies are important for this book because they show in such a dramatic way that one of these modules is good at inventing convincing explanations for your behavior, even when it has no knowledge of the causes of your behavior. Gazzaniga’s “interpreter module” is, essentially, the rider.

The First Division: Mind vs. Body

The first division is the mind versus the body:

We sometimes say that the body has a mind of its own, but the French philosopher Michel de Montaigne went a step further and suggested that each part of the body has its own emotions and its own agenda. Montaigne was most fascinated by the independence of the penis:

“We are right to note the license and disobedience of this member which thrusts itself forward so inopportunely when we do not want it to, and which so inopportunely lets us down when we most need it. It imperiously contests for authority with our will.”

Montaigne also noted the ways in which our facial expressions betray our secret thoughts; our hair stands on end; our hearts race; our tongues fail to speak; and our bowels and anal sphincters undergo “dilations and contractions proper to [themselves], independent of our wishes or even opposed to them.” Some of these effects, we now know, are caused by the autonomic nervous system—the network of nerves that controls the organs and glands of our bodies, a network that is completely independent of voluntary or intentional control. But the last item on Montaigne’s list—the bowels—reflects the operation of a second brain. Our intestines are lined by a vast network of more than 100 million neurons; these handle all the computations needed to run the chemical refinery that processes and extracts nutrients from food. This gut brain is like a regional administrative center that handles stuff the head brain does not need to bother with. You might expect, then, that this gut brain takes its orders from the head brain and does as it is told. But the gut brain possesses a high degree of autonomy, and it continues to function well even if the vagus nerve, which connects the two brains together, is severed.

The gut brain makes its independence known in many ways: It causes irritable bowel syndrome when it “decides” to flush out the intestines. It triggers anxiety in the head brain when it detects infections in the gut, leading you to act in more cautious ways that are appropriate when you are sick. And it reacts in unexpected ways to anything that affects its main neurotransmitters, such as acetylcholine and serotonin. Hence, many of the initial side effects of Prozac and other selective serotonin reuptake inhibitors involve nausea and changes in bowel function. Trying to improve the workings of the head brain can directly interfere with those of the gut brain. The independence of the gut brain, combined with the autonomic nature of changes to the genitals, probably contributed to ancient Indian theories in which the abdomen contains the three lower chakras—energy centers corresponding to the colon/anus, sexual organs, and gut. The gut chakra is even said to be the source of gut feelings and intuitions, that is, ideas that appear to come from somewhere outside one’s own mind. When St. Paul lamented the battle of flesh versus Spirit, he was surely referring to some of the same divisions and frustrations that Montaigne experienced.

Theories of Mind Through the Ages

I’ve started Jonathan Haidt’s The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom, which is about the intersection of ancient wisdom with modern neuroscience and psychology. I enjoyed his synthesis of the history of how people have thought about self-control, from intuitive, to rigidly rational, to a refined combination of the two. It reminds me of Isaac Asimov’s “The Relativity of Wrong.”

Human thinking depends on metaphor. We understand new or complex things in relation to things we already know. For example, it’s hard to think about life in general, but once you apply the metaphor “life is a journey,” the metaphor guides you to some conclusions: You should learn the terrain, pick a direction, find some good traveling companions, and enjoy the trip, because there may not be anything at the end of the road. It’s also hard to think about the mind, but once you pick a metaphor it will guide your thinking. Throughout recorded history, people have lived with and tried to control animals, and these animals made their way into ancient metaphors. Buddha, for example, compared the mind to a wild elephant:

“In days gone by this mind of mine used to stray whenever selfish desire or lust or pleasure would lead it. Today this mind does not stray and is under the harmony of control, even as a wild elephant is controlled by the trainer.”

Plato used a similar metaphor in which the self (or soul) is a chariot, and the calm, rational part of the mind holds the reins. Plato’s charioteer had to control two horses:

“The horse that is on the right, or nobler, side is upright in frame and well jointed, with a high neck and a regal nose;…he is a lover of honor with modesty and self-control; companion to true glory, he needs no whip, and is guided by verbal commands alone. The other horse is a crooked great jumble of limbs…companion to wild boasts and indecency, he is shaggy around the ears—deaf as a post—and just barely yields to horsewhip and goad combined.”

For Plato, some of the emotions and passions are good (for example, the love of honor), and they help pull the self in the right direction, but others are bad (for example, the appetites and lusts). The goal of Platonic education was to help the charioteer gain perfect control over the two horses. Sigmund Freud offered us a related model 2,300 years later. Freud said that the mind is divided into three parts: the ego (the conscious, rational self); the superego (the conscience, a sometimes too rigid commitment to the rules of society); and the id (the desire for pleasure, lots of it, sooner rather than later). The metaphor I use when I lecture on Freud is to think of the mind as a horse and buggy (a Victorian chariot) in which the driver (the ego) struggles frantically to control a hungry, lustful, and disobedient horse (the id) while the driver’s father (the superego) sits in the back seat lecturing the driver on what he is doing wrong. For Freud, the goal of psychoanalysis was to escape this pitiful state by strengthening the ego, thus giving it more control over the id and more independence from the superego.

Freud, Plato, and Buddha all lived in worlds full of domesticated animals. They were familiar with the struggle to assert one’s will over a creature much larger than the self. But as the twentieth century wore on, cars replaced horses, and technology gave people ever more control over their physical worlds. When people looked for metaphors, they saw the mind as the driver of a car, or as a program running on a computer. It became possible to forget all about Freud’s unconscious, and just study the mechanisms of thinking and decision making. That’s what social scientists did in the last third of the century: Social psychologists created “information processing” theories to explain everything from prejudice to friendship. Economists created “rational choice” models to explain why people do what they do. The social sciences were uniting under the idea that people are rational agents who set goals and pursue them intelligently by using the information and resources at their disposal.

But then, why do people keep doing such stupid things? Why do they fail to control themselves and continue to do what they know is not good for them? I, for one, can easily muster the willpower to ignore all the desserts on the menu. But if the dessert is placed on the table, I can’t resist it. I can resolve to focus on a task and not get up until it is done, yet somehow I find myself walking into the kitchen, or procrastinating in other ways. I can resolve to wake up at 6:00 A.M. to write; yet after I have shut off the alarm, my repeated commands to myself to get out of bed have no effect, and I understand what Plato meant when he described the bad horse as “deaf as a post.” But it was during some larger life decisions, about dating, that I really began to grasp the extent of my powerlessness. I would know exactly what I should do, yet, even as I was telling my friends that I would do it, a part of me was dimly aware that I was not going to. Feelings of guilt, lust, or fear were often stronger than reasoning. (On the other hand, I was quite good at lecturing friends in similar situations about what was right for them.) The Roman poet Ovid captured my situation perfectly. In Metamorphoses, Medea is torn between her love for Jason and her duty to her father. She laments:

“I am dragged along by a strange new force. Desire and reason are pulling in different directions. I see the right way and approve it, but follow the wrong.”

Modern theories about rational choice and information processing don’t adequately explain weakness of the will. The older metaphors about controlling animals work beautifully. The image that I came up with for myself, as I marveled at my weakness, was that I was a rider on the back of an elephant. I’m holding the reins in my hands, and by pulling one way or the other I can tell the elephant to turn, to stop, or to go. I can direct things, but only when the elephant doesn’t have desires of his own. When the elephant really wants to do something, I’m no match for him.

Vincent Bugliosi on Jury Selection Prejudice

Here, Bugliosi describes how lawyers typically regard jury selection as a very important process, whereas he sees it as a crap shoot. I think this insight also applies to job interviews:

Voir dire, the questioning of the prospective jurors, was about to begin.

In my opinion, the greatly restricted scope of permissible questions on voir dire reduces jury selection to at best one-third art and skill and two-thirds guesswork. Many experienced trial lawyers concede that after a lengthy and vigorous voir dire, the twelve jurors they end up with are frequently no better than the twelve originally seated in the box by lot. Why? Because the juror one side wants is nearly always one the other side does not. As each side excuses jurors who look good for the opposition, very little progress is normally made.

Nonetheless, a surprising number of lawyers consider voir dire the most important part of a trial. Obviously, it would be if a lawyer had the uncanny insight and ability to select jurors who would end up voting for his cause, regardless of the evidence. But since no lawyer has ever been found who can do this, or even come close, the reality, in my opinion, is that voir dire is far from being the most important part of the trial. Lawyers have a significant amount of control over every other area of the trial, and assiduous preparation pays enormous dividends. During voir dire, a lawyer operates mostly by fallible instinct. If even after years and years of marriage many husbands and wives don’t really know each other, how can there be any reliable way of evaluating prospective jurors by means of a few rounds of questions and answers? Because of this, voir dire has always been the one part of a trial I’ve never felt confident about.

Trial lawyers joke that prosecutors typically look for conservative, crew-cut Nordic types during voir dire, while defense attorneys look for long-haired fellows in well-worn cords and tweeds.

More specifically, it’s generally supposed that artists, sculptors, writers, musicians, and others in the arts, including the liberal arts, tend to be more sympathetic toward defendants in criminal trials. The same assumption is applied to people in the “helping professions,” like nurses and social workers, as well as to Italians, Hispanics, Jews, and blacks. Single people who are not deeply rooted in the community, factory workers, and anyone who prefers reading a book to watching television are all considered defense-oriented personalities. On the other hand, defense attorneys obviously challenge anyone who works in law enforcement, and are similarly wary of secretaries, who, according to a national jury survey, are the most prosecution-oriented of all occupational groups. The only inference I’ve been able to draw from this statistic is that secretaries have to go along with the boss, and in the courtroom, symbolically, the boss is the government. Engineers, scientists, accountants, and bookkeepers are generally considered pro-prosecution jurors as well, perhaps because they are trained to be objective and reach conclusions based solely on facts, not emotions.

But all of this vague conjecture ignores the reality that, not uncommonly, the juror in the characteristically defense-oriented profession turns out to be a staunch member of the John Birch Society, and the juror in the prosecution-oriented profession belongs to the ACLU.

Vincent Bugliosi on Trial Preparation

The preparations that super-lawyer Vincent Bugliosi goes through before a trial are impressive, and his method could be used to prepare for any number of things:

With the trial scheduled to start in the morning, my yellow-pad sheets of paper, covering every aspect of the trial (even case law authority to overcome anticipated objections, and optional lines of follow-up questions dependent on how a witness on cross-examination answered a particular question), rose to a height of almost a foot. Although the clear trend in the legal profession is toward fewer and fewer notes on direct examination, cross-examination, and final summation (so recommend instructors at many law schools and trial lawyer seminars), I do the opposite, almost to an obsessive, perhaps even unnecessary extreme. But I believe in the adage that the war is won before the first battle is fought, and thus far in my career I’ve been able to orchestrate most of the trial on paper before ever entering the courtroom. Arguments, counterarguments, questions, objections—the whole gamut takes place on my yellow pad before the trial even starts. My objective, of course, is for the trial to be merely the acting out of the scenario or script I’ve already written. Granted, unusual things happen at a trial, but if I’ve done my homework, even many of these occurrences can be anticipated and prepared for. In my unremitting quest to be completely ready for trial, I find that in effect I try the case against myself.

Reducing what’s in one’s mind to writing is very tedious and time-consuming, of course. In fact, working on my yellow pad is the hardest part of trying a case for me. But in my opinion, it is the only way to try a complex lawsuit, and the only way to make a superior presentation of my case, as opposed to a good or merely adequate one.

For instance, in preparing my cross-examination, I might know, in my mind, what point I want to make, but it might take me half an hour of sweat on my yellow pad to work out the very best way of establishing this one point on cross. Before I ask my key question, I might decide I have to ask ten preliminary questions, and in a particular sequence. Some of these preliminary questions I may rewrite three or four times because when I examine them closely I may see that the witness might be able to discern the direction in which I am taking him.

Likewise, in preparing my final summation, I might know what point I want to make, but when I try to articulate it on my yellow pad, oftentimes my pencil comes to a stop. It’s at this moment that I realize I didn’t quite understand my point as well as I thought I did, or even if I did, I certainly realize I was unable to extemporaneously articulate the point with the clarity and power I want.

The standard explanation of lawyers who religiously avoid the pain and agony of the yellow pad is that if a lawyer does all that preparation and has everything written down, he can’t be flexible, and can’t think on his feet when something not covered by his notes occurs. If that’s not a classic non sequitur, I don’t know what is. Is instant improvisation and flexibility the domain only of those who are unprepared?

Vincent Bugliosi on Judges

A word about judges.

The American public have an understandably negative view of politicians, public opinion polls show, and an equally negative view of lawyers. Conventional logic would seem to dictate that since a judge is normally both a politician and a lawyer, people would have a markedly low opinion of them. But on the contrary, the mere investiture of a twenty-five-dollar black cotton robe elevates the denigrated lawyer-politician to a position of considerable honor and respect in our society, as if the garment itself miraculously imbues the person with qualities not previously possessed. As an example, judges have, for the most part, remained off-limits to the creators of popular entertainment, being depicted on screens large and small as learned men and women of stature and solemnity who are as impartial as sunlight. This depiction ignores reality.

As to the political aspects of judges, the appointment of judgeships by governors (or the President in federal courts) has always been part and parcel of the political spoils or patronage system. For example, 97 percent of President Reagan’s appointees to the federal branch were Republicans. Thus, in the overwhelming majority of cases there is an umbilical cord between the appointment and politics. Either the appointee has personally labored long and hard in the political vineyards, or he is a favored friend of one who has, oftentimes a generous financial supporter of the party in power. As Roy Mersky, professor at the University of Texas Law School, says, “To be appointed a judge, to a great extent is a result of one’s political activity.” Consequently, lawyers entering courtrooms are frequently confronted with the specter of a new judge they’ve never heard of and know absolutely nothing about. The judge may never have distinguished himself in the legal profession, but a cursory investigation almost invariably reveals a political connection. (Of course, just because there is a political connection does not mean that the judge is not otherwise competent and qualified to sit on the bench. Many times he is.) Incredibly, and unfortunately, the political connection holds true all the way up to the U.S. Supreme Court, where, for instance, the last three Chief Justices–Earl Warren, Warren E. Burger, and, to a lesser extent, William Rehnquist–like so many of their predecessors in history, have all been creatures of politics.

Although there are many exceptions, by and large the bench boasts undistinguished lawyers whose principal qualification for the most important position in our legal system is the all-important political connection. Rarely, for instance, will a governor seek out a renowned but apolitical legal scholar, such as a highly regarded law school professor, and proffer a judgeship.

It has been my experience and, I daresay, the experience of most veteran trial lawyers that the typical judge either has no or very scant trial experience as a lawyer, or is pompous and dictatorial on the bench, or worst of all, is clearly partial to one side or the other in a lawsuit. Sometimes the judge displays all three infirmities.

Vincent Bugliosi on The Defendant’s Consciousness of Innocence

And the Sea Will Tell is loaded with the author’s observations on the mechanics of America’s criminal justice system. Here is Vincent Bugliosi’s unconventional idea that while the prosecution is on alert for a consciousness of guilt on the part of the accused, a consciousness of innocence is a legitimate and underutilized defense tactic:

Books on criminal evidence have sections called “consciousness of guilt,” wherein all types of conduct and statements of an accused–flight, resistance to arrest, escape, destruction of evidence, silence in the face of an accusation, false or conflicting statements, etc.–have been held by courts to be admissible circumstantial evidence showing a consciousness of guilt. In addition to these conventional indications of guilt, as a prosecutor I had a passion for taking even unconventional and obscure specks of evidence and developing them into an argument showing consciousness of guilt on the part of the defendant.

Now, as a defense attorney, I find it very natural to argue the opposite side of the coin: consciousness of innocence, also illustrated by the conduct and statements of the accused. Strangely, however, the same books that have entire sections on consciousness of guilt never even mention consciousness of innocence. It’s as if the pivotal mechanisms of the criminal justice system have been established to prove guilt, not innocence, perhaps the residual progeny of the notorious common-law rule (abolished by statute in England in 1701) that in cases of felony, the accused was not even allowed to introduce witnesses in his defense. It should be noted that the very term “circumstantial evidence” has come to mean circumstantial evidence of guilt. But there obviously can be circumstantial evidence of innocence, too.

*Cases are legion in which certain acts and statements of an accused are deemed admissible circumstantial evidence to show guilt, while the opposite of such acts or statements are not admissible to show innocence; e.g., although the prosecution can introduce evidence of escape or attempted escape, the defense generally cannot introduce evidence that the defendant had an opportunity to escape but did not. And while a defendant’s incriminating statement comes in under an exception to the hearsay rule, a defendant’s exculpatory statement is inadmissible, since the law virtually presumes a self-serving motivation for the latter. Similarly, a suspect’s silence in response to being accused of committing a crime is admissible as showing a consciousness of guilt. But if he is not silent, and denies the accusation, the denial is not admissible.

Eric Hoffer on The Trader and The Scribe

Eric Hoffer (July 25, 1898 – May 21, 1983) is an interesting writer. His writing is accessible—he writes simply and is brimming with ideas—but he’s such a product of his times that a lot of the things he says sound dated today. In The Ordeal of Change, he spends a lot of time comparing and contrasting the goods and ills of communism—something that hasn’t been an issue in my lifetime. Still, his insights are keen:

Capitalism can produce abundance. It gives full scope to the energies of the individual, and is an optimal milieu for people who can help themselves and want to be helped. But capitalism cannot do much for the helpless. It cannot turn the chronically poor into active, useful citizens. Nor does it know how to cope with people who are more interested in quality of life than in a high standard of living.

I enjoy him for the spirit in which he throws out ideas. I don’t always agree with him, and some of the things he writes are incomprehensible to me, reading in 2015, but his insights are valuable. Consider the trader, written about in In Our Time (which is sadly out of print). I’ve edited out the paragraphs that I feel are irrelevant and make no sense:

It seems strange that we know so little of the history of the trader. The trader preceded the cultivator and the herder, and he is probably more ancient than the hunter and the warrior.

The trader and the artist are probably of equal antiquity, and the most uniquely human. There are animal hunters and warriors, and many species engage in activities reminiscent of cultivating and herding, but nowhere in the animal world is there anything remotely equivalent to the trader and the artist.

That early man, so naked to the elements and predators, should have survived at all seems miraculous. But the situation becomes doubly miraculous when we find that earliest man was the only lighthearted being in a deadly serious universe, given to playing and tinkering, and exerting himself more in the pursuit of superfluities than of necessities. He had ornaments before he had clothing, and clay figurines before clay pots. From his earliest beginning man was a luxury-loving animal, and the earliest trade was in luxuries. Trade in necessities was a late development.

The trader was probably the first individual. He became an individual not by choice but by circumstances. He was either a straggler left behind or a fugitive or a sole survivor. Earliest trade was foreign trade and the trader was a foreigner. Even at present in backward parts of the world most traders are foreigners: Indians in East Africa, Lebanese and Greeks in West Africa, Parsees in India, and Chinese in Southeast Asia. I can see the first trader, an outsider, approaching a strange human group, bearing a gift of something new and desirable, and then going on from group to group exchanging gifts.

Considering the trader’s antiquity and the vital role he played in the evolution of civilization, it is difficult to understand the scorn and disdain he evoked in other human types, particularly in the warrior and the scribe. To the warrior who made history and the scribe who recorded it, the trader was the embodiment of greed, dishonesty, cowardice, dishonor, mendacity and corruption in general. Yet it was the trader who first gave weapons to the warrior and the craft of writing to the scribe. Traders’ tags and marks of ownership preceded clay tablets and papyrus rolls. Later, when the scribe had made writing so cumbersome and complex that one needed a lifetime to master it, the Phoenician trader moved in to simplify it by introducing the phonetic alphabet.

In free societies, the tug of war between the trader and scribe has had beneficent effects. The trader cracked the scribe’s monopoly of learning by diffusing literacy through popular education, while the scribe has been in the forefront of every movement that set out to separate the trader from his wealth. As a result, both learning and riches have leaked out to wider sections of the population.

Seneca: Man, Sage, and Politician

Reading James Romm’s great biography of the life and times of Seneca, Dying Every Day: Seneca at the Court of Nero, is getting me excited to finally pay Letters from a Stoic the attention it deserves:

Wherever Seneca went in those years, he carried on work on his magnum opus, a remarkable set of short moral essays framed as letters. Ostensibly addressed to Lucilius, these letters were in fact aimed at a wide audience. But the fiction of an intimate correspondence gave Seneca latitude in the structure of the essays, as well as unusual freedom to vary voice, tone, and technique. The melding of ethical inquiry with epistolary style produced a breakthrough for Seneca. He carried on the Letters to Lucilius at far greater length than anything else he had written and with greater candor about his life and thoughts—or at least, what seems to be candor.

A typical letter begins with a moment from daily life, then goes on to explore insights arising from that moment. In one of the letters, for example, Seneca describes a trip to a friend’s vacation home, a wealthy estate house in Puteoli (modern Pozzuoli).

To reach this house from Baiae, his point of departure, Seneca needed to cross a three-mile bay. He set out on a small hired ship, although dark clouds loomed in the distance. Hoping to beat the storm, Seneca told his steersman to save time by taking a direct route rather than hugging the shore. But that only put him in deep, open water when the winds began to pick up. Halfway across, when there was no longer any point in turning back, Seneca found himself in a pitching, heaving swell. Seasickness, a condition he found intolerable, began to torment him, though he found he could not relieve his distress by vomiting.

Panicking, Seneca urged the steersman to change course and head for the nearest shore, but that was a rough coastline without anchorage. The steersman argued that the ship could not go near those rocks, but Seneca was by now in agony. He forced the crew to bring the ship as near to land as they could. And then he leaped into the sea.

Noting that he had always been a good swimmer, Seneca describes to Lucilius how he got himself to shore and hauled himself painfully onto the rocky beach. Somehow he located a faint path leading to the villa he was seeking. He now understood, he writes whimsically, that the sufferings of Odysseus, driven about in his ship for ten years as described in the Odyssey, must have stemmed more from seasickness than from sea monsters.

Later, washed and changed, with the villa’s slaves giving his body a rubdown to restore its warmth, Seneca reflected on how nausea had driven him to desperation. “I endured incredible trials because I could not endure myself,” he writes, using a typically pointed turn of phrase. Then he let his thoughts wander down their usual path, toward the search for a virtuous life, a life of moral awareness. Discomforts overwhelm the body, Seneca muses, in the same way that vice and ignorance overwhelm the soul. The sufferer may not even know he is suffering, just as a deep sleeper does not know he is asleep. Only philosophy can rouse souls from such comas. Philosophy, Lucilius, is what you must pursue with all your being. Abandon all else except philosophy, just as you would neglect all your affairs had you fallen gravely ill.

The letter lands its readers at a very different place than where it appeared to be headed. The retching, desperate man who pitches himself into the sea turns suddenly into a serious thinker. Seneca’s portrait of his own folly in taking a shortcut, and his description of embarrassing physical distress, draw us in with their frankness and closely observed detail. Once we have been hooked by Seneca the man, Seneca the sage reels us in.

But Seneca was not only a man and a sage; he was also a politician. His mastery of image making during his decade at Nero’s side, his many efforts to manipulate public opinion, make the task of reading his Letters to Lucilius a complicated one. Is it the real Seneca we see before us–a man of profound moral earnestness, whose every third thought is of philosophy–or an imago, a shape conjured by the wordsmith’s arts? Did Seneca himself, after fifteen years in which his every written word was a political act, even know the difference?