The Rise of Terrorism

From Moisés Naím’s brilliant book The End of Power: From Boardrooms to Battlefields and Churches to States, Why Being In Charge Isn’t What It Used to Be, on the decline of big powers in favor of small upstarts. This section deals with military capabilities in particular:

Smaller forces are proving successful with increasing regularity, at least in terms of advancing their political goals while surviving militarily. The Harvard scholar Ivan Arreguin-Toft analyzed 197 asymmetric wars that took place around the world in the period 1800-1998. They were asymmetric in the sense that a wide gap existed at the outset between the antagonists as measured in traditional terms—that is, by the size of their military and the size of their population. Arreguin-Toft found that the supposedly “weak” actor actually won the conflict in almost 30 percent of these cases. That fact was remarkable in itself, but even more striking was the trend over time. In the course of the last two centuries, there has been a steady increase in victories by the supposedly “weak” antagonist. The weak actor won only 11.8 percent of its conflicts between 1800 and 1849, as compared to 55 percent of its conflicts between 1950 and 1998. What this means is that a core axiom of war has been stood on its head. Once upon a time, superior firepower ultimately prevailed. Now that is no longer true.

The reason is, in part, that in today’s world the resort to barbarism by the stronger party—for example, indiscriminate bombing and shelling of civilian populations in World War II, the use of torture by the French in Algeria, or the targeted assassinations of the Vietcong under the Phoenix program in South Vietnam—is no longer politically acceptable. As Arreguin-Toft argues, some forms of barbarism—the controversial Phoenix program, for example—can be militarily effective in relatively short order against the indirect attacks of a guerrilla warfare strategy. But in the absence of a true existential threat to a stronger state, especially a democracy where military policy can come under intense public scrutiny, no such strategy is politically viable. As retired General Wesley Clark, a Vietnam veteran and former Supreme Allied Commander Europe of NATO, told me: “Today, a division commander can directly control attack helicopters 30 to 40 miles ahead of the battle, and enjoy what we call ‘full spectrum dominance’ [control of air, land, sea, space, and cyberspace]. But there are things we were doing in Vietnam that we cannot do today. We have more technology but narrower legal options.” The “successes” of an autocratic Russia’s savage tactics in Chechnya or of Sri Lanka’s brutal suppression of the Tamil Tigers are bloody examples of what it takes for superior firepower to win today over a tenacious, if militarily weaker, adversary.

The prominence of political factors in determining the outcome of asymmetric military conflicts helps explain the ongoing rise of the ultimate small actor—the terrorist. We have come a long way since terrorism’s roots in the state during the revolutionary French regime’s “Reign of Terror” from September 1793 to July 1794. Although the US State Department has designated around fifty groups as Foreign Terrorist Organizations, the number of active groups is easily double that, some with dozens of members, others with thousands. Moreover, the ability of a lone individual or small group to change the course of history with an act of violence was evident even before the Bosnian Serb nationalist Gavrilo Princip’s assassination of Archduke Franz Ferdinand in Sarajevo helped start World War I.

What sets apart modern terrorism—as epitomized by 9/11; other Al Qaeda actions in London, Madrid, and Bali; the Chechen attacks in Moscow; and Lashkar-e-Taiba’s attack on Mumbai—is the elevation of terrorism from a matter of domestic security (i.e., for each country to handle in its own way) to a global military concern. Terrorist attacks by Osama bin Laden and his organization prompted governments from more than fifty countries to spend well over a trillion dollars safeguarding their populations from potential attack. A key French defense strategy paper of 1994 contained 20 references to terrorism; its 2008 update mentioned it 107 times, far more frequently than war itself—“to the point,” wrote scholars Marc Hecker and Thomas Rid, “that this form of conflict seems to eclipse the threat of war.”

The Value of Lowbrow

From Nicholas Carr’s The Shallows: What The Internet Is Doing To Our Brains, on the value of printed material that exists solely for entertainment:

Tawdry novels, quack theories, gutter journalism, propaganda, and, of course, reams of pornography poured into the marketplace and found eager buyers at every station in society. Priests and politicians began to wonder whether, as England’s first official book censor put it in 1660, “more mischief than advantage were not occasion’d to the Christian world by the Invention of Typography.” The famed Spanish dramatist Lope de Vega expressed the feelings of many a grandee when, in his 1612 play All Citizens Are Soldiers, he wrote:

So many books—so much confusion!
All around us an ocean of print
And most of it covered in froth.

But the froth itself was vital. Far from dampening the intellectual transformation wrought by the printed book, it magnified it. By accelerating the spread of books into popular culture and making them a mainstay of leisure time, the cruder, crasser, and more trifling works also helped spread the book’s ethic of deep, attentive reading. “The same silence, solitude and contemplative attitudes associated formerly with pure spiritual devotion,” writes Eisenstein, “also accompanies the perusal of scandal sheets, ‘lewd Ballads,’ ‘merry bookes of Italie,’ and other ‘corrupted tales in Inke and Paper.'” Whether a person is immersed in a bodice ripper or a Psalter, the synaptic effects are largely the same.

Intellectual Technologies Change How We Think

From Nicholas Carr’s The Shallows: What The Internet Is Doing To Our Brains, on how technology shapes our thoughts:

Every technology is an expression of human will. Through our tools, we seek to expand our power and control over our circumstances—over nature, over time and distance, over one another. Our technologies can be divided, roughly, into four categories, according to the way they supplement or amplify our native capacities. One set, which encompasses the plow, the darning needle, and the fighter jet, extends our physical strength, dexterity, or resilience. A second set, which includes the microscope, the amplifier, and the Geiger counter, extends the range or sensitivity of our senses. A third group, spanning such technologies as the reservoir, the birth control pill, and the genetically modified corn plant, enables us to reshape nature to better serve our needs or desires.

The map and the clock belong to the fourth category, which might best be called, to borrow a term used in slightly different senses by the social anthropologist Jack Goody and the sociologist Daniel Bell, “intellectual technologies.” These include all the tools we use to extend or support our mental powers—to find and classify information, to formulate and articulate ideas, to share know-how and knowledge, to take measurements and perform calculations, to expand the capacity of our memory. The typewriter is an intellectual technology. So are the abacus and the slide rule, the sextant and the globe, the book and the newspaper, the school and the library, the computer and the Internet. Although the use of any kind of tool can influence our thoughts and perspectives—the plow changed the outlook of the farmer, the microscope opened new worlds of mental exploration for the scientist—it is our intellectual technologies that have the greatest and most lasting power over what and how we think. They are our most intimate tools, the ones we use for self-expression, for shaping personal and public identity, and for cultivating relations with others.

What Nietzsche sensed as he typed his words onto the paper clamped in his writing ball—that the tools we use to write, read, and otherwise manipulate information work on our minds even as our minds work with them—is a central theme of intellectual and cultural history. As the stories of the map and the mechanical clock illustrate, intellectual technologies, when they come into popular use, often promote new ways of thinking or extend to the general population established ways of thinking that had been limited to a small, elite group. Every intellectual technology, to put it another way, embodies an intellectual ethic, a set of assumptions about how the human mind works or should work. The map and the clock shared a similar ethic. Both placed a new stress on measurement and abstraction, on perceiving and defining forms and processes beyond those apparent to the senses.

Brain Plasticity

From Nicholas Carr’s The Shallows: What The Internet Is Doing To Our Brains, a segment on how our experiences shape our brain:

One of the simplest yet most powerful demonstrations of how synaptic connections change came in a series of experiments that the biologist Eric Kandel performed in the early 1970s on a type of large sea slug called Aplysia. (Sea creatures make particularly good subjects for neurological tests because they tend to have simple nervous systems and large nerve cells.) Kandel, who would earn a Nobel Prize for his work, found that if you touch a slug’s gill, even very lightly, the gill will immediately and reflexively recoil. But if you touch the gill repeatedly, without causing any harm to the animal, the recoiling instinct will steadily diminish. The slug will become habituated to the touch and learn to ignore it. By monitoring slugs’ nervous systems, Kandel discovered that “this learned change in behavior was paralleled by a progressive weakening of the synaptic connections” between the sensory neurons that “feel” the touch and the motor neurons that tell the gill to retract. In a slug’s ordinary state, about ninety percent of the sensory neurons in its gill have connections to motor neurons. But after its gill is touched just forty times, only ten percent of the sensory cells maintain links to the motor cells. The research “showed dramatically,” Kandel wrote, that “synapses can undergo large and enduring changes in strength after only a relatively small amount of training.”
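
The numbers in that passage invite a toy model. The sketch below is purely illustrative, not Kandel’s actual model: it assumes each sensory-to-motor synapse starts at a random strength, every touch weakens all synapses by the same multiplicative factor, and a connection counts as “maintained” only while its strength stays above a threshold. The decay factor is tuned so that roughly ninety percent of connections are live at the start and about ten percent after forty touches, matching the figures above.

```python
import random

# Toy habituation model (illustrative only, not Kandel's actual model).
# Each sensory-to-motor synapse starts with a random strength in (0, 1);
# every touch weakens all synapses by a fixed multiplicative factor; a
# connection is "maintained" while its strength exceeds THRESHOLD.

N_SYNAPSES = 10_000
THRESHOLD = 0.10                        # ~90% of uniform(0,1) starts above this
DECAY_PER_TOUCH = (1 / 9) ** (1 / 40)   # ~0.947, tuned so ~10% survive 40 touches

def fraction_connected(strengths):
    return sum(s > THRESHOLD for s in strengths) / len(strengths)

random.seed(42)
strengths = [random.random() for _ in range(N_SYNAPSES)]

print(f"before any touches: {fraction_connected(strengths):.0%} connected")
for _ in range(40):
    strengths = [s * DECAY_PER_TOUCH for s in strengths]
print(f"after 40 touches:   {fraction_connected(strengths):.0%} connected")
```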

The plasticity of our synapses brings into harmony two philosophies of mind that have for centuries stood in conflict: empiricism and rationalism. In the view of empiricists, like John Locke, the mind we are born with is a blank slate, a “tabula rasa.” What we know comes entirely through our experiences, through what we learn as we live. To put it into more familiar terms, we are products of nurture, not nature. In the view of rationalists, like Immanuel Kant, we are born with built-in mental “templates” that determine how we perceive and make sense of the world. All our experiences are filtered through these inborn templates. Nature predominates.

The Aplysia experiments revealed, as Kandel reports, “that both views had merit—in fact they complemented each other.” Our genes “specify” many of “the connections among neurons—that is, which neurons form synaptic connections with which other neurons and when.” Those genetically determined connections form Kant’s innate templates, the basic architecture of the brain. But our experiences regulate the strength, or “long-term effectiveness,” of the connections, allowing, as Locke had argued, the ongoing reshaping of the mind and “the expression of new patterns of behavior.” The opposing philosophies of the empiricist and the rationalist find their common ground in the synapse. The New York University neuroscientist Joseph LeDoux explains in his book Synaptic Self that nature and nurture “actually speak the same language. They both ultimately achieve their mental and behavioral effects by shaping the synaptic organization of the brain.”

The brain is not the machine we once thought it to be. Though different regions are associated with different mental functions, the cellular components do not form permanent structures or play rigid roles. They’re flexible. They change with experience, circumstance, and need. Some of the most extensive and remarkable changes take place in response to damage to the nervous system. Experiments show, for instance, that if a person is struck blind, the part of the brain that had been dedicated to processing visual stimuli—the visual cortex—doesn’t just go dark. It is quickly taken over by circuits used for audio processing. And if the person learns to read Braille, the visual cortex will be redeployed for processing information delivered through the sense of touch. “Neurons seem to ‘want’ to receive input,” explains Nancy Kanwisher of MIT’s McGovern Institute for Brain Research: “When their visual input disappears, they start responding to the next best thing.” Thanks to the ready adaptability of neurons, the senses of hearing and touch can grow sharper to mitigate the effects of the loss of sight. Similar alterations happen in the brains of people who go deaf: their other senses strengthen to help make up for the loss of hearing. The area in the brain that processes peripheral vision, for example, grows larger, enabling them to see what they once would have heard.

The Downside of GPS

From Nicholas Carr’s The Glass Cage: How Our Computers Are Changing Us:

A GPS device, by allowing us to get from point A to point B with the least possible effort and nuisance, can make our lives easier, perhaps imbuing us, as David Brooks suggests, with a numb sort of bliss. But what it steals from us, when we turn to it too often, is the joy and satisfaction of apprehending the world around us—and of making that world a part of us. Tim Ingold, an anthropologist at the University of Aberdeen in Scotland, draws a distinction between two very different modes of travel: wayfaring and transport. Wayfaring, he explains, is “our most fundamental way of being in the world.” Immersed in the landscape, attuned to its textures and features, the wayfarer enjoys “an experience of movement in which action and perception are intimately coupled.” Wayfaring becomes “an ongoing process of growth and development, or self-renewal.” Transport, on the other hand, is “essentially destination-oriented.” It’s not so much a process of discovery “along a way of life” as a mere “carrying across, from location to location, of people and goods in such a way as to leave their basic natures unaffected.” In transport, the traveler doesn’t actually move in any meaningful way. “Rather, he is moved, becoming a passenger in his own body.”

Wayfaring is messier and less efficient than transport, which is why it has become a target for automation. “If you have a mobile phone with Google Maps,” says Michael Jones, an executive in Google’s mapping division, “you can go anywhere on the planet and have confidence that we can give you directions to get to where you want to go safely and easily.” As a result, he declares, “No human ever has to feel lost again.” That certainly sounds appealing, as if some basic problem in our existence had been solved forever. And it fits the Silicon Valley obsession with using software to rid people’s lives of “friction.” But the more you think about it, the more you realize that to never confront the possibility of getting lost is to live in a state of perpetual dislocation. If you never have to worry about not knowing where you are, then you never have to know where you are. It is to live in a state of dependency, a ward of your phone and its apps.

Problems produce friction in our lives, but friction can act as a catalyst, pushing us to a fuller awareness and deeper understanding of our situation.

The Great Unbundling

From The Big Switch: Rewiring the World, from Edison to Google, the inherent economic tragedy of moving newspapers from print to online (get ready to be okay with lower reporting standards):

“For virtually every newspaper,” says one industry analyst, “their only growth area is online.” Statistics underscore the point. Visits to newspaper Web sites shot up 22 percent in 2006 alone.

But the nature of a newspaper, both as a medium for information and as a business, changes when it loses its physical form and shifts to the Internet. It gets read in a different way, and it makes money in a different way. A print newspaper provides an array of content—local stories, national and international reports, news analyses, editorials and opinion columns, photographs, sports scores, stock tables, TV listings, cartoons, and a variety of classified and display advertising—all bundled together into a single product. People subscribe to the bundle, or buy it at a newsstand, and advertisers pay to catch readers’ eyes as they thumb through the pages. The publisher’s goal is to make the entire package as attractive as possible to a broad set of readers and advertisers. The newspaper as a whole is what matters, and as a product it’s worth more than the sum of its parts.

When a newspaper moves online, the bundle falls apart. Readers don’t flip through a mix of stories, advertisements, and other bits of content. They go directly to a particular story that interests them, often ignoring everything else. In many cases, they bypass the newspaper’s “front page” altogether, using search engines, feed readers, or headline aggregators like Google News, Digg, and Daylife to leap directly to an individual story. They may not even be aware of which newspaper’s site they’ve arrived at. For the publisher, the newspaper as a whole becomes far less important. What matters are the parts. Each story becomes a separate product standing naked in the marketplace. It lives or dies on its own economic merits.

Because few newspapers, other than specialized ones like the Wall Street Journal, are able to charge anything for their online editions, the success of a story as a product is judged by the advertising revenue it generates. Advertisers no longer have to pay to appear in a bundle. Using sophisticated ad placement services like Google AdWords or Yahoo Search Marketing, they can target their ads to the subject matter of an individual story or even to the particular readers it attracts, and they only pay the publisher a fee when a reader views an ad or, as is increasingly the case, clicks on it. Each ad, moreover, carries a different price, depending on how valuable a viewing or a clickthrough is to the advertiser. A pharmaceutical company will pay a lot for every click on an ad for a new drug, for instance, because every new customer it attracts will generate a lot of sales. Since all page views and ad clickthroughs are meticulously tracked, the publisher knows precisely how many times each ad is seen, how many times it is clicked, and the revenue that each view or clickthrough produces.

The most successful articles, in economic terms, are the ones that not only draw a lot of readers but deal with subjects that attract high-priced ads. And the most successful of all are those that attract a lot of readers who are inclined to click on the high-priced ads. An article about new treatments for depression would, for instance, tend to be especially lucrative, since it would attract expensive drug ads and draw a large number of readers who are interested in new depression treatments and hence likely to click on ads for psychiatric drugs. Articles about saving for retirement or buying a new car or putting an addition onto a home would also tend to throw off a large profit, for similar reasons. On the other hand, a long investigative article on government corruption or the resurgence of malaria in Africa would be much less likely to produce substantial ad revenues. Even if it attracts a lot of readers, a long shot in itself, it doesn’t cover a subject that advertisers want to be associated with or that would produce a lot of valuable clickthroughs. In general, articles on serious and complex subjects, from politics to wars to international affairs, will fail to generate attractive ad revenues.
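
The economics Carr describes boil down to simple arithmetic: an article’s expected ad revenue is roughly page views × click-through rate × price per click. The sketch below uses entirely made-up numbers (the titles, traffic figures, and ad prices are assumptions for illustration, not real market data) to show why the depression article out-earns the investigative piece even at comparable traffic.

```python
# Illustrative per-article ad economics with made-up numbers.
# Expected revenue ~= page views * click-through rate * cost per click.

articles = [
    # (title, page views, click-through rate, cost per click in dollars)
    ("New treatments for depression",      80_000, 0.020, 4.00),
    ("HDTV buying guide",                 120_000, 0.030, 1.50),
    ("Malaria's resurgence in Africa",     80_000, 0.003, 0.40),
]

for title, views, ctr, cpc in articles:
    revenue = views * ctr * cpc
    print(f"{title:34s} ${revenue:8,.2f}")
```

Under these assumed numbers, the depression article yields $6,400, the electronics review $5,400, and the malaria investigation $96, which is the unbundled logic in miniature.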

Such hard journalism also tends to be expensive to produce. A publisher has to assign talented journalists to a long-term reporting effort, which may or may not end in a story, and has to pay their salaries and benefits during that time. The publisher may also have to shell out for a lot of expensive flights and hotel stays, or even set up an overseas bureau. When bundled into a print edition, hard journalism can add considerably to the overall value of a newspaper. Not least, it can raise the prestige of the paper, making it more attractive to subscribers and advertisers. Online, however, most hard journalism becomes difficult to justify economically. Getting a freelance writer to dash off a review of high-definition television sets—or, better yet, getting readers to contribute their own reviews for free—would produce much more attractive returns.

In a 2005 interview, a reporter for the Rocky Mountain News asked Craig Newmark what he’d do if he ran a newspaper that was losing its classified ads to sites like Craigslist. “I’d be moving to the Web faster,” he replied, and “hiring more investigative journalists.” It’s a happy thought, but it ignores the economics of online publishing. As soon as a newspaper is unbundled, an intricate and, until now, largely invisible system of subsidization quickly unravels. Classified ads, for instance, can no longer help to underwrite the salaries of investigative journalists or overseas correspondents. Each piece of content has to compete separately, consuming costs and generating revenues in isolation. So if you’re a beleaguered publisher, losing readers and money and facing Wall Street’s wrath, what are you going to do as you shift your content online? Hire more investigative journalists? Or publish more articles about consumer electronics? It seems clear that as newspapers adapt to the economics of the Web, they are far more likely to continue to fire reporters than hire new ones.

Digital Sharecropping

The description of many online companies’ business model from The Big Switch: Rewiring the World, from Edison to Google is less than flattering:

Look more closely at YouTube. It doesn’t pay a cent for the hundreds of thousands of videos it broadcasts. All the production costs are shouldered by the users of the service. They’re the directors, producers, writers, and actors, and by uploading their work to the YouTube site they’re in effect donating their labor to the company. Such contributions of “user-generated content,” as it’s called, have become commonplace on the Internet, and they’re providing the raw material for many Web businesses. Millions of people freely share their words and ideas through blogs and blog comments, which are often collected and syndicated by corporations. The contributors to open-source software projects, too, donate their labor, even though the products of their work are often commercialized by for-profit companies like IBM, Red Hat, and Oracle. The popular online encyclopedia Wikipedia is written and edited by volunteers. Yelp, a group of city sites, relies on reviews of restaurants, shops, and other local attractions contributed by members. The news agency Reuters syndicates photos and videos submitted by amateurs, some of whom are paid a small fee but most of whom get nothing. Social networking sites like MySpace and Facebook, and dating sites like PlentyOfFish, are essentially agglomerations of the creative, unpaid contributions of their members. In a twist on the old agricultural practice of sharecropping, the site owners provide the digital real estate and tools, let the members do all the work, and then harvest the economic rewards.

The 19th Century Ice Trade

From Nicholas Carr’s superbly engaging The Big Switch: Rewiring the World, from Edison to Google, which traces electricity’s path to becoming a utility and its parallels in the emergent cloud computing industry, an example of the transient nature of business and technology:

But while electrification propelled some industries to rapid growth, it wiped out others entirely. During the 1800s, American companies had turned the distribution of ice into a thriving world-wide business. Huge sheets were sawn from lakes and rivers in northern states during the winter and stored in insulated icehouses. Packed in hay and tree bark, the ice was shipped in railcars or the holds of schooners to customers as far away as India and Singapore, who used it to chill drinks, preserve food, and make ice cream. At the trade’s peak, around 1880, America’s many “frozen water companies” were harvesting some 10 million tons of ice a year and earning millions in profits. Along Maine’s Kennebec River alone, thirty-six companies operated fifty-three icehouses with a total capacity of a million tons. But over the next few decades, cheap electricity devastated the business, first by making the artificial production of ice more economical and then by spurring homeowners to replace their iceboxes with electric refrigerators. As Gavin Weightman writes in The Frozen-Water Trade, the “huge industry simply melted away.”

Overlooked Atrocities

Today I was thumbing through Matthew White’s crazy historical retrospective Atrocities: The 100 Deadliest Episodes in Human History and thought the introduction had some little-discussed but interesting points about history and warfare:

If we study history to avoid making the mistakes of the past, it helps to know what those mistakes were, and that includes all of the mistakes, not just the ones that support certain pet ideas. It’s easy to solve the problem of human violence if we focus only on the seven atrocities that prove our point, but a list of the hundred worst presents more of a challenge. A person’s grand unified theory of human violence should explain most of the multicides on this list or else he might need to reconsider. In fact, the next time someone declares that he knows the cause of or solution to human violence, you can probably open this book at random and immediately find an event that is not explained by his theory.

Despite my skepticism about any common thread running through all one hundred atrocities, I still found some interesting tendencies. Let me share with you the three biggest lessons I learned while working on this list:

1.) Chaos is deadlier than tyranny. More of these multicides result from the breakdown of authority than from the exercise of authority. In comparison to a handful of dictators such as Idi Amin and Saddam Hussein who exercised their absolute power to kill hundreds of thousands, I found more and deadlier upheavals like the Time of Troubles, the Chinese Civil War, and the Mexican Revolution where no one exercised enough control to stop the death of millions.

2.) The world is very disorganized. Power structures tend to be informal and temporary, and many of the big names in this book (for example, Stalin, Cromwell, Tamerlane, Caesar) exercised supreme authority without holding a regular job in the government. Most wars don’t start neatly with declarations and mobilizations and end with surrenders and treaties. They tend to build up from escalating incidents of violence, fizzle out when everyone is too exhausted to continue, and are followed by unpredictable aftershocks. Soldiers and nations happily change sides in the middle of wars, sometimes in the middle of battles. Most nations are not as neatly delineated as you might expect. In fact, some nations at war (I call them quantum states) don’t quite exist and don’t quite not exist; instead they hover in limbo until somebody wins the war and decides their fate, which is then retroactively applied to earlier versions of the nation.

3.) War kills more civilians than soldiers. In fact, the army is usually the safest place to be during a war. Soldiers are protected by thousands of armed men, and they get the first choice of food and medical care. Meanwhile, even if civilians are not systematically massacred, they are usually robbed, evicted, or left to starve; however, their stories are usually left untold. Most military histories skim lightly over the suffering of the ordinary, unarmed civilians caught in the middle, even though theirs is the most common experience of war.

The Wisdom of Edward Bernays

Little aphorisms from Edward Bernays, who could have invented stand-up comedy as well as public relations. This is from Larry Tye’s great biography of him, The Father of Spin: Edward L. Bernays and The Birth of Public Relations:

Not all of Bernays’s writing was that high-minded, although much of it was equally prophetic. Often he zeroed in on practical issues, sharing lessons he’d learned during his many years on the job. He broke his advice down into easy-to-swallow maxims, many of which have become part of the American lexicon.

There was this on politics: In most elections you can count on 40 percent of voters siding with you and 40 against; what counts is the 20 percent in the middle. Winning over the undecided 20 percent is what public relations is all about. And, to Tip O’Neill’s contention that all politics is local, Bernays added that all PR is too.

He said any PR strategy must address the four M’s: mind power, manpower, mechanics, and money.

He had a theory on stubbornness: “It is sometimes possible to change the attitudes of millions but impossible to change the attitude of one man.”

On how to justify high fees: “On the basis of a Latin phrase, quantum meruit [as much as one deserves], the man or the corporation is much more likely to do what you suggest if you charge a high fee than if you charge very little.”

On a person’s age:  There are, he’d read and believed, five of them—chronological, mental, societal, physiological, and emotional. And they don’t always match. When he was ninety-two, for instance, Bernays insisted that his physiological functions worked as well as those of a sixty-three-year-old, and he said he had a report from his doctor to prove it.

On why thank-you notes still are a good idea: The fact that most people no longer write them is all the more reason to write them. Doing so makes you special and makes the recipient remember you.

On the effectiveness of telegrams: “Everyone over thirty remembers the telegram was a message of some big or important news, and a great many of us are still under its tyranny.”

On why he read Playboy: “For the same reason I read National Geographic, I like to see places that I will never visit.”

On the best way to win someone over: It’s easier to gain acceptance for your viewpoint by quoting respected authorities, outlining the reasons for your outlook, and referring to tradition than by telling someone he’s wrong.

The best way to land a job: Analyze the field, narrow your choices to one or two firms, draft a blueprint for increasing their business, present the plan to a top executive, and write enough letters to make that person remember you, but not enough to make him want to forget you. Ask for the salary you think you’re worth, and remember, you’re not just looking for any job, you’re looking for a career.

The best press releases:  Each sentence should have no more than sixteen words and just one idea (a rule simple enough to check mechanically; see the sketch at the end of this section).

The best place to find things: the public library.

The best defense against propaganda:  more propaganda.

On the finesse needed to practice PR: It’s like shooting billiards, where you bounce the ball off cushions, as opposed to pool, where you aim directly for the pockets.
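
The press-release maxim above is mechanical enough to check automatically. Here is a minimal sketch, assuming a crude sentence split on terminal punctuation (abbreviations like “Dr.” will trip it up); the sample release text is invented for illustration:

```python
import re

def long_sentences(text, max_words=16):
    """Flag sentences that break Bernays's sixteen-word limit."""
    # Crude split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [(len(s.split()), s) for s in sentences if len(s.split()) > max_words]

release = ("Acme Corp today announced a new widget. The widget, which took "
           "three years to develop and incorporates seventeen patented "
           "innovations across four product categories, ships next month.")

for count, sentence in long_sentences(release):
    print(f"{count} words: {sentence}")
```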