
Epilogue


BROAD STREET REVISITED

SOMEWHERE IN THE WORLD, RIGHT ABOUT NOW, A VILLAGER is moving her family to a city somewhere, or an urban dweller is giving birth, or a farmer is dying—and with that local, isolated act, the global scales will tip decisively. We will enter a new era: a planet whose human population is more than 50 percent urban. Some experts believe we are on a path that will take us all the way to 80 percent before we reach a planetary stabilization point. When John Snow and Henry Whitehead roamed the urban corridors of London in 1854, less than 10 percent of the planet’s population lived in cities, up from 3 percent at the start of the century. Less than two centuries later, the urbanites have become an absolute majority. No other development during that period—world wars, the spread of democracy, the use of electricity, the Internet—has had as transformative and widespread an impact on the lived experience of being human. The history books tend to orient themselves around nationalist story lines: overthrowing the king, electing the presidents, fighting the battles. But the recent history of Homo sapiens as a species should begin and end with one narrative line: We became city dwellers.

If you time-traveled back to the London of September 1854 and described to some typical Londoners the demographic future that awaited their descendants, no doubt many would react with horror at the prospect of a “city planet,” as Stewart Brand likes to call it. Nineteenth-century London was an overgrown, cancerous monster, doomed to implode sooner or later. Two million people crowded into a dense urban core was a kind of collective madness. Why would anyone want to do the same with twenty million?

To date, those fears have proved unfounded. Modern urbanization has thus far offered up more solutions than problems. Cities continue to be tremendous engines of wealth, innovation, and creativity, but in the 150 years that have passed since Snow and Whitehead watched the death carts make their rounds through Soho, they have become something else as well: engines of health. Two-thirds of women living in rural areas receive some kind of prenatal care, but in cities, the number is more than ninety percent. Nearly eighty percent of births in cities take place in hospitals or other medical institutions, as opposed to thirty-five percent in the countryside. For those reasons, as you move from rural areas to urban ones, infant mortality rates tend to drop. The vast majority of the world’s most advanced hospitals reside in metropolitan centers. According to the coordinator of the United Nations Global Report on Human Settlements, “Urban areas offer a higher life expectancy and lower absolute poverty and can provide essential services more cheaply and on a larger scale than rural areas.” For most of the world’s nations, living in a city now extends your life expectancy instead of shortening it. Thanks to the government interventions of the seventies and eighties, air quality in many cities is as good as it has been since the dawn of industrialization.

Cities are a force for environmental health as well. This may be the most surprising new credo of green politics, which has in the past largely associated itself with a back-to-nature ethos that was explicitly antiurban in its values. Dense urban environments may do away with nature altogether—there are many vibrantly healthy neighborhoods in Paris or Manhattan that lack even a single tree—but they also perform the crucial service of reducing mankind’s environmental footprint. Compare the sewage system of a midsized city like Portland, Oregon, with the kind of waste management resources that would be required to support the same population dispersed across the countryside. Portland’s 500,000 inhabitants require two sewage treatment plants, connected by 2,000 miles of pipes. A rural population would require more than 100,000 septic tanks, and 7,000 miles of pipe. The rural waste system would be several times more expensive than the urban version. As the environmental scholar Toby Hemenway argues: “Virtually any service system—electricity, fuel, food—follows the same brutal mathematics of scale. A dispersed population requires more resources to serve it—and to connect it together—than a concentrated one.” From an overall ecosystems perspective, if you’re going to have 10 million human beings trying to share an environment with other life-forms, it’s much better to crowd all 10 million of them into a hundred square miles than it is to spread them out, edge-city style, over a space ten or a hundred times that size. If we’re going to survive as a planet with more than 6 billion people without destroying the complex balance of our natural ecosystems, the best way to do it is to crowd as many of those humans as possible into metropolitan spaces and return the rest of the planet to Mother Nature.
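The arithmetic behind that brutal mathematics of scale is easy to check. Here is a minimal sketch in Python, using only the Portland figures quoted above; the per-person conversion is my own illustration, not a statistic from the text.

```python
# Urban vs. rural waste infrastructure, using the Portland, Oregon figures
# cited in the paragraph above. Per-person pipe lengths are derived here
# purely for illustration.

POPULATION = 500_000

URBAN_PIPE_MILES = 2_000   # two treatment plants, 2,000 miles of pipe
RURAL_PIPE_MILES = 7_000   # 100,000+ septic tanks, 7,000 miles of pipe

FEET_PER_MILE = 5_280

urban_feet_per_person = URBAN_PIPE_MILES * FEET_PER_MILE / POPULATION
rural_feet_per_person = RURAL_PIPE_MILES * FEET_PER_MILE / POPULATION

print(f"Urban: {urban_feet_per_person:.0f} feet of pipe per person")   # ~21
print(f"Rural: {rural_feet_per_person:.0f} feet of pipe per person")   # ~74
print(f"Dispersal penalty: {RURAL_PIPE_MILES / URBAN_PIPE_MILES:.1f}x more pipe")
```

The same half million people need three and a half times as much pipe, and fifty thousand times as many treatment sites, once they spread out across the countryside.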

By far, the most significant environmental cause that cities support is simple population control. People have more babies in the country, for a number of reasons. Economically, having more children makes sense in agrarian environments: more hands to help in the fields and around the house, without the space constraints of urban living. Rural life—particularly in the Third World—doesn’t offer the same ready access to birth control and family-planning clinics. Cities, on the other hand, trend in the opposite direction, offering increased economic opportunity for women, expensive real estate, availability of birth control. Those incentives have turned out to be so powerful that they have reversed one of the dominant demographic trends of the last few centuries of life on earth: the population explosion that has been the subject of countless doomsday scenarios, from Malthus to Paul Ehrlich’s influential 1968 manifesto The Population Bomb. In countries that organized into modern metropolitan cities long ago, birthrates have dropped below the “replacement level” of 2.1 children per woman. Italy, Russia, Spain, Japan—all these countries are seeing birthrates around 1.5 per woman, which means that their populations will begin shrinking in the coming decades. The same trend is occurring in the Third World: birthrates were as high as 6 children per woman in the 1970s; now they are 2.9. As urbanization continues worldwide, current estimates project that the earth’s human population will peak at around 8 billion in 2050. After that, it’s a population implosion that we’ll have to worry about.
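What sub-replacement fertility implies over time can be sketched in a few lines. The toy projection below simply scales each generation’s birth cohort by the ratio of the fertility rate to the 2.1 replacement level; it ignores age structure and mortality, and is meant only to show the direction of the trend, not to reproduce any demographer’s model.

```python
# Toy generational projection: each ~30-year generation scales the birth
# cohort by (total fertility rate / replacement level). A deliberate
# simplification; real demographic models track age structure as well.

REPLACEMENT = 2.1

def cohort_index(fertility: float, generations: int, start: float = 100.0) -> float:
    """Relative size of the birth cohort after a number of generations."""
    return start * (fertility / REPLACEMENT) ** generations

for tfr in (2.9, 2.1, 1.5):
    path = [round(cohort_index(tfr, g)) for g in range(4)]
    print(f"TFR {tfr}: {path}")

# TFR 2.9: [100, 138, 191, 263]  -> still growing
# TFR 2.1: [100, 100, 100, 100]  -> stable
# TFR 1.5: [100, 71, 51, 36]     -> each cohort ~71% of the last
```

At 1.5 children per woman, each generation is roughly 71 percent the size of the one before it, which is why the shrinkage, once it begins, compounds quickly.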

THIS IS THE WORLD THAT SNOW AND WHITEHEAD HELPED make possible: a planet of cities. We no longer doubt that metropolitan centers with tens of millions of people can be a sustainable proposition, the way Victorian Londoners worried about the long-term viability of their sprawling, cancerous metropolis. In fact, the runaway growth of metropolitan centers may prove to be essential in establishing a sustainable future for humans on the planet. That reversal of fortune has much to do with the shifting relationship between microbe and metropolis that the Broad Street epidemic helped set in motion. “Cities were once the most helpless and devastated victims of disease, but they became great disease conquerors,” Jane Jacobs wrote, in one of many classic passages from The Death and Life of Great American Cities.

All the apparatus of surgery, hygiene, microbiology, chemistry, telecommunications, public health measures, teaching and research hospitals, ambulances and the like, which people not only in cities but also outside them depend upon for the unending war against premature mortality, are fundamentally products of big cities and would be inconceivable without big cities. The surplus wealth, the productivity, the close-grained juxtaposition of talents that permit society to support advances such as these are themselves products of our organization into cities, and especially into big and dense cities.

Perhaps the simplest way to explain why Broad Street was such a watershed event is to borrow Jacobs’ phrase and say it this way: Broad Street marked the first time in history when a reasonable person might have surveyed the state of urban life and come to the conclusion that cities would someday become great conquerors of disease. Until then, it looked like a losing battle all the way.

Ultimately, the transformation that Broad Street helped usher in revolved around density, capitalizing on the advantages of dense urban living while minimizing the dangers. Crowding two hundred people per acre, building cities with populations in the millions sharing the same water supply, struggling to find a way to get rid of all that human and animal waste—this was a lifestyle decision that seemed fundamentally at odds with both personal and environmental health. But the nations that first organized themselves around metropolitan settlements—as turbulent as those transformations were—are now the most affluent places on the planet, with life expectancies that are nearly double those of predominantly rural nations. A hundred and fifty years after Broad Street, we see density as a positive force: an engine of wealth creation, population reduction, environmental sustainability. We are now, as a species, dependent on dense urban living as a survival strategy.

But the forecasts that predict a city-planet where eighty percent of us live in metropolitan areas are just that: forecasts. It is possible that this epic transformation could be undone in the coming decades or centuries. The rise of sustainable metropolitan environments was not a historical inevitability: it was the result of specific technological, institutional, economic, and scientific developments, many of which played a role in the extended story of Broad Street. It’s entirely possible that new forces could emerge—or old foes return—that would imperil this city-planet of ours. But what might they be?

It is unlikely that these antiurban forces will come in the form of some new incentive that lures people back to the countryside, like the fanciful dream of telecommuting prophesied by the futurists a decade ago, when the Internet was first entering mainstream culture.

There’s a reason why the world’s wealthiest people—people with near-infinite options vis-à-vis the choice of where to make their home—consistently choose to live in the densest areas on the planet. Ultimately, they live in these spaces for the same reason that the squatter classes of São Paulo do: because cities are where the action is. Cities are centers of opportunity, tolerance, wealth creation, social networking, health, population control, and creativity. No doubt, the Internet and its descendants will continue exporting some of these values to rural communities in the decades to come. But of course, the Internet will continue enhancing the experience of urban life as well. The sidewalk flaneurs get as much out of the Web as the ranchers do, if not more.

The two great looming threats of our new century—global warming and our finite supply of fossil fuels—may well have massively disruptive effects on existing cities in the coming decades. But they are not likely to disrupt the macro pattern of urbanization in the long run, unless you believe the environmental crisis is likely to end in some global cataclysm that sends us back to agrarian or hunter-gatherer living. Most of the world’s urban centers lie within a few dozen meters of sea level, and if the ice caps do indeed melt at the rate they are currently forecast to, many of our metropolitan descendants will be relocating by the midsection of the twenty-first century. But there’s no reason to think they’ll be relocating to rural or suburban areas. Most likely, they’ll simply retreat to higher ground, and new dense metropolitan areas will form around them. The wealthiest cities of the world will follow Venice’s lead and simply try to engineer their way around the problem. The poorest cities will follow New Orleans’ lead—at least so far—and just move to other nearby cities. Either way, the population stays urban.

Neither does the end of oil foretell the end of cities. The reason why cities have taken on the “green” stamp of approval in recent years is not that they are literally green with foliage. (Air quality has improved markedly, and parks are better funded than ever, but they remain concrete jungles for the most part.) We now see cities as environmentally responsible communities because their energy footprints are so much smaller than other forms of human settlement. In a sense, the environmentalists are learning something that the capitalists learned a few centuries ago: There are efficiencies to urban living that outweigh all the annoyances. City dwellers spend less money heating and cooling their homes; they have fewer children; they recycle their waste more economically; and most important, they consume less energy moving around day to day, thanks to the shorter commutes and mass transit that density enables. “By the most significant measures, New York is the greenest community in the United States, and one of the greenest cities in the world,” The New Yorker’s David Owen writes. “The most devastating damage humans have done to the environment has arisen from the heedless burning of fossil fuels, a category in which New Yorkers are practically prehistoric. The average Manhattanite consumes gasoline at a rate that the country as a whole hasn’t matched since the mid–nineteen-twenties, when the most widely owned car in the United States was the Ford Model T. Eighty-two per cent of Manhattan residents travel to work by public transit, by bicycle, or on foot. That’s ten times the rate for Americans in general, and eight times the rate for residents of Los Angeles County. New York City is more populous than all but eleven states; if it were granted statehood, it would rank fifty-first in per-capita energy use.” In other words, a serious crisis of non-renewable energy resources is likely to accelerate the urbanization trend, not derail it.

None of this is intended to belittle the long-term problems caused by global warming and our dependence on fossil fuels. Both trends are likely to trigger disastrous consequences if left unchecked, and the sooner we get serious about solutions to both problems, the better. But in both cases, one of the primary solutions may well prove to be to encourage people to move to metropolitan areas. A warmer planet is still a city-planet, for better or worse.

Yet that doesn’t mean continued urbanization is inevitable. It just means that the potential threats will come from somewhere else. Most likely, if some new force derails our mass migration to the cities, it will take the form of a threat that specifically exploits density to harm us, just as Vibrio cholerae did a hundred and fifty years ago.

IN THE IMMEDIATE AFTERMATH OF THE 9/11 ATTACKS, MANY commentators observed that there was a certain dark irony in the technological method of the terrorists: they had used what were effectively Stone Age tools—knives—to gain control of advanced American machines—four Boeing jets, two 757s and two 767s—and then employed that technology as a weapon against its creators. But while the planes were clearly instrumental to the attack, the advanced technology that caused the greatest loss of life lay elsewhere: the terrorists also exploited the technical knowledge that enabled 25,000 people to occupy a building 110 stories high. (Consider that a dead-on collision with the five-story Pentagon produced roughly 125 fatalities on the ground.) The heat of jet fuel and the impact of a 400-mph collision were lethal weapons that morning, but without the terrifying potential energy released by those collapsing floors, the body count would have been lower by an order of magnitude.

The 9/11 attackers were, ultimately, exploiting the tremendous advance in the technologies of density that we have enjoyed since the birth of skyscrapers in the late nineteenth century. In 1854, Soho was London’s most densely populated neighborhood, with four hundred people per acre. The Twin Towers sat on approximately one acre of real estate, and yet they harbored a population of 50,000 on a workday. That level of density offers a long list of potential benefits, but it is also an open invitation for mass killing—and, what’s worse, mass killing that doesn’t require an army to carry it out. You just need enough ammunition to destroy two buildings, and right there you’ve got a body count that rivals the first ten years of American losses in the Vietnam War.

Density is the crucial ingredient often left out in discussions of asymmetric warfare. It is not merely that technology has given smaller and smaller organizations access to increasingly deadly weapons—though that is surely half the story—but that the patterns of human settlement over the past two hundred years have made those weapons far more deadly than they would be if one could somehow time-travel back to 1800 and set them off. Even if you could have hijacked an airplane back in John Snow’s day, you’d have been hard pressed to find an urban area crowded enough to kill a hundred civilians on the ground. Today, the planet is covered with thousands of cities that offer far more enticing targets. If terrorist-sponsored asymmetric warfare were the only threat facing human beings, we would be far better off as a species covering the planet with suburban sprawl and emptying the cities altogether. But we don’t have that option. So we’re either going to have to acclimate to a certain predictable presence of terrorist threats—the way the Victorian Londoners acclimated to the terrible plagues that would sweep through their city every few years—or we’re going to have to follow John Snow’s lead and figure out a reliable way to eliminate the threat.

Certain threats, however, may not be tolerable. One of the most menacing that the twenty-first-century city faces is a holdover from the Cold War: nuclear weapons. The doomsday scenarios are familiar enough: A megaton hydrogen bomb—too big to fit in a suitcase, but much smaller than the twenty-five-megaton giants of the Cold War arsenals—detonated at the site of the Broad Street pump vaporizes the entire area from the western edge of Hyde Park to Waterloo Bridge. A weekday attack would effectively wipe out the entire British government, reducing the Houses of Parliament and 10 Downing Street to radioactive ash. Most of London’s landmarks—Buckingham Palace, Big Ben, Westminster Abbey—would simply cease to exist. A wider zone extending out to Chelsea and Kensington and to the eastern edge of the old City would suffer 98 percent loss of life. Move a few miles farther out—up to Camden Town, out to Notting Hill or the East End—and half the population dies, with most buildings damaged beyond recognition. Anyone who happens to see the blast directly is blinded for life; most survivors suffer hideous radiation sickness that makes them envy the dead. As you move out from Ground Zero, the fallout leaves a vast wake of elevated cancer occurrences and genetic defects.

Then there are the secondary effects, the collateral damage. The entire government would have to be replaced overnight; the damage to the financial centers in the city would be catastrophic for the world economy. The detonation site itself would be uninhabitable for decades. Every resident of a major world city—every New Yorker and Parisian, every person in every street in Tokyo and Hong Kong—would find his habitat transformed: from safety in numbers to mass terror. The great cities of the world would start to look like giant bull’s-eyes: millions of potential casualties conveniently stacked up in easily demolished high-rises. One such attack would probably not impede the metropolitan migration—after all, Hiroshima and Nagasaki didn’t stop Tokyo from becoming the world’s largest city. But several detonations might well tip the balance. Turn our metropolitan centers into genuine nuclear targets and you risk a whole other kind of “nuclear winter”: a season of mass exodus unrivaled in human history.

It would be bad news, in other words. And this bad news is likely to arrive courtesy of a walk-on part on the world-historical stage, somebody driving a rigged SUV into Soho and pulling the trigger. There are 20,000 nuclear weapons in the world capable of inflicting this level of damage. That we know about. On a planet of more than 6 billion people, there have to be thousands and thousands of lost souls ready and willing to detonate one of those weapons in a crowded urban center. How long before those two sets intersect?

That driver with the rigged SUV isn’t going to be deterred by the conventional logic of détente-era nuclear politics. Mutually assured destruction isn’t much of a deterrent to him. Mutually assured destruction, in fact, sounds like a pretty good outcome. Game theory has always had trouble accounting for players with no rational self-interest, and the theories of nuclear deterrence are no exception. And once the bomb goes off, there’s no second line of defense—no vaccines or quarantines to block off the worst-case scenario. There will be maps, but they’ll be maps of incineration and fallout and mass graves. They won’t help us understand the threat the way Snow’s map helped us understand cholera. They will merely document the extent of the tragedy.

THE PERILS OF DENSITY GROW MORE EXPLOSIVE—OR MORE infectious, as the case may be—as the wages of fear are increasingly doled out in twenty-first-century currency: chemical or biological weapons, a freelancer virus or bacterium terrorizing the planet for no particular cause other than its fundamental drive to reproduce. When people still worry about the long-term sustainability of dense human settlement, it is more often than not these self-replicating weapons that conjure up the doomsday scenarios. Tightly bound networks of humans and microbes make a great case study in the power of exponential growth. Infect ten people with the Ebola virus in Montana and you might end up killing a hundred others, depending on when the initial victims were taken to the high-density environment of a hospital. But infect ten people with Ebola in downtown Manhattan and you could kill a million, or more. Traditional bombs obviously grow more deadly as the populations they target increase in size, but the upward slope in that case is linear. With epidemics, the deadliness grows exponentially.
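The contrast between linear and exponential deadliness can be made concrete in a few lines of code. The sketch below uses invented parameters (the R0 values, seed counts, and fatality rate are illustrative assumptions, not epidemiological estimates) to show why density changes the shape of the curve rather than merely its height.

```python
# Linear vs. exponential deadliness. A bomb's toll scales roughly in
# proportion to the density of the crowd around it; an epidemic's toll
# compounds with each generation of infection. All parameters here are
# illustrative assumptions.

def bomb_deaths(density_multiplier: float, base_deaths: int = 100) -> float:
    """Linear: double the crowd inside the blast radius, double the toll."""
    return base_deaths * density_multiplier

def epidemic_deaths(r0: float, generations: int,
                    seed: int = 10, fatality: float = 0.5) -> float:
    """Exponential: each generation multiplies the case count by R0."""
    cases = seed * sum(r0 ** g for g in range(generations + 1))
    return cases * fatality

print(bomb_deaths(2.0))                                 # 200: twice the density
print(round(epidemic_deaths(r0=1.5, generations=10)))   # ~855: sparse setting
print(round(epidemic_deaths(r0=3.0, generations=10)))   # ~442,865: dense setting
```

Doubling density doubles the bomb’s toll; but if density also doubles the number of people each victim infects, the toll after ten generations grows by a factor of hundreds.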

In September 2004, health officials in Thailand began a program of vaccinating poultry workers with the conventional flu shots that are routinely doled out in Western countries at the start of flu season every year. For months, health experts around the world had been calling for precisely this intervention. This, in itself, was a telling phenomenon. Conventional flu vaccines are effective against only the type A and type B strains of influenza—the kind that sidelines you for a week with a fever and a stuffy head, but that is rarely fatal in anyone except the very young or the very old. The risk of a global pandemic emerging from these viruses is slim at best, which is why, historically, public-health officials in the West have not concerned themselves with the question of whether poultry workers on the other side of the world have received their flu shots. The virus that the public-health officials were worried about—H5N1, also known as the avian flu—is entirely unfazed by conventional flu shots. So why were so many global health organizations calling for vaccines in Asia? If they were worried about avian flu, why prescribe a vaccine that was known to be ineffective against it?

The answer to that question is a measure of how far we have come since the Broad Street epidemic in our understanding both of the pathways that disease takes and the underlying genetic code that instructs bacteria and viruses. But it is also a measure of continuity: how the very same issues that Snow and Whitehead confronted on the streets of London have returned to haunt us, this time on the scale of the globe and not the city. The specific threats are different now, and in some ways they are more perilous, and the tools at our disposal are far more advanced than Snow’s statistical acumen and shoe-leather detective work. But confronting these threats requires the same kind of thinking and engagement that Snow and Whitehead so brilliantly applied to the Broad Street outbreak.

In all the speech-making, posturing, and sober analysis about avian flu that has swept the globe in the past decade, one utterly amazing fact stands out: as far as we know, the virus that has caused such international panic does not exist yet. To be sure, H5N1 is a viciously lethal virus, with fatality rates in humans approaching 75 percent. But in its current incarnation, it is incapable of starting a pandemic, because it lacks the ability to pass directly from human to human. It can spread like wildfire through a population of chickens or ducks, and the birds can in turn infect humans. But there the chain of infection ends: so long as the overwhelming majority of humans on the planet are not in direct contact with live poultry, H5N1 is incapable of causing a global outbreak.

So why are health officials in London and Washington and Rome worried about poultry workers in Thailand? Why, indeed, are these officials worried about avian flu in the first place? Because microbial life has an uncanny knack for mutation and innovation. All the world needs is for a single strain of H5N1 to somehow mutate into a form that is transmissible between humans, and that virus could unleash a pandemic that could easily rival the 1918 influenza pandemic, which killed as many as 100 million people worldwide.

That new capability might come from some random mutation in the H5N1 genome. For H5N1, it would be like winning a genetic lottery where the odds are a trillion to one against you, but in a world with untold trillions of H5N1 viruses floating around, it’s not impossible to imagine. But the more likely scenario is that H5N1 will borrow the relevant genetic code directly from another organism, in a process known as transgenic shift. Recall that DNA transmission among single-celled bacteria and viruses is far more promiscuous than the controlled, vertical descent of all multicellular life. A virus can swap genes with other viruses readily. Imagine a brunette waking up one morning with a shock of red hair, after working side by side with a redheaded colleague for a year. One day the genes for red hair just happened to jump across the cubicle and express themselves in a new body. It sounds preposterous because we’re so used to the way DNA works among the eukaryotes, but it would be an ordinary event in the microcosmos of bacterial and viral life.
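The lottery arithmetic is worth pausing on. If each viral replication has a trillion-to-one chance of hitting the needed mutation, the chance that at least one of N viruses succeeds is 1 − (1 − p)^N. A quick sketch, using the text’s own trillion-to-one illustration rather than any measured mutation rate:

```python
# Probability that at least one of N viruses wins a trillion-to-one
# genetic lottery: 1 - (1 - p)^N, well approximated by 1 - exp(-p*N)
# when p is tiny. The odds are the text's illustration, not a measured rate.

import math

p = 1e-12  # chance of the needed mutation per replication

for n in (1e9, 1e12, 1e15):
    prob = 1 - math.exp(-p * n)
    print(f"{n:.0e} viruses -> P(at least one winner) = {prob:.3f}")

# 1e+09 viruses -> 0.001
# 1e+12 viruses -> 0.632
# 1e+15 viruses -> 1.000
```

Long odds per replication stop being protective once the number of replications climbs into the trillions, which is why the “not impossible to imagine” hedge above is warranted.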

Most conventional flu viruses already possess the genetic information that allows them to pass directly from human to human. Because H5N1 is so closely related to the conventional flu virus, it would be a relatively simple matter for it to swipe a few lines of pertinent code and immediately enjoy its new capacity for human-to-human transmission. Certainly it would be easier than randomly stumbling on the correct sequence via mutation.

And so this is why the whole world has suddenly taken an interest in whether Thai poultry workers get their flu shots: because the world wants to ensure that H5N1 stays as far away as possible from ordinary flu viruses. If the two viruses did encounter each other inside a human host, a far more ominous strain of H5N1 might emerge. It could be as infectious as the influenza bug that swept the globe in 1918, but several times more lethal. And it would find itself inhabiting a planet that was massively more interconnected and densely settled than it was in 1918.

To appreciate how deadly transgenic shift can be, you need only look at the Broad Street epidemic. In 1996, two scientists at Harvard, John Mekalanos and Matthew K. Waldor, made an astonishing discovery about the roots of Vibrio cholerae’s killer instinct. There are two key components to the bacteria’s assault on a human body: the TCP pilus that allows it to replicate with such exponential fury in the small intestine, and the cholera toxin that actually triggers the rapid dehydration of the host. Mekalanos and Waldor discovered that the gene for cholera toxin is actually supplied by an outside source: a virus called CTX phage. Without the genes contributed by that virus, V. cholerae literally doesn’t know how to be a pathogen. It learns to be a killer by borrowing genetic information from an entirely different species. The trade between the phage and the bacterium is a classic example of coevolutionary development, two organisms cooperating at the genetic level in order to further both of their reproductive interests: the CTX phage multiplies inside the V. cholerae, and in return the virus offers up a gift that allows the bacteria to greatly increase the odds of finding another host to infect. As unlikely as it sounds, V. cholerae is not a born killer. It needs the CTX phage to switch over to the dark side.

So we have good reason to fear genetic commingling between H5N1 and the ordinary human flu virus. But we should also be comforted by how far we have advanced in our ability to anticipate these cross-species transmissions. When John Snow identified the waterborne nature of cholera in the middle of the nineteenth century, he was using the tools of science and statistics to find a way around the fundamental perceptual limits of space: the creature he was seeking was literally too small to see. So he had to detect it indirectly: in patterns of lives and deaths that played out in the streets and houses of a bustling metropolitan center. Today we have conquered that spatial dimension: we can visually inspect the kingdom of bacteria at will; we can zoom all the way down to the molecular strands of DNA, even glimpse the atomic connections that bind them together. So now we confront another fundamental perceptual limit—not of space, but of time. We use the same methodological tools that Snow used, only now we’re using them to track a virus we can’t see because it doesn’t exist yet. Those flu vaccinations in Thailand are a preemptive strike against a possible future. No one knows when H5N1 will learn to pass directly from human to human, and it remains at least a theoretical possibility that it will never develop that trait. But planning for its emergence makes sense, because if such a strain does appear and starts spreading around the globe, there won’t be the equivalent of a pump handle to remove.

This is why we’re vaccinating poultry workers in Thailand, why the news of some errant bird migration in Turkey can cause shudders in Los Angeles. This is why the pattern recognition and local knowledge and disease mapping that helped make Broad Street understandable have never been more essential. This is why a continued commitment to public-health institutions remains one of the most vital roles of states and international bodies. If H5N1 does manage to swap just the right piece of DNA from a type A flu virus, we could well see a runaway epidemic that would burn through some of the world’s largest cities at a staggering rate, thanks both to the extreme densities of our cities and the global connectivity of jet travel. Millions could die in a matter of months. Some experts think a pandemic on the order of 1918 is a near inevitability. Would a hundred million dead—the great majority of them big-city dwellers—be enough to derail the urbanization of the planet? It’s unlikely, as long as new pandemics didn’t start rolling in every flu season, like hurricanes. But think of the lingering trauma that 9/11 inflicted on every New Yorker—wondering if it was still safe to stay in the city. Almost everyone opted to stay, of course, and the city’s population has—wonderfully—continued to swell, thanks largely to immigration from the developing world.

But imagine if 500,000 New Yorkers had died of the flu in September 2001, instead of 2,500 in a collapsing skyscraper. The deaths alone would give the year the ignominious status of the single most dramatic drop in population in the city’s history, and no doubt the deaths would be exceeded by all the migrations to the relative safety of the countryside. My wife and I are passionately committed to the idea of raising our kids in an urban environment, but if 500,000 New Yorkers were killed in the space of a few months, I know we’d find another home. We’d do it with great regret, and with the hope that, when things settled down a few years later, we’d move back. But we would move, all the same.

IT IS CONCEIVABLE, THEN, THAT A LIVING ORGANISM—whether the product of evolution or genetic engineering—could threaten our great transformation into a city-planet. But there is reason for hope. We have a window of a few decades where DNA-based microbes will retain the capability of unleashing a cascading epidemic that kills a significant portion of humanity. But at a certain point—perhaps ten years from now, perhaps fifty—the window may well close, and the threat may subside, just as other, more specific, biological threats have subsided in the past: polio, smallpox, chicken pox.

If this scenario comes to pass, the pandemic threat will ultimately be defeated by a different kind of map—not maps of lives and deaths on a city street, or bird flu outbreaks, but maps of nucleotides wrapped in a double helix. Our ability to analyze the genetic composition of any life-form has made astonishing progress over the past ten years, but in many ways we are at the very beginning of the genomic revolution. We have already seen amazing advances in our understanding of the way genes build organisms, but the application of that understanding—particularly in the realm of medicine—is only starting to bear fruit. A decade or two from now, we may possess tools that will allow us to both analyze the genetic composition of a newly discovered bacterium and, using computer modeling, build an effective vaccine or antiviral drug in a matter of days. At that point, the primary issue will be production and delivery of the drugs. We’ll know how to make a cure for any rogue virus that shows up; the question will be whether we can produce enough supplies of the cure to stop the path of the disease. That might well require a new kind of urban infrastructure, a twenty-first-century equivalent of Bazalgette’s sewers: production plants located in every metropolitan center, ready to churn out millions of vaccines if an epidemic appears. It will take the creation of public-health institutions in the developing world—institutions that simply do not exist yet—along with a renewed commitment to public health in the industrialized world, particularly the United States. But we’ll have the tools at our disposal to deal with the emerging threats, if we’re smart enough to deploy them.

The twentieth-century approach to battling viruses has largely operated at the same temporal scale as microbial evolution itself. It has been a classic Darwinian arms race. We take a sample of last year’s most prolific flu virus and use it as the basis for a vaccine that we then spread through the immune system of the general public; and the viruses evolve new ways around those vaccines, and so we come up with new vaccines that we hope will deal with the new bugs. But the genomic revolution means that our defense mechanisms are now starting to operate at a much faster clip than evolution. We’re no longer limited to jury-rigging vaccines out of last year’s model. We’re able to project forward, anticipate future variations, and, increasingly, address the specific threat posed by the most active virus on the ground. Our understanding of the building blocks of life is advancing at nearly exponential rates—thanks in part to the exponential advance in computation power we call Moore’s Law. But the building blocks themselves are not getting more complex. Type A influenza possesses only eight genes. Thanks to the transgenic shift of microbial life, those eight genes are capable of an astonishing amount of variation; but those possibilities are ultimately finite, and they will be no match for the modeling prowess of circa-2025 technology. Right now we’re in an arms race with the microbes, because, effectively, we’re operating on the same scale that they are. The viruses are both our enemy and our arms manufacturer. But as we enter an age of rapid molecular analysis and prototyping, the whole approach changes. The complexity of our understanding of microbial diseases is already advancing much faster than the complexity of the microbes themselves. Sooner or later, the microbes won’t be able to compete.
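The claim that the flu’s eight genes offer only finite variation can be made precise for the gene-swapping case. What the text calls transgenic shift works, in the flu’s case, by what virologists call reassortment: when two strains co-infect a single cell, each of the eight genome segments in the progeny can come from either parent, bounding the space of hybrids at 2^8 = 256. A minimal enumeration, with the parent labels as illustrative placeholders:

```python
# Enumerating the reassortants possible when two flu strains co-infect a
# cell: each of the 8 genome segments can come from either parent, so the
# space of combinations is 2**8 = 256, including the two pure parents.

from itertools import product

SEGMENTS = 8
PARENTS = ("avian", "human")  # e.g., an H5N1 strain and a type A human strain

reassortants = list(product(PARENTS, repeat=SEGMENTS))
print(len(reassortants))   # 256 possible segment combinations

# The worrying hybrids are the mixed ones: avian genes for lethality
# joined to human genes for transmissibility.
hybrids = [r for r in reassortants if len(set(r)) == 2]
print(len(hybrids))        # 254 mixed combinations
```

A search space of a few hundred combinations is trivially small next to the modeling capacity the paragraph above anticipates, which is the sense in which the microbes’ complexity is bounded while ours is not.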

But perhaps the arms race will not purely be a figure of speech. The flu virus on its own might not be able to grow complex enough to challenge the technology of genomic science, but what if the technology of genomic science were used to “weaponize” a virus? Genetic engineering may ultimately win out over evolution, but isn’t it a different matter if the viruses are themselves the product of genetic engineering? Wouldn’t the ominous trends of asymmetric warfare—increasingly advanced technology in the hands of smaller and smaller groups—be even more ominous where biological weapons are concerned? If suicide bombers with homemade explosives can effectively hold the American military hostage, imagine what they could do with a weaponized virus.

The crucial difference, though, is that there are vaccines for biological weapons, while there are no vaccines for explosives. Any DNA-based agent can effectively be neutralized after its release, by any number of different mechanisms: early detection and mapping, quarantine, rapid vaccination, antiviral drugs. But you can’t neutralize an explosive once it has been detonated. So suicide bombers are probably destined to be a part of human civilization for as long as there are political or religious ideologies that encourage people to blow themselves up in crowded places. DNA-based weapons do not have the same future, however, because for every terrorist trying to engineer a biological weapon there are a thousand researchers working on a cure. It’s entirely likely, of course, that we will see the release of an infectious agent engineered in a rogue lab somewhere, and it’s at least conceivable that the attack could unleash a pandemic that could kill thousands or millions—particularly if such an attack took place in the next decade or so, before our defensive tools have matured. But there’s good reason to believe that defensive tools will ultimately win out in this domain as well, because they will be built on a meta-understanding of genetics itself, and because the resources put into their development will dramatically outweigh the resources devoted to developing weapons—assuming, that is, that the world’s nation-states continue the ban on the creation of biological weapons. Biological terrorism may well be in our future, and it could turn out to be one of the most hideous chapters in the history of human warfare. But in the long run, it shouldn’t threaten our transformation into a city-planet, if we continue to encourage scientific research into defensive vaccines and other treatments, and remain vigilant in our opposition to state-sponsored biological weapons research.

Here, too, the legacy of Snow’s map is essential to the battle. The peculiar menace of a biological attack is that we may not know it is under way until weeks after the infectious agent is first released. The greatest risk of a deliberately planned urban epidemic is not that we won’t have a vaccine, it’s that we won’t recognize the outbreak until it’s too late for the vaccine to stop the spread of disease. Combating this new reality will take a twenty-first-century version of John Snow’s map: making visible patterns in the daily flow of lives and deaths that constitute the metabolism of a city, the rising and falling fortunes of the sick and the healthy. We’ll have exceptional tools at our disposal to defend ourselves against a biological attack, but we’ll have to be able to see the attack first, before we can apply those defensive measures. Before we can mobilize all the technology that would have bewildered Snow—the genomic sequencers and antiviral mass-production facilities—we’ll use a technology that Snow would have recognized instantly. We’ll use a map. Only, this map won’t be hand-illustrated from data collected via door-to-door surveys. It will draw on the elaborate network of sensors sniffing the air for potential threats in urban centers, or hospital first-responders reporting unusual symptoms in their patients, or public water facilities scanning for signs of contamination. Almost two centuries after William Farr first hit upon the idea of amassing weekly statistics on the mortality of the British population, the technique he pioneered has advanced to a level of precision and scope that would have astonished him. The Victorians could barely see microbial life-forms swimming in a petri dish in front of them. Today, a suspicious molecule floats by a sensor in Las Vegas, and within hours the authorities at the CDC in Atlanta are on the case.
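The logic of such a map can be sketched in code. The detector below is a generic baseline-plus-threshold monitor over daily symptom counts, similar in spirit to (but not a reproduction of) the aberration-detection algorithms public-health agencies actually run; the data and the three-sigma threshold are invented for illustration.

```python
# A toy syndromic-surveillance detector: flag a reporting site when
# today's symptom count rises well above its recent baseline. Real
# systems add seasonality, spatial clustering, and lab confirmation;
# this shows only the core idea. All numbers are invented.

from statistics import mean, stdev

def flag_outbreak(history: list[int], today: int, z: float = 3.0) -> bool:
    """True if today's count exceeds the baseline mean by z standard deviations."""
    baseline, spread = mean(history), stdev(history)
    return today > baseline + z * max(spread, 1.0)

# A week of routine respiratory-complaint counts from one emergency room.
recent = [12, 15, 11, 14, 13, 16, 12]

print(flag_outbreak(recent, today=14))   # False: ordinary day-to-day noise
print(flag_outbreak(recent, today=31))   # True: a spike worth investigating
```

Snow built his baseline by walking door to door; the modern version builds it from the feeds described above, but the underlying move (comparing today’s pattern against the city’s normal metabolism) is the same.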

There is less reason for optimism where nuclear weapons are concerned. A technique that effectively neutralizes the threat posed by influenza viruses could come from any number of active lines of research: from our understanding of the virus itself, from our understanding of the human immune system, even our understanding of how the respiratory system works. There are thousands of scientists and billions of dollars spent every year exploring new ways to fight lethal epidemic diseases. But no one is working on a way to neutralize a nuclear explosion, presumably for the entirely rational reason that it is impossible to neutralize a nuclear explosion. We have made some advances in detection—all nuclear devices give off a radioactive signal that can be tracked by sensors—but detection is hardly a fail-safe option. (If we were relying purely on our ability to detect emerging viruses, the long-term future for epidemic disease would look equally grim.) There is some promising research into medicines that would block the effects of radiation poisoning, which could well save millions of lives in the event of a metropolitan detonation, but millions more would still perish from the initial explosion itself.

If you look solely at the danger side of the equation, both epidemic disease and nuclear explosions seem to present a mounting threat in the coming decades: thanks to urban density and global jet travel, it’s probably easier now for a rogue virus to spread around the globe, while the breakup of the Soviet Union and the increase in technological expertise have made it easier to both acquire radioactive materials and build the bomb itself. (As I write, the world is wrestling with the implications of Iran’s renewed commitment to a nuclear program.) But if you look at the opposing side of the equation—our ability to neutralize the threat—the story is very different. Our ability to render a virus harmless is growing at exponential rates, while our ability to undo the damage caused by the detonation of a nuclear device is, literally, nonexistent, with no sign that it will ever be technically possible.

On some level, the nuclear problem may turn out to be one that we never solve, and the ultimate question will turn out to be how often a rogue nation or terrorist cell manages to get its hands on one of these devices. Perhaps urban nuclear explosions will turn out to be like hundred-year storms: a bomb goes off once a century, millions die, the planet shudders in horror, and slowly goes about its business. If that’s the pace, then as horrible as such a catastrophe would be, the long-term sense of urban sustainability would likely remain intact. But if the trends of asymmetric warfare continue, and the suicide bombers start detonating suitcase nukes every ten years—at that point, all bets are off.

AND SO OUR CONVERSION TO A CITY-PLANET IS BY NO means irreversible. The very forces that propelled the urban revolution in the first place—the scale and connectedness of dense urban living—could be turned against us. Rogue viruses or weapons could once again turn urban areas into sites of mass death and terror. But if we are to keep alive the model of sustainable metropolitan life that Snow and Whitehead helped make possible 150 years ago, it is incumbent on us to do, at the very least, two things. The first is to embrace—as a matter of philosophy and public policy—the insights of science, in particular the fields that descend from the great Darwinian revolution that began only a matter of years after Snow’s death: genetics, evolutionary theory, environmental science. Our safety depends on being able to predict the evolutionary path that viruses and bacteria will take in the coming decades, just as safety in Snow’s day depended on the rational application of the scientific method to public-health matters. Superstition, then and now, is not just a threat to the truth. It’s also a threat to national security.

The second is to commit ourselves anew to the kinds of public-health systems that developed in the wake of the Broad Street outbreak, both in the developed world and the developing: clean water supplies, sanitary waste-removal and recycling systems, early vaccination programs, disease detection and mapping programs. Cholera demonstrated that the nineteenth-century world was more connected than ever before, and that local public-health problems could quickly reverberate around the globe. In an age of megacities and jet travel, that connectedness is even more pronounced, for better and for worse.

In many ways the story of the past few years is not an uplifting one, where these two objectives are concerned. Intelligent design “theory” continues to challenge the Darwinian model, in the courts and in public opinion; the United States appears to be spending more time and money proposing new nuclear weapons than eliminating the ones we have; public-health spending is down per capita; as I write, Angola is suffering through the worst outbreak of cholera in a decade.

But if our current prospects seem bleak, we need only think of Snow and Whitehead on the streets of London so many years ago. The scourge of cholera then seemed intractable, too, and superstition seemed destined to rule the day. But in the end, or at least as close to the end as we’ve gotten so far, the forces of reason won out. The pump handle was removed; the map was drawn; the miasma theory was put to rest; the sewers were built; the water ran clean. This is the ultimate solace that the Broad Street outbreak offers us in our current predicament, with all its unique challenges. However profound the threats are that confront us today, they are solvable, if we acknowledge the underlying problem, if we listen to science and not superstition, if we keep a channel open for dissenting voices that might actually have real answers. The global challenges that we face are not necessarily an apocalyptic crisis of capitalism or mankind’s hubris finally clashing with the balanced spirit of Gaia. We have confronted equally appalling crises before. The only question is whether we can steer around these crises without killing ten million people, or more. So let’s get on with it.
