HIDDEN IN A DEEP AND SECLUDED VALLEY, THE VILLAGE OF Broughton has two brooks, two streets, and not enough acreage to warrant the attention of the larger world. On most maps the village lies in the terra incognita of gray-green space between Huntingdon and Peterborough. Indeed, except for the local church spire, which rises above the valley wall like the hand of a drowning man, Broughton would be a rural Atlantis, secreted away on a few thousand acres of Oxford clay in the green and pleasant English countryside.
Like many medieval villages, Broughton began life as a forest clearing. Three hundred years before villager John Gylbert was born, the tree line came right up to the front door, but by 1314—the year John turned nineteen—the enveloping forest had been cut down, replaced by neat checkerboard squares of gold and green farmland and pasture. Coming up the road from Huntingdon on a summer morning, Broughton would rise up before the medieval traveler like a thatched-roof island adrift on a sunlit sea of swaying oats and barley. In John’s time Broughton had some 268 residents, down slightly from its medieval high of 292, but not significantly down. The size of the local animal population is unrecorded, but cows, chickens, pigs, and horses, just beginning to replace oxen at the plow, were ubiquitous in Broughton. Animals roamed the village lanes and gardens like curious sightseers, peering into doors, sunning themselves in rosebeds, eyeing the old men in front of the alewife’s house. In the evening, while two-footed Broughton drank, cooked, argued, and made love in one room, four-footed Broughton slept, ate, and defecated in another room—or sometimes the same room.
As far as medieval Broughton can be said to have left behind a collective biography of itself, it resides in the annual round of births, deaths, marriages, misdemeanors, bills of sale, and suits noted in the local court records. These show that while John Gylbert was growing up in the first decade of the fourteenth century, Broughton was anglicizing itself. In 1306 William Piscator became William Fisser or Fisher (the English equivalent of Piscator); a few years later Richard Bercarius became Richard Sheppared (the English equivalent of Bercarius), and Thomas Cocus, Thomas Coke. John was probably born Johannes, and his friend Robert Crane, Robertus. Only the eponymously named John de Broughton resisted the anglicizing trend, perhaps because de Broughton, a humbly born man who had come up a bit in the world, could not resist that fancy-sounding French “de.”
Local court records also show that Broughton, like many small villages, had its share of scandals. Between 1288 and 1299 John’s great-aunt Alota was arrested four times for brewing substandard ale. The records do not give a reason for the arrests, but it was not uncommon for an alewife to spike her product with hen excrement to hasten fermentation. Alota’s husband, Reginald, also makes an appearance in the court records; in 1291 Reginald was charged with committing adultery with “a woman from Walton.” As far as is known, Alota made no public comment about the case, but it is perhaps significant that after her next arrest, Alota appeared in court on the arm of another village man, John Clericus, who lived a few doors down from the Gylberts.
John Gylbert’s name also appears in village court documents. In early February 1314 John was fined for drinking ale and playing alpenypricke—a kind of hurling game—with Robert Crane and Thomas Coke in a wood near Broughton when he should have been at Ramsey Abbey, working. Broughton was part of the abbey manor, which in the ethos of feudalism meant that its villagers owed the monks a portion of their labor.
As an abbey villein, or serf, John was required to spend two days each week toiling in the monks’ demesne, or personal farmland. In return for his labor, on work days John would receive an alebedrep, a lunch served with ale, or, if the monks were in an ungenerous mood, a waterbedrep, a lunch served with water. But even the alebedrep, which came with thick slices of warm bread and the smiles of the servant girls, was meager compensation for the bite of the February wind on the abbey fields and the kick of a heavy iron plow against an aching shoulder. At harvest time, when John’s abbey obligations doubled, he would spend ten hours in the monks’ fields under a blazing August sun, walk back to Broughton in the gathering twilight, work into the night on the Gylberts’ farm, then fall asleep on a straw mat listening to the heavy breathing of the oxen in the next room.
In Broughton, John’s future surrounded him like a death foretold. It was there in his father’s lame leg and in his uncle’s deformed back (spinal deformations, arthritis, and osteoarthritis were rife among the medieval peasantry), and it was there, too, in the worn faces of the village’s thirty-year-olds. John would work hard, die young—probably before forty—and, as sure as the sun rose into the cozy English sky above Broughton each morning, the day after his death an abbey official would be at the door to claim his best horse or cow from his widow as a heriot, or death tax.
Thus it had always been. But at least in the boom years of the thirteenth century, a peasant had a reasonable chance of being rewarded for his hard labor. Good weather—and good soil—made it relatively easy to grow surplus crops, and the booming towns provided an eager market, not only for the extra wheat and barley, but also for peasant handicrafts. If a man owned a little land, as peasants increasingly did in the thirteenth century, he could also count on its value rising. By John Gylbert’s time, all these compensations were vanishing.
Between 1250 and 1270 the long medieval boom sputtered to an end. One of the great ironies of the Black Death is that it occurred just as the medieval global economy, the vehicle of Y. pestis’s liberation, was nearing collapse. However, in Europe, it was the implosion of the vastly larger domestic economy, particularly the agricultural economy, that people felt most keenly. The implosion was continentwide, but in England, a nation of meticulous record keepers, it was documented with great diligence. Around 1300 the acreage under plow decreased, while the land still in use either declined in productivity or stagnated. After centuries of heady advancement, the medieval peasant’s mistakes had caught up with him. Some of the good land brought into service during the Great Clearances of the twelfth century had been overfarmed, and some of the more marginal land, which never should have been cleared in the first place, was giving out entirely.
Paradoxically, the decline in productivity was accompanied by a long-term decline in the price of staples like wheat and barley. As the economy faltered, living standards fell and large pockets of grinding poverty began to appear. Many despairing peasants simply gave up. First individual farms were abandoned, then whole villages. In 1322 officials in west Derbyshire reported that six thousand acres and 167 cottages and houses lay empty. Urban trade and commerce also declined. In the early fourteenth century, rents in central London were cheaper than they had been in decades, and the serpentine London lanes were full of gaunt-faced beggars and panhandlers. In the postboom collapse, even imports of claret, that staple of the English well-to-do, fell. In the villages and towns of France, Flanders, and Italy, the story was much the same. By 1314 millions of people were living in abject poverty, and millions more were only a step away from it.
Europe’s abrupt descent into semidestitution invites a Malthusian interpretation of the Black Death. During the twelfth and thirteenth centuries, population expanded faster than resources, and as sure as night follows day, in the fourteenth century the continent paid for its heedless growth with economic ruin and demographic disaster. However, the facts tell a more nuanced story. In a traditional Malthusian scenario—say, a tarabagan community in a surge year—population continues to grow recklessly until disaster slips up on it like a mugger in the night. In Europe that did not happen; the baby boom and economic boom both ended around the same time—somewhere between 1250 and 1270. After the stall, living standards fell in many regions and stagnated in others, indicating that the balance between resources and people had become very tight, but since demographic disaster was averted for nearly a century before the plague, a Malthusian reckoning may not have been inevitable. “Many . . . went hungry and many were undoubtedly malnourished,” says historian David Herlihy, “but somehow people managed to survive. . . . Circa 1300, the community was successfully holding its numbers.”
Rather than a reckoning, the image of postboom Europe that comes to mind is that of a man standing up to his neck in water. Drowning may not be inevitable, but the man’s position is so fraught, even a very slight rise in the next tide could kill him. As Dr. Herlihy asserts, a crowded Europe may well have been able to hang on for “the indefinite future,” but, like the man in the water, after the land gave out and the economy collapsed, the continent had no margin for error. Just to continue keeping its head above water, everything else had to go right, and in the early fourteenth century, a great many things began to go terribly wrong, beginning with the climate.
The Swiss farmers in the Saaser Visp Valley may have been the first people in Europe to notice that the weather was changing. Sometime around 1250 the resurgent Allalin glacier began to reclaim the farmers’ traditional pasturelands. Or the Greenlanders may have been the first to notice the change, alerted by the sudden chill in the August nights and the appearance of ice in places it had never been seen before. “The ice now comes . . . so close to the reefs none can sail the old route without risking his life,” wrote the Norwegian priest Ivar Baardson. Or the first Europeans to realize that the Little Optimum was over may have been the fishermen on the Caspian Sea, where torrential rains produced a rise in the water level at the end of the thirteenth century.
In the European heartland, the Little Optimum gave way to the Little Ice Age around 1300.* People noticed that the winters were growing colder, but it was the summers, suddenly cool and very wet, that alarmed them. By 1314 a string of poor and mediocre harvests had sent food prices skyrocketing. That fall, every peasant in every sodden field knew: one more cold, wet summer, and people would be reduced to eating dogs, cats, refuse—anything they could get their hands on. As the summer of 1315 approached, prayers were offered up for the return of the sun, but, like a truculent child, the cold and wet persisted. March was so chilly, some wondered if spring would ever return to the meadows of Europe. Then, in April, the gray skies turned a wicked black, and the rain came down in a manner no one had ever seen before: it was cold, hard, and pelting; it stung the skin, hurt the eyes, reddened the face, and tore at the soft, wet ground with the force of a plow blade. In parts of southern Yorkshire, torrential downpours washed away the topsoil, exposing underlying rock. In other areas, fields turned into raging rivers. Everywhere in Europe in the bitter spring of 1315, men and animals stood shivering under trees, their heads and backs turned against the fierce wind and rain. “There was such an inundation of waters, it seemed as though it was the Flood,” wrote the chronicler of Salzburg.
Flanders experienced some of the worst downpours. Day after day, the crackle and boom of thunder echoed above Antwerp and Bruges like a rolling artillery barrage. Occasionally a bolt of lightning would strike, illuminating the network of cascading urban rivers below. Along the riverbanks, rows of soot-stained rectangular houses leaned into the narrow Flemish streets like drunks in blackface. Everywhere, ceilings and floors leaked, fires refused to light, bread molded, children shivered, and adults prayed. Occasionally the rain would stop and people would point to a golden tear in the gray sky and say, “Thank God, it’s over!” Then the next day, or the day after that, the sky would mend itself and the rain would begin all over again.
All through the terrible summer of 1315, angry walls of rain swept off the turbulent Atlantic: bursting dikes, washing away villages, and igniting flash floods that killed thousands. In Yorkshire and Nottingham, great inland seas developed over the lowlands. Near the English village of Milton, a torrential rain inundated the royal manor. In some areas, farmland was ruined for years to come; in other places, it was ruined forever.
Poorer peasants, who had been pushed onto the most marginal farmland during the Great Clearances of the twelfth century, suffered the greatest devastation. In three English counties alone, sixteen thousand acres of plow land were abandoned. “Six tenants are begging,” wrote a resident of one Shropshire village. By the end of summer, the six would become hundreds of thousands. Everywhere in Europe in the early autumn of 1315, the poor huddled under trees and bowers, listening to the rain beat a tattoo against leaf and mud. They walked the fields, “grazing like cattle”; stood along the roads, begging; searched behind alehouses and taverns for moldy pieces of food. Visiting a friend, a French notary encountered “a large number of both sexes . . . barefooted, and many, even excepting the women, in a nude condition.” To the north in Flanders, one man wrote that “the cries that were heard from the poor would move a stone.”
The harvest of 1315 was the worst in living memory. The wheat and rye crops were stunted and waterlogged; some oat, barley, and spelt was redeemable, but not very much. The surviving corn was laden with moisture and unripened at the ears. In the lower Rhine “there began a dearness of wheat [and] from day to day prices rose.” The French chronicles also mention the “chierté” (dearness) of food prices “especiaument à Paris.” In Louvain the cost of wheat increased 320 percent in seven months; in England, wheat that sold for five shillings a quarter in 1313 was priced at forty shillings just two years later. Across the English countryside that autumn, the poor did their sums: a year’s worth of barley, the cheapest grain, cost a family sixty shillings; the average laborer’s annual wage was half that amount. The price of beans, oats, peas, malt, and salt rose comparably. Even when food was available, washed-out bridges and roads often prevented it from being transported.
The early winter months of 1316 brought more suffering. As food grew costlier, people ate bird dung, family pets, mildewed wheat, corn, and finally, in desperation, they ate one another. In Ireland, where the thud of shovels and the tearing of flesh from bone echoed through the dark, wet nights, the starving “extracted the bodies of the dead from the cemeteries and dug out the flesh from their skulls and ate it.” In England, where the Irish were considered indecorous, only prisoners ate one another. “Incarcerated thieves,” wrote the monk John de Trokelowe, “. . . devoured each other when they were half alive.” As the hunger intensified, the unspeakable became spoken about. “Certain people . . . because of excessive hunger devoured their own children,” wrote a German monk; another contemporary reported, “In many places, parents, after slaying their children, and children their parents, devoured the remains.”
Many historians think the accounts of cannibalism are overblown, but no one doubts that human flesh was eaten.
In the spring of 1316, public order began to break down. In Broughton, Agnyes Walmot, Reginald Roger, Beatrice Basse, and William Horseman were exiled for stealing food. In Wakefield, Adam Bray had his son John arrested for removing a bushel of oats from the family farm. In dozens of other English villages, there were violent disputes over gleaning. Traditionally, corn discarded by harvesters became the property of the very poor, but with destitution everywhere, even wealthy peasants were on their knees in the sodden fields. That summer, more than one man had his throat slit over the leavings of a failed crop. As the violence mounted, men began to take up arms; the knife, the sword, the club, and the pike became the new tools of the peasant. Food or anything redeemable for food was stolen, and the stealing went on at sea as well as on land. With incidents of piracy mounting daily, in April 1316 an alarmed Edward II, the English king, instructed his sailors to “repulse certain malefactors who have committed manslaughter and other enormities on the sea upon men of this realm and upon men from foreign parts coming to this realm with victuals.”
All through May and June 1316, the rain continued. In Canterbury desperate crowds gathered under a brooding Channel sky to pray for “a suitable serenity of the air,” but to no avail. In Broughton torrential downpours pressed the wheat and barley against the sodden earth with such force, the stalks looked as if they had been ironed. In Yorkshire the waterlogged fields of Bolton Abbey, tormented by eighteen months of unceasing rain, gave out entirely. The abbey’s 1316 rye crop was 85.7 percent below normal. The second failed harvest in succession broke human resistance. There was the “most savage, atrocious death,” “the most tearful death,” “the most inexpressible death.” Emaciated bodies winked out from half-ruined cottages and forest clearings, floated facedown in flooded fields, coursed through urban rivers, protruded from mud slides, and lay half hidden under washed-out bridges. In Antwerp burly stevedores serenaded the waking city with cries of “Bring out your dead!” In Erfurt, Germany, rain-slicked corpses were tossed into a muddy ditch in front of the town wall. In Louvain the collection carts “carried pitiable little bodies to the new cemetery outside the town . . . twice or thrice a day.” In Tournai Gilles li Muisis, a local abbot, complained that “poor beggars were dying one after the other.”
As if in sympathy with the human suffering, Europe’s animals began to die in great numbers; some sheep and cattle succumbed to liver fluke; some, possibly, to anthrax. But rinderpest—a disease that produces discharges from the nose, mouth, and eyes, chronic diarrhea, and an overpowering urge to defecate—may have been the most common killer. In the watery June and July of 1316, the music of summer included the agonizing bleats of dying animals vainly trying to relieve themselves in muddy pastures.
Strange diets, putrid food, and a generally lower resistance to disease also produced a great many hard human deaths. Of ergotism, which seems to have been especially common, one English monk wrote, “It is a dysentery-type illness, contracted on account of spoiled food . . . from which follow[s] a throat ailment or acute fever.” However, this description does not do justice to the full horrors of ergotism, which was called St. Anthony’s fire in the Middle Ages. First, the ergot fungus, a by-product of moldy wheat, attacks the muscular system, inducing painful spasms, then the circulatory system, interrupting blood flow and causing gangrene. Eventually the victim’s arms and legs blacken, decay, and fall off; LSD-like hallucinations are also common. If the Irish Famine of 1847 is a reliable indicator, vitamin deficiencies were also rife. Between 1315 and 1322, when the rain finally stopped, many people must have become demented from pellagra (a niacin deficiency) or been blinded by xerophthalmia (a vitamin A deficiency). Typhus epidemics may have killed many thousands more.
The fortunate died of starvation, a condition whose end-stage symptoms include brown and brittle skin, the abundant growth of facial and genital hair, and an ebbing away of the desire for life.
John Gylbert, whose name vanishes from Broughton’s records after 1314, may have died such a death. After months of wandering through mist and rain with a patriarch’s beard and dead man’s eyes, one day John may have sat down in a field, looked up into the sky, and, like thousands of other Europeans of his generation, concluded that there was no point in ever getting up again.
The Great Famine, the collective name for the crop failures, was a tremendous human tragedy. A half-million people died in England; perhaps 10 to 15 percent of urban Flanders and Germany perished; and a large but unknowable segment of rural Europe also succumbed.
Devastating as the Great Famine was, however, it was only a harbinger of things to come.
People who lived through the Black Death took the connection between plague and malnutrition as a given, the way we do the connection between cigarettes and lung cancer. The Florentine Giovanni Morelli attributed the city’s 50 percent plague mortality rate to the severe famine that struck central Italy the year before. Not twenty out of a hundred had bread in the countryside, he wrote. “Think how their bodies were affected.” The Frenchman Simon Couvin also described malnutrition as a handmaiden of plague. “The one who was poorly nourished by unsubstantial food fell victim to the merest breath of the disease,” he observed. However, many modern historians question the link between plague and malnutrition. For every Florence, they point to a counter example where Black Death losses were moderate or light, despite a recent history of famine. Critics also point to another inconsistency. In the years between the Great Famine and the plague, diets actually improved somewhat. If people were eating better, they ask, how could nutrition have been a predisposing factor in the Black Death?
However, it may be that critics have failed to find a connection between plague and malnutrition because they have been looking in the wrong places. The regional outbreaks of the disease that occurred after the Black Death—the epidemics of 1366–67, 1373, 1374, 1390, and 1400—all took place in periods of dearth. More centrally, the profound malnutrition of the Great Famine years may have left millions of Europeans more vulnerable to the Black Death. “A famine of . . . three years is of sufficient length to have devastating long-term effects on the future well being of human infants,” says Princeton historian William Chester Jordan, who points out that malnutrition often impedes proper immune system development, leaving the young with lifelong susceptibility to disease.
“By inference,” declares Professor Jordan, author of a study on the Great Famine, “the horrendous mortality of the Black Death should reflect the fact that poor people who were in their thirties and forties during the plague had been young children in the period 1315–1322 and were developmentally more susceptible to the disease than those who had been adults during the Famine or were born after the Famine abated.”
Dr. Jordan’s conclusions are based on animal research, but a recent study by a British researcher, Dr. S. E. Moore, indicates that fetal malnutrition is also a factor in human immune system development. Studying a group of young African adults, Dr. Moore found that subjects born in “the nutritionally debilitating hungry season” (winter and early spring) were four times more likely to die of infectious disease than adults born in the “plentiful harvest season.” In the conclusion of her report, Dr. Moore writes, “Other evidence from the literature also favors the hypothesis that intrauterine growth retardation (caused in this case by maternal food shortages) slows cell division during sensitive periods in the development of the immune system. This would provide a mechanism by which early insults could be ‘hard wired’ such that they [would have] a permanent impact.”
The historical evidence also suggests a link between the Great Famine and the Black Death. A connection between the two events should be reflected in plague deaths. Areas that lost large numbers of children in the famine should have suffered less during the Black Death because they had fewer vulnerable adults in the population—adults with congenitally defective immune systems. In medieval Flanders, the mortality pattern fits this paradigm. The region, which lost a great many children to epidemics during the Great Famine, experienced fewer plague deaths than many neighboring regions.
The imbalance between food and population was not the only disease risk factor in the medieval environment. Long before the weather turned and the land gave out and the grain became covered with mold and fungi, the continent was producing more garbage than it could dispose of. Circa 1200, the medieval city was drowning in filth, and in the postboom decades, the situation may have worsened as thousands of dispossessed peasants flooded into urban Europe, animals in tow. By the third decade of the fourteenth century, the amount of refuse on the medieval street was so great, it was literally driving men to murder. One morning in 1326 an irate London merchant confronted a peddler who had just tossed some eel skins into the lane outside his shop.
Pick up the eels, the merchant demanded.
No, replied the peddler.
Fists flew, a knife flashed; a moment later the peddler lay dead on the street.
As the state of public sanitation worsened, public outrage grew. There was a tremendous hue and cry about outdoor slaughterhouses and backed-up street gutters, and an even greater hue and cry about the swarms of black rats that lived off the filth. A fourteenth-century English-French dictionary illustrates just how ubiquitous the rodent was in the Middle Ages. “Sir,” goes one passage, “. . . I make bold that you shall be well and comfortably lodged here, save that there is a great pack of rats and mice.” Medieval people were also quite aware that the rat was a dangerous animal. Antirodent remedies like “hellebore in the weight of two pence” and “cakes of paste and powdered aconite” were quite popular and widely used. However, what people did not suspect is that Rattus rattus, the black rat, was involved in human plague.
This is not quite the ahistorical judgment it sounds like. Premodern peoples had keen powers of observation. During an outbreak of pestilence in Antiquity, the Roman governor-general of Spain offered handsome bounties to local rat hunters. The folklore of medieval and early modern India and China also contains several references to the connection between Rattus rattus and Y. pestis. An example is the Indian legend of the beautiful Princess Asaf-Khan of Punjab.
Walking through a courtyard one day, Asaf-Khan is said to have seen an infected rat staggering drunkenly. “Throw him to the cat,” she ordered. A slave picked up the wobbly rat by the tail and threw it to the princess’s pet cat; the cat promptly pounced on the animal, then just as promptly dropped it and fled. A few days later, the cat was found dead outside the princess’s bedroom. The following day the slave who picked up the rat died; then, one by one, the rest of Asaf-Khan’s slaves died, until only the princess was left alive.
A few centuries later a Chinese poet, Shi Tao-nan, wrote an ode to the relationship between Rattus rattus, Y. pestis, and man.
Dead rats in the east,
Dead rats in the west! . . .
Men fall away like . . . walls. . . .
Nobody dares weep over the dead . . .
The coming of the devil plague
Suddenly makes the lamp dim.
Then, it is blown out,
Leaving man, ghost and corpses in [a] dark room.
Europeans first became aware of the biological relationship between Y. pestis and Rattus rattus during the Third Pandemic of the late nineteenth century, when the rat (along with the flea) was identified as a key agent in human plague. In subsequent years, a great deal has been learned about Rattus, including its age and origin. The black rat first evolved in Asia, probably India, sometime before the last Ice Age. At a weight of four to twelve ounces, it is only half the size of its first cousin, the Norwegian brown rat—also an important vector in human plague—but Rattus more than makes up for its unprepossessing physical stature with incredible powers of reproduction. It has been estimated that two black rats breeding continuously for three years could produce 329 million offspring, as long as no offspring died and all were paired (fortunately, all very big ifs).
Rattus also has some other remarkable qualities that make it a formidable disease vector. One is great agility. A black rat can leap almost three feet from a standing position, fall from a height of fifty feet without injury, climb almost anything—including a sheer wall—squeeze through openings as narrow as a quarter of an inch, and penetrate almost any surface. The word “rodent” derives from the Latin verb rodere, which means “to gnaw,” and thanks to a powerful set of jaw muscles and the ability to draw its lips into its mouth (which allows the incisors, or cutting teeth, to work freely), Rattus can gnaw through lead pipe, unhardened concrete, and adobe brick.
A wary nature also makes Rattus a wily vector; the black rat usually travels by night, builds an escape route in its den, and reconnoiters carefully. This last behavior seems, at least in part, learned. During a foraging expedition, one young rat was observed taking a reconnaissance lesson from its mother. It would scamper ahead a few feet, stop until the mother caught up, then wait as she examined the floor ahead. Only after receiving a reassuring maternal nudge would the young rat advance. Rats also have another rather unusual, humanlike trait: they laugh. Young rats have been observed laughing—or purring, the rodent equivalent of laughter—when playing and being tickled. Rattus is, by nature, a very sedentary animal—usually. A city rat may wonder what lies on the other side of the street, but studies show it won’t cross the street to find out. Urban rats live their entire lives in a single city block. The rural rat’s range is not much larger—a mile or so. However, if Rattus were phobic about long-distance travel, it would still be an obscure Asian oddity, like the Komodo dragon lizard. Rats do travel, and often for reasons that highlight the role of trade and ecological disaster in plague.
For example, on occasion an entire black rat community will abandon a home range and migrate hundreds of kilometers. Research suggests that what makes the rats override their sedentary impulses is a craving for grain germ—and perhaps more particularly, for the vitamin E in the grain germ. Under normal conditions, rat migrations are infrequent, but under conditions of ecological disaster one imagines that they might become quite common.
For distances beyond the multikilometer range, Rattus relies on its long-time companion, man. The stowaway rat is the original undocumented alien. In modern studies, it has been found in planes, in suit-jacket pockets, in the back of long-haul trailers, and in sacks carried by Javanese pack horses. Trade has also been a boon to Rattus in another, more subtle but very significant way. In the wild, when rat populations grow unstably large, nature can prune them back with a prolonged period of bad weather and scarce food. The advent of camel caravans, pack horses, ships—and, later, trains and planes and trucks—has weakened this pruning mechanism. Once commercial man appeared, the highly adaptable rat was able to escape to places where food was abundant.
The date of Rattus’s arrival in Europe is a source of controversy. Some scholars believe the black rat first appeared in the West during the Crusades, which would mean sometime in the twelfth century. However, this view ignores the Plague of Justinian and the Roman statues of a Rattus-like creature, which date back to at least the first century A.D. More credible is the theory of French biologist Dr. F. Audoin-Rouzeau, who dates Rattus’s arrival to sometime before the birth of Christ. Given the rat’s affinity for trade, its entry point may have been the deserts of the Silk Road or the high mountain passes of Central Asia, where agents of Rome and China met occasionally, or the trading station the Romans maintained on the Indian coast.
Two significant dates in Rattus’s European history are the sixth century, when the Plague of Justinian decimated the rodent—and human—population of the Mediterranean Basin, and the year 1000, when a resurgent Christendom began to produce enough food and waste to support a large demographic rebound. Three hundred years later, overcrowding, town walls, and primitive sanitation had turned the medieval city into a haven for Rattus.*
Pigs, cattle, chickens, geese, goats, and horses roamed the streets of medieval London and Paris as freely as they did the lanes of rural Broughton. Medieval homeowners were supposed to police their housefronts, including removing animal dung, but most urbanites were as careless as William E. Cosner, a resident of the London suburb of Farringdon Without. A complaint lodged against Cosner charges that “men could not pass [by his house] for the stink [of] . . . horse dung and horse piss.”
On the meanest of medieval streets, the ambience of the barnyard gave way to the ambience of the battlefield. Often, animals were abandoned where they fell, left to boil in the summer sun, to be picked over by rats and ransacked by neighborhood children, who yanked bones from decaying oxen and cows and carved them into dice. The municipal dog catcher, who rarely picked up after a dog cull (kill), and the surgeon barber, who rarely poured his patients’ blood anywhere except on the street in front of his shop, also contributed to the squalid morning-after-battle atmosphere.
Along with the dog catcher and surgeon barber,Rattus’s other great urban ally was the medieval butcher. In Paris, London, and other large towns, animals were slaughtered outdoors on the street, and since butchers rarely picked up after themselves either, in most cities the butchers’ district was a Goya-esque horror of animal remains. Rivers of blood seeped into nearby gardens and parks, and piles of hearts, livers, and intestines accumulated under the butchers’ bloody boots, attracting swarms of rats, flies, and street urchins.
The greatest urban polluter was probably the full chamber pot. No one wanted to walk down one or two flights of stairs, especially on a cold, rainy night. So, in most cities, medieval urbanites opened the window, shouted, “Look out below!” three times, and hoped for the best. In Paris, which had 210,000 residents, the song of the chamber pot echoed through the city from morning to night, intermingling with the lewd guffaws of the prostitutes on the Ile de la Cité and the mournful bleats of the animals going to slaughter at St. Jacques-la-Boucherie on the Right Bank.
No premodern city was clean, but the great urban centers of Antiquity employed a number of ingenious sanitation techniques. The Etruscans, for example, created extensive underground drainage systems to remove garbage and excrement, and the Roman aqueducts carried enough water from the countryside to supply each resident of the city with three hundred gallons per day. The Middle Ages also produced some sanitary wonders, including the privy system in the monastery at Durham, England, which an admiring visitor described thusly: “Every seat and partition in the dormer [dormitory] was of wainscot close, on either side very decent, so that one [monk] could not see the other. . . . [And] there were as many seats of privies as there were little windows in the walls to give light to every one on said seats.” The system also had an underground “water course” that drained waste dropped through the privies into a nearby stream.
Though many medieval cities had public sanitation systems, none came close to rivaling Durham’s efficiency. The typical urban system began with shallow open gutters in small residential streets; these led into a network of larger central gutters, which, in turn, fed into a central dumping point—usually a large river like the Thames or Seine. Where available, local streams were diverted to provide flushing power, but since urban streams were not widely available, most systems relied on gravity and rainwater. In theory, storms were supposed to flush waste through the downward-sloping gutters to a river dumping point. But dry weather was unkind to theory; large piles of fecal matter, urine, and food would accumulate in the gutters, providing a feast for rats. Storms, when they did come, were not much help. Even a good rain rarely pushed waste much farther than an adjoining neighborhood. However, enough waste matter eventually got to the end points in the system to make the urban river an insult to the senses and an affront to propriety. After a visit to the malodorous Thames, a horrified Edward III expressed outrage at the “dung, lay-stalls and other filth” on the banks.
London supplemented its sewer system with municipal sanitation workers. Every ward in the city had a cadre of inspectors, the Dickensian-named “beadles” and “under-beadles,” who probed, peered, sniffed, and questioned their way along the medieval street. Was waste being cleared from housefronts? Were alleys being kept clean? Better-off Londoners often built indoor privies, or garderobes, over alleyways, suspending them “on two beams laid from one house to the other.” For the garderobe’s owner, the privy meant liberation—no more chamber pots on cold nights—but for his neighbors, it meant piles of dung in the alley, a medley of frightful odors, and swarms of flies (rats do not usually feed on human waste). Beadles and under-beadles also investigated acts of sanitary piracy. The year before the plague arrived in England, two malefactors were arrested for piping their waste into the cellar of an unsuspecting neighbor.
Under the beadles were the rakers, the people who did the actual cleaning up. Rakers swept out gutters, disposed of dead animal carcasses, shoveled refuse from the streets and alleys, and hauled it to the Thames or other dumping points, like the Fleet River.
The beadles and rakers not only had the dirtiest job in medieval London, but the most thankless as well. In 1332 a beadle in Cripplegate Ward was attacked by an assailant who, to add insult to injury, stole the beadle’s cart; a few years later, two women in Billingsgate heaped such abuse on a team of rakers that municipal authorities ordered the women arrested. Indeed, judging from contemporary accounts, medieval London seems to have been engaged in a low-level civil war over sanitation. On one side were miscreants, like the foul-mouthed Billingsgate ladies and William E. Cosner, the garbage king of Farringdon Without. On the other side were the king, Edward III, who thundered, “Filth [is] being thrown from houses by day and night”; the nervous mayor, who tried to assuage these royal outbursts with a flurry of widely ignored sanitation ordinances; the much-abused beadles, under-beadles, and rakers; and irate private citizens like the murderous shop owner.
Granaries, fields of oats and barley, and large stocks of domestic animals also made the rat a ubiquitous figure in the medieval countryside, and the architecture of rural Europe may have made the peasant especially vulnerable to its sharp teeth. Most peasant huts were constructed of wattle and daub, a sort of medieval version of wallboard. First, wattle, or twigs, were woven into a lattice design; then the mudlike daub was smeared over the lattice. The combination was so permeable, one unfortunate English peasant was killed when a poorly aimed spear burst through his cottage wall one morning at breakfast.
An early-twentieth-century outbreak of plague in the Egyptian village of Sadar Bazaar highlights another rat-friendly aspect of peasant life. A rat count in the village revealed that families who slept with their domestic animals had more rats per household—the exact number was 9.6—than families who did not: 8.2.
The Greeks, who worshipped the body, considered cleanliness a cardinal virtue, and the Romans considered hygiene so important, their public baths looked like temples. At the baths of Diocletian, “the meanest Roman could purchase, with a small copper coin, the daily enjoyment of [a] scene which might excite the envy of the Kings of Asia,” wrote Edward Gibbon. However, early Christians, who thought self-abnegation a cardinal virtue, considered bathing, if not a vice, then a temptation. Who knows what impure thoughts might arise in a tub of warm water? With this danger in mind, St. Benedict declared, “To those who are well, and especially to the young, bathing shall seldom be permitted.” St. Agnes took the injunction to heart and died without ever bathing.
Religious suspicions about bathing softened during the late Middle Ages, though not enough to dramatically improve standards of personal hygiene. Catherine of Siena, who was born in 1347, also never bathed, though Catherine’s greatest achievement may have been her (reported) ability to go months at a time without a bowel movement. St. Francis of Assisi, who considered God’s water too precious to squander, was another infrequent bather. The laity continued to resist the bathtub for less high-minded reasons. Whatever one medieval Miss Manners might say about bathing as a way of being “civil and mannerly toward others,” it was easier to wash only your face and hands in the morning, just as it was easier to dump a full chamber pot out the window rather than walk down several flights of stairs. Undressing and changing clothing were also infrequent. Thus, another useful phrase in the fourteenth-century English-French dictionary was, “Hi, the fleas bite me so!”
No doubt, the principal insect vector in the Black Death was X. cheopis, the rat flea, but given the state of the medieval body, it is extremely likely that Pulex irritans, the human flea, also played an important role in the medieval plague.
From Caffa to Vietnam and Afghanistan, no human activity has been more closely associated with plague than war, and few centuries have been as violent as the fourteenth. In the decades before the plague, the Scots were killing the English; the English, the French; the French, the Flemings; and the Italians and the Spanish, each other. More to the point, in those savage decades, the nature of battle changed in fundamental ways. Armies grew larger, battles bloodier, civilians were attacked more frequently, and property was destroyed more routinely—and each change helped to make the medieval battlefield and the medieval soldier more efficient agents of disease.
Different historians date what is sometimes called the Military Revolution of the Later Middle Ages to different events, but as good a place to start as any is a meadow outside the Flemish village of Coutrai on a steamy July day in 1302. Arriving at the meadow that morning, a large French cavalry force on its way to Coutrai to relieve a group of besieged comrades (Flanders was a French domain in the fourteenth century) found the way blocked by several battalions of resolute Flemish bowmen and pikemen, dressed in wash-bowl helmets and fishnet armor.
Shortly after noon the French commander, Robert of Artois, ordered an attack on the Flemings, and his cavalry—pennants flapping in the summer wind—began advancing across the meadow high grass with all the stately grandeur appropriate to warrior-knights of the “august and sovereign house of France.” After fording a small brook midway between the two camps, the French broke into a run. A moment later, an enormous thwang! sounded, and the cloudless July sky filled with a thousand steel-tipped Flemish arrows; a few seconds later, there was an even louder thud! as several hundred French war horses smashed into the Flemish line at twenty miles an hour. According to conventional medieval military theory, the impact of the charge should have knocked the Flemings to the ground like bowling pins, making them vulnerable to trampling by horse and impalement by rider, but at Coutrai the gods of war rescrambled the rules. Instead of plunging through the Flemish line, the French broke against it like a wave against a sea wall—and dissolved, Humpty Dumpty–style, into a jumble of falling horses and falling men.
The discovery that infantry, well armed and resolute, could defeat cavalry, the queen of the battlefield—a discovery reaffirmed in several subsequent battles—revolutionized medieval military strategy, and like most revolutions, the infantry revolution produced several unanticipated consequences. First, medieval captains upgraded the role of infantry; then, discovering that foot soldiers were much cheaper to field—five or six bowmen and pikemen cost about the same as a single cavalryman—the captains expanded the size of the medieval army; and as armies grew much larger, battles grew much bigger and bloodier. This was partly a matter of numbers, but it also reflected the growing violence of warfare. For one thing, the largely peasant infantry was far less apt to observe the rules of chivalry, particularly in combat with enemy nobles. Since stress, including combat stress, weakens immune system function, arguably one consequence of bigger, more violent wars was a larger pool of disease-vulnerable people. Less arguably, larger armies produced larger concentrations of dirty men and debris, which attracted larger concentrations of rats and fleas.
The chevauchee, the second major military development of the fourteenth century, was created to resolve the great military dilemma of the age: how does an army break a siege? “A castle can hardly be taken within a year, and even if it does fall, it means more expenses for the king’s purse and for his subjects than the conquest is worth,” wrote Pierre Dubois, an influential fourteenth-century military thinker. Dubois’s solution to the siege problem, outlined in Doctrine of Successful Expeditions and Shortened Wars, was indirection. Attack civilians, Dubois argued, and your opponent will be forced to abandon his fortified position and come out and defend his people. Thus was born the chevauchee. The idea of sending large raiding parties on search-and-destroy missions through the enemy countryside was not quite as new as Dubois pretended. The practice had been tried before, including by the Normans against the English in 1066. Civilians had also been targeted before. “If sometimes the humble and innocent suffer harm and lose their goods, it cannot be otherwise,” declared Honore Bouvet with a Gallic shrug.
However, the Anglo-French Hundred Years’ War—the largest, bloodiest conflict of the Middle Ages—transformed the chevauchee into a common and devastating weapon. The war began in 1337, and in the decade before the plague arrived in 1347, the English, who became masters of the chevauchee, employed it with lethal effect against Dubois’s fellow countrymen. All through the 1340s, flying wedges of English horsemen crisscrossed the French countryside, torching farms and villages, raping and murdering civilians, and looting cattle. In a letter to a friend, the Italian poet Petrarch, a recent visitor to wartime France, expressed astonishment at the level of destruction. “Everywhere were dismal devastation, grief and desolation, everywhere wild and uncultivated fields, everywhere ruined and deserted homes. . . . [I]n short everywhere remained the sad vestiges of the Angli [the English].”
Even more heartfelt is the account of English terrorism by the French King Jean II. “Many people [have been] slaughtered, churches pillaged, bodies destroyed and souls lost, maids and virgins deflowered, respectable wives and widows dishonored, towns, manors, and buildings burnt, . . . the Christian faith . . . chilled, and commerce . . . perished. . . . So many other evils and horrible deeds have followed these wars that they cannot be said, numbered or written.”
While the present is, at best, an imperfect guide to the past, several studies on modern conflict provide some additional insight into how war may have made the medieval world more vulnerable to plague. The subject of the first report, a U.S. Army study, is the old Soviet army, which fought in Afghanistan in the 1980s. Russian combat casualties in the conflict were quite low—under 3 percent—but the Soviet army suffered horrendous rates of illness, especially infectious illness. Three out of four soldiers who fought in Afghanistan—75 to 76 percent of the entire Soviet army in the country—had to be hospitalized for disease. Some soldiers were stricken by bubonic plague, but malaria, cholera, diphtheria, infectious dysentery, amoebic dysentery, hepatitis, and typhus were, if anything, more common.
What caused such a disastrous disease rate? The answer sheds some light on the unchanging nature of soldiering. According to the report, one important factor was military hygiene. The average Russian soldier changed his underwear once every three months, washed his uniform and blankets at about the same rate, drank untreated water, left his garbage unpicked up, defecated near his tent rather than at a field latrine, and, even when involved in kitchen work, only washed his hands after a bowel movement when an officer made him. Significantly, the report says that combat stress may also have played a role in the high disease rate. Stress, as noted earlier, impairs immune system function, lowering resistance to disease. This observation, of course, also applies to civilians menaced by marauding armies.
A second report comes from Vietnam, where approximately twenty-five thousand people, most native Vietnamese, were struck by plague between 1966 and 1974. According to U.S. Army medical authorities, one important factor in the outbreak was the siegelike conditions in large parts of the country. Not atypical is the fire base a team of American doctors visited in the winter of 1967. Twenty-one people were suffering from plague at the base, and an inspection quickly revealed why. Vietcong mortar and artillery attacks had driven life underground. The soldiers—and, in many cases, their families—lived in dirt bunkers whose warm, moist environments perfectly mimicked the rodent burrow; personal hygiene was appalling. No one washed, the bathing facilities being above ground, and no one used the field latrines, for the same reason. The Genoese, who were besieged at Caffa, and the French, who were besieged at Calais on the eve of the Black Death, would have found the fetid Vietnamese bunkers, piled high with human waste, half-eaten rations, and bloodstained battle dressings, quite familiar. Working under appalling conditions, the army doctors managed to save seventeen plague victims, but in four cases the disease was too advanced even for treatment with modern antibiotics.
The uprooting of populations—another feature of war that dates back to the chevauchee and beyond—was also of great assistance to Y. pestis in Vietnam. In 1969 more than six hundred people—the majority of them children—were stricken by plague in the village of Dong Ha. Again, U.S. Army doctors found rats and fleas everywhere, but this time the infestation was caused by a resource imbalance, not a siege. Little Dong Ha had suddenly become a receiving point for refugees fleeing south from the DMZ (demilitarized zone) and east from Khe Sanh, but the village lacked the sanitary resources to cope with a large influx of dirty people.
Of course, it is impossible to say with any precision how, and to what degree, these three factors—war, famine, and inadequate sanitation—helped to pave the way for the Black Death, but what can be said with some certainty is that by the time Y. pestis left Caffa in late 1346 or early 1347, Europe was already up to its chin in water and the tide was still rushing in.