CHAPTER SEVEN

MASTER OF THEM ALL

concerning differences among Europe’s monastic brotherhoods; the unlikely contribution of the brewing of beer to the forging of iron; the geometry of crystals; and an old furnace made new

THE RUINS OF RIEVAULX Abbey sit on a plain surrounded by gently rolling moors not far from the River Rye in the northeast of England. In the years between its founding in 1132 and dissolution in 1536, the abbey’s monks farmed more than five thousand acres of productive cropland. In addition to the main building, now a popular tourist stop, Rievaulx included more than seventy outbuildings, spread across a hundred square miles of Yorkshire. Some were granges: satellite farms. Others were cottage factories. And half a dozen were iron foundries, which is why Rievaulx Abbey stands squarely astride one of the half-dozen or so parallel roads that led to the steam revolution, and eventually to Rocket. The reason is the monastic brotherhood that founded Rievaulx Abbey and, not at all coincidentally, dominated ironworking (and a dozen other economic activities) in Europe and Britain throughout the medieval period: the Cistercians.

During the eleventh century, the richest and most imitated monastery in Europe was the Benedictine community at Cluny, in Burgundy. The Cluniacs, like all monastic orders, subscribed, in theory anyway, to the sixth-century Rule of Saint Benedict, an extremely detailed manual for a simple life of prayer and penance. In fact, they were “simple” in much the same way that the Vanderbilt mansions in Newport were “cottages.” A Cluniac monk was far likelier to be clothed in silk vestments than in the “woolen cowl for winter and a thin or worn one for summer” prescribed by the Rule. More important for the monastery of Molesme, near Dijon, was the Cluniac tendency to pick and choose pieces of Benedictine doctrine, and to apply more enthusiasm and discipline to their prayers than to their labors.

This was a significant departure from the teachings of the order’s de facto founder, Saint Benedict, who defined labor as one of the highest virtues, and he wasn’t referring to the kind of work involved in constructing a clever logical argument. So widespread was his influence that virtually all the technological progress of the medieval period was fueled by monasticism. The monks of St. Victor’s Abbey in Paris even included mechanica—the skills of an artisan—in their curriculum. In the twelfth century a German Benedictine and metalworker named Theophilus Presbyter wrote an encyclopedia of machinery entitled De diversis artibus; Roger Bacon, the grandfather of experimental science, was a Franciscan, a member of the order founded by Saint Francis of Assisi in part to restore the primacy of humility and hard work.

The Benedictines of Cluny, however, prospered not because of their hard work but because of direct subsidies from secular powers including the kings of France and England and numerous lesser aristocrats. And so, in 1098, the monks of Molesme cleared out, determined to live a purer life by following Benedict’s call for ora et labora: prayer and (especially) work. The order, now established at “the desert of Cîteaux” (the reference is obscure), whence they took the name “Cistercians,” was devoted to the virtues of hard work; and not just hard, but organized. The distinction was the work of one of the order’s first leaders, an English monk named Stephen Harding, a remarkably skillful executive who seemed instinctively to understand how to balance the virtues of flexibility and innovation with those of centralization; by instituting twice-yearly convocations of dozens (later hundreds) of the abbots who ran local Cistercian monasteries all over Europe, he was able to promote regular sharing of what a twenty-first-century management consultant would call “best practices”—in everything from the cultivation of grapes to the cutting of stone—while retaining direct supervision of both process and doctrine. The result was amazing organizational sophistication, a flexible yet disciplined structure that spread from the Elbe to the Atlantic.

Thanks to the administrative genius of Harding and his successors, the order eventually comprised more than eight hundred monasteries, all over Europe, that contained the era’s most productive farms, factories—and ironmongeries. Iron contributed even more to the Cistercians’ reputation than their expertise in agriculture or machinery did, a prominence that was a direct consequence of Harding’s decision that because some forms of labor were barred to his monastic brothers, others, particularly metalworking, needed to be actively encouraged. The Cistercian monastery in Velehrad (today a part of the Czech Republic) may have been using waterwheels for ironmaking as early as 1269. By 1330, the Cistercians operated at least a dozen smelters and forges in the Burgundy region, of which the largest (and best preserved today) is the one at Fontenay Abbey: more than 150 feet long by nearly thirty feet wide, still bearing the archaeological detritus of substantial iron production.

Which brings us back to Rievaulx. In 1997, a team of archaeologists and geophysicists from the University of Bradford, led by Rob Vernon and Gerry McDonnell, came to north Yorkshire in order to investigate twelfth-century ironmaking techniques. This turns out to be a lot more than traditional pick-and-shovel archaeology; since the earth itself has a fair amount of residual iron (and therefore electrical conductivity), calculating the amount and quality of iron produced at any ruin requires extremely sophisticated high-tech instruments, with intimidating names like magnetometers and fluxgate gradiometers, to separate useful information from the random magnetism found at a suspected ironworking site. What Vernon and McDonnell found caused quite a stir in the world of technological history: the furnaces in use during the thirteenth century at one of Rievaulx Abbey’s iron smelters were producing iron at a level of technical sophistication equivalent to that of eighteenth-century Britain. Evidence from the residual magnetism in the slag piles and pits in the nearby village of Laskill revealed that the smelter in use was not only a relatively sophisticated furnace but was, by the standards of the day, huge: built of stone, at least fifteen feet in diameter, and able to produce consistently high-quality iron in enormous quantities. In the line that figured in almost every news account of the expedition, Henry VIII’s decision to close the monasteries in 1536 (a consequence of his divorce from Catherine of Aragon and his break with Rome) “delayed the Industrial Revolution by two centuries.”

Even if the two-century delay was a journalistic exaggeration—the Cistercians in France, after all, were never suppressed, and the order was famously adept at diffusing techniques throughout all its European abbeys—it deserves attention as a serious thesis about the birth of the world’s first sustained era of technological innovation. The value of that thesis, of course, depends on the indispensability of iron to the Industrial Revolution, which at first glance seems self-evident.

First glances, however, are a poor substitute for considered thought. Though the discovery at Laskill is a powerful reminder of the sophistication of medieval technology, the Cistercians’ proven ability to produce substantial quantities of high-quality iron not only fails to prove that they were about to ignite an Industrial Revolution when they were suppressed in the early sixteenth century, it actually demonstrates the opposite—and for two reasons. First, the iron of Laskill and Fontenay was evidence not of industrialization, but of industriousness. The Cistercians owed their factories’ efficiency to their disciplined and cheap workforce rather than to any technological innovation; there’s nothing like a monastic brotherhood that labors twelve hours a day for bread and water to keep costs down. The sixteenth-century monks were still using thirteenth-century technology, and they neither embraced, nor contributed to, the Scientific Revolution of Galileo and Descartes.

The second reason is even more telling: For centuries, the Cistercian monasteries (and other ironmakers; the Cistercians were leaders of medieval iron manufacturing, but they scarcely monopolized it) had been able to supply all the high-quality iron that anyone could use, but all that iron still failed to ignite a technological revolution. Until something happened to increase demand for iron, smelters and forges, like the waterpower that drove them, sounded a lot like one hand clapping. It would sound like nothing else for—what else?—two hundred years.

THE SEVERN RIVER, THE longest in Britain, runs for more than two hundred miles from its source in the Cambrian Mountains of Wales to its mouth at the Bristol Channel. The town of Ironbridge in Shropshire is about midway between mouth and source, just south of the confluence with the River Tern. Today the place is home not only to its eponymous bridge—the world’s first to be made of iron—but to the Ironbridge Institute, one of the United Kingdom’s premier institutions for the study of what is known nationally as “heritage management.” The Ironbridge Gorge, where the bridge is located, is a UNESCO World Heritage Site, along with the Great Wall of China, Versailles, and the Grand Canyon.* The reason is found in the nearby town of Coalbrookdale.

Men were smelting iron in Coalbrookdale by the middle of the sixteenth century, and probably long before. The oldest surviving furnace at the site is treated as a pretty valuable piece of world heritage itself. Housed inside a modern glass pyramid at the Museum of Iron, the “old” furnace, as it is known, is a rough rectangular structure, maybe twenty feet on a side, that looks for all the world like a hypertrophied wood-burning pizza oven. It is built of red bricks still covered with soot that no amount of restoration can remove. When it was excavated, in 1954, the pile of slag hiding it weighed more than fourteen thousand tons, tangible testimony to the century of smelting performed in its hearth beginning in 1709, when it changed the nature of ironmaking forever.

Ironmaking involves a lot more than just digging up a quantity of iron ore and baking it until it’s hot enough to melt—though, to be sure, that’s a big part of it. Finding the ore is no great challenge; more than 5 percent of the earth’s crust is iron, and the planet’s core is mostly a nickel-iron alloy, but the metal rarely appears in an obligingly pure form. Most of the ores that can be found in nature are oxides: iron plus oxygen, in sixteen different varieties, most commonly hematite and magnetite, with bonus elements like sulfur and phosphorus in varying amounts. To make a material useful for weapons, structures, and so on, those oxides and other impurities must be separated from the iron by smelting, in which the iron ore is heated by a fuel that creates a reducing atmosphere—one that removes the oxygen from the ore. The usual fuel is one that contains carbon, because when two carbon atoms are heated in the bottom of the furnace in the presence of oxygen—O2—they become two molecules of carbon monoxide. The CO in turn reacts with iron oxide as it rises, liberating the oxygen as carbon dioxide—CO2—and metallic iron.

Fe2O3 + 3CO → 2Fe + 3CO2

There are a lot of other chemical reactions involved, but that’s the big one, since the first step in turning iron ore into a bar of iron is getting the oxygen out; the second one is putting carbon in. And that is a bit more complicated, because the molecular structure of iron—the crystalline shapes into which it forms—changes with heat. At room temperature, and up to about 900°C, iron organizes itself into cubes, with an iron atom at each corner and another in the center of the cube. When it gets hotter than 900°C, the structure changes into a cube with the same eight iron atoms at the corners and another in the center of each face of the cube; at about 1300°C, it changes back to a body-centered crystal. If the transformation takes place in the presence of carbon, carbon atoms take up positions in the crystal lattice, increasing the metal’s hardness and durability by several orders of magnitude while reducing its malleability just as dramatically. The percentage of carbon that bonds to iron atoms is the key: If more than 4.5 percent of the final mixture is carbon, the final product is hard, but brittle: good, while molten, for casting, but hard to shape, and far stronger in compression than when twisted or bent. When the carbon content is below about 0.5 percent, the iron is eminently workable, and becomes the stuff that we call wrought iron. And when the percentage hits the sweet spot of between about 0.5 percent and 1.85 percent, you get steel.
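For readers who prefer code to percentages, the rule of thumb above compresses into a few lines of Python. The cut points (0.5, 1.85, and 4.5 percent) are simply the figures quoted in this paragraph, not modern metallurgical standards, and the little function is an illustration, not anything an ironmaster ever used:

def classify_iron(carbon_percent):
    """Rough classification by carbon content, using only the percentages quoted above."""
    if carbon_percent > 4.5:       # hard but brittle: castable, but poor when twisted or bent
        return "cast iron"
    if carbon_percent < 0.5:       # soft and workable at the forge
        return "wrought iron"
    if carbon_percent <= 1.85:     # the "sweet spot"
        return "steel"
    return "intermediate alloy"    # the text's thresholds leave this range unnamed

# classify_iron(0.1) -> 'wrought iron'; classify_iron(1.0) -> 'steel'; classify_iron(5.0) -> 'cast iron'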

This is slightly more complicated than making soup. The different alloys of carbon and iron, each with different properties, form at different times depending upon the phase transitions between face-centered and body-centered crystalline structures. The timing of those transitions, in turn, varies with temperature, pressure, the presence of other elements, and half a dozen other variables, none of them obvious. Of course, humans were making iron for thousands of years before anyone had anything useful to say about atoms, much less molecular bonds. They were making bronze, from copper and tin, even earlier. During the cultural stage that archaeologists call “the” Iron Age—the definite article is deceptive; Iron Age civilizations appeared in West Africa and Anatolia sometime around 1200 BCE, five hundred years later in northern Europe*—early civilizations weaned themselves from the equally sturdy bronze (probably because of a shortage of easily mined tin) by using trial and error to combine the ore with heat and another substance, such as limestone (in the jargon of the trade, a flux), which melted out impurities such as silicon and sulfur. The earliest iron furnaces were shafts generally about six to eight feet high and about a foot in diameter, in which the burning fuel could get the temperature up to about 1200°C, which was enough for wrought, but not cast, iron.

By the sixteenth century, ironmaking had begun to progress beyond folk wisdom and trial and error. The first manuals of metallurgy started to appear in the mid-1500s, most especially De re metallica by the German Georg Bauer, writing under the name Agricola, who described the use of the first European blast furnaces, known in German as Stückofen, which had hearths roughly five feet long and three feet high, with a foot-deep crucible in the center:

A certain quantity of iron ore is given to the master [who] throws charcoal into the crucible, and sprinkles over it an iron shovelful of crushed iron ore mixed with unslaked lime. Then he repeatedly throws on charcoal and sprinkles it with ore, and continues until he has slowly built up a heap; it melts when the charcoal has been kindled and the fire violently stimulated by the blast of the bellows….

Agricola’s work was so advanced that it remained at the cutting edge of mining and smelting for a century and a half. The furnaces he described replaced the earlier forges, known as bloomeries, which produced a spongelike combination of iron and slag—a “bloom”—from which the slag could be hammered out, leaving a fairly low-carbon iron that could be shaped and worked by smiths, hence wrought iron.

Though relatively malleable, early wrought iron wasn’t terribly durable; okay for making a door, but not nearly strong enough for a cannon. The Stückofen, or its narrower successor, the blast furnace, however, was built to introduce the iron ore and flux at the top of the shaft and to force air at the bottom. The result, once gravity dropped the fuel through the superheated air, which was “blasted” into the chamber and rose via convection, was a furnace that could actually get hot enough to transform the iron. At about 1500°C, the metal undergoes the transition from face-centered to body-centered crystal and back again, absorbing more carbon, making it very hard indeed. This kind of iron—pig iron, supposedly named because the relatively narrow channels emerging from the much wider smelter resembled piglets suckling—is so brittle, however, that it is only useful after being poured into forms usually made of loam, or clay.

Those forms could be in the shape of the final iron object, and quite a few useful items could be made from the cast iron so produced. They could also, and even more usefully, be converted into wrought iron by blowing air over heated charcoal and pig iron, which, counterintuitively, simultaneously consumed the carbon in both fuel and iron, “decarburizing” it to the <1 percent level that permitted shaping as wrought iron (this is known as the “indirect method” for producing wrought iron). The Cistercians had been doing so from about 1300, but they were, in global terms, latecomers; Chinese iron foundries had been using these techniques two thousand years earlier.

Controlling the process that melted, and therefore hardened, iron was an art form, like cooking on a woodstove without a thermostat. It’s worth remembering that while recognizably human cultures had been using fire for everything from illumination to space heating to cooking for hundreds of thousands of years, only potters and metalworkers needed to regulate its heat with much precision, and they developed a large empirical body of knowledge about fire millennia before anyone could figure out why a fire burns red at one temperature and white at another. The clues for extracting iron from highly variable ores were partly texture—a taffylike bloom, at the right temperature, might be precisely what the ironmonger wanted—partly color: When the gases in a furnace turned violet, what would be left behind was a pretty pure bit of iron.*

Purity was important: Ironmakers sought just the right mix of iron and carbon, and knew that any contamination by other elements would spoil the iron. Though they were ignorant of the chemical reactions involved, they soon learned that mineral fuels such as pitcoal, or its predecessor, peat, worked poorly, because they introduced impurities, and so, for thousands of years, the fuel of choice was charcoal. The blast furnace at Rievaulx Abbey used charcoal. So did the one at Fontenay Abbey. And, for at least a century, so did the “old” furnace at Coalbrookdale. Unfortunately, that old furnace, like all its contemporaries, needed a lot of charcoal: The production of 10,000 tons of iron demanded nearly 100,000 acres of forest, which meant that a single seventeenth-century blast furnace could denude more than four thousand acres each year.
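Those two figures, taken at face value, imply a rough annual output per furnace; the following back-of-envelope calculation uses nothing but the numbers just quoted:

# Back-of-envelope arithmetic from the figures above (not independent data).
acres_per_ton = 100_000 / 10_000          # roughly 10 acres of forest per ton of iron
acres_cleared_per_year = 4_000            # by a single seventeenth-century blast furnace
tons_per_furnace_per_year = acres_cleared_per_year / acres_per_ton
print(tons_per_furnace_per_year)          # about 400 tons of iron a year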

Until 1709, and the arrival of Abraham Darby.

ABRAHAM DARBY WAS BORN in a limestone mining region of the West Midlands, in a village with the memorable name of Wren’s Nest. He was descended from barons and earls, though the descent was considerable by the time Abraham was born in 1678. His father, a locksmith and sometime farmer, was at least prosperous enough to stake his son to an apprenticeship working in Birmingham for a “malter”—a roaster and miller of malt for use in beer and whisky. Abraham’s master, Jonathan Freeth, like the Darby family, was a member of the Religious Society of Friends. By the time he was an adult, Darby had been educated in a trade and accepted into a religious community, and it is by no means clear which proved the more important in his life—indeed, in the story of industrialization.

Darby’s connection with the Society of Friends—the Quakers—proved its worth fairly early. A latecomer to the confessional mosaic of seventeenth-century England, which included (in addition to the established Anglican church) Mennonites, Anabaptists, Presbyterians, Baptists, Puritans, (don’t laugh) Muggletonians and Grindletonians, and thousands of very nervous Catholics, the Society of Friends was less than thirty years old when Darby was born and was illegal until passage of the Toleration Act of 1689, one of the many consequences of the arrival of William and Mary (and John Locke) the year before. Darby’s Quaker affiliation was to have a number of consequences—the Society’s well-known pacifism barred him, for example, from the armaments industry—but the most important was that, like persecuted minorities throughout history, the Quakers took care of their own.

So when Darby moved to Bristol in 1699, after completing his seven years of training with Freeth, he was embraced by the city’s small but prosperous Quaker community, which had been established in Bristol since the early 1650s, less than a decade after the movement broke away from the Puritan establishment. The industrious Darby spent three years roasting and milling barley before he decided that brass, not beer, offered the swiftest path to riches, and in 1702, the ambitious twenty-five-year-old joined a number of other Quakers as one of the principals of the Bristol Brass Works Company.

For centuries, brass, the golden alloy of copper and zinc, had been popular all over Britain, first as a purely decorative metal used in tombstones, and then, once the deluge of silver from Spain’s New World colonies inundated Europe, as the metal of choice for household utensils and vessels. However, the manufacture of those brass cups and spoons was a near monopoly of the Netherlands, where they had somehow figured out an affordable way of casting them.

The traditional method for casting brass used the same kind of forms used in the manufacture of pig iron: either loam or clay. Such molds were fine for the fairly rough needs of iron tools, but not for kitchenware, which demanded fine casting in loam—time-consuming, painstaking, highly skilled—a process originally developed for more precious metals, such as bronze, and one that made the product too costly for the mass market. Selling kitchenware to working-class English families was only practicable if the costs could be reduced—and the Dutch had figured out how. If the Bristol Brass Works was to compete with imports, it needed to do the same, and Darby traveled across the Channel in 1704 to discover how.

The Dutch secret turned out to be casting in sand rather than loam or clay, and upon his return, Darby sought to perfect what he had learned in Holland, experimenting rigorously with any number of different sands and eventually settling, with the help of another ironworker and Quaker named John Thomas, on a material and process that he patented in 1708. It is by no means insignificant that the wording of the patent explicitly noted that the novelty of Darby’s invention was not that it made more, or better, castings, but that it made them at a lower cost: “a new way of casting iron bellied pots and other iron bellied ware in sand only, without loam or clay, by which such iron pots and other ware may be cast fine and with more ease and expedition and may be afforded cheaper than they can by the way commonly used” (emphasis added).

Darby realized something else about his new method. If it worked for the relatively rare copper and zinc used to make brass, it might also work for far more abundant, and therefore cheaper, iron. The onetime malter tried to persuade his partners of the merits of his argument, but failed; this was unfortunate for Bristol, but very good indeed for Coalbrookdale, where Darby moved in 1709, leasing the “old furnace.” There, his competitive advantage, in the form of the patent on sand casting for iron, permitted him to succeed beyond expectation. Beyond even the capacity of Coalbrookdale’s forests to supply one of the key inputs of ironmaking: within a year, the oak and hazel forests around the Severn were clearcut down to the stumps. Coalbrookdale needed a new fuel.

Abraham Darby wasn’t the first to recognize the potential of a charcoal shortage to disrupt iron production. In March 1589, Queen Elizabeth granted one of those pre–Statute on Monopolies patents to Thomas Proctor and William Peterson, giving them license “to make iron, steel, or lead by using of earth-coal, sea-coal, turf, and peat in the proportion of three parts thereof to one of wood-coal.” In 1612, another patent, this one running thirty-one years, was given to an inventor named Simon Sturtevant for the use of “sea-coale or pit-coale” in metalworking; the following year, Sturtevant’s exclusive was voided and an ironmaster named John Rovenson was granted the “sole priviledge to make iron … with sea-cole, pit-cole, earth-cole, &c.”

Darby wasn’t even the first in his own family to recognize the need for a new fuel. In 1619, his great-uncle (or, possibly, great-great-uncle; genealogies for the period are vague), Edward Sutton, Baron Dudley, paid a license fee to Rovenson for the use of his patent and set to work turning coal plus iron into gold. In 1622, Baron Dudley patented something—the grant, which was explicitly exempted when Edward Coke’s original Statute on Monopolies took force a year later, recognized that Dudley had discovered “the mystery, art, way, and means, of melting of iron ore, and of making the same into cast works or bars, with sea coals or pit coals in furnaces, with bellows”—but the actual process remained, well, mysterious. More than forty years later, in 1665, Baron Dudley’s illegitimate son, the unfortunately named Dud Dudley, described, in his self-aggrandizing memoir, Dud Dudley’s Metallum martis, their success in using pitcoal to make iron. He did not, however, describe how they did it, and the patents of the period are even vaguer than the genealogies. What can be said for certain is that both Dudleys recognized that iron production was limited by the fact that it burned wood far faster than wood could be grown.*

In the event, the younger Dudley continued to cast iron in quantities that, by 1630, averaged seven tons a week, but politics started to occupy more of his attention. He served as a royalist officer during the Civil War, thereby backing the losing side; in 1651, while a fugitive under sentence of death for treason, and using the name Dr. Hunt, Dudley spent £700 to build a bloomery. His partners, Sir George Horsey, David Ramsey, and Roger Foulke, however, successfully sued him, using his royalist record against him, and took both the bloomery and what remained of “Dr. Hunt’s” money.

Nonetheless, Dudley continued to try to produce high-quality iron with less (or no) charcoal, both alone and with partners. Sometime in the 1670s, he joined forces with a newly made baronet named Clement Clerke, and in 1693 the “Company for Making Iron with Pitcoal” was chartered, using a “work for remelting and casting old Iron with sea cole [sic].” The goal, however, remained elusive. Achieving it demanded the sort of ingenuity and “useful and reliable knowledge” acquired as an apprentice and an artisan. In Darby’s case, it was a specific and unusual bit of knowledge, dating back to his days roasting malt.

As it turned out, the Shropshire countryside that had been providing the furnace at Coalbrookdale with wood was also rich in pitcoal. No one, however, had used it to smelt iron because, Dudley and Clerke’s attempts notwithstanding, the impurities, mostly sulfur, that it caused to become incorporated into the molten iron made for a very brittle, inferior product. For similar reasons, coal is an equally poor fuel choice for roasting barley malt, since while Londoners would—complainingly—breathe sulfurous air from coal-fueled fireplaces, they weren’t about to drink beer that tasted like rotten eggs. The answer, as Abraham Darby had every reason to know, was coke.

Coke is what you get when soft, bituminous coal is baked in a very hot oven to draw off most of the contaminants, primarily sulfur. What is left behind is not as pure as charcoal, but is far cleaner than pitcoal, and while it was therefore not perfect for smelting iron, it was a lot cheaper than the rapidly vanishing store of wood. Luckily for Darby, both the ore and the coke available in Shropshire were unusually low in sulfur and therefore minimized the usual problem of contamination that would otherwise have made the resulting iron too brittle.

Using coke offered advantages other than low cost. The problem with using charcoal as a smelting fuel, even when it was abundant, was that a blast furnace needs to keep iron ore and fuel in contact while burning in order to incorporate carbon into the iron’s molecular lattice. Charcoal, however, crushes relatively easily, which meant that it couldn’t be piled very high in a furnace before it turned to powder under its own weight. This, in turn, put serious limits on the size of any charcoal-fueled blast furnace.

Those limits vanished with Darby’s decision in 1710 to use coke, the cakes of which were already compressed by the baking process, in the old furnace at Coalbrookdale. And indeed, the first line in any biography of Abraham Darby will mention the revolutionary development that coke represents in the history of industrialization. But another element of Darby’s life helps even more to illuminate the peculiarly English character of the technological explosion that is the subject of this book.

The element remains, unsurprisingly, iron.

IN THE DAYS BEFORE modern quality control, the process of casting iron was highly problematic, since iron ore was as variable as fresh fruit. Its quality depended largely on the other elements bound to it, particularly the quantity of silicon and the form taken by its carbon. Lacking the means to analyze those other elements chemically, iron makers instead categorized by color. Gray iron contains carbon in the form of graphite (the stuff in pencils) and is pretty good as a casting material; the carbon in white iron is combined with other elements (such as sulfur, which makes iron pyrite, or marcasite) that make it far more brittle. The classification of iron, in short, was almost completely empirical. Two men did the decisive work of establishing a scale so accurate that it set out pretty much the same ten grades used today. One was Abraham Darby; the other was a Frenchman: René Antoine de Réaumur.

Réaumur was a gentleman scientist very much in the mold of Robert Boyle. Like Boyle, he was born into the aristocracy, was a member of the “established”—i.e., Catholic—church, was educated at the finest schools his nation could offer, including the University of Paris, and, again like Boyle at the Royal Society, was one of the first members of the French Académie. His name survives most prominently in the thermometric system he devised in 1730, one that divided the range between freezing and boiling into eighty degrees; the Réaumur scale stayed popular throughout Europe until the nineteenth century, and the incorporation of the Celsius scale into the metric system.* His greatest contribution to metallurgical history was his 1722 insight that the structure of iron was a function of the properties of the other elements with which it was combined, particularly sulfur—an insight he, like Darby, used to classify the various forms of iron.
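Because Réaumur’s scale divides the same freezing-to-boiling span into eighty degrees rather than one hundred, converting his readings is a matter of a single ratio. A minimal sketch, assuming only the eighty-degree span described above:

def reaumur_to_celsius(degrees_re):
    # 0 degrees is freezing on both scales; boiling water is 80 degrees Réaumur, 100 degrees Celsius
    return degrees_re * 100.0 / 80.0

def celsius_to_reaumur(degrees_c):
    return degrees_c * 80.0 / 100.0

# reaumur_to_celsius(80.0) -> 100.0; reaumur_to_celsius(-30.0) -> -37.5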

Unlike Darby, however, he was a scientist before he was an inventor, and long before he was an entrepreneur or even on speaking terms with one. It is instructive that when the government of France, under Louis XV’s minister Cardinal de Fleury, made a huge investment in the development of “useful knowledge” (they used the phrase), Réaumur was awarded a generous pension for his discoveries in the grading of iron—and he turned it down because he didn’t need it.

Scarcely any two parallel lives do more to demonstrate the differences between eighteenth-century France and Britain: the former a national culture with a powerful affection for pure over applied knowledge, the latter the first nation on earth to give inventors the legally sanctioned right to exploit their ideas. It isn’t, of course, that Britain didn’t have its own Réaumurs—the Royal Society was full of skilled scientists uninterested in any involvement in commerce—but rather that it also had thousands of men like Darby: an inventor and engineer who cared little about scientific glory but a whole lot about pots and pans.

IF THE CAST IRON used for pots and pans was the most mundane version of the element, the most sublime was steel. As with all iron alloys, carbon is steel’s critical component. In its simplest terms, wrought iron has essentially no minimum amount of carbon, just as there is no maximum carbon content for cast iron. As a result, the recipe for either has a substantial fudge factor. Not so with steel. Achieving steel’s unique combination of strengths demands a very narrow range of carbon: between 0.25 percent and a bit less than 2 percent. For centuries* this has meant figuring out how to initiate the process whereby carbon insinuates itself into iron’s crystalline structure, and how to stop it once it achieves the proper percentage. The techniques used have ranged from the monsoon-driven wind furnaces of south Asia to the quenching and requenching of white-hot iron in water, all of which made steelmaking a boutique business for centuries: good for swords and other edged objects, but not easy to scale up for the production of either a few large pieces or many smaller ones. By the eighteenth century, the most popular method for steelmaking was the cementation process, which stacked a number of bars of wrought iron in a box, bound them together, surrounded them with charcoal, and heated the iron at about 1,000°C for days, a process that forced some of the carbon into a solid solution with the iron. The resulting high-carbon “blister” steel was expensive, frequently excellent, but, since the amount of carbon was wildly variable, inconsistent.

Inconsistently good steel was still better than no steel at all. A swordsman would be more formidable with a weapon made of the “jewel steel” that the Japanese call tamahagane than with a more ordinary alloy, but either one is quite capable of dealing mayhem. Consistency gets more important as precision becomes more valuable, which means that if you had to imagine where consistency in steel manufacturing—uniform strength in tension, for example—mattered most, you could do a lot worse than thinking small. Smaller, even, than kitchenware. Something about the size of, say, a watch spring.

BENJAMIN HUNTSMAN WAS, LIKE Abraham Darby, born into a Quaker farming family that was successful enough to afford an apprenticeship for him with a clockmaker in the Lincolnshire town of Epworth. There, in a process that should by now seem familiar, he spent a seven-year apprenticeship learning a trade so that he was able to open his own shop, in the Yorkshire town of Doncaster, with his own apprentice, by the time he was twenty-one. Two years later, he was not only making and selling his own clocks, but was given the far from honorary duty of caring for the town clock.

In legend, at least, Huntsman entered the history of iron manufacturing sometime in the 1730s out of dissatisfaction with the quality of the steel used to make his clock springs. Since the mainspring of an eighteenth-century clock provided all of its power as it unwound, and any spring provided less drive force as it relaxed, the rate at which it yielded its energy had to be the same for every clock; an even smaller balance spring was needed to get to an “accuracy” of plus or minus ten minutes a day. Given the number of pieces of steel needed to compensate for a machine whose driving force changed with each second, consistency was more highly prized in clockmaking than in any other enterprise.

After nearly ten years of secret experiments,* Huntsman finally had his solution: melting blister steel in the clay crucibles used by local glass makers until it liquefied, which eliminated almost all variability from the end product. So successful was the technique, soon enough known as cast, or crucible, steelmaking, that by 1751 Huntsman had moved to Sheffield, twenty miles north of Doncaster, and hung out a shingle at his own forge. The forge was an advertisement for Huntsman’s ingenuity: His crucible used tall chimneys to increase the air draft, and “hard” coke to maintain the cementation temperature. It was also a reminder of the importance of good luck: His furnaces could be made with locally mined sandstone that was strongly resistant to heat, and his crucibles with exceptionally good local clay.

Up until then, Huntsman’s life was very much like that of a thousand other innovators of eighteenth-century Britain. He departed from the norm, however, in declining to patent his process, trusting instead that the same secrecy he had used in his experiments would be better protection than any legal sanction—supposedly to such a degree that he ran his works only at night. It didn’t work. A competitor named Samuel Walker was the first to spy out Huntsman’s secret, though not the last; Huntsman also attracted industrial spies like the Swede Ludwig Robsahm and the Frenchman Gabriel Jars, who produced reports on the Huntsman process for their own use in 1761 and 1765 respectively, and crucible steel became the world’s most widely used steelmaking technique until the middle of the nineteenth century.

NEITHER HUNTSMAN’S MAINSPRINGS NOR Darby’s iron pots were enough to build Rocket, and certainly neither represented enough demand to ignite a revolution. The real demand for iron throughout the mid-1700s was in the form of arms and armor, especially once the worldwide conflict known variously as the Seven Years War or the French and Indian War commenced in 1754. And in Britain, arms and armor meant the navy.

Between the Battle of La Hogue in 1692 and Trafalgar in 1805, the Royal Navy grew from about 250 to 950 ships while nearly doubling the number of guns carried by each ship of the line. Every one of those ships required a huge weight of iron, for everything from the cannon themselves to the hoops around hundreds of barrels, and most of that iron was purchased, despite the advances of Darby and others, from the Bergslagen district of Sweden; in 1750, when Britain consumed 50,000 tons, only 18,000 tons were produced at home. The reasons were complex, though the most important was Scandinavia’s still abundant forests in close proximity to rich veins of iron ore; whatever the causes, the result was a net outflow of £1.5 million annually to Sweden alone. Improving the availability and quality of British iron therefore had both financial and national security implications, which was why, in the 1770s, one of the service’s senior purchasing agents was charged with finding a better source for wrought iron. His name was Henry Cort.

FIFTY YEARS AFTER CORT’S death, The Times could still laud him as “the father of the iron trade.” His origins are a bit murky; he was likely from Lancaster, the son of a brickmaker, and by the 1760s he was working as a clerk to a navy agent: someone charged with collecting the pensions owed to the survivors of naval officers, prize money, and so on. In due course, the Royal Navy promoted him to a position as one of its purchasing agents, where Cort was charged with investigating other options for securing a reliable source of wrought iron.

The choice was simple: either import it from Sweden or make it locally, without charcoal. Darby’s coke-fired furnace solved half the problem, but replacing charcoal in the conversion of pig iron into wrought iron—the so-called fining process—demanded a new technique. An alternative to the charcoal-fired finery, dating from the 1730s, came to be known as the stamp-and-pot system, in which pig iron was cooled, removed from the furnace and “stamped,” or broken into small pieces, and then placed in a clay pot that was heated in a coal-fired furnace until the pot broke, after which the iron was reheated and hammered into a relatively pure form of wrought iron.

This limited the quantity of iron to the size of the clay pot, which was unacceptable for naval purposes. Cort’s insight was to expand the furnace itself, so that another hearth door opened onto what he called a puddling furnace. In puddling, molten pig iron was stirred by an iron “rabbling” bar, with the fuel kept separate from the iron in order to remove carbon and other impurities; as the carbon left the pig iron, the melting temperature increased, forcing still more carbon out of the mix. It was a brutal job, requiring men with the strength of Olympic weightlifters, the endurance of Tour de France cyclists, and a willingness to spend ten hours a day in the world’s hottest (and filthiest) sauna stirring a pool filled with a sludge of molten iron using a thirty-pound “spoon.” Temperatures in the coolest parts of the ironworks were typically over 130°; iron was transported by the hundredweight in unsteady wheelbarrows, and the slightest bit of overbalancing meant broken bones. Ingots weighing more than fifty pounds each had to be placed in furnaces at the end of puddlers’ shovels. Huge furnace doors and grates were regularly opened and closed by chains with a frightening tendency to wrap themselves around arms and legs. In the words of historian David Landes, “The puddlers were the aristocracy of the proletariat, proud, clannish, set apart by sweat and blood…. Numerous efforts were made to mechanize the puddling furnace, all of them in vain. Machines could be made to stir the bath, but only the human eye and touch could separate out the solidifying decarburized metal.” What the puddlers pulled, taffylike, from the furnace was a “puddle ball” or “loop” of pure iron ready for the next step.

The next step in replacing imported metal with domestic was forging wrought iron into useful shapes: converting ingots into bars and sheets. Until the late eighteenth century, the conversion was largely a matter of hammering iron that had been softened by heat into bars. A slitting mill run by waterpower then cut the bars into rods, which could then, for example, be either drawn into wire, or—more frequently—cut into nails. At Fontley, near Fareham, seventy miles from London, Cort developed grooved rollers, through which the softened iron ingots could be converted into bars and sheets in the manner of a traditional pasta machine. In 1783, he received a patent for “a peculiar method of preparing, welding, and working various sorts of iron.”

Cort’s title as “the father of the iron trade” was not free and clear, even during his lifetime, since his ingenuity in the improvement of iron purity was not matched by any corresponding purity in his financing methods. The source of the funds used to purchase the Fontley forge and slitting mills appears to have been money embezzled from the Royal Navy; once this was discovered, Cort lost everything, including his patents.*

Henry Cort’s life and works, however, also bear examination by those uninterested in embezzlement, or even metallurgy. Most histories—including, to a degree, this one—that touch on the industrialization of iron production tend to give a lot of attention to men like Darby and Cort while scanting everyone else. This is a bit like assuming that the visible part of the iceberg is the most important part. The portion of an iceberg that floats above the water line is not only a small fraction of the whole, but is “chosen” for its position as much by chance as by any real difference from the portion that remains underwater. The “underwater” story of iron purification is a substantial one: Not only had grooved rollers of a slightly different sort been patented by John Purnell in 1766, but puddling (under a different name) was included in Peter Onions’s 1783 patent. Other versions of puddling appeared in William Wood’s patent of 1728, the 1763 patent of Watt’s partner John Roebuck, the 1776 patent of Thomas and George Cranage (who worked with Darby at Coalbrookdale), John Cockshutt’s 1771 patent, and, most telling, the four-stage technique that earned John Wright and Richard Jesson patent number 1054, in which the iron was “cleansed of sulphurous matter” inside a rolling barrel.

All of which should serve as a reminder that while the industrialization of Europe was not a function of impersonal demographic forces, neither was it the work of a dozen brilliant geniuses. In any year, the lure of wealth and glory tempted at least a few hundred English inventors, but only a few achieved both.

However, no alloy of copper, or gold, or silver produced anything like the fame and fortune of the iron trade. In his 1910 poem “Cold Iron,” Rudyard Kipling allegorized the phenomenon:

Gold is for the mistress—silver for the maid;

Copper for the craftsman, cunning at his trade.

“Good!” said the Baron, sitting in his hall,

“But iron—cold iron—is master of them all.”

* And, to be fair, the “Struve Geodetic Arc,” which consists of thirty-four cairns, obelisks, and rocks-with-holes-drilled-in-them along a fifteen-hundred-mile chain of survey triangulations running from northern Norway to the Black Sea, and commemorating the nineteenth-century measurement of a meridian arc. The work of the astronomer Friedrich Georg Wilhelm Struve, it was indeed a noble and memorable achievement, but it draws considerably fewer tourists than, say, Stonehenge.

* The conventional archaeological sequence that leads from stone to bronze to iron “ages” has its critics. The fact that one seems inevitably to follow the other is puzzling on the face of it, given that iron is far easier to find than copper. Many metallurgists have suggested that the fact that primitive iron oxidizes so rapidly may explain the perceived lateness of its arrival.

* Though, needless to say, early iron smelters wouldn’t have known this (or, probably, cared), the violet color was an indication that carbon monoxide was being burned, and thus a sign of the reducing atmosphere that could remove oxygen from the ore.

* This is the problem with the phrase “renewable energy.” Wood is a renewable resource, but for making lumber or paper, not energy. The same adjective can even be applied to fossil fuels like oil and coal, if your time scale for renewability is measured in tens of millions of years. As a good rule of thumb, any resource that captures solar energy in the form of chemical bonds—coal, natural gas, oil, even biofuels—can always be consumed faster than it can be “renewed.”

* Some readers may recall seeing the once ubiquitous advertisements for Charles Minard’s map of Napoleon’s invasion of Russia, which earned undying fame as, in the words of Edward Tufte, “probably the best statistical graphic ever drawn.” The map uses the Réaumur scale to show the temperature confronting the Grande Armée during its retreat from Moscow.

* The earliest steel artifacts are more than five thousand years old, but were probably happy accidents of iron manufacture; the earliest reliable steelmaking dates to about the fourth century BCE, in both Asia and the Mediterranean.

* Sadly, no records survived Huntsman’s fear of industrial espionage, but literally dozens of progressively purer ingots of steel have been discovered where he buried them, testimony to trials at different temperatures, in different environments.

* To the end of his life, Cort contended that he had been the victim of fraud by the Navy Office and other parts of the government; the controversy continues to this day, fueled by charges and countercharges, with as much passion as any debate about the JFK assassination.
