concerning the discovery of fatty earth; the consequences of the deforestation of Europe; the limitations of waterpower; the experimental importance of a Scotsman’s ice cube; and the search for the most valuable jewel in Britain
THE GREAT SCIENTIST AND engineer William Thomson, Lord Kelvin, made his reputation on discoveries in basic physics, electricity, and thermodynamics, but he may be remembered just as well for his talent for aphorism. Among the best known of Kelvin’s quotations is the assertion that “all science is either physics or stamp collecting” (while one probably best forgotten is the confident “heavier-than-air flying machines are impossible”). But the most relevant for a history of the Industrial Revolution is this: “the steam engine has done much more for science than science has done for the steam engine.”
For an aphorism to achieve immortality (at least of the sort certified by Bartlett’s Familiar Quotations), it needs to be both true and simple, and while Kelvin’s is true, it is not simple, but simplistic. The science of the eighteenth century didn’t provide the first steam engines with a lot of answers, but it did have a new, and powerful, way of asking questions.
It is hard to overstate the importance of this. The revolution in the understanding of every aspect of physics and chemistry was built on a dozen different changes in the way people believed the world worked—the invariability of natural law, for example (Newton famously wrote, “as far as possible, assign the same causes [to] respiration in a man, and in a beast; the descent of stones in Europe and in America; the light of our culinary fire and of the sun”) or the belief that the most reliable path to truth was empirical.
But scientific understanding didn’t progress by looking for truth; it did so by looking for mistakes.
This was new. In the cartoon version of the Scientific Revolution, science made its great advances in opposition to a heavy-handed Roman Catholic Church; but an even larger obstacle to progress in the understanding and manipulation of nature was the belief that Aristotle had already figured out all of physics and had observed all that biology had to offer, or that Galen was the last word in medicine. By this standard, the real revolutionary manifesto of the day was written not by Descartes, or Galileo, but by the seventeenth-century Italian poet and physician Francesco Redi, in his Experiments on the Generation of Insects, who wrote (as one of a hundred examples), “Aristotle asserts that cabbages produce caterpillars daily, but I have not been able to witness this remarkable reproduction, though I have seen many eggs laid by butterflies on the cabbage-stalks….” Not for nothing was the motto of the Royal Society nullius in verba: “on no one’s word.”
This obsession with proving the other guy wrong (or at least testing his conclusions) is at the heart of the experimental method that came to dominate natural philosophy in the seventeenth century.* Of course, experimentation wasn’t invented in the seventeenth century; four hundred years earlier, while Aquinas was rejiggering Aristotle for a Christian world, the English friar Roger Bacon was inventing trial-and-error experimentation—in Europe, anyway; experimentation was widely practiced in medieval Islamic cities from Baghdad to Córdoba. Bacon was, however, a decided exception. The real lesson of medieval “science” is that the enterprise is a social one, that it was as difficult for isolated genius to sustain progress as it would be for a single family to benefit from evolution by natural selection. Moreover, even when outliers like Friar Bacon, and to a lesser degree the era’s alchemists, engaged in trial-and-error tests, they rarely recorded their results (this might be the most underappreciated aspect of experimentation) and even more rarely shared them. A culture of experimentation depends on lots of experimenters, each one testing the work of the others, and doing so publicly. Until that happened, the interactions needed for progress were too few to ignite anything that might be called a revolution, and certainly not the boiler in Rocket’s engine.
It took a massive shift in perspective to create such a culture, one in which a decent fraction of the population (a) trusted their own observations more than those made by Pliny, or Avicenna, or even Aristotle, and (b) distrusted the conclusions made by their contemporaries, at least until they could replicate them. In the traditional and convenient shorthand, this occurred when “scientific revolutionaries” like Galileo, Kepler, Copernicus, and Newton started thinking of the world in purely material terms, describing the world as a sort of machine best understood by reducing it to its component parts. The real transformation, however, was epistemological: Knowledge—the same stuff that Locke was defining as a sort of property—was, for the first time in history, conditional. Answers, even when they were given by Aristotle, were not absolute. They could be replaced by new, and better, answers. But a better answer cannot be produced by logic alone; spend years debating whether the physics of Democritus or Leucippus was superior, and you’ll still end up with either one or the other. A new and improved version demanded experiment.
If the new mania for scientific experimentation began sometime in the sixteenth century, with Galileo—or the seventeenth, if you prefer to date it from René Descartes—it took an embarrassingly long time to contribute much in the way of real-world technological advances. Francis Bacon might have imagined colleges devoted to the material betterment of mankind, in which brilliant researchers produced wonders that might allay hunger, cure disease, or speed ships across the sea; but the technology that mostly occupied the Scientific Revolution of the sixteenth and seventeenth centuries was improving scientific instruments themselves (and their close relations, navigational instruments). Science did build better telescopes, clocks, and experimental devices like von Guericke’s hemispheres, or Hooke’s vacuum machine, but remarkably little in the way of useful arts. The chasm that yawned between Europe’s natural philosophers and her artisan classes remained unbridged.
Describing how that bridge came to be built has been, for decades, the goal of an economic historian at Northwestern University named Joel Mokyr, who knows more than is healthy about the roots and consequences of the Industrial Revolution. In a series of books, papers, and articles, Professor Mokyr has described the existence of an intellectual passage from the Scientific Revolution of Galileo, Copernicus, and Newton to the Industrial Revolution, which he has named the “Industrial Enlightenment”—an analytical construct that is extraordinarily useful in understanding the origins of steam power.
The beauty of Mokyr’s analysis is that it replaces an intuitive notion—that the Industrial Revolution must have been somehow dependent upon the Scientific Revolution that preceded it—with an actual mechanism: in simple terms, the evolution of a market in knowledge.
The Scientific Revolution of the sixteenth and seventeenth centuries was a sort of market, though the currency in which transactions occurred was usually not gold but recognition: Gaspar Schott saw Otto von Guericke’s vacuum experiments and wrote about them; Boyle read his account and published his own. Huygens, Papin, and Hooke all published their own observations and experiments. They had an interest in doing so; as a class, they generally sought pride rather than profit for their labors, and were therefore paid in renown, along with some acceptable sinecure: professorships, pensions, patronage. They even sometimes, as with Hooke’s attempt to turn his discovery of the Law of Elasticity into a balance spring mechanism for a marketable timepiece, showed decided commercial impulses. But the critical thing was that a structure within which scientists could trade their newly created knowledge had been evolving for nearly a century before it was widely adopted by more commercially minded users.
Their need for it, however, was enormous. Prior to the eighteenth century, innovations tended to stay where they were, since finding out about them came at a very steep price; in the language of economists, they carried high information costs. For centuries, a new and improved dyeing technique developed by an Italian chemist would not be available, at any affordable cost, to a weaver in France, both because the institutions necessary for communicating it—transnational organizations like the Royal Society—did not exist, and because the value of the innovation was enhanced by keeping it secret.
For a century, that was how things stood. Europe’s first generation of true scientists produced a flood of testable theories about nature—universal gravitation, magnetism, circulation of the blood, the cell—and tools with which to understand them: calculus, the microscope, probability, and hundreds more. But this flood of what Mokyr calls propositional knowledge did not diffuse cheaply into the hands of the artisans who could put them to use, since the means of doing so depended on a sophisticated publishing industry producing books in Europe’s vernacular languages rather than the Latin of scientific discourse, and on a literate population to read them.
An even bigger problem was this: as the seventeenth century wound down, scientific knowledge was becoming a public good, partly because of what we might call the Baconian program. Francis Bacon’s vision of investigators and experimenters working in a common language for the common good had inspired an entire generation; and, to be fair, the extraordinary number of related discoveries in mathematics, physics, and chemistry had indeed benefited everyone. But partly it was a matter of class. Scientists in the seventeenth and eighteenth centuries, though a highly inventive bunch, were members of a fraternity that depended on allegiance to the idea of open science—so much so that even Benjamin Franklin, clearly a man with a strong commercial sense, did not as a matter of course take out patents on his inventions. The result was what happens when work is imperfectly aligned with rewards: Science remained disproportionately the activity of those with outside income. Predictably, Bacon’s New Atlantis model, which worked so well for the diffusion of scientific innovations, had built in a limit on the population of innovators.
By the start of the eighteenth century, however, things were changing, and changing fast. Artisans like Thomas Newcomen and itinerant experimentalists like Denis Papin were both corresponding with Robert Hooke. Engineers like Thomas Savery were demonstrating inventions in front of the physicists and astronomers of the Royal Society. Most important, mechanics, artisans, and millwrights, who had been taught not only to read but to measure and calculate, started to apply the mathematical and experimental techniques of the sciences to their crafts. Useful knowledge (the historian Ian Inkster calls it useful and reliable knowledge, or URK) became, in Mokyr’s words, “the buzzword of the eighteenth century.”
The same mechanisms that spread the discoveries of the Scientific Revolution throughout Europe—correspondence between researchers, and publications like the Royal Society’s Philosophical Transactions—proved just as useful in the diffusion of applied knowledge. But because Europe generally, and Britain specifically, had a lot more artisans than scientists, the demand for commercially promising applications was far greater than the demand for purely scientific ones. New ways of buying and selling applied knowledge emerged to meet it. J. T. Desaguliers, the same critic who had sniffed at Thomas Newcomen’s mathematical training, spent decades giving a hugely popular series of lectures all across rural England and later collected them in his 1724 Course of Mechanical and Experimental Philosophy. By the 1730s, millwrights, carpenters, and blacksmiths were able to purchase what we would today call a continuing education in pubs and coffeehouses in the craft they had learned as apprentices. By 1754, the drawing master William Shipley could found the Royal Society of Arts (at a time when no distinction was made between fine, decorative, and applied arts) on a manifesto that argued, “the Riches, Honour, Strength, and Prosperity of a Nation depend in a great Measure on Knowledge and Improvement of useful Arts [and] that due Encouragement and Rewards are greatly conducive to excite a Spirit of Emulation and Industry….” Britain’s artisans were now buying at their own knowledge market, and they were doing so to fatten not their reputations, but their wallets.
One of the criticisms often made of economists is that they see all of human behavior as a kind of market. But neither steam engines in general, nor Rocket in particular, makes much sense without referring to an entire series of markets: one for transportation of Manchester cotton, another for the iron on which the engine ran, still another for the coal it burned, and so on. The most important of all, however, was the Industrial Enlightenment’s de facto market in what would one day be called “best practices” from the craft world. By the first decades of the eighteenth century, a market had emerged in which an English ironmonger could learn German forging techniques, and a surveyor could acquire the tools of descriptive geometry.
But markets do more than bring buyers and sellers together. They also reduce transaction costs. One such cost, in the early decades of the eighteenth century, arose because many of the newest bits of useful knowledge were hard to compare with one another: they described the same phenomenon using different words (and different symbols). As the metaphorical shelves of the knowledge market filled with innovations, buyers demanded that they be comparable, which led directly to standardization of everything from mathematical notation to temperature scales. In this way, the Industrial Enlightenment’s knowledge economy lowered the barriers to communication between the creators of theoretical models and masters of prescriptive knowledge, for which the classic example is Robert Hooke’s 1703 letter to Thomas Newcomen advising him to drive his piston by means of vacuum alone.
The dominoes look something like this: A new enthusiasm for creating knowledge led to the public sharing of experimental methods and results; demand for those results built a network of communication channels among theoretical scientists; those channels eventually carried not just theoretical results but their real-world applications, which spread into the coffeehouses and inns where artisans could purchase access to the new knowledge.
Put another way, those dominoes knocked down walls between theory and practice that had stood for centuries. The emergence of a market in which knowledge could be acquired for application in the world of commerce had also increased the population capable of producing that knowledge. It would occur in the study of medicine, of chemistry, and even of mathematics, but nowhere was it more relevant to the future of industrialization than in the study of the science of heat.
TWO YEARS BEFORE HIS death in 1704, John Locke collaborated with William Grigg, the son of one of Locke’s oldest friends, to produce an interlinear translation—that is, alternating lines of Latin and English—of Aesop’s fables. One of those fables, “De sole et vento,” or “The Sun and the Wind,” famously recounts the contest between the two title characters over which could successfully cause a traveler to remove his coat. It is among the earliest, and is certainly one of the best known, accounts of the debate between heat and cold. Or, as we would call it today, thermodynamics.
Though the equations of thermodynamics are obviously essential to understanding the machine that Newcomen and Calley demonstrated in front of Dudley Castle, they were just as obviously unnecessary for building it. What the ironmonger and glazier didn’t know about the physics of the relationship between water and steam would fill libraries, while what they did know was mostly wrong. This is in no way a criticism of the inventors; what everyone knew, at the time, was mostly wrong.
Even the seventeenth century’s newfound affinity for experimental science hadn’t done much to correct misapprehensions about the nature of heat. When Francis Bacon (to be fair, more a philosopher of science than a scientist) attempted, in 1620, an exhaustive description of the sources of heat, he included not only obvious candidates like the sun, lightning, and “the violent percussion of flint and steel,” but also vinegar, ethanol (“spirits of wine”), and even intense cold. He also failed to produce anything like a testable theory; while he did nod toward equating heat with motion, he failed to realize that heat was a measurable quantity—the first thermometers that used any sort of scale date from the early eighteenth century; imagine, if you can, drawing a map without knowing the number of inches to the mile, and you can see the obstacle this presented. Galileo, Descartes, and especially Robert Boyle also tried to explain how motion was related to heat, particularly friction. They each failed, which is not surprising; more than two centuries later, Lord Kelvin himself was still unsure whether heat energy could be equated with mechanical energy.
The reason is that seventeenth-century heat theorists were hamstrung by the two existing models from the world of natural philosophy. The first was the notion that heat was an “elastic fluid” or gas; the other, that it was a consequence of exciting the motion of an object’s constituent parts, which were known as “atoms,” though those who used the term didn’t mean what a modern chemistry textbook means by it. Isaac Newton had demonstrated that the best escape from the prison of Aristotelian ideas about motion was an entirely new set of invariant laws, but Newton, curious though he was, showed little interest in the nature of heat. As a result, the first really useful theory of heat and combustion was articulated elsewhere.
In 1678, less than a decade before Newton introduced the world to the laws of motion and universal gravitation in the first edition of Principia Mathematica (and two decades before Savery received patent number 356), the alchemist Johann Joachim Becher departed the patchwork quilt of grand duchies, principalities, and free cities that the Thirty Years War had made of Germany. By then, he had already served as a court physician to the Elector of Bavaria; as a secret agent in the pay of the Austrian Emperor; and as a special emissary for Prince Rupert, the onetime commander of royalist cavalry during the English Civil War. It was in the last capacity that he journeyed to Scotland and Cornwall, to examine and report on the coal and tin mines of Britain. He also had a personal motive: to discuss his discovery with the new Royal Society.
Becher called the substance he had “discovered” terra pinguis, which translates, confusingly, as “fatty [by which he meant inflammable] earth.” This was a thoroughly respectable attempt to reconcile the established belief that the world was made up of the ancient four elements—fire, water, earth, and air—with the observation that the phenomenon of combustion seems to involve them all; that in some way, the process that burns wood is similar to the one that causes iron to rust, if only because the absence of air prevents both. Becher’s discovery, renamed in 1718 as phlogiston, replaced the Aristotelian elements with a different foursome: water; terra mercurialis, or fluid earth (i.e., mercury and similar substances); inert earth, or terra lapidea (that is, salts); and Becher’s terra pinguis, thus covering all possible forms of matter, and demoting fire from an element to a phenomenon. The theory that explained the behavior of Becher’s inflammable earth still has, in some circles, the flavor of charlatanism, and to be sure, Becher wasn’t completely free of the taint; he had spent years trying to sell a method for turning sand into gold. However tempting it is to poke fun at the scientific ignorance of our ancestors, though, in the case of the phlogiston theory, it is a temptation that should be resisted. Though phlogiston theory is wrong, it is considerably more scientific than is generally understood, and it was an early and necessary step on the way to a proper understanding of thermodynamics, and of the way in which Rocket transformed heat into movement.
At the core of the theory is the idea that anything that can be burned must contain a material—phlogiston—that is released by the process of burning. Once burned, the dephlogisticated substance becomes calx (an example would be wood ash), while the air surrounding it, which was known to be essential to combustion, became phlogisticated. Thus, burning wood in a sealed chamber could never result in complete combustion, because the dephlogisticated air necessary for burning became saturated with phlogiston. The reason that wood ash weighs less than wood, therefore, is because of the loss of phlogiston to the air when it is partially burned.
However, any theory of heat transfer that depended upon the exchange of a substance demanded that the substance go somewhere. Phlogiston theory worked fine for things that weighed less after burning, but it was vulnerable to an encounter with any substance that didn’t. Magnesium, for example, gains weight when burned (it becomes magnesium oxide). And heat can be transferred even when “condensed phlogiston” doesn’t change at all: A red-hot hunk of iron will cause water to steam even though it weighs the same after it is cooled by that same water.
By the end of the eighteenth century, despite some truly passionate devotees, most especially the English chemist Joseph Priestley, phlogiston theory had been displaced, largely by the work of the French scientist Antoine Lavoisier. Which is why phlogiston theory deserves a bit more respect than it is generally given. It is a goofy theory, to be sure, with funny-sounding names for its fundamental concepts (though no funnier-sounding than quarks, Higgs bosons, or other notions from the world of quantum physics). But it is a theory, in a way that the four elements of antiquity were not. Phlogiston was incorrect in its particulars: The relationship between fire and rust is that both are examples of what happens when oxygen, which would not be discovered for another century, reacts with another substance. But it was also testable, in the sense made famous by the philosopher of science Karl Popper. Phlogiston theory could be proved false, and eventually was. The first to do so was a pioneering physicist and chemist at the University of Glasgow, a key figure in the evolution of the steam engine, named Joseph Black.
BLACK WAS A THOROUGHGOING Scot, despite his Bordeaux birthplace, an incidental consequence of his family’s involvement in the wine trade, and his early schooling in Belfast. He matriculated at both Glasgow and Edinburgh universities, and subsequently served as professor of chemistry at first one and then the other, ending up at Edinburgh in 1766. Long before that, he had demonstrated a remarkable gift for experimental design, and what was, for the time, painstaking care in experimentation itself, particularly into the nature of heat.
The gift for designing experiments was much on display in Black’s research into the nature of what a later science of chemistry knows as carbon dioxide. He was not, by all accounts, much interested in testing phlogiston theory when he began; instead, as a physician, he was looking for a way to dissolve kidney stones. His research accordingly began with an investigation into the then well known process by which chalk, or calcium carbonate, turns into the caustic quicklime, which was the name then used for calcium oxide. Black chose to work with a similar substance: magnesium carbonate, then known as magnesia alba. Since the transformation required combustion, at very high heat, phlogiston theory suggested that the reason was the absorption of the fiery substance by the chalk. Black, by careful experiment, showed that the magnesia alba weighed less after heating, but regained precisely the same amount when cooled in the presence of potash, from which he reasoned that the substance that had departed the original material—CO2—had returned. He did not, of course, put it quite that way, since oxygen itself still awaited discovery some decades later. Instead, he wrote, “Quick-lime [i.e. calcium oxide] therefore does not attract air when in its most ordinary form, but is capable of being joined to one particular species only, which is dispersed thro’ the atmosphere, either in the shape of an exceedingly subtle powder, or more probably in that of an elastic fluid [which I have called] ‘fixed air.’” Or, as your high school chemistry teacher would explain it, calcium oxide becomes calcium carbonate in the presence of carbon dioxide.
This discovery alone, which was the first test that phlogiston theory failed, would have purchased for Professor Black a place in the history of science. But what earned him a place in the story of steam power were his subsequent experiments on the nature of heat itself. Or, more accurately, on the nature of ice.
Water, as we have seen, is a most curious substance: In both its gaseous and solid states, it occupies more volume than it does as a liquid. It is also (practically uniquely) present on earth as a solid, a liquid, and a gas. By 1760, Black had become fascinated by the properties of water in its solid version, and even more fascinated by the transition from one phase to another. He was particularly intrigued by the fact that frozen water, whether in the form of ice or snow, did not melt immediately upon exposure to high heat, but did so gradually. Another curious fact is that a glass with ice in it will stay the same temperature—32°F, or 0°C—whether it has six unmelted ice cubes in it or one. The temperature starts to rise only when all the ice is melted. In the same way, a pot of water brought to a boil does not thereafter increase in temperature, no matter how hot the fire underneath. These are by no means intuitive results, but Black observed them again and again, once again finding phlogiston theory insufficient to explain the phenomena. Instead, he came up with an idea of his own, called latent heat, which he defined as the amount of heat gained or lost by a particular substance before it changes from one physical state to another—gas to liquid, solid to liquid. To Black, latent heat was the best way to explain the fact that water, when it nears its boiling point, does not suddenly turn to steam with, in his words, “a violence equal to that of gunpowder.”
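Black’s plateau observation can be put in modern, quantitative terms with a toy model (the function and the numbers below are illustrative, not Black’s own data): heat poured into an ice-water mixture goes entirely into melting, at his measured figure of 140 Fahrenheit “degrees of heat” per pound, and the temperature begins to rise only once the last of the ice is gone.

```python
# Toy model of Black's ice-cube observation: an ice/water mixture absorbs
# heat without warming until every bit of ice has melted.
# Units are illustrative: pounds of ice and water, with one "heat unit"
# warming one pound of liquid water by one degree Fahrenheit.

LATENT_HEAT_OF_FUSION = 140.0  # Black's figure, per pound of ice

def warm_mixture(ice_lb, water_lb, heat_units):
    """Apply heat to an ice/water mixture at 32 F; return (ice, water, temp)."""
    melt_capacity = ice_lb * LATENT_HEAT_OF_FUSION
    if heat_units <= melt_capacity:
        # Phase 1: heat melts ice; the mixture stays pinned at 32 F
        melted = heat_units / LATENT_HEAT_OF_FUSION
        return ice_lb - melted, water_lb + melted, 32.0
    # Phase 2: the ice is gone; the remaining heat warms the liquid
    leftover = heat_units - melt_capacity
    total_water = water_lb + ice_lb
    return 0.0, total_water, 32.0 + leftover / total_water

# One pound of ice floating in one pound of 32 F water:
print(warm_mixture(1.0, 1.0, 70.0))   # half the ice melts; still 32 F
print(warm_mixture(1.0, 1.0, 160.0))  # ice gone; two pounds of water at 42 F
```

However much heat the six ice cubes or the one absorb, the thermometer reads the same until the melting is complete, which is exactly what Black saw.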
The experiments that confirmed this hypothesis were simple, and ingenious. Black took a quantity of water and, using a thermometer, measured its temperature. He then placed the water over heat and measured both the amount of time it took for the water to boil and the amount of time it took, once boiling, to boil away completely. By comparing the two, he established the amount of heat the water continued to absorb after its own temperature stopped rising. Many years later, Black described his discovery:
I, therefore, set seriously about making experiments, conformable to the suspicion that I entertained concerning the boiling of fluids. My conjecture, when put into form, was to this purpose. I imagined that, during the boiling, heat is absorbed by the water, and enters into the composition of the vapour produced from it, in the same manner as it is absorbed by ice in melting, and enters into the composition of the produced water. And, as the ostensible effect of the heat, in this last case, consists, not in warming the surrounding bodies, but in rendering the ice fluid; so, in the case of boiling, the heat absorbed does not warm surrounding bodies, but converts the water into vapour. In both cases, considered as the cause of warmth, we do not perceive its presence: it is concealed, or latent, and I give it the name of LATENT HEAT …
Thus, Black calculated that a pound of liquid water had a latent heat of vaporization of 960°F; its latent heat of fusion—the amount of heat ice absorbs before completely melting—he measured at 140°F.* That is a lot of latent heat. Water absorbs nearly three times as much heat before vaporizing as the same quantity of ethanol, one of the many reasons that your waiter can flambé brandy, but not orange juice. Black’s experimental and quantitative mind also applied a different sort of arithmetic to heat capacity: He heated a pound of gold to 190° and placed it in a pound of water at a temperature of 50°; when he took the temperature of the combined elements and found it to be only 55°, he concluded that water had nearly twenty times more capacity for heat than did gold.
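Black’s boil-time method lends itself to a back-of-the-envelope calculation. The sketch below uses hypothetical numbers, not Black’s actual laboratory data, along with his simplifying assumption that a steady fire delivers heat at a constant rate:

```python
# Black's boil-time method, in miniature. Assumes the fire delivers heat
# at a constant rate, so heat absorbed is proportional to elapsed time.
# The run below uses made-up numbers chosen to land near Black's
# published figure of roughly 960 Fahrenheit "degrees of heat" per pound.

def latent_heat_estimate(t_start, t_boil, minutes_to_boil, minutes_to_boil_away):
    """Estimate latent heat of vaporization in Fahrenheit 'degrees of heat'."""
    sensible_degrees = t_boil - t_start              # warming from start to boil
    degrees_per_minute = sensible_degrees / minutes_to_boil
    # The same heating rate, sustained for the whole boiling-away period,
    # is the heat the water absorbed without getting any hotter.
    return degrees_per_minute * minutes_to_boil_away

# Hypothetical run: water at 50 F reaches 212 F after 4 minutes on the
# fire, then takes 24 more minutes to boil away completely.
print(latent_heat_estimate(50, 212, 4, 24))  # 972.0
```

The ratio of the two times is the whole trick: the water spent six times as long absorbing heat at a constant temperature as it spent warming up, so the hidden, latent heat must be six times the sensible heat.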
It took a pretty big fire, therefore, to boil the water in the atmospheric engine Thomas Newcomen had erected in front of Dudley Castle that day in 1712. Joseph Black had discovered a new way of measuring how big, but the relevant metric for Newcomen and Calley wasn’t degrees Fahrenheit. It was fuel.
This simple fact was, in its way, as revolutionary as Coke’s Statute or Newton’s Laws of Motion. For millennia, advances in the design of machines to do work had been driven entirely by measures of their output: a tool that plows more furrows, or spins more wool, or even pumps more water, was ipso facto a better machine. Prior to the seventeenth century, the choices for performing such work—defined, as it would be in an introductory physics class, as the transfer of energy by means of a force—had been made from the following menu:
· Muscle, either human or animal;
· Water; or
· Wind.
Muscle power is, needless to say, older than civilization. It’s even older than humanity, though humans are considerably more efficient than most draft animals in converting sunlight into work; an adult human is able to convert roughly 18 percent of the calories he consumes into work, while a big hayburner like a horse or ox is lucky to hit 10 percent—one of the reasons for the popularity of slavery throughout history. The remaining two were, by the seventeenth century, relatively mature technologies. More than 3,500 years ago, Egyptians were using waterwheels both for irrigation and milling, while at the other end of Asia, first-century Chinese engineers were building waterwheels linked to a peg and cord that operated an iron smelter’s bellows; the following century, another Chinese waterwheel used a similar mechanism for raising and dropping a hammer for milling rice. The first-century historian known as Strabo the Geographer described a water-driven mill for grinding grain in the palace of Mithridates of Pontus that was built in 53 BCE. A century later, the Roman architect and engineer Vitruvius (the same one who inspired Leonardo’s “Vitruvian Man”) designed, though possibly never built, a water mill that used helical gearing to turn the rotation of the wheel into the vertical motion of the grinding stone, “the first great achievement in the design of continuously powered machinery.”
Europeans had been putting water to work for hundreds of years before they started harnessing the wind, possibly during the tenth century, and certainly by the twelfth. This was probably because, even more than with waterwheels, the utility, and therefore the ubiquity, of windmills was a function of geography. They were, for example, common in northern Europe, because of its flat topography and because of their comparative advantage in a climate where rivers freeze in the winter, but rare enough in the Mediterranean that Don Quixote could still be astonished by the appearance of one.
Windmills and waterwheels were, and are, used for everything from pumping water to sawing wood to operating bellows to cleaning and thickening (the term of art is fulling) wool cloth. As we will see, it was a century before steam engines were used for a function that was not previously, and usually simultaneously, performed by wind or water mills. Their most important function, however, from antiquity forward, was milling grain; in the case of muscle power, the same grain that fed the draft animals themselves.
Whatever their productivity in milling grain, wind and water mills suffered from two fundamental liabilities, one obvious, the other less so. The first was that water mills, especially, are site-specific; they could perform work only alongside rivers and streams, not necessarily where the work was needed. The second, however, proved to be the more significant: the costs of wind and water power were largely fixed. Once a mill was built, its owner had no more incentive to improve its efficiency than someone who bought a car with a lifetime supply of free gasoline would have to drive economically. One can make a water mill more powerful, but one cannot, in any measurable way, reduce its operating expenses. The importance of this as a spur to the inventive explosion of the eighteenth century can scarcely be overstated. So long as wind, water, and muscle drove a civilization’s machines, that civilization was under little pressure to innovate. Once those machines were driven by the product of a hundred million years of another sort of pressure, innovation was inevitable. One is even tempted to say that it heated up.
COAL IS SUCH A critical ingredient for the Industrial Revolution that a significant number of historians have ascribed Britain’s industrial preeminence almost entirely to its rich and relatively accessible deposits. Newcomen’s engine, after all, ran on coal, and was used to mine it. One would scarcely expect to read a history of the steam engine, or the Industrial Revolution, without sooner or later encountering coal.
Encountering it in the same chapter that documents the rise of the experimental method is perhaps a little less obvious. But that proximity is neither sloppiness nor coincidence; the two are subtly, but inextricably, linked. The steam engine was first developed, and then improved, as a function not only of a belief in progressive improvement, but of an acute awareness that each incremental improvement could be measured in reduced cost. Demand for Newcomen’s steam engine was bounded by the price of fuel per unit of work.
For a million years, the fuel of choice for humans was carbon-rich biomass, in the form of both wood and charcoal, but it did no work, in the mechanical sense. Instead, it was used exclusively to cook food and combat the cold, and, occasionally, to harden wood. Several hundred thousand years later, a group of South Asians, or possibly Middle Easterners, discovered that their charcoal fires also worked pretty well to turn metals into something easier to make into useful shapes, either by casting or bending. For both space heating and metalworking, wood, the original “renewable” fuel, was perfectly adequate; measured in British Thermal Units—as above, the heat required to raise the temperature of one pound of water 1°F—a pound of dry wood produces about 7,000 BTUs, a pound of charcoal about 25 percent more. Only as wood became scarce did it occur to anyone that its highest value was as a construction material rather than as a fuel. A crop of timber takes some fourteen years to grow, and burning it for space heating or for smelting became a progressively worse bargain.
Europe’s first true “wood crisis”16 occurred in the late twelfth century as a bit of collateral damage from a Christian crusade to destroy the continent’s tree-rich sanctuaries of pagan worship and open up enough farmland to make possible the European population explosion of the following centuries. A lot more Europeans meant a lot more wooden carts, wooden houses, and wooden ships. It also meant a lot more wood17 for the charcoal to fuel iron smelters, since smelting one pound of iron required the charcoal produced by burning nearly eight cubic feet of wood. By 1230, England had cut down so many trees for construction and fuel that it was importing most of its timber from Scandinavia, and turned to what would then have been called an alternative energy source: coal.
Coal consists primarily of carbon, but it includes any number of other elements, including sulfur, hydrogen, and oxygen, that have been compressed between other rocks and otherwise changed by the action of bacteria and heat over millions of years. It originates as imperfectly decayed vegetable matter, imperfect because incomplete. When most of the plants that covered the earth three hundred million years ago, during the period not at all coincidentally known as the Carboniferous, died, the air that permitted them to grow to gargantuan sizes—trees nearly two hundred feet tall, for example—collected its payback in the form of oxidation: the oxygen-rich atmosphere converted most of the dead plant matter into carbon dioxide and water. Some plants, however, died in mud or water, where oxygen was unable to reach them. The result was the carbon-dense sponge known as peat. Combine peat with a few million years, a few thousand pounds of pressure, several hundred degrees of heat, and the ministrations of uncounted billions of bacteria, and it develops through stages, or ranks, of “coalification.” The shorter the coalification process, the more the final product resembles its plant ancestors: softer and moister, with far more impurities by weight.

Or so goes the consensus, “biogenic” view of coal’s natural history; an alternative theory does exist, arguing that coal and other fossil fuels have a completely geological origin. The theory, arrived at independently in the 1970s by Soviet geologists and the Austrian-born astrophysicist Thomas Gold,* contends that the pressures and heat present in what Gold called the “deep hot biosphere” formed the hydrocarbon fuels currently being used to run the world’s energy economy, and that the presence of biological detritus in coal and other “fossil” fuels is a side effect of the bacteria that fed on them.
Whether as a result of geologic or biogenic forces, each piece of coal is unique, the result of both different plant origins and differing histories of pressure, heat, and fermentation. What all coals share, however, is the same relationship between time and energy: over thousands of millennia, hydrogen and hydroxyl compounds are boiled and pushed out, leaving successively purer and purer carbon. The younger the coal, the greater the percentage of impurities, and the lower the ranking. In fourteenth-century Britain, lower-ranked minerals like lignite and sub-bituminous coals were known as “sea coal,” a term of uncertain etymology whose likeliest root is that the handiest outcroppings were found along the seams that follow the River Tyne to the North Sea.
Long before concerns about particulate pollution and global warming, coal had PR problems. Almost everyone in medieval England found the smell of sea coal obnoxious, partly because of sulfurous impurities that put right-thinking Englishmen in mind of the devil, or at least of rotten eggs; by the early fourteenth century,18 it was producing so much noxious smoke in London that King Edward I forbade burning it, with punishments ranging from fines to the smashing of coal-fired furnaces. The ban was largely ignored, as sea coal remained useful for space heating, though distasteful. Working iron, on the other hand, required a much hotter-burning fuel, and in this respect the softer coals were inferior to the much older, and harder, bituminous and anthracite. Unfortunately, along with burning hotter and cleaner—a pound of anthracite, with a carbon content of between 86 and 98 percent by weight, produces 15,000 BTUs, while a pound of lignite (which can be as little as one-quarter carbon) yields only about 4,000 to 8,000 BTUs—hard coal is found a lot deeper under the ground. Romans in Britain mined that sort of coal, which they called gagate, and we call jet, for jewelry, but interest in deep coal mining declined with their departure in the fifth century. It was not until the 1600s19 that English miners found their way down to the level of the water table and started needing a means to get at the coal below it.
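The heating values quoted above make the fuel hierarchy easy to quantify. The sketch below uses only the figures from the text (with lignite taken at the midpoint of its quoted range), not authoritative modern measurements:

```python
# Heating values per pound, in BTUs, using the figures quoted in the text
# (lignite's 4,000-8,000 BTU range taken at its midpoint).
fuels_btu_per_lb = {
    "dry wood": 7_000,
    "charcoal": 7_000 * 1.25,        # "about 25 percent more" than wood
    "lignite": (4_000 + 8_000) / 2,
    "anthracite": 15_000,
}

# Pounds of each fuel needed to match the heat of one pound of anthracite.
for fuel, btu in fuels_btu_per_lb.items():
    ratio = fuels_btu_per_lb["anthracite"] / btu
    print(f"{fuel:>10}: {btu:7.0f} BTU/lb -> {ratio:.1f} lb per lb of anthracite")
```

By this reckoning a smith would have had to burn roughly two and a half pounds of the despised sea coal for every pound of anthracite, which is why the softer coals never threatened charcoal at the forge.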
ANY NARRATIVE HISTORY OF the steam engine must sooner or later make a detour underground. An Industrial Revolution without mining, and particularly coal mining, is as incomplete as rock and roll without drums. Actually, that understates the case; the degree to which cultures have solved the geometric puzzle of extracting useful ores from the complicated and refractory crust of the earth is a pretty fair proxy for civilization itself.
At more or less the same moment in history that groups of H. sapiens started digging a few inches into the dirt in order to plant seeds, they also began digging a little farther looking for flint that could be chipped into useful shapes. Even earlier, some seventeen thousand years ago, the caves at Lascaux were decorated with, among other things, pigment extracted from the iron ore known as hematite. Sometime thereafter, every community of humans discovered that clay plus fire equals pottery, and, by about 4000 BCE, that the earth contained semiprecious stones like turquoise and malachite, and easily worked metals like gold, and especially copper. The Roman mines along the Rio Tinto (named for the color its copper ore gave the water) in southern Spain not only provided precious metals but mechanized the process for the first time, using aqueducts to wash the debris out of the excavation and waterwheels to crush the ore left behind. And, of course, for four centuries Roman Britain was a source of silver, copper, and gold—and jet—for the imperial treasury.
The demand for Roman engineering was a function of the change from surface to deep mining, though the adjective is a relative one. The deepest mine in the world, at 2.4 miles down, is the Tau Tona gold mine outside of Johannesburg; the world’s deepest coal mine is barely three-quarters of a mile to the bottom, which means that the most elaborately dug structures in mining history have scratched only the tiniest fraction of the mineral trove of the planet.*
Despite six millennia of improvement in mining technology, the “scratching” is actually more dangerous today than it was in the Neolithic period, and nearly as hazardous as it was for medieval pick-and-axe miners. Once the potential for surface mining, which is the complete removal of the ground cover, was exhausted, the only recourse for coal extraction was digging, usually into a hilltop. Whether the goal was hard coal or soft, the first step in such digging was mounting a large-bore auger on a framework and rotating it, usually by men or tethered mules walking in circles, adding segments to it as it drilled deeper. The auger was followed by miners using tools, primarily picks, to carve coal from seams (some up to 100 feet thick) in a “room-and-pillar” method at the face, and transporting the coal by cart to the adit, or borehole entrance. With increasing depth, water-driven elevators or skip buckets were used to carry coal to the surface. In medieval England, the combination of technical difficulty with the ever-present risk of cave-in, flooding, and sharp tools wielded in close quarters meant that miners were treated as a relatively privileged class; unlike tenant farmers, they dug without obligation to the lords whose lands they worked, living “in a state within a state,20 subject, only, in the last resort, to the approval of the Crown.”
As coal mines went deeper, they also became more dangerous, and not merely because of the engineering challenge of supporting tons of overburden; one of the volatile components of raw coal is the hydrocarbon CH4, or methane, the main component of the flammable mixture known as “firedamp.” Though it is lighter than air, it can still pool in sealed areas of mines, posing a danger of asphyxiation and, far more significant in an age in which the only illumination came from fire in one form or another, explosion.* Savery’s “Miner’s Friend” was not, as it happens, sold exclusively as a water pump, but also as a means of ventilating such mines.
Anything that improved mining was attractive to the innovators of eighteenth-century England. Three-quarters of the patents for invention granted prior to the Savery engine were, one way or the other, mining innovations; 15 percent of the total were for drainage alone,21 as the shortage of surface coal became more and more acute and prices rose.
Price is the mechanism by which we allocate the things we value, from iPhones to coal, and even an imperfect system sooner or later incorporates the cost of manufacture into the selling price. In 1752, a study was made22 of a 240-foot-deep coal mine in northeast England in which a horse-driven pump lifted just over 67,000 gallons every twenty-four hours at a cost of twenty-four shillings, while Newcomen’s engine pumped more than 250,000 gallons using twenty shillings’ worth of coal—a demonstration not only of the value of the engine, but of a newfound enthusiasm for cost accounting. Newcomen’s engine, by pumping water out of deeper mines at a lower cost, also lowered the effective price of coal.
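The 1752 comparison is, at bottom, a cost-per-gallon calculation, and redoing it makes the engine’s advantage explicit. A quick sketch, using only the figures reported above:

```python
# Figures from the 1752 study: gallons pumped in twenty-four hours,
# and the cost in shillings of pumping them.
horse_gallons, horse_cost_shillings = 67_000, 24
engine_gallons, engine_cost_shillings = 250_000, 20

# Shillings spent per thousand gallons lifted.
horse_rate = 1_000 * horse_cost_shillings / horse_gallons
engine_rate = 1_000 * engine_cost_shillings / engine_gallons

print(f"horse-driven pump: {horse_rate:.2f} shillings per 1,000 gallons")
print(f"Newcomen engine:   {engine_rate:.2f} shillings per 1,000 gallons")
print(f"the engine is about {horse_rate / engine_rate:.1f}x cheaper per gallon")
```

By this accounting the engine lifted water at less than a quarter of the horse’s cost per gallon, exactly the sort of measurement the new enthusiasm for cost accounting made routine.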
The problem was that it didn’t lower it enough. The coal-fired atmospheric engines of the type designed by Newcomen and Calley burned so much coal for the amount of water they pumped that the only cost-effective place to use them was at the coal mine itself. This did a lot more for heating British homes than for running British factories; as late as the 1840s,23 the smoky fireplaces of British homes still consumed two-thirds of Britain’s domestic coal output, and a shocking 40 percent of the world’s. An eighteenth-century coal porter in London might carry loads of twice his own weight up rickety stairs and ladders as many as sixty times a day. But no one was using steam engines for much else, because the cost of transporting coal to a steam engine more than a few hundred yards from the mine itself ate up any savings the engine offered.
For fifty years, lowering the cost of mining coal for heat had been enough to make the Newcomen engine a giant success. It was dominant in Britain, copied all over Europe, and even studied at universities—unsurprisingly, given the experimental methods that had created the engine in the first place. One of the universities interested in producing a superior version of the Newcomen engine was the University of Glasgow, the fourth oldest in the English-speaking world, and home not only to Joseph Black but to James Boswell, Adam Smith, and a dozen other leading lights of what came to be known as the Scottish Enlightenment.
And, of course, to James Watt.
* The modern definition of experimentation—isolation of a single variable, to test and record the effect of changing it—still lay a hundred years in the future. We will meet the creator of this sort of experimental design, John Smeaton, in chapter 6.
* Modern engineers generally measure this as kilojoules/kilogram, but in British Thermal Units (the amount of heat needed to raise the temperature of a pound of water 1°F) the numbers are 144 and 965 respectively. Thus, it takes 144 BTUs to turn a pound of ice at 32° into a pound of water at 32°, and 965 BTUs to convert a pound of water at 212° into steam.
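For readers who want the kilojoules-per-kilogram version the footnote alludes to, the conversion is direct: 1 BTU per pound equals about 2.326 kJ/kg (a standard conversion factor, not a figure from the text). The footnote’s numbers land within about a percent of the modern handbook values of roughly 334 and 2,257 kJ/kg:

```python
# Standard unit conversion: 1 BTU per pound = 2.326 kilojoules per kilogram.
BTU_PER_LB_TO_KJ_PER_KG = 2.326

# The footnote's figures for water's latent heats, in BTU/lb.
fusion_btu = 144        # melting ice at 32 degrees F
vaporization_btu = 965  # boiling water at 212 degrees F

for name, btu in [("fusion", fusion_btu), ("vaporization", vaporization_btu)]:
    kj_per_kg = btu * BTU_PER_LB_TO_KJ_PER_KG
    print(f"latent heat of {name}: {btu} BTU/lb = {kj_per_kg:.0f} kJ/kg")
```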
* Gold died in 2004 after four decades at Cornell University and a lifetime of swimming outside the mainstream of scientific orthodoxy. In the 1950s, along with Fred Hoyle, Gold was the originator of the so-called steady state theory of the universe, which preceded and contradicted the generally accepted big bang theory.
* A back-of-the-envelope calculation, using 2.6 × 10^11 cubic miles as the rough spherical volume of the planet, concludes that excavating a truly giant mine—2.4 miles down, a mile on a side—gets at no more than about 10^-11 of the earth’s volume. Barely a scratch.
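The footnote’s sum can be rerun from first principles. The sketch below assumes a mean Earth radius of about 3,959 miles (a standard reference value, not a figure from the text):

```python
import math

EARTH_RADIUS_MILES = 3_959  # mean radius; an assumed reference value

# Spherical volume of the planet, in cubic miles.
earth_volume = (4 / 3) * math.pi * EARTH_RADIUS_MILES ** 3

# The "truly giant" mine: 2.4 miles deep, one mile on each side.
mine_volume = 2.4 * 1 * 1

fraction = mine_volume / earth_volume
print(f"earth volume: {earth_volume:.1e} cubic miles")
print(f"mine fraction of earth volume: {fraction:.0e}")
```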
* An explosion is essentially a fast-burning fire with nowhere to go. Firedamp, however, can also burn slowly. Very slowly. The mine fire that started in Centralia, Pennsylvania, in May 1962 is, as of this writing, still burning.