CHAPTER SIX

THE WHOLE THING WAS ARRANGED IN MY MIND

concerning the surprising contents of a Ladies Diary; invention by natural selection; the Flynn Effect; neuronal avalanches; the critical distinction between invention and innovation; and the memory of a stroll on Glasgow Green

It was in the Green of Glasgow.1 I had gone to take a walk on a fine Sabbath afternoon. I had entered the Green by the gate at the foot of Charlotte Street—had passed the old washing-house. I was thinking upon the engine at the time, and had gone as far as the Herd’s-house, when the idea came into my mind, that as steam was an elastic body it would rush into a vacuum, and if a communication was made between the cylinder and an exhausted vessel, it would rush into it, and might be there condensed without cooling the cylinder. I then saw that I must get quit of the condensed steam and injection water, if I used a jet as in Newcomen’s engine. Two ways of doing this occurred to me. First, the water might be run off by a descending pipe, if an offlet could be got at the depth of 35 or 36 feet, and any air might be extracted by a small pump; the second was to make the pump large enough to extract both water and air…. I had not walked farther than the Golf-house when the whole thing was arranged in my mind.

THE “WHOLE THING” WAS, of course, James Watt’s world-historic invention of the separate condenser. It is one of the best recorded, and most repeated, eureka moments since Archimedes leaped out of his bathtub; but accounts of sudden insights have been a regular feature in virtually every history of scientific progress. The fascination with the eureka moment has endured mostly because it turns out to be largely accurate, in general terms if not in detail (no apple actually hit Sir Isaac’s cranium, but one falling from a tree in Newton’s garden at Woolsthorpe Manor really did inspire the first speculations on the nature of universal gravitation).

Watt’s own flash of insight is worth examining not only for its content, but for what it says about insight itself. Those eureka moments are so central to the process of invention that understanding the revolutionary increase in inventive activity demanded by the steam engine also means exploring what modern cognitive science knows (and, more often, suspects) about the mechanism of insight. Watt’s moment is just one instance—an earth-shaking one, to be sure—of a phenomenon that is, among humans, nearly as universal as the acquisition of language: solving problems without conscious effort, after effort has failed.

This is not, of course, to say that effort is irrelevant. The real reason that insights seem effortless is that the effort they demand takes place long before the insight appears. It takes a lot of prospecting to find a diamond (to say nothing of the time it took to make one), which is why—scrambled metaphors aside—“effortless” insights about musical composition don’t occur to nonmusicians. And why, of course, insights about separate condensers don’t occur to scholars of ancient Greek. Expertise matters.

This seemingly obvious statement was first tested experimentally in the 1980s by a Swedish émigré psychologist, now at the University of Florida, named K. Anders Ericsson, who has spent the intervening decades developing what has come to be known as the “expert performance” model for human achievement. In study after study of experts in fields as diverse as music, competitive athletics, medicine, and chess, Ericsson and his colleagues were unable to discover any significant inborn difference between the most accomplished performers and the “merely” good. That is, no test for memory, IQ, reaction time, or any other human capacity that might seem to indicate natural talent differentiated the master from the journeyman.

What did separate them was, therefore, not inherited, but created; time, not talent, was the critical measurement. Though Ericsson found that violinists and basketball players alike started playing at roughly the same age, the stars in both pursuits spent more time at their craft than their less accomplished colleagues. Twice as much time, in fact; against all expectations, an expert musician spent, on average, ten thousand hours practicing, as compared to five thousand spent by the not-quite-expert.

The model turned out to apply to a range of pursuits. Cabinetmakers and cardiologists, golfers and gardeners, all became expert after roughly the same amount of time spent mastering their craft. Of all the legends of James Watt’s youth, the one no one doubts is that he spent virtually every waking hour of his “apprenticeship” year with John Morgan mastering the skills of fine brasswork, gearing, and instrument repair. His pride in the fine navigational instrument he built as his graduation project is indistinguishable from that felt by a gymnast doing her first back handspring.

James Watt, however, is remembered not as a master clockmaker, but as one of the greatest inventors of all time. And this is where the expert performance model becomes even more relevant. By the 1990s, Ericsson’s research was demonstrating2 that the same phenomenon he had first discovered among concert violinists also applied to the creation of innovations: that the cost of becoming consistently productive at creative inventing is ten thousand hours of practice—five to seven years—just as it is for music, athletics, and chess.

Some of that time is spent acquiring a history of the field: knowledge of what other violinists and inventors have achieved before in order to avoid, in the telling phrase, “reinventing the wheel.” The knowledge need not be explicit; the philosopher of science Michael Polanyi* famously thought that leaps of invention were a function of what he called tacit knowing: the idea that, in Polanyi’s words, “we know more than we can tell.” To Polanyi, the acquisition of such internalized knowledge, via doing, rather than studying, is necessary preparation of the soil for any true insight.

But knowing that inventors accumulate knowledge of other inventions doesn’t explain how they accumulate skills during those ten thousand hours of repetition. Inventing, after all, isn’t a craft like basketball, in which mastery is acquired by training muscle and nerve with constant repetition.

Nonetheless, in light of Ericsson’s discovery that the route to expert performance looks very similar whether the performance in question is a basketball game, or a chess match, or inventing a new kind of steam engine, it seems worth considering whether the brain’s neurons behave like the body’s muscles. And, it turns out, they seem to do just that: The more a particular connection between nerve cells is exercised, the stronger it gets. Fifty years ago, a Canadian psychologist named Donald Hebb first tried to put some mathematical rigor behind this well-documented phenomenon, but “Hebbian” learning—the idea that “neurons that fire together, wire together”—was pretty difficult to observe in any nervous system more complicated than that of a marine invertebrate, and even then it was easier to observe than to explain.
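For readers who like their metaphors executable, here is a toy sketch of the Hebbian rule—not Hebb’s own formalism, just an invented update (the learning rate and firing probabilities are made-up numbers) showing how a connection that is exercised grows stronger:

```python
# A minimal sketch of Hebbian learning ("neurons that fire together,
# wire together"). All numbers are invented purely for illustration.
import random

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen a connection only when both neurons fire together."""
    if pre_active and post_active:
        weight += rate * (1.0 - weight)  # grow toward a ceiling of 1.0
    return weight

weight = 0.1  # an initially weak synapse
for trial in range(100):
    pre = random.random() < 0.8   # presynaptic neuron fires often...
    post = random.random() < 0.8  # ...and so does its downstream partner
    weight = hebbian_update(weight, pre, post)

print(f"synaptic weight after 100 paired trials: {weight:.2f}")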

In the 1970s, Eric Kandel, a neuroscientist then working at New York University, embarked on a series of experiments that conclusively proved that cognition could be plotted by following a series of chemical reactions that changed the electrical potential of neurons. Kandel and his colleagues demonstrated that experiences literally change the chemistry of neurons by producing a molecule called cyclic adenosine monophosphate, or cAMP. This molecule—a so-called second messenger—in turn produces a cascade of chemical changes that either promote or inhibit the synaptic response between neurons; every time the brain calculates the area of a rectangle, or sight-reads a piece of music, or tests an experimental hypothesis, the neurons involved are chemically changed to make it easier to travel the same path again. Kandel’s research seems to have identified the mechanism by which repetition forges the chains that Polanyi called tacit knowing, and that James Watt called “the correct modes of reasoning.”

Kandel’s discovery of the mechanism by which memory is formed and preserved at the cellular level, for which he received the Nobel Prize in Physiology or Medicine in 2000, was provocative. But because the experiments in question were performed on the fairly simple nervous system of Aplysia californica, a giant marine snail, and documented the speed with which the snails could “learn” to eject ink in response to predators, it may be overreaching to say that science knows that the more one practices the violin, or extracts cube roots, the more cAMP is produced. It’s even more of a stretch to use it to explain how one learns to sight-read a Chopin etude. Or invent a separate condenser for a steam engine.

Which is why, a decade before Kandel was sticking needles into Aplysia, a Caltech neurobiologist named Roger Sperry was working at the other end of the evolutionary scale, performing a series of experiments on a man whose brain had been surgically severed into right and left halves.* The 1962 demonstration of the existence of a two-sided brain, which would win Sperry the Nobel Prize nearly twenty years later, remains a fixture in the world of pop psychology, as anyone who has ever been complimented (or criticized) for right-brained behavior can testify. The notion that creativity is localized in the right hemisphere of the brain and analytic, linguistic rationality in the left has proved enduringly popular with the general public long after it lost the allegiance of scientists.

However simple the right brain/left brain model, the idea that ideas must originate somewhere in the brain’s structure continued to attract scientists for years: neurologists, psychologists, artificial intelligence researchers. Neuroscientists have even applied the mathematics of complex systems to explain how neurons “fire together.” John Beggs of Indiana University has shown that the same math used to analyze how sandpiles spontaneously collapse, or a stable snowpack turns into an avalanche—the term is “self-organized criticality”—also describes how sudden thoughts, especially insights, appear in the brain. When a single neuron chemically fires3 its electrical charge, and causes its neighbors to do the same, the random electrical activity that is always present in the human brain can result in a “neuronal avalanche.”
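The sandpile is more than a figure of speech; it is the canonical model of self-organized criticality. The following toy version—grid size and grain count chosen arbitrarily, and standing in for neurons only by analogy—shows how dropping single grains onto a pile produces cascades of wildly different sizes:

```python
# A toy Bak-Tang-Wiesenfeld "sandpile," the classic model of
# self-organized criticality. Grid size and grain count are arbitrary.
import random

N = 20
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Add one grain; return the size of the resulting avalanche."""
    x, y = random.randrange(N), random.randrange(N)
    grid[x][y] += 1
    toppled = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4  # the site topples, shedding grains to neighbors
        toppled += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:  # grains fall off the edges
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return toppled

sizes = [drop_grain() for _ in range(20_000)]
print(f"largest avalanche triggered by a single grain: {max(sizes)} topplings")
```

Most grains cause nothing at all; a few set off avalanches that sweep much of the grid—the same heavy-tailed pattern Beggs reported for neuronal avalanches.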

Where those avalanches ended up, however, remained little more than speculation until there was some way to see what was actually going on inside the brain; so long as the pathway leading to a creative insight remained invisible, theories about them could be proposed, but not tested.

Those new pathways aren’t invisible anymore. A cognitive scientist at Northwestern named Mark Jung-Beeman, and one at Drexel named John Kounios, have performed a series of experiments very nicely calibrated to measure heightened activity in portions of the brain when those “eureka” moments strike. In the experiments, subjects were asked to solve a series of puzzles and to report when they solved them by using a systematic strategy versus when the solution came to them by way of a sudden insight. By wiring those subjects up like Christmas trees, they discovered two critical things:

First, when subjects reported solving a puzzle via a sudden flash of insight, an electroencephalograph, which picks up different frequencies of electrical activity, recorded that their brains burst out with one of their highest-frequency rhythms: the gamma band, which cycles about thirty times each second, or 30 Hz. This was expected,4 since this is the frequency band that earlier researchers had associated with similar activities such as recognizing the definition of a word or the outline of a car. What wasn’t expected was that the EEG picked up the burst of 30 Hz activity three-tenths of a second before a correct “insightful” answer—and did nothing before a wrong one. Second, and even better, simultaneous with the burst of electricity, another machine, the then newer-than-new fMRI (functional magnetic resonance imaging) machine, showed blood rushing to several sections of the brain’s right, “emotional” hemisphere, with the heaviest flow to a single spot—the anterior superior temporal gyrus, or aSTG.

But the discovery that resonates most strongly with James Watt’s flash of insight about separating the condensing chamber from the piston is this: Most “normal” brain activity serves to inhibit the blood flow to the aSTG. The more active the brain, the more inhibitory, probably for evolutionary reasons: early Homo sapiens who spent an inordinate amount of time daydreaming about new ways to start fire were, by definition, spending less time alert to danger, which would have given an overactive aSTG a distinctly negative reproductive value. The brain is evolutionarily hard-wired to do its best daydreaming only when it senses that it is safe to do so—when, in short, it is relaxed. In Kounios’s words, “The relaxation phase is crucial.5 That’s why so many insights happen during warm showers.” Or during Sunday afternoon walks on Glasgow Green, when the idea of a separate condenser seems to have excited the aSTG in the skull of James Watt. Eureka indeed.

IN 1930, JOSEPH ROSSMAN, who had served for decades as an examiner in the U.S. Patent Office, polled more than seven hundred patentees, producing a remarkable picture of the mind of the inventor. Some of the results were predictable;6 the three biggest motivators were “love of inventing,” “desire to improve,” and “financial gain,” whose rankings were statistically identical, each at least twice as important as those appearing farther down the list, such as “desire to achieve,” “prestige,” or “altruism” (and certainly not the old saw, “laziness,” which was named roughly one-thirtieth as frequently as “financial gain”). A century after Rocket, the world of technology had changed immensely: electric power, automobiles, telephones. But the motivations of individual inventors were indistinguishable from those of the men who had inaugurated the Industrial Revolution.

Less predictably, Rossman’s results demonstrated that the motivation to invent is not typically limited to one invention or industry. Though the most famous inventors are associated in the popular imagination with a single invention—Watt and the separate condenser, Stephenson and Rocket—Watt was just as proud of the portable copying machine he invented in 1780 as he was of his steam engine; Stephenson was, in some circles, just as famous for the safety lamp he invented to prevent explosions in coal mines as for his locomotive. Inventors, in Rossman’s words, are “recidivists.”

In the same vein, Rossman’s survey revealed that the greatest obstacle perceived by his patentee universe was not lack of knowledge, legal difficulties, lack of time, or even prejudice against the innovation under consideration. Overwhelmingly, the largest obstacle faced by early twentieth-century inventors (and, almost certainly, their ancestors in the eighteenth century) was “lack of capital.”7 Inventors need investors.

Investors don’t always need inventors. Rational investment decisions, as the English economist John Maynard Keynes demonstrated just a few years after Rossman completed his survey, are made by calculating the marginal efficiency of the investment, that is, how much more profit one can expect from putting money into one investment rather than another. When the internal rate of return—Keynes’s term—for a given investment is higher than the rate that could be earned somewhere else, it is a smart one; when it is lower, it isn’t.
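Keynes’s comparison is easy to make concrete. In the sketch below—the cash flows are invented, and the bisection search is just one convenient way to find the rate—an investment’s internal rate of return is the discount rate at which its net present value falls to zero, and the decision rule is simply to compare that rate with what the money could earn elsewhere:

```python
# A minimal sketch of the IRR decision rule. Cash flows are invented.

def npv(rate, cashflows):
    """Net present value of a series of yearly cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Find the rate at which NPV = 0, by bisection. Assumes NPV falls
    as the rate rises, true for pay-now, earn-later projects."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Pay 1,000 today for five yearly returns of 300.
project = [-1000, 300, 300, 300, 300, 300]
alternative_rate = 0.05  # what the money could earn elsewhere

rate = irr(project)
print(f"internal rate of return: {rate:.1%}")
print("invest" if rate > alternative_rate else "put the money elsewhere")
```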

Unfortunately, while any given invention can have a positive IRR, the expected return on a lifetime spent inventing is overwhelmingly negative. Inventors typically forgo more than one-third of their lifetime earnings. Thus, the characteristic stubbornness of inventors throughout history turns out to be fundamentally irrational. Their optimism is by any measure far greater than that found in the general population, with the result that their decision making is, to be charitable, flawed, whether as a result of the classic confirmation bias—the tendency to overvalue data that confirm one’s original ideas—or the “sunk-cost” bias, which is another name for throwing good money after bad. Even after reliable colleagues urge them to quit, a third of inventors will continue to invest money, and more than half will continue to invest their time.8

A favorite explanation for the seeming contradiction is the work of the Austrian émigré economist Joseph Schumpeter,* who drew a famous, though not perfectly clear, boundary between invention and innovation, with the former an economically irrelevant version of the latter. The heroes of Schumpeter’s economic analysis were, in consequence, entrepreneurs, who “may9 be inventors just as they may be capitalists … they are inventors not by nature of their function, but by coincidence….” To Schumpeter, invention preceded innovation—he characterized the process as embracing three stages: invention, commercialization, and imitation—but was otherwise insignificant. However, his concession that (a) the chances of successful commercialization were improved dramatically when the inventor was involved throughout the process, and (b) the imitation stage looks a lot like invention all over again, since all inventions are to some extent imitative, makes his dichotomy look a little like a chicken-and-egg paradox.

Another study, this one conducted in 1962,10 compared the results of psychometric tests given to inventors and noninventors (the former defined by behaviors such as application for or receipt of a patent) in similar professions—engineers, chemists, architects, psychologists, and science teachers. Some of the results were about what one might expect: inventors are significantly more thing-oriented than people-oriented, more detail-oriented than holistic. They are also likely to come from poorer families than noninventors in the same professions. No surprise there; the eighteenth-century Swiss mathematician Daniel Bernoulli,11 who coined the term “human capital,” explained why innovation has always been a more attractive occupation to have-nots than to haves: not only do small successes seem larger to them, but they have considerably less to lose.

More interesting, the 1962 study also revealed that independent inventors scored far lower on general intelligence tests than did research scientists, architects, or even graduate students. There’s less to this than meets the eye: The intelligence test that was given to the subjects subtracted wrong answers from right answers, and though the inventors consistently got as many answers correct as did the research scientists, they answered far more questions, thereby incurring a ton of deductions. While the study’s sample was too small to prove that inventors fear wrong answers less than noninventors, it suggested just that. In the words of the study’s authors, “The more inventive an independent inventor is,12 the more disposed he will be—and this indeed to a marked degree—to try anything that might work.”
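The arithmetic of that scoring rule is worth a moment. A worked example with invented counts:

```python
# A worked version of the scoring quirk: under a "right minus wrong"
# rule, equal accuracy plus a greater willingness to guess yields a
# lower score. The counts below are invented for illustration.

def score(right, wrong):
    return right - wrong  # the penalty scheme described in the study

scientist = score(right=30, wrong=5)   # answers 35 questions, cautiously
inventor = score(right=30, wrong=25)   # same number right, many more tries

print(f"research scientist: {scientist}")  # 25
print(f"inventor:           {inventor}")   # 5
```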

WATT’S FLASH OF INSIGHT, like those of Newcomen and Savery before him (and thousands more after), was the result of complicated neural activity, operating on a fund of tacit knowledge, in response to both a love of inventing and a love of financial gain. But what gave him the ability to recognize and test that insight was a trained aptitude for mathematics.

The history of mechanical invention in Britain began in a distinctively British manner: with a first generation of craftsmen whose knowledge of machinery was exclusively practical and who were seldom if ever trained in the theory or science behind the levers, escapements, gears, and wheels that they manipulated. These men, however, were followed (not merely paralleled) by another generation of instrument makers, millwrights, and so on, who were.

Beginning in 1704, for example, John Harris, the Vicar of Icklesham in Sussex, published, via subscription, the first volume of the Lexicon Technicum, or an Universal Dictionary of Arts and Sciences, the prototype for Enlightenment dictionaries and encyclopedias. Unlike many of the encyclopedias that followed, Harris’s work had a decidedly pragmatic bent, containing the most thorough, and most widely read, accounts of devices such as the air pump and Thomas Savery’s steam engine. In 1713, a former surveyor and engineer named Henry Beighton, the “first scientific man to study the Newcomen engine,”13 replaced his friend John Tipper as the editor of a journal of calendars, recipes, and medicinal advice called The Ladies Diary. His decision to differentiate it from its competitors in a fairly crowded market by including mathematical games and recreations, riddles, and geographical puzzles made it an eighteenth-century version of Scientific American and, soon enough, Britain’s first and most important mathematical journal. More important, it inaugurated an even more significant expansion of what might be called Britain’s mathematically literate population.

Teaching more Britons the intricacies of mathematics would be a giant long-term asset to building an inventive society. Even though uneducated craftsmen had been producing remarkable efficiencies using only rule of thumb—when the great Swiss mathematician Leonhard Euler applied14 his own considerable talents to calculating the best possible orientation and size for the sails on a Dutch windmill (a brutally complicated bit of engineering, what with the sail pivoting in one plane while rotating in another), he found that carpenters and millwrights had gotten to the same point by trial and error—it took them decades, sometimes centuries, to do so. Giving them the gift of mathematics to do the same work was functionally equivalent to choosing to travel by stagecoach rather than oxcart; you got to the same place, but you got there a lot faster.

Adding experimental rigor to mathematical sophistication accelerated things still more, from stagecoach to—perhaps—Rocket. The two in combination, well documented in the work of James Watt, were hugely powerful. But the archetype of mathematical invention in the eighteenth century was not Watt, but John Smeaton, by consensus the most brilliant engineer of his era—a bit like being the most talented painter in sixteenth-century Florence.

SMEATON, UNLIKE MOST OF his generation’s innovators, came from a secure middle-class family: his father was an attorney in Leeds, who invited his then sixteen-year-old son into the family firm in 1740. Luckily for the history of engineering, young John found the law less interesting than tinkering, and by 1748 he had moved to London and set up shop as a maker of scientific instruments; five years later, when James Watt arrived in the city seeking to be trained in exactly the same trade, Smeaton was a Fellow of the Royal Society, and had already built his first water mill.

In 1756, he was hired to rebuild the Eddystone Lighthouse, which had burned down the year before; the specification for the sixty-foot-tall structure* required that it be constructed on the Eddystone rocks off the Devonshire coast between high and low tide, and so demanded the invention of a cement—hydraulic lime—that would set even if submerged in water.

The Eddystone Lighthouse was completed in October 1759. That same year, evidently lacking enough occupation to keep himself interested, Smeaton published a paper entitled An Experimental Enquiry Concerning the Natural Powers of Water and Wind to Turn Mills. The Enquiry, which was rewarded with the Royal Society’s oldest and most prestigious prize—the Copley Medal for “outstanding research in any branch of science”—documented Smeaton’s nearly seven years’ worth of research into the efficiency of different types of waterwheels, a subject that, despite several millennia of practical experience with the technology, was still largely a matter of anecdote or, worse, bad theory. In 1704, for example, a French scientist named Antoine Parent had calculated the theoretical benefits of a wheel operated by water flowing past its blades at the lowest point—an “undershot” wheel—against one in which the water fell into buckets just offset from the top of the “overshot” wheel—and got it wrong. Smeaton was a skilled mathematician, but the engineer in him knew that experimental comparison was the only way to answer the question, and, by the way, to demonstrate the best way to generate what was then producing nearly 70 percent of Britain’s measured power. His method remains one of the most meticulous experiments of the entire eighteenth century.

Fig. 4: One of the best-designed experiments of the eighteenth century, Smeaton’s waterwheel was able to measure the work produced by water flowing over, under, and past a mill. Science Museum / Science & Society Picture Library

He constructed a model waterwheel twenty inches in diameter, running in a model “river” fed by a cistern four feet above the base of the wheel. He then ran a rope through a pulley fifteen feet above the model, with one end attached to the waterwheel’s axle and the other to a weight. He was so extraordinarily careful to avoid error that he set the wheel in motion with a counterweight timed so that it would rotate at precisely the same velocity as the flow of water, thus avoiding splashing as well as the confounding element of friction. With this model, Smeaton was able to measure the height to which a constant weight could be lifted by an overshot, an undershot, and even a “breastshot” wheel; and he measured more than just height. His published table of results15 recorded thirteen categories of data, including cistern height, “virtual head” (the distance water fell into buckets in an overshot wheel), weight of water, and maximum load. The resulting experiment16 not only disproved Parent’s argument for the undershot wheel, but also showed that the overshot wheel was up to two times more “efficient” (though he never used the term in its modern sense).
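The logic of Smeaton’s table can be reduced to a single ratio: the work delivered by the wheel divided by the work surrendered by the falling water. A sketch follows, with invented figures chosen only to echo his conclusion that the overshot wheel roughly doubled the performance of the undershot:

```python
# A sketch of the ratio at the heart of Smeaton's table: useful "effect"
# (weight lifted times height) divided by the "power" spent (weight of
# water times the head it falls through). All figures are invented.

def wheel_efficiency(water_lb, head_ft, load_lb, lift_ft):
    """Fraction of the water's work recovered as useful lifting."""
    work_in = water_lb * head_ft   # foot-pounds supplied by the water
    work_out = load_lb * lift_ft   # foot-pounds delivered to the load
    return work_out / work_in

# Hypothetical trials echoing Smeaton's finding.
undershot = wheel_efficiency(water_lb=500, head_ft=4, load_lb=40, lift_ft=15)
overshot = wheel_efficiency(water_lb=500, head_ft=4, load_lb=80, lift_ft=15)

print(f"undershot: {undershot:.0%} of the water's work recovered")
print(f"overshot:  {overshot:.0%}")
```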

Smeaton’s gifts for engineering weren’t, of course, applied only to improving waterpower; an abbreviated list of his achievements includes the Calder navigational canal, the Perth Bridge, the Forth & Clyde canal (near the Carron ironworks of John Roebuck, for whom he worked as a consultant, building boring mills and furnaces), and Aberdeen harbor. He made dramatic improvements in the original Newcomen design for the steam engine, and enough of a contribution to the Watt separate condenser engine that Boulton & Watt offered him the royalties on one of their installed engines as a thank-you.

But Smeaton’s greatest contribution was methodological and, in a critical sense, social. His example showed a generation of other engineers17 how to approach a problem by systematically varying parameters through experimentation and so improve a technique, or a mechanism, even if they didn’t fully grasp the underlying theory. He also explicitly linked the scientific perspective of Isaac Newton with his own world of engineering: “In comparing different experiments,18 as some fall short, and others exceed the maxim … we may, according to the laws of reasoning by induction* conclude the maxim true.” More significant than his writings, however, were his readers. Smeaton was, as much as Watt, a hero to the worker bees of the Industrial Revolution. When the first engineering society in the world met, in London in 1771, Smeaton was sitting at the head of the table, and after his death in 1792, the Society of Civil Engineers—Smeaton’s own term, by which he meant not the modern designer of public works, but engineering that was not military—renamed itself the Smeatonian Society. The widespread imitation of Smeaton’s systematic technique and professional standards dramatically increased the population of Britons who were competent to evaluate one another’s innovations.

The result, in Britain, was not so much a dramatic increase in the number of inventive insights; the example of Watt, and others, provided that. What Smeaton bequeathed to his nation was a process by which those inventions could be experimentally tested, and a large number of engineers who were competent to do so. Their ability to identify the best inventions, and reject the worst, might even have made creative innovation subject to the same forces that cause species to adapt over time: evolution by natural selection.

THE APPLICATION OF THE Darwinian model to everything from dating strategies to cultural history has become so promiscuous that it is sometimes dismissed as “secondary” or “pop” Darwinism, to distinguish it from the genuine article. The habit is easy to mock, but this doesn’t mean that the Darwinian model is useful only in biology; consider, for example, whether the same sort of circumstances—random variation with selection pressure—preserved the “fittest” of inventions as well.

As far back as the 1960s,19 the term “blind variation and selective retention” was being used to describe creative innovation without foresight, and advocates for the BVSR model remain so entranced by the potential for mapping creative behavior onto a Darwinian framework that they refer to innovations as “ideational mutations.”20 A more modest, and jargon-free, application of Darwinism simply argues that technological progress is proportional to population in the same way as evolutionary change: Unless a population is large enough, the changes that occur are not progressive but random, the phenomenon known as genetic drift.
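Genetic drift is simple enough to demonstrate. In the toy simulation below—a bare-bones Wright–Fisher model; the 5 percent advantage and 10 percent starting frequency are arbitrary—a favored variant (read: a superior invention) reliably takes over a large population but is routinely lost by chance in a small one:

```python
# A toy illustration of why population size matters: selection reliably
# preserves an advantageous variant only when the population is large.
# All parameters are arbitrary.
import random

def fixation_rate(pop_size, advantage=0.05, trials=200):
    """How often a favored variant, starting at ~10% frequency, takes over."""
    fixed = 0
    for _ in range(trials):
        count = max(1, pop_size // 10)
        while 0 < count < pop_size:
            # chance that a given offspring descends from a favored parent
            p = count * (1 + advantage) / (
                count * (1 + advantage) + (pop_size - count))
            count = sum(random.random() < p for _ in range(pop_size))
        fixed += count == pop_size
    return fixed / trials

for n in (10, 100, 500):
    print(f"population {n:>3}: favored variant wins {fixation_rate(n):.0%} of the time")
```

Small populations, in other words, don’t accumulate improvements; they wander.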

It is, needless to say, pretty difficult to identify “progressive change” over time for cognitive abilities like those exhibited by inventors. A brave attempt has been made by James Flynn, the intelligence researcher from New Zealand who first documented, in 1984, what is now known as the Flynn Effect: the phenomenon that the current generation in dozens of different countries scores higher on general intelligence tests than previous generations did. Not a little higher: a lot. The bottom 10 percent of today’s families are somehow scoring at the same level as the top 10 percent did fifty years ago. The phenomenon is datable to the Industrial Revolution, which exposed an ever larger population to work that stimulated the ability to reason abstractly and concretely at the same time. The “self-perpetuating feedback loops”21 (Flynn’s term) resulted in the exercise, and therefore the growth, of potential abilities that mattered a lot more to mechanics and artisans than to farmers, or to hunter-gatherers.

Most investigations of the relationship between evolutionary theory and industrialization seem likely to be little more than an entertaining academic parlor game for centuries to come.* One area, however, recalls that the most important inspiration for the original theory of evolution by natural selection was Charles Darwin’s observation of evolution by unnatural selection: the deliberate breeding of animals to reinforce desirable traits, most vividly in Darwin’s recognition of the work of pigeon fanciers. Reversing the process, a number of economists have wondered whether it is possible to “breed” inventors: to create circumstances in which more inventive activity occurs (and, by inference, to discover whether those same circumstances obtained in eighteenth-century Britain).

This was one of the many areas that attracted the attention of the Austrian American economist Fritz Machlup, who, forty years ago, approached the question in a slightly different way: Is it possible to expand the inventive work force? Can labor be diverted into the business of invention? Can an educational or training system emphasize invention?

Machlup—who first popularized the idea of a “knowledge economy”—spent decades collecting data on innovation in everything from advertising to typewriter manufacture—by one estimate, on nearly a third of the entire U.S. economy—and concluded by suggesting the counterintuitive possibility that higher rates of compensation actually lower the quality of inventive labor. Machlup argued that the person who prefers to do something other than inventing and does so only under the seductive lure of more money is likely to be less gifted than one who needs no such lure. This is the “vocation” argument dressed up in econometric equations: at some point, the new recruits are going to reduce22 the average quality of the inventing “army,” just as doctors who practice only for money may be less successful than those with a true calling. The trick is figuring out what point. There will indeed always be amateur inventors (in the original meaning: those who invent out of love), and they may well spend as much time mastering their inventive skills as any professional. But they will also always be fairly thin on the ground compared to the population as a whole.

He also examined the behavior of inventors as an element of what economists call input-output analysis. Input-output analysis creates snapshots of entire economies by showing how the output of one economic activity is the input of another: farmers selling wheat to bakers who sell bread to blacksmiths who sell plows back to the farmers. Harvesting, baking, and forging, respectively, are “production functions”: the lines on a graph that represent one person adding value and selling it to another. In Machlup’s exercise,23 the supply of inventors (or inventive labor) was the key input; the production function was the transformation of such labor into a commercially useful invention; and the supply of inventions was the output. As always, the equation included a simplifying assumption, and in this case, it was a doozy: that one man’s labor is worth roughly the same as another’s. This particular assumption gets distorted pretty quickly even in traditional input-output analysis, but it leaps right through the looking glass when applied to the business of inventing, a fact of which Machlup was keenly aware: “a statement that five hours of Mr. Doakes’ time24 [is] equivalent to one hour of Mr. Edison’s or two hours of Mr. Bessemer’s would sound preposterous.”
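The bookkeeping behind input-output analysis is straightforward, even if Machlup’s application of it was not. A minimal sketch in the spirit of such models—the sectors and coefficients are invented—computes how much each trade must produce so that, after everyone’s intermediate needs are met, final demand is satisfied:

```python
# A minimal input-output sketch in the spirit of Leontief: each trade's
# output is partly consumed as the others' inputs, so total production
# must cover final demand plus intermediate needs. Numbers are invented.

sectors = ["farming", "baking", "smithing"]

# use[i][j]: units of sector i needed to produce one unit of sector j
use = [
    [0.1, 0.5, 0.0],  # wheat feeds farms a little and bakeries a lot
    [0.1, 0.0, 0.3],  # bread feeds farmers and blacksmiths
    [0.2, 0.1, 0.0],  # plows and tools go back to farms and bakeries
]
final_demand = [100.0, 150.0, 50.0]

# Iterate x = demand + A·x until the circular flows settle down.
x = final_demand[:]
for _ in range(200):
    x = [final_demand[i] + sum(use[i][j] * x[j] for j in range(3))
         for i in range(3)]

for name, demand, total in zip(sectors, final_demand, x):
    print(f"{name:>8}: {total:.0f} units produced for {demand:.0f} of final demand")
```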

The invention business is no more immune to the principle of diminishing returns than any other, and in any economic system, diminishing returns appear anytime a crucial input stays fixed while another one increases. In the case of inventiveness, anytime the store of scientific knowledge (and the number of problems capable of tempting an inventor) isn’t increasing, more and more time and resources are required to produce a useful invention. Only the once-in-human-history event known as the Industrial Revolution, because it began the era of continuous invention, offered a temporary reprieve.

But input-output analysis misses the most important factor of all, which might be called the genius of the system. You only get the one hour of Mr. Edison’s time during which he figures out how to make a practical incandescent lightbulb if you also get Mr. Doakes plugging away for five hours at refining the carbonized bamboo filament inside it.

The reason why is actually at the heart of the thing. Mr. Doakes didn’t spend those hours because of a simple economic calculus; the time needed to actually pursue all the variables in all possible combinations would have made any such calculus hopeless. Watt’s notebooks record months of trying every material under the sun to seal the first boiler of the separate condenser engine. The return on improving even the inventions of antiquity, given the hours, days, and months required and the other demands on the inventor’s time, must have been poor indeed. Mr. Doakes spent the time playing the game because he dreamed of winning it.

Which brings us back to James Watt’s famous walk on Glasgow Green. The quotation from Watt that opened this chapter appears (not always in the same words) not only in virtually every biography of Watt, but in just about every history of mechanical invention itself, including that of A. P. Usher. Only rarely noted, however, is the fact that Watt’s reminiscence first appeared nearly forty years after his death—and was the recollection of two men who heard it from Watt nearly fifty years after the famous walk.

Robert and John Hart were two Glasgow engineers and merchants who regarded James Watt with the sort of awe usually reserved for pop musicians, film stars, or star athletes. Or even more: They regarded him “as the greatest and most useful man25 who ever lived.” So when the elderly James Watt entered their shop, sometime in 1813, he was welcomed with adoration, and a barrage of questions about the great events of his life, rather like Michael Jordan beset by fans asking for a play-by-play account of the 1989 NBA playoffs. Watt’s recollection of the Sunday stroll down Glasgow Green in 1765 comes entirely from this episode. In short, it is not the sort of memory that a skeptic would regard as completely reliable in all its details.

This is to suggest not that Watt’s account is inaccurate, but rather that it says something far more significant about the nature of invention. The research emerging from the fields of information theory and neuroscience on the nature of creative insights offers intriguing ideas about what is happening in an individual inventor’s brain at the moment of inspiration. Theories about the aSTG, or cerebellum, or anything else, do not, however, explain much about the notable differences between the nature of invention in the eighteenth century and in the eighth; the structure of the individual brain has not, so far as is known, changed in millennia.

On the other hand, the number of brains producing inventive insights has increased. A lot.

This is why the hero worship of the brothers Hart is more enlightening about the explosion of inventive activity that started in eighteenth-century Britain than their reminiscences. For virtually all of human history, statues had been built to honor kings, soldiers, and religious figures; the Harts lived in the first era that built them to honor builders and inventors. James Watt was an inventor inspired in every way possible, right down to the neurons in his Scottish skull; but he was also, and just as significantly, the inspiration for thousands of other inventors, during his lifetime and beyond. The inscription on the statue of Watt that stood in Westminster Abbey from 1825 until it was moved in 1960 reminded visitors that it was made “Not to perpetuate a name which must endure while the peaceful arts flourish, but to shew that mankind have learned to know those who best deserve their gratitude” (emphasis added).

A nation’s heroes reveal its ideals, and the Watt memorial carries an impressive weight of symbolism. However, it must be said that the statue, sculpted by Sir Francis Chantrey in marble, might bear that weight more appropriately if it had been made out of the trademark material of the Industrial Revolution: iron.

* A member of the embarrassingly overachieving clan of Hungarian Jews that included Michael’s brother, Karl, the economist and author of The Great Transformation, a history of the modern market state (one built on “an almost miraculous improvement in the tools of production,” i.e., the Industrial Revolution), and Michael’s son, John, the 1986 winner of the Nobel Prize in Chemistry.

* A remarkable number of discoveries about the function of brain structures have been preceded by an improbable bit of head trauma.

* In addition to his status as a cheerleader for entrepreneurism—his most famous phrase is undoubtedly the one about the “perennial gale of creative destruction”—Schumpeter was also legendarily hostile to the importance of institutions, particularly laws, and especially patent law.

* When the original finally wore out, in 1879, a replica, using many of the same granite stones (and Smeaton’s innovative marble dowels and dovetails), was rebuilt in Plymouth in honor of Smeaton.

* This is an explicit reference to Newton’s fourth rule of reasoning from Book III of the Principia Mathematica; Smeaton was himself something of an astronomer, and entered the Newtonian world through its calculations of celestial motions.

* The evidence that invention has a Darwinian character is easier to find using the tools of demography than of microbiology, but while the landscape of evolution is large populations, its raw materials are the tiny bits of coded proteins called genes. Bruce Lahn, a geneticist at the University of Chicago, has documented an intriguing discontinuity in the evolutionary history of two genes—microcephalin and abnormal spindle-like microcephaly associated (ASPM)—which, when damaged, are complicit in some fairly onerous genetic disorders affecting intelligence (including big reductions in brain size). That history shows substantial changes that can be dated to roughly 37,000 years ago and 5,800 years ago, which are approximately the dates of language acquisition and the discovery of agriculture. This is the first hard evidence that arguably the two biggest social changes in human history are associated with changes in brain size, and presumably function. No such changes dating from the birth of industrialization have been found, or even suspected.
