Atomic energy requires an atom. No such beast was born legitimately into physics until the beginning of the twentieth century. The atom as an idea—as an invisible layer of eternal, elemental substance below the world of appearances where things combine, teem, dissolve and rot—is ancient. Leucippus, a Greek philosopher of the fifth century B.C. whose name survives on the strength of an allusion in Aristotle, proposed the concept; Democritus, a wealthy Thracian of the same era and wider repute, developed it. “ ‘For by convention color exists,’ ” the Greek physician Galen quotes from one of Democritus’ seventy-two lost books, “ ‘by convention bitter, by convention sweet, but in reality atoms and void.’ ” From the seventeenth century onward, physicists postulated atomic models of the world whenever developments in physical theory seemed to require them.85 But whether or not atoms really existed was a matter for continuing debate.
Gradually the debate shifted to the question of what kind of atom was necessary and possible. Isaac Newton imagined something like a miniature billiard ball to serve the purposes of his mechanical universe of masses in motion: “It seems probable to me,” he wrote in 1704, “that God in the beginning formed matter in solid, massy, hard, impenetrable, movable particles, of such sizes and figures, and with such other properties, and in such proportion to space, as most conduced to the end to which he formed them.”86 The Scottish physicist James Clerk Maxwell, who organized the founding of the Cavendish Laboratory, published a seminal Treatise on Electricity and Magnetism in 1873 that modified Newton’s purely mechanical universe of particles colliding in a void by introducing into it the idea of an electromagnetic field. The field permeated the void; electric and magnetic energy propagated through it at the speed of light; light itself, Clerk Maxwell demonstrated, is a form of electromagnetic radiation. But despite his modifications, Clerk Maxwell was as devoted as Newton to a hard, mechanical atom:
Though in the course of ages catastrophes have occurred and may yet occur in the heavens, though ancient systems may be dissolved and new systems evolved out of their ruins, the [atoms] out of which [the sun and the planets] are built—the foundation stones of the material universe—remain unbroken and unworn. They continue this day as they were created—perfect in number and measure and weight.87
Max Planck thought otherwise. He doubted that atoms existed at all, as did many of his colleagues—the particulate theory of matter was an English invention more than a Continental, and its faintly Britannic odor made it repulsive to the xenophobic German nose—but if atoms did exist he was sure they could not be mechanical. “It is of paramount importance,” he confessed in his Scientific Autobiography, “that the outside world is something independent from man, something absolute, and the quest for laws which apply to this absolute appeared to me as the most sublime scientific pursuit in life.” Of all the laws of physics, Planck believed that the thermodynamic laws applied most basically to the independent “outside world” that his need for an absolute required.88 He saw early that purely mechanical atoms violated the second law of thermodynamics. His choice was clear.
The second law specifies that heat will not pass spontaneously from a colder to a hotter body without some change in the system. Or, as Planck himself generalized it in his Ph.D. dissertation at the University of Munich in 1879, that “the process of heat conduction cannot be completely reversed by any means.” Besides forbidding the construction of perpetual-motion machines, the second law defines what Planck’s predecessor Rudolf Clausius named entropy: because energy dissipates as heat whenever work is done—heat that cannot be collected back into useful, organized form—the universe must slowly run down to randomness.89 This vision of increasing disorder means that the universe is one-way and not reversible; the second law is the expression in physical form of what we call time. But the equations of mechanical physics—of what is now called classical physics—theoretically allowed the universe to run equally well forward or backward. “Thus,” an important German chemist complained, “in a purely mechanical world, the tree could become a shoot and a seed again, the butterfly turn back into a caterpillar, and the old man into a child. No explanation is given by the mechanistic doctrine for the fact that this does not happen. . . . The actual irreversibility of natural phenomena thus proves the existence of phenomena that cannot be described by mechanical equations; and with this the verdict on scientific materialism is settled.”90 Planck, writing a few years earlier, was characteristically more succinct: “The consistent implementation of the second law . . . is incompatible with the assumption of finite atoms.”91
A major part of the problem was that atoms were not then directly accessible to experiment. They were a useful concept in chemistry, where they were invoked to explain why certain substances—elements—combine to make other substances but cannot themselves be chemically broken down. Atoms seemed to be the reason gases behaved as they did, expanding to fill whatever container they were let into and pushing equally on all the container’s walls. They were invoked again to explain the surprising discovery that every element, heated in a laboratory flame or vaporized in an electric arc, colors the resulting light and that such light, spread out into its rainbow spectrum by a prism or a diffraction grating, invariably is divided into bands by characteristic bright lines. But as late as 1894, when Robert Cecil, the third Marquis of Salisbury, chancellor of Oxford and former Prime Minister of England, catalogued the unfinished business of science in his presidential address to the British Association, whether atoms were real or only convenient and what structure they hid were still undecided issues:
What the atom of each element is, whether it is a movement, or a thing, or a vortex, or a point having inertia, whether there is any limit to its divisibility, and, if so, how that limit is imposed, whether the long list of elements is final, or whether any of them have any common origin, all these questions remain surrounded by a darkness as profound as ever.92
Physics worked that way, sorting among alternatives: all science works that way. The chemist Michael Polanyi, Leo Szilard’s friend, looked into the workings of science in his later years at the University of Manchester and at Oxford. He discovered a traditional organization far different from what most nonscientists suppose. A “republic of science,” he called it, a community of independent men and women freely cooperating, “a highly simplified example of a free society.” Not all philosophers of science, which is what Polanyi became, have agreed.93, 94 Even Polanyi sometimes called science an “orthodoxy.” But his republican model of science is powerful in the same way successful scientific models are powerful: it explains relationships that have not been clear.
Polanyi asked straightforward questions. How were scientists chosen? What oath of allegiance did they swear? Who guided their research—chose the problems to be studied, approved the experiments, judged the value of the results? In the last analysis, who decided what was scientifically “true”? Armed with these questions, Polanyi then stepped back and looked at science from outside.
Behind the great structure that in only three centuries had begun to reshape the entire human world lay a basic commitment to a naturalistic view of life. Other views of life dominated at other times and places—the magical, the mythological. Children learned the naturalistic outlook when they learned to speak, when they learned to read, when they went to school. “Millions are spent annually on the cultivation and dissemination of science by the public authorities,” Polanyi wrote once when he felt impatient with those who refused to understand his point, “who will not give a penny for the advancement of astrology or sorcery. In other words, our civilization is deeply committed to certain beliefs about the nature of things; beliefs which are different, for example, from those to which the early Egyptian or the Aztec civilizations were committed.”95
Most young people learned no more than the orthodoxy of science. They acquired “the established doctrine, the dead letter.” Some, at university, went on to study the beginnings of method.96 They practiced experimental proof in routine research. They discovered science’s “uncertainties and its eternally provisional nature.” That began to bring it to life.97
Which was not yet to become a scientist. To become a scientist, Polanyi thought, required “a full initiation.” Such an initiation came from “close personal association with the intimate views and practice of a distinguished master.” The practice of science was not itself a science; it was an art, to be passed from master to apprentice as the art of painting is passed or as the skills and traditions of the law or of medicine are passed.98, 99 You could not learn the law from books and classes alone. You could not learn medicine. No more could you learn science, because nothing in science ever quite fits; no experiment is ever final proof; everything is simplified and approximate.
The American theoretical physicist Richard Feynman once spoke about his science with similar candor to a lecture hall crowded with undergraduates at the California Institute of Technology. “What do we mean by ‘understanding’ something?” Feynman asked innocently.100 His amused sense of human limitation informs his answer:
We can imagine that this complicated array of moving things which constitutes “the world” is something like a great chess game being played by the gods, and we are observers of the game. We do not know what the rules of the game are; all we are allowed to do is to watch the playing. Of course, if we watch long enough, we may eventually catch on to a few of the rules. The rules of the game are what we mean by fundamental physics. Even if we know every rule, however . . . what we really can explain in terms of those rules is very limited, because almost all situations are so enormously complicated that we cannot follow the plays of the game using the rules, much less tell what is going to happen next. We must, therefore, limit ourselves to the more basic question of the rules of the game. If we know the rules, we consider that we “understand” the world.
Learning the feel of proof; learning judgment; learning which hunches to play; learning which stunning calculations to rework, which experimental results not to trust: these skills admitted you to the spectators’ benches at the chess game of the gods, and acquiring them required sitting first at the feet of a master.
Polanyi found one other necessary requirement for full initiation into science: belief. If science has become the orthodoxy of the West, individuals are nevertheless still free to take it or leave it, in whole or in part; believers in astrology, Marxism and virgin birth abound. But “no one can become a scientist unless he presumes that the scientific doctrine and method are fundamentally sound and that their ultimate premises can be unquestioningly accepted.”101
Becoming a scientist is necessarily an act of profound commitment to the scientific system and the scientific world view. “Any account of science which does not explicitly describe it as something we believe in is essentially incomplete and a false pretense. It amounts to a claim that science is essentially different from and superior to all human beliefs that are not scientific statements—and this is untrue.” Belief is the oath of allegiance that scientists swear.102
That was how scientists were chosen and admitted to the order. They constituted a republic of educated believers taught through a chain of masters and apprentices to judge carefully the slippery edges of their work.
Who then guided that work? The question was really two questions: who decided which problems to study, which experiments to perform? And who judged the value of the results?
Polanyi proposed an analogy. Imagine, he said, a group of workers faced with the problem of assembling a very large, very complex jigsaw puzzle.103 How could they organize themselves to do the job most efficiently?
Each worker could take some of the pieces from the pile and try to fit them together. That would be an efficient method if assembling a puzzle was like shelling peas. But it wasn’t. The pieces weren’t isolated. They fitted together into a whole. And the chance of any one worker’s collection of pieces fitting together was small. Even if the group made enough copies of the pieces to give every worker the entire puzzle to attack, no one would accomplish as much alone as the group might if it could contrive a way to work together.
The best way to do the job, Polanyi argued, was to allow each worker to keep track of what every other worker was doing. “Let them work on putting the puzzle together in the sight of the others, so that every time a piece of it is fitted in by one [worker], all the others will immediately watch out for the next step that becomes possible in consequence.” That way, even though each worker acts on his own initiative, he acts to further the entire group’s achievement.104 The group works independently together; the puzzle is assembled in the most efficient way.
Polanyi thought science reached into the unknown along a series of what he called “growing points,” each point the place where the most productive discoveries were being made.105 Alerted by their network of scientific publications and professional friendships—by the complete openness of their communication, an absolute and vital freedom of speech—scientists rushed to work at just those points where their particular talents would bring them the maximum emotional and intellectual return on their investment of effort and thought.
It was clear, then, who among scientists judged the value of scientific results: every member of the group, as in a Quaker meeting. “The authority of scientific opinion remains essentially mutual; it is established between scientists, not above them.” There were leading scientists, scientists who worked with unusual fertility at the growing points of their fields; but science had no ultimate leaders.106 Consensus ruled.
Not that every scientist was competent to judge every contribution. The network solved that problem too. Suppose Scientist M announces a new result. He knows his highly specialized subject better than anyone in the world; who is competent to judge him? But next to Scientist M are Scientists L and N. Their subjects overlap M’s, so they understand his work well enough to assess its quality and reliability and to understand where it fits into science. Next to L and N are other scientists, K and O and J and P, who know L and N well enough to decide whether to trust their judgment about M. On out to Scientists A and Z, whose subjects are almost completely removed from M’s.
“This network is the seat of scientific opinion,” Polanyi emphasized; “of an opinion which is not held by any single human brain, but which, split into thousands of different fragments, is held by a multitude of individuals, each of whom endorses the other’s opinion at second hand, by relying on the consensual chains which link him to all the others through a sequence of overlapping neighborhoods.”107 Science, Polanyi was hinting, worked like a giant brain of individual intelligences linked together. That was the source of its cumulative and seemingly inexorable power. But the price of that power, as both Polanyi and Feynman are careful to emphasize, is voluntary limitation. Science succeeds in the difficult task of sustaining a political network among men and women of differing backgrounds and differing values, and in the even more difficult task of discovering the rules of the chess game of the gods, by severely limiting its range of competence. “Physics,” as Eugene Wigner once reminded a group of his fellows, “does not even try to give us complete information about the events around us—it gives information about the correlations between those events.”108
Which still left the question of what standards scientists consulted when they passed judgment on the contributions of their peers. Good science, original work, always went beyond the body of received opinion, always represented a dissent from orthodoxy. How, then, could the orthodox fairly assess it?
Polanyi suspected that science’s system of masters and apprentices protected it from rigidity. The apprentice learned high standards of judgment from his master. At the same time he learned to trust his own judgment: he learned the possibility and the necessity of dissent. Books and lectures might teach rules; masters taught controlled rebellion, if only by the example of their own original—and in that sense rebellious—work.
Apprentices learned three broad criteria of scientific judgment.109 The first criterion was plausibility. That would eliminate crackpots and frauds. It might also (and sometimes did) eliminate ideas so original that the orthodox could not recognize them, but to work at all, science had to take that risk. The second criterion was scientific value, a composite consisting of equal parts accuracy, importance to the entire system of whatever branch of science the idea belonged to, and intrinsic interest. The third criterion was originality. Patent examiners assess an invention for originality according to the degree of surprise the invention produces in specialists familiar with the art. Scientists judged new theories and new discoveries similarly. Plausibility and scientific value measured an idea’s quality by the standards of orthodoxy; originality measured the quality of its dissent.
Polanyi’s model of an open republic of science where each scientist judges the work of his peers against mutually agreed upon and mutually supported standards explains why the atom found such precarious lodging in nineteenth-century physics. It was plausible; it had considerable scientific value, especially in systematic importance; but no one had yet made any surprising discoveries about it. None, at least, sufficient to convince the network of only about one thousand men and women throughout the world in 1895 who called themselves physicists and the larger, associated network of chemists.110
The atom’s time was at hand. The great surprises in basic science in the nineteenth century came in chemistry. The great surprises in basic science in the first half of the twentieth century would come in physics.
* * *
In 1895, when young Ernest Rutherford roared up out of the Antipodes to study physics at the Cavendish with a view to making his name, the New Zealand he left behind was still a rough frontier. British nonconformist craftsmen and farmers and a few adventurous gentry had settled the rugged volcanic archipelago in the 1840s, pushing aside the Polynesian Maori who had found it first five centuries before; the Maori gave up serious resistance after decades of bloody skirmish only in 1871, the year Rutherford was born. He attended recently established schools, drove the cows home for milking, rode horseback into the bush to shoot wild pigeons from the berry-laden branches of virgin miro trees, helped at his father’s flax mill at Brightwater where wild flax cut from aboriginal swamps was retted, scutched and hackled for linen thread and tow. He lost two younger brothers to drowning; the family searched the Pacific shore near the farm for months.
It was a hard and healthy childhood. Rutherford capped it by winning scholarships, first to modest Nelson College in nearby Nelson, South Island, then to the University of New Zealand, where he earned an M.A. with double firsts in mathematics and physical science at twenty-two. He was sturdy, enthusiastic and smart, qualities he would need to carry him from rural New Zealand to the leadership of British science. Another, more subtle quality, a braiding of country-boy acuity with a profound frontier innocence, was crucial to his unmatched lifetime record of physical discovery. As his protégé James Chadwick said, Rutherford’s ultimate distinction was “his genius to be astonished.” He preserved that quality against every assault of success and despite a well-hidden but sometimes sickening insecurity, the stiff scar of his colonial birth.111, 112
His genius found its first occasion at the University of New Zealand, where Rutherford in 1893 stayed on to earn a B.Sc. Heinrich Hertz’s 1887 discovery of “electric waves”—radio, we call the phenomenon now—had impressed Rutherford wonderfully, as it did young people everywhere in the world. To study the waves he set up a Hertzian oscillator—electrically charged metal knobs spaced to make sparks jump between metal plates—in a dank basement cloakroom. He was looking for a problem for his first independent work of research.
He located it in a general agreement among scientists, pointedly including Hertz himself, that high-frequency alternating current, the sort of current a Hertzian oscillator produced when the spark radiation surged rapidly back and forth between the metal plates, would not magnetize iron. Rutherford suspected otherwise and ingeniously proved he was right. The work earned him an 1851 Exhibition scholarship to Cambridge. He was spading up potatoes in the family garden when the cable came. His mother called the news down the row; he laughed and jettisoned his spade, shouting triumph for son and mother both: “That’s the last potato I’ll dig!” (Thirty-six years later, when he was created Baron Rutherford of Nelson, he sent his mother a cable in her turn: “Now Lord Rutherford, more your honour than mine.”113, 114)
“Magnetization of iron by high-frequency discharges” was skilled observation and brave dissent.115 With deeper originality, Rutherford noticed a subtle converse reaction while magnetizing iron needles with high-frequency current: needles already saturated with magnetism became partly demagnetized when a high-frequency current passed by. His genius to be astonished was at work. He quickly realized that he could use radio waves, picked up by a suitable antenna and fed into a coil of wire, to induce a high-frequency current into a packet of magnetized needles. Then the needles would be partly demagnetized and if he set a compass beside them it would swing to show the change.
By the time he arrived on borrowed funds at Cambridge in September 1895 to take up work at the Cavendish under its renowned director, J. J. Thomson, Rutherford had elaborated his observation into a device for detecting radio waves at a distance—in effect, the first crude radio receiver. Guglielmo Marconi was still laboring to perfect his version of a receiver at his father’s estate in Italy; for a few months the young New Zealander held the world record in detecting radio transmissions at a distance.116
Rutherford’s experiments delighted the distinguished British scientists who learned of them from J. J. Thomson. They quickly adopted Rutherford, even seating him one evening at the Fellows’ high table at King’s in the place of honor next to the provost, which made him feel, he said, “like an ass in a lion’s skin” and which shaded certain snobs on the Cavendish staff green with envy.117 Thomson generously arranged for a nervous but exultant Rutherford to read his third scientific paper, “A magnetic detector of electrical waves and some of its applications,” at the June 18, 1896, meeting of the Royal Society of London, the foremost scientific organization in the world.118 Marconi only caught up with him in September.119
Rutherford was poor. He was engaged to Mary Newton, the daughter of his University of New Zealand landlady, but the couple had postponed marriage until his fortunes improved. Working to improve them, he wrote his fiancée in the midst of his midwinter research: “The reason I am so keen on the subject [of radio detection] is because of its practical importance. . . . If my next week’s experiments come out as well as I anticipate, I see a chance of making cash rapidly in the future.”120
There is mystery here, mystery that carries forward all the way to “moonshine.” Rutherford was known in later years as a hard man with a research budget, unwilling to accept grants from industry or private donors, unwilling even to ask, convinced that string and sealing wax would carry the day. He was actively hostile to the commercialization of scientific research, telling his Russian protégé Peter Kapitza, for example, when Kapitza was offered consulting work in industry, “You cannot serve God and Mammon at the same time.”121 The mystery bears on what C. P. Snow, who knew him, calls the “one curious exception” to Rutherford’s “infallible” intuition, adding that “no scientist has made fewer mistakes.” The exception was Rutherford’s refusal to admit the possibility of usable energy from the atom, the very refusal that irritated Leo Szilard in 1933.122 “I believe that he was fearful that his beloved nuclear domain was about to be invaded by infidels who wished to blow it to pieces by exploiting it commercially,” another protégé, Mark Oliphant, speculates.123 Yet Rutherford himself was eager to exploit radio commercially in January 1896. Whence the dramatic and lifelong change?
The record is ambiguous but suggestive. The English scientific tradition was historically genteel. It generally disdained research patents and any other legal and commercial restraints that threatened the open dissemination of scientific results. In practice that guard of scientific liberty could molder into clubbish distaste for “vulgar commercialism.” Ernest Marsden, a Rutherford-trained physicist and an insightful biographer, heard that “in his early days at Cambridge there were some few who said that Rutherford was not a cultured man.” One component of that canard may have been contempt for his eagerness to make a profit from radio.124
It seems that J. J. Thomson intervened. A grand new work had abruptly offered itself. On November 8, 1895, one month after Rutherford arrived at Cambridge, the German physicist Wilhelm Röntgen discovered X rays radiating from the fluorescing glass wall of a cathode-ray tube. Röntgen reported his discovery in December and stunned the world. The strange radiation was a new growing point for science and Thomson began studying it almost immediately. At the same time he also continued his experiments with cathode rays, experiments that would culminate in 1897 in his identification of what he called the “negative corpuscle”—the electron, the first atomic particle to be identified. He must have needed help. He would also have understood the extraordinary opportunity for original research that radiation offered a young man of Rutherford’s skill at experiment.
To settle the issue Thomson wrote the grand old man of British science, Lord Kelvin, then seventy-two, asking his opinion of the commercial possibilities of radio—“before tempting Rutherford to turn to the new subject,” Marsden says. Kelvin after all, vulgar commercialism or not, had developed the transoceanic telegraph cable. “The reply of the great man was that [radio] might justify a capital expenditure of a £100,000 Company on its promotion, but no more.”125
By April 24 Rutherford has seen the light. He writes Mary Newton: “I hope to make both ends meet somehow, but I must expect to dub out my first year. . . . My scientific work at present is progressing slowly. I am working with the Professor this term on Röntgen Rays. I am a little full up of my old subject and am glad of a change. I expect it will be a good thing for me to work with the Professor for a time. I have done one research to show I can work by myself.”126 The tone is chastened and not nearly convinced, as if a ghostly, parental J. J. Thomson were speaking through Rutherford to his fiancée. He has not yet appeared before the Royal Society, where he was hardly “a little full up” of his subject. But the turnabout is accomplished. Hereafter Rutherford’s healthy ambition will go to scientific honors, not commercial success.
It seems probable that J. J. Thomson sat eager young Ernest Rutherford down in the darkly paneled rooms of the Gothic Revival Cavendish Laboratory that Clerk Maxwell had founded, at the university where Newton wrote his great Principia, and kindly told him he could not serve God and Mammon at the same time. It seems probable that the news that the distinguished director of the Cavendish had written the Olympian Lord Kelvin about the commercial ambitions of a brash New Zealander chagrined Rutherford to the bone and that he went away from the encounter feeling grotesquely like a parvenu. He would never make the same mistake again, even if it meant strapping his laboratories for funds, even if it meant driving away the best of his protégés, as eventually it did. Even if it meant that energy from his cherished atom could be nothing more than moonshine. But if Rutherford gave up commercial wealth for holy science, he won the atom in exchange. He found its constituent parts and named them. With string and sealing wax he made the atom real.
* * *
The sealing wax was blood red and it was the Bank of England’s most visible contribution to science. British experimenters used Bank of England sealing wax to make glass tubes airtight.127 Rutherford’s earliest work on the atom, like J. J. Thomson’s work with cathode rays, grew out of nineteenth-century examination of the fascinating effects produced by evacuating the air from a glass tube that had metal plates sealed into its ends and then connecting the metal plates to a battery or an induction coil. Thus charged with electricity, the emptiness inside the sealed tube glowed. The glow emerged from the negative plate—the cathode—and disappeared into the positive plate—the anode. If you made the anode into a cylinder and sealed the cylinder into the middle of the tube you could project a beam of glow—of cathode rays—through the cylinder and on into the end of the tube opposite the cathode. If the beam was energetic enough to hit the glass it would make the glass fluoresce. The cathode-ray tube, suitably modified, its all-glass end flattened and covered with phosphors to increase the fluorescence, is the television tube of today.
In the spring of 1897 Thomson demonstrated that the beam of glowing matter in a cathode-ray tube was not made up of light waves, as (he wrote drily) “the almost unanimous opinion of German physicists” held. Rather, cathode rays were negatively charged particles boiling off the negative cathode and attracted to the positive anode. These particles could be deflected by an electric field and bent into curved paths by a magnetic field. They were much lighter than hydrogen atoms and were identical “whatever the gas through which the discharge passes” if gas was introduced into the tube.128 Since they were lighter than the lightest known kind of matter and identical regardless of the kind of matter they were born from, it followed that they must be some basic constituent part of matter, and if they were a part, then there must be a whole. The real, physical electron implied a real, physical atom: the particulate theory of matter was therefore justified for the first time convincingly by physical experiment. They sang J. J.’s success at the annual Cavendish dinner:
The corpuscle won the day129
And in freedom went away
And became a cathode ray.
Armed with the electron, and knowing from other experiments that what was left when electrons were stripped away from an atom was a much more massive remainder that was positively charged, Thomson went on in the next decade to develop a model of the atom that came to be called the “plum pudding” model. The Thomson atom, “a number of negatively electrified corpuscles enclosed in a sphere of uniform positive electrification” like raisins in a pudding, was a hybrid: particulate electrons and diffuse remainder.130 It served the useful purpose of demonstrating mathematically that electrons could be arranged in stable configurations within an atom and that the mathematically stable arrangements could account for the similarities and regularities among chemical elements that the periodic table of the elements displays. It was becoming clear that electrons were responsible for chemical affinities between elements, that chemistry was ultimately electrical.
Thomson just missed discovering X rays in 1894. He was not so unlucky in legend as the Oxford physicist Frederick Smith, who found that photographic plates kept near a cathode-ray tube were liable to be fogged and merely told his assistant to move them to another place.131, 132 Thomson noticed that glass tubing held “at a distance of some feet from the discharge tube” fluoresced just as the wall of the tube itself did when bombarded with cathode rays, but he was too intent on studying the rays themselves to pursue the cause.133 Röntgen isolated the effect by covering his cathode-ray tube with black paper. When a nearby screen of fluorescent material still glowed he realized that whatever was causing the screen to glow was passing through the paper and the intervening air.134 If he held his hand between the covered tube and the screen, his hand slightly reduced the glow on the screen but in dark shadow he could see its bones.
Röntgen’s discovery intrigued other researchers besides J. J. Thomson and Ernest Rutherford. The Frenchman Henri Becquerel was a third-generation physicist who, like his father and grandfather before him, occupied the chair of physics at the Musée d’Histoire Naturelle in Paris; like them also he was an expert on phosphorescence and fluorescence—in his case, particularly of uranium. He heard a report of Röntgen’s work at the weekly meeting of the Académie des Sciences on January 20, 1896. He learned that the X rays emerged from the fluorescing glass, which immediately suggested to him that he should test various fluorescing materials to see if they also emitted X rays. He worked for ten days without success, read an article on X rays on January 30 that encouraged him to keep working and decided to try a uranium salt, uranyl potassium sulfate.
His first experiment succeeded—he found that the uranium salt emitted radiation—but misled him. He had sealed a photographic plate in black paper, sprinkled a layer of the uranium salt onto the paper and “exposed the whole thing to the sun for several hours.” When he developed the photographic plate “I saw the silhouette of the phosphorescent substance in black on the negative.” He mistakenly thought sunlight activated the effect, much as cathode rays released Röntgen’s X rays from the glass.135
The story of Becquerel’s subsequent serendipity is famous. When he tried to repeat his experiment on February 26 and again on February 27 Paris was gray. He put the covered photographic plate away in a dark drawer, uranium salt in place. On March 1 he decided to go ahead and develop the plate, “expecting to find the images very feeble. On the contrary, the silhouettes appeared with great intensity. I thought at once that the action might be able to go on in the dark.” Energetic, penetrating radiation from inert matter unstimulated by rays or light: now Rutherford had his subject, as Marie and Pierre Curie, looking for the pure element that radiated, had their backbreaking work.136
* * *
Between 1898, when Rutherford first turned his attention to the phenomenon Henri Becquerel found and which Marie Curie named radioactivity, and 1911, when he made the most important discovery of his life, the young New Zealand physicist systematically dissected the atom.
He studied the radiations emitted by uranium and thorium and named two of them: “There are present at least two distinct types of radiation—one that is very readily absorbed, which will be termed for convenience the α [alpha] radiation, and the other of a more penetrative character, which will be termed the β [beta] radiation.”137 (A Frenchman, P. V. Villard, later discovered the third distinct type, a form of high-energy X rays that was named gamma radiation in keeping with Rutherford’s scheme.138) The work was done at the Cavendish, but by the time he published it, in 1899, when he was twenty-seven, Rutherford had moved to Montreal to become professor of physics at McGill University. A Canadian tobacco merchant had given money there to build a physics laboratory and to endow a number of professorships, including Rutherford’s. “The McGill University has a good name,” Rutherford wrote his mother.139 “£500 is not so bad [a salary] and as the physical laboratory is the best of its kind in the world, I cannot complain.”
In 1900 Rutherford reported the discovery of a radioactive gas emanating from the radioactive element thorium.140 Marie and Pierre Curie soon discovered that radium (which they had purified from uranium ores in 1898) also gave off a radioactive gas. Rutherford needed a good chemist to help him establish whether the thorium “emanation” was thorium or something else; fortunately he was able to shanghai a young Oxford man at McGill, Frederick Soddy, of talent sufficient eventually to earn a Nobel Prize. “At the beginning of the winter [of 1900],” Soddy remembers, “Ernest Rutherford, the Junior Professor of Physics, called on me in the laboratory and told me about the discoveries he had made. He had just returned with his bride from New Zealand . . . but before leaving Canada for his trip he had discovered what he called the thorium emanation. . . . I was, of course, intensely interested and suggested that the chemical character of the [substance] ought to be examined.”141
The gas proved to have no chemical character whatsoever. That, says Soddy, “conveyed the tremendous and inevitable conclusion that the element thorium was slowly and spontaneously transmuting itself into [chemically inert] argon gas!” Soddy and Rutherford had observed the spontaneous disintegration of the radioactive elements, one of the major discoveries of twentieth-century physics.142 They set about tracing the way uranium, radium and thorium changed their elemental nature by radiating away part of their substance as alpha and beta particles. They discovered that each different radioactive product possessed a characteristic “half-life,” the time required for its radiation to reduce to half its previously measured intensity. The half-life measured the transmutation of half the atoms in an element into atoms of another element or of a physically variant form of the same element—an “isotope,” as Soddy later named it.143 Half-life became a way to detect the presence of amounts of transmuted substances—“decay products”—too small to detect chemically. The half-life of uranium proved to be 4.5 billion years, of radium 1,620 years, of one decay product of thorium 22 minutes, of another decay product of thorium 27 days. Some decay products appeared and transmuted themselves in minute fractions of a second—in the twinkle of an eye. It was work of immense importance to physics, opening up field after new field to excited view, and “for more than two years,” as Soddy remembered afterward, “life, scientific life, became hectic to a degree rare in the lifetime of an individual, rare perhaps in the lifetime of an institution.”144
Along the way Rutherford explored the radiation emanating from the radioactive elements in the course of their transmutation. He demonstrated that beta radiation consisted of high-energy electrons “similar in all respects to cathode rays.” He suspected, and later in England conclusively proved, that alpha particles were positively charged helium atoms ejected during radioactive decay.145 Helium is found captured in the crystalline spaces of uranium and thorium ores; now he knew why.
An important 1903 paper written with Soddy, “Radioactive change,” offered the first informed calculations of the amount of energy released by radioactive decay:
It may therefore be stated that the total energy of radiation during the disintegration of one gram of radium cannot be less than 10⁸ [i.e., 100,000,000] gram-calories, and may be between 10⁹ and 10¹⁰ gram-calories. . . . The union of hydrogen and oxygen liberates approximately 4 × 10³ [i.e., 4,000] gram-calories per gram of water produced, and this reaction sets free more energy for a given weight than any other chemical change known. The energy of radioactive change must therefore be at least twenty-thousand times, and may be a million times, as great as the energy of any molecular change.146
That was the formal scientific statement; informally Rutherford inclined to whimsical eschatology. A Cambridge associate writing an article on radioactivity that year, 1903, considered quoting Rutherford’s “playful suggestion that, could a proper detonator be found, it was just conceivable that a wave of atomic disintegration might be started through matter, which would indeed make this old world vanish in smoke.” Rutherford liked to quip that “some fool in a laboratory might blow up the universe unawares.” If atomic energy would never be useful, it might still be dangerous.147, 148
Soddy, who returned to England that year, examined the theme more seriously. Lecturing on radium to the Corps of Royal Engineers in 1904, he speculated presciently on the uses to which atomic energy might be put:
It is probable that all heavy matter possesses—latent and bound up with the structure of the atom—a similar quantity of energy to that possessed by radium. If it could be tapped and controlled what an agent it would be in shaping the world’s destiny! The man who put his hand on the lever by which a parsimonious nature regulates so jealously the output of this store of energy would possess a weapon by which he could destroy the earth if he chose.149
Soddy did not think the possibility likely: “The fact that we exist is a proof that [massive energetic release] did not occur; that it has not occurred is the best possible assurance that it never will. We may trust Nature to guard her secret.”
H. G. Wells thought Nature less trustworthy when he read similar statements in Soddy’s 1909 book Interpretation of Radium. “My idea is taken from Soddy,” he wrote of The World Set Free. “One of the good old scientific romances,” he called his novel; it was important enough to him that he interrupted a series of social novels to write it.150 Rutherford’s and Soddy’s discussions of radioactive change therefore inspired the science fiction novel that eventually started Leo Szilard thinking about chain reactions and atomic bombs.
In the summer of 1903 the Rutherfords visited the Curies in Paris. Mme. Curie happened to be receiving her doctorate in science on the day of their arrival; mutual friends arranged a celebration. “After a very lively evening,” Rutherford recalled, “we retired about 11 o’clock in the garden, where Professor Curie brought out a tube coated in part with zinc sulphide and containing a large quantity of radium in solution.151 The luminosity was brilliant in the darkness and it was a splendid finale to an unforgettable day.” The zinc-sulfide coating fluoresced white, making the radium’s ejection of energetic particles on its progress down the periodic table from uranium to lead visible in the darkness of the Paris evening. The light was bright enough to show Rutherford Pierre Curie’s hands, “in a very inflamed and painful state due to exposure to radium rays.” Hands swollen with radiation burns was another object lesson in what the energy of matter could do.
A twenty-six-year-old German chemist from Frankfurt, Otto Hahn, came to Montreal in 1905 to work with Rutherford. Hahn had already discovered a new “element,” radiothorium, later understood to be one of thorium’s twelve isotopes. He studied thorium radiation with Rutherford; together they determined that the alpha particles ejected from thorium had the same mass as the alpha particles ejected from radium and those from another radioactive element, actinium. The various particles were probably therefore identical—one conclusion along the way to Rutherford’s proof in 1908 that the alpha particle was inevitably a charged helium atom. Hahn went back to Germany in 1906 to begin a distinguished career as a discoverer of isotopes and elements; Leo Szilard encountered him working with physicist Lise Meitner at the Kaiser Wilhelm Institute for Chemistry in the 1920s in Berlin.
Rutherford’s research at McGill unraveling the complex transmutations of the radioactive elements earned him, in 1908, a Nobel Prize—not in physics but in chemistry. He had wanted that prize, writing his wife when she returned to New Zealand to visit her family in late 1904, “I may have a chance if I keep going,” and again early in 1905, “They are all following on my trail, and if I am to have a chance for a Nobel Prize in the next few years I must keep my work moving.” The award for chemistry rather than for physics at least amused him.152, 153 “It remained to the end a good joke against him,” says his son-in-law, “which he thoroughly appreciated, that he was thereby branded for all time as a chemist and no true physicist.”154
An eyewitness to the ceremonies said Rutherford looked ridiculously young—he was thirty-seven—and made the speech of the evening.155 He announced his recent confirmation, only briefly reported the month before, that the alpha particle was in fact helium.156 The confirming experiment was typically elegant. Rutherford had a glassblower make him a tube with extremely thin walls. He evacuated the tube and filled it with radon gas, a fertile source of alpha particles. The tube was gastight, but its thin walls allowed alpha particles to escape. Rutherford surrounded the radon tube with another glass tube, pumped out the air between the two tubes and sealed off the space. “After some days,” he told his Stockholm audience triumphantly, “a bright spectrum of helium was observed in the outer vessel.” Rutherford’s experiments still stun with their simplicity.157 “In this Rutherford was an artist,” says a former student. “All his experiments had style.”158
In the spring of 1907 Rutherford had left Montreal with his family—by then including a six-year-old daughter, his only child—and moved back to England. He had accepted appointment as professor of physics at Manchester, in the city where John Dalton had first revived the atomic theory almost exactly a century earlier. Rutherford bought a house and went immediately to work. He inherited an experienced German physicist named Hans Geiger who had been his predecessor’s assistant. Years later Geiger fondly recalled the Manchester days, Rutherford settled in among his gear:
I see his quiet research room at the top of the physics building, under the roof, where his radium was kept and in which so much well-known work on the emanation was carried out. But I also see the gloomy cellar in which he had fitted up his delicate apparatus for the study of the alpha rays. Rutherford loved this room. One went down two steps and then heard from the darkness Rutherford’s voice reminding one that a hot-pipe crossed the room at head level, and to step over two water-pipes. Then finally, in the feeble light one saw the great man himself seated at his apparatus.159
The Rutherford house was cheerier; another Manchester protégé liked to recall that “supper in the white-painted dining room on Saturdays and Sundays preceded pow-wows till all hours in the study on the first floor; tea on Sundays in the drawing room often followed a spin on the Cheshire roads in the motor.” There was no liquor in the house because Mary Rutherford did not approve of drinking.160 Smoking she reluctantly allowed because her husband smoked heavily, pipe and cigarettes both.
Now in early middle age he was famously loud, a “tribal chief,” as a student said, fond of banter and slang. He would march around the lab singing “Onward Christian Soldiers” off key. He took up room in the world now; you knew he was coming. He was ruddy-faced with twinkling blue eyes and he was beginning to develop a substantial belly. The diffidence was well hidden: his handshake was brief, limp and boneless; “he gave the impression,” says another former student, “that he was shy of physical contact.” He could still be mortified by condescension, blushing bright red and turning aside dumbstruck.161, 162, 163 With his students he was quieter, gentler, solid gold. “He was a man,” pronounces one in high praise, “who never did dirty tricks.”164
Chaim Weizmann, the Russian-Jewish biochemist who was later elected the first president of Israel, was working at Manchester on fermentation products in those days. He and Rutherford became good friends. “Youthful, energetic, boisterous,” Weizmann recalled, “he suggested anything but the scientist. He talked readily and vigorously on every subject under the sun, often without knowing anything about it. Going down to the refectory for lunch I would hear the loud, friendly voice rolling up the corridor.” Rutherford had no political knowledge at all, Weizmann thought, but excused him on the grounds that his important scientific work took all his time.165 “He was a kindly person, but he did not suffer fools gladly.”
In September 1907, his first term at Manchester, Rutherford made up a list of possible subjects for research. Number seven on the list was “Scattering of alpha rays.” Working over the years to establish the alpha particle’s identity, he had come to appreciate its great value as an atomic probe; because it was massive compared to the high-energy but nearly weightless beta electron, it interacted vigorously with matter.166 The measure of that interaction could reveal the atom’s structure. “I was brought up to look at the atom as a nice hard fellow, red or grey in colour, according to taste,” Rutherford told a dinner audience once.167 By 1907 it was clear to him that the atom was not a hard fellow at all but was substantially empty space. The German physicist Philipp Lenard had demonstrated as much in 1903 by bombarding elements with cathode rays.168 Lenard dramatized his findings with a vivid metaphor: the space occupied by a cubic meter of solid platinum, he said, was as empty as the space of stars beyond the earth.
But if there was empty space in atoms—void within void—there was something else as well. In 1906, at McGill, Rutherford had studied the magnetic deflection of alpha particles by projecting them through a narrow defining slit and passing the resulting thin beam through a magnetic field. At one point he covered half the defining slit with a sheet of mica only about three thousandths of a centimeter thick, thin enough to allow alpha particles to go through. He was recording the results of the experiment on photographic paper; he found that the edges of the part of the beam covered with the mica were blurred. The blurring meant that as the alpha particles passed through, the atoms of mica were deflecting—scattering—many of them from a straight line by as much as two degrees of angle. Since an intense magnetic field scattered the uncovered alpha particles only a little more, something unusual was happening. For a particle as comparatively massive as the alpha, moving at such high velocity, two degrees was an enormous deflection. Rutherford calculated that it would require an electrical field of about 100 million volts per centimeter of mica to scatter an alpha particle so far.169 “Such results bring out clearly,” he wrote, “the fact that the atoms of matter must be the seat of very intense electrical forces.” It was just this scattering that he marked down on his list to study.170
To do so he needed not only to count but also to see individual alpha particles. At Manchester he accepted the challenge of perfecting the necessary instruments. He worked with Hans Geiger to develop an electrical device that clicked off the arrival of each individual alpha particle into a counting chamber. Geiger would later elaborate the invention into the familiar Geiger counter of modern radiation studies.
There was a way to make individual alpha particles visible using zinc sulfide, the compound that coated the tube of radium solution Pierre Curie had carried into the night garden in Paris in 1903. A small glass plate coated with zinc sulfide and bombarded with alpha particles briefly fluoresced at the point where each particle struck, a phenomenon known as “scintillation” from the Greek word for spark. Under a microscope the faint scintillations in the zinc sulfide could be individually distinguished and counted. The method was tedious in the extreme. It required sitting for at least thirty minutes in a dark room to adapt the eyes, then taking counting turns of only a minute at a time—the change signaled by a timer that rang a bell—because focusing the eyes consistently on a small, dim screen was impossible for much longer than that.171 Even through the microscope the scintillations hovered at the edge of visibility; a counter who expected an experiment to produce a certain number of scintillations sometimes unintentionally saw imaginary flashes. So the question was whether the count was generally accurate. Rutherford and Geiger compared the observation counts with matched counts by the electric method. When the observation method proved reliable they put the electric counter away. It could count, but it couldn’t see, and Rutherford was interested first of all in locating an alpha particle’s position in space.
Geiger went to work on alpha scattering, aided by Ernest Marsden, then an eighteen-year-old Manchester undergraduate. They observed alpha particles coming out of a firing tube and passing through foils of such metals as aluminum, silver, gold and platinum. The results were generally consistent with expectation: alpha particles might very well accumulate as much as two degrees of total deflection bouncing around among atoms of the plum-pudding sort. But the experiment was troubled with stray particles.172 Geiger and Marsden thought molecules in the walls of the firing tube might be scattering them. They tried eliminating the strays by narrowing and defining the end of the firing tube with a series of graduated metal washers. That proved no help.
Rutherford wandered into the room. The three men talked over the problem. Something about it alerted Rutherford’s intuition for promising side effects. Almost as an afterthought he turned to Marsden and said, “See if you can get some effect of alpha particles directly reflected from a metal surface.” Marsden knew that a negative result was expected—alpha particles shot through thin foils, they did not bounce back from them—but that missing a positive result would be an unforgivable sin.173 He took great care to prepare a strong alpha source. He aimed the pencil-narrow beam of alphas at a forty-five degree angle onto a sheet of gold foil. He positioned his scintillation screen on the same side of the foil, beside the alpha beam, so that a particle bouncing back would strike the screen and register as a scintillation. Between firing tube and screen he interposed a thick lead plate so that no direct alpha particles could interfere.
Arrangement of Ernest Marsden’s experiment: A-B, alpha particle source. R-R, gold foil. P, lead plate. S, zinc sulfide scintillation screen. M, microscope.
Immediately, and to his surprise, he found what he was looking for. “I remember well reporting the result to Rutherford,” he wrote, “ . . . when I met him on the steps leading to his private room, and the joy with which I told him.”174
A few weeks later, at Rutherford’s direction, Geiger and Marsden formulated the experiment for publication. “If the high velocity and mass of the α-particle be taken into account,” they concluded, “it seems surprising that some of the α-particles, as the experiment shows, can be turned within a layer of 6 × 10⁻⁵ [i.e., .00006] cm. of gold through an angle of 90°, and even more. To produce a similar effect by magnetic field, the enormous field of 10⁹ absolute units would be required.” Rutherford in the meantime went off to ponder what the scattering meant.175
He pondered, in the midst of other work, for more than a year. He had a first quick intuition of what the experiment portended and then lost it.176 Even after he announced his spectacular conclusion he was reluctant to promote it. One reason for his reluctance might be that the discovery contradicted the atomic models J. J. Thomson and Lord Kelvin had postulated earlier. There were physical objections to his interpretation of Marsden’s discovery that would require working out as well.
Rutherford had been genuinely astonished by Marsden’s results. “It was quite the most incredible event that has ever happened to me in my life,” he said later. “It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration I realised that this scattering backwards must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greatest part of the mass of the atom was concentrated in a minute nucleus.”177
“Collision” is misleading. What Rutherford had visualized, making calculations and drawing diagrammatic atoms on large sheets of good paper, was exactly the sort of curving path toward and away from a compact, massive central body that a comet follows in its gravitational pas de deux with the sun.178 He had a model made, a heavy electromagnet suspended as a pendulum on thirty feet of wire that grazed the face of another electromagnet set on a table.179 With the two grazing faces matched in polarity and therefore repelling each other, the pendulum was deflected into a parabolic path according to its velocity and angle of approach, just as the alpha particles were deflected. He needed as always to visualize his work.
When further experiment confirmed his theory that the atom had a small, massive nucleus, he was finally ready to go public. He chose as his forum an old Manchester organization, the Manchester Literary and Philosophical Society—“largely the general public,” says James Chadwick, who attended the historic occasion as a student on March 7, 1911, “ . . . people interested in literary and philosophical ideas, largely business people.”180
The first item on the agenda was a Manchester fruit importer’s report that he had found a rare snake in a consignment of Jamaica bananas.181 He exhibited the snake. Then it was Rutherford’s turn. Only an abstract of the announcement survives, but Chadwick remembers how it felt to hear it: it was “a most shattering performance to us, young boys that we were. . . . We realized this was obviously the truth, this was it.”182
Rutherford had found the nucleus of his atom. He did not yet have an arrangement for its electrons. At the Manchester meeting he spoke of “a central electric charge concentrated at a point and surrounded by a uniform spherical distribution of opposite electricity equal in amount.” That was sufficiently idealized for calculation, but it neglected the significant physical fact that the “opposite electricity” must be embodied in electrons.183 Somehow they would have to be arranged around the nucleus.
Another mystery. A Japanese theoretical physicist, Hantaro Nagaoka, had postulated in 1903 a “Saturnian” model of the atom with flat rings of electrons revolving like Saturn’s rings around a “positively charged particle.”184 Nagaoka adapted the mathematics for his model from James Clerk Maxwell’s first triumphant paper, published in 1859, “On the stability of motion of Saturn’s rings.” All Rutherford’s biographers agree that Rutherford was unaware of Nagaoka’s paper until March 11, 1911—after the Manchester meeting—when he heard about it by postcard from a physicist friend: “Campbell tells me that Nagaoka once tried to deduce a big positive centre in his atom in order to account for optical effects.” He thereupon looked up the paper in the Philosophical Magazine and added a discussion of it to the last page of the full-length paper, “The scattering of α and β particles by matter and the structure of the atom,” that he sent to the same magazine in April.185 He described Nagaoka’s atom in that paper as being “supposed to consist of a central attracting mass surrounded by rings of rotating electrons.”186
But it seems that Nagaoka had recently visited him, because the Japanese physicist wrote from Tokyo on February 22, 1911, thanking him “for the great kindness you showed me in Manchester.” Yet the two physicists seem not to have discussed atomic models, or Nagaoka would probably have continued the discussion in his letter and Rutherford, a totally honest man, would certainly have acknowledged it in his paper.187
One reason Rutherford was unaware of Nagaoka’s Saturnian model of the atom is that it had been criticized and abandoned soon after Nagaoka introduced it because it suffered from a severe defect, the same theoretical defect that marred the atom Rutherford was now proposing.188 The rings of Saturn are stable because the force operating between the particles of debris that make them up—gravity—is attractive. The force operating between the electrons of Nagaoka’s Saturnian electron rings, however—negative electric charge—was repulsive. It followed mathematically that whenever two or more electrons equally spaced on an orbit rotated around the nucleus, they would drift into modes of oscillation—instabilities—that would quickly tear the atom apart.
What was true for Nagaoka’s Saturnian atom was also true, theoretically, for the atom Rutherford had found by experiment. If the atom operated by the mechanical laws of classical physics, the Newtonian laws that govern relationships within planetary systems, then Rutherford’s model should not work. But his was not a merely theoretical construct. It was the result of real physical experiment. And work it clearly did. It was as stable as the ages and it bounced back alpha particles like cannon shells.
Someone would have to resolve the contradiction between classical physics and Rutherford’s experimentally tested atom. It would need to be someone with qualities different from Rutherford’s: not an experimentalist but a theoretician, yet a theoretician rooted deeply in the real. He would need at least as much courage as Rutherford had and equal self-confidence. He would need to be willing to step through the mechanical looking glass into a strange, nonmechanical world where what happened on the atomic scale could not be modeled with planets or pendulums.
As if he had been called to the cause, such a person abruptly appeared in Manchester. Writing to an American friend on March 18, 1912, Rutherford announced the arrival: “Bohr, a Dane, has pulled out of Cambridge and turned up here to get some experience in radioactive work.” “Bohr” was Niels Henrik David Bohr, the Danish theoretical physicist.189 He was then twenty-seven years old.