THE MIGHTY ATOM

While Einstein and Hubble were productively unravelling the large-scale structure of the cosmos, others were struggling to understand something closer to hand but in its way just as remote: the tiny and ever-mysterious atom.

The great Caltech physicist Richard Feynman once observed that if you had to reduce scientific history to one important statement it would be: “All things are made of atoms.” They are everywhere and they constitute everything. Look around you. It is all atoms. Not just the solid things like walls and tables and sofas, but the air in between. And they are there in numbers that you really cannot conceive.

The basic working arrangement of atoms is the molecule (from the Latin for “little mass”). A molecule is simply two or more atoms working together in a more or less stable arrangement: add two atoms of hydrogen to one of oxygen and you have a molecule of water. Chemists tend to think in terms of molecules rather than elements in much the way that writers tend to think in terms of words and not letters, so it is molecules they count, and these are numerous to say the least. At sea level, at a temperature of 0 degrees Celsius, one cubic centimetre of air (that is, a space about the size of a sugar cube) will contain 45 billion billion molecules. And they are in every single cubic centimetre you see around you. Think how many cubic centimetres there are in the world outside your window—how many sugar cubes it would take to fill that view. Then think how many it would take to build a universe. Atoms, in short, are very abundant.

They are also fantastically durable. Because they are so long-lived, atoms really get around. Every atom you possess has almost certainly passed through several stars and been part of millions of organisms on its way to becoming you. We are each so atomically numerous and so vigorously recycled at death that a significant number of our atoms—up to a billion for each of us, it has been suggested—probably once belonged to Shakespeare. A billion more each came from Buddha and Genghis Khan and Beethoven, and any other historical figure you care to name. (The personages have to be historical, apparently, as it takes the atoms some decades to become thoroughly redistributed; however much you may wish it, you are not yet one with Elvis Presley.)

So we are all reincarnations—though short-lived ones. When we die, our atoms will disassemble and move off to find new uses elsewhere—as part of a leaf or other human being or drop of dew. Atoms themselves, however, go on practically for ever. Nobody actually knows how long an atom can survive, but according to Martin Rees it is probably about 10³⁵ years—a number so big that even I am happy to express it in mathematical notation.

Above all, atoms are tiny—very tiny indeed. Half a million of them lined up shoulder to shoulder could hide behind a human hair. On such a scale an individual atom is essentially impossible to imagine, but we can of course try.

Start with a millimetre, which is a line this long: -. Now imagine that line divided into a thousand equal widths. Each of those widths is a micron. This is the scale of micro-organisms. A typical paramecium, for instance—a tiny, single-celled, freshwater creature—is about 2 microns wide, 0.002 millimetres, which is really very small. If you wanted to see with your naked eye a paramecium swimming in a drop of water, you would have to enlarge the drop until it was some 12 metres across. However, if you wanted to see the atoms in the same drop, you would have to make the drop 24 kilometres across.

Atoms, in other words, exist on a scale of minuteness of another order altogether. To get down to the scale of atoms, you would need to take each one of those micron slices and shave it into ten thousand finer widths. That’s the scale of an atom: one ten-millionth of a millimetre. It is a degree of slenderness way beyond the capacity of our imaginations, but you can get some idea of the proportions if you bear in mind that one atom is to that millimetre line above as the thickness of a sheet of paper is to the height of the Empire State Building.
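If you prefer the proportions in figures, the chain of slices works out like this (taking, as a rough working figure, a typical atom to be about 0.1 nanometre across):

$$
1\ \text{mm} \div 1{,}000 = 1\ \mu\text{m}, \qquad 1\ \mu\text{m} \div 10{,}000 = 10^{-4}\ \mu\text{m} = 10^{-7}\ \text{mm} \approx 0.1\ \text{nm}.
$$

That is the one ten-millionth of a millimetre quoted above; at that size, half a million atoms side by side span about 0.05 millimetres, roughly the width of a fine human hair.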

It is, of course, the abundance and extreme durability of atoms that make them so useful, and the tininess that makes them so hard to detect and understand. The realization that atoms are these three things—small, numerous, practically indestructible—and that all things are made from them first occurred not to Antoine-Laurent Lavoisier, as you might expect, or even to Henry Cavendish or Humphry Davy, but rather to a spare and lightly educated English Quaker named John Dalton, whom we first encountered in Chapter 7.

Dalton was born in 1766 on the edge of the Lake District, near Cockermouth, to a family of poor and devout Quaker weavers. (Four years later the poet William Wordsworth would also join the world at Cockermouth.) He was an exceptionally bright student—so very bright, indeed, that at the improbably youthful age of twelve he was put in charge of the local Quaker school. This perhaps says as much about the school as about Dalton’s precocity, but perhaps not: we know from his diaries that at about this time he was reading Newton’s Principia—in the original Latin—and other works of a similarly challenging nature. At fifteen, still schoolmastering, he took a job in the nearby town of Kendal, and a decade after that he moved to Manchester, whence he scarcely stirred for the remaining fifty years of his life. In Manchester he became something of an intellectual whirlwind, producing books and papers on subjects ranging from meteorology to grammar. Colour blindness, a condition from which he suffered, was for a long time called Daltonism because of his studies. But it was a plump book called A New System of Chemical Philosophy, published in 1808, that established his reputation.

Paramecia—one of them caught in the process of dividing—swim in a drop of water, accompanied by smaller protozoans called biflagellates. Though invisible to the naked eye, such organisms are gigantic compared with atoms (credit 9.2)

There, in a short chapter of just five pages (out of the book’s more than nine hundred), people of learning first encountered atoms in something approaching their modern conception. Dalton’s simple insight was that at the root of all matter are exceedingly tiny, irreducible particles. “We might as well attempt to introduce a new planet into the solar system or annihilate one already in existence, as to create or destroy a particle of hydrogen,” he wrote.

Neither the idea of atoms nor the term itself was exactly new. Both had been developed by the ancient Greeks. Dalton’s contribution was to consider the relative sizes and characters of these atoms and how they fit together. He knew, for instance, that hydrogen was the lightest element, so he gave it an atomic weight of 1. He believed also that water consisted of seven parts of oxygen to one of hydrogen, and so he gave oxygen an atomic weight of 7. By such means was he able to arrive at the relative weights of the known elements. He wasn’t always terribly accurate—oxygen’s atomic weight is actually 16, not 7—but the principle was sound and formed the basis for all of modern chemistry and much of the rest of modern science.
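The gap between Dalton’s 7 and the modern 16 comes down to two things: his assumption that water was one hydrogen joined to one oxygen, and a somewhat low measurement of the weight ratio. A rough reconstruction, in modern notation rather than Dalton’s own, runs:

$$
\text{Dalton: water} = \text{HO},\quad \frac{m_{\mathrm{O}}}{m_{\mathrm{H}}} \approx 7 \;\Rightarrow\; \text{weight of O} \approx 7; \qquad
\text{in fact: water} = \text{H}_2\text{O},\quad \frac{m_{\mathrm{O}}}{2\,m_{\mathrm{H}}} \approx 8 \;\Rightarrow\; \text{weight of O} \approx 16.
$$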

The work made Dalton famous—albeit in a low-key, English Quaker sort of way. In 1826, the French chemist P. J. Pelletier travelled to Manchester to meet the atomic hero. Pelletier expected to find him attached to some grand institution, so he was astounded to discover him teaching elementary arithmetic to boys in a small school on a back street. According to the scientific historian E. J. Holmyard, a confused Pelletier, upon beholding the great man, stammered:

“Est-ce que j’ai l’honneur de m’addresser à Monsieur Dalton?” for he could hardly believe his eyes that this was the chemist of European fame, teaching a boy his first four rules. “Yes,” said the matter-of-fact Quaker. “Wilt thou sit down whilst I put this lad right about his arithmetic?”

Although Dalton tried to avoid all honours, he was elected to the Royal Society against his wishes, showered with medals and given a handsome government pension. When he died in 1844, forty thousand people viewed the coffin and the funeral cortège stretched for two miles. His entry in the Dictionary of National Biography is one of the longest, rivalled in length among nineteenth-century men of science only by those of Darwin and Lyell.

For a century after Dalton made his proposal, it remained entirely hypothetical, and a few eminent scientists—notably the Viennese physicist Ernst Mach, after whom the Mach number (the ratio of a speed to the speed of sound) is named—doubted the existence of atoms at all. “Atoms cannot be perceived by the senses … they are things of thought,” he wrote. Such was the scepticism with which the existence of atoms was viewed in the German-speaking world in particular that it was said to have played a part in the suicide of the great theoretical physicist and atomic enthusiast Ludwig Boltzmann in 1906.

John Dalton, the English chemist and thinker who became famous when he developed the theory that all matter is made of tiny indivisible particles called atoms (credit 9.3)

It was Einstein who provided the first incontrovertible evidence of atoms’ existence with his paper on Brownian motion in 1905, but this attracted little attention and in any case Einstein was soon to become consumed with his work on general relativity. So the first real hero of the atomic age, if not the first personage on the scene, was Ernest Rutherford.

Rutherford was born in 1871 in the “back blocks” of New Zealand to parents who had emigrated from Scotland to raise a little flax and a lot of children (to paraphrase Steven Weinberg). Growing up in a remote part of a remote country, he was about as far from the mainstream of science as it was possible to be, but in 1895 he won a scholarship that took him to the Cavendish Laboratory at Cambridge University, which was about to become the hottest place in the world to do physics.

Physicists are notoriously scornful of scientists from other fields. When the great Austrian physicist Wolfgang Pauli’s wife left him for a chemist, he was staggered with disbelief. “Had she taken a bullfighter I would have understood,” he remarked in wonder to a friend. “But a chemist…”

It was a feeling Rutherford would have understood. “All science is either physics or stamp collecting,” he once said, in a line that has been used many times since. There is a certain engaging irony, therefore, that his award of the Nobel Prize in 1908 was in chemistry, not physics.

Rutherford was a lucky man—lucky to be a genius, but even luckier to live at a time when physics and chemistry were so exciting and so compatible (his own sentiments notwithstanding). Never again would they quite so comfortably overlap.

For all his success, Rutherford was not an especially brilliant man and was actually pretty terrible at mathematics. Often during lectures he would get so lost in his own equations that he would give up halfway through and tell the students to work it out for themselves. According to his longtime colleague James Chadwick, discoverer of the neutron, he wasn’t even particularly clever at experimentation. He was simply tenacious and open-minded. For brilliance he substituted shrewdness and a kind of daring. His mind, in the words of one biographer, was “always operating out towards the frontiers, as far as he could see, and that was a great deal further than most other men.” Confronted with an intractable problem, he was prepared to work at it harder and longer than most people and to be more receptive to unorthodox explanations. His greatest breakthrough came because he was prepared to spend immensely tedious hours sitting at a screen counting alpha particle scintillations, as they were known—the sort of work that would normally have been farmed out. He was one of the first—possibly the very first—to see that the power inherent in the atom could, if harnessed, make bombs powerful enough to “make this old world vanish in smoke.”

Some unusual personal effects of John Dalton: a letter, a hank of hair and his eyeballs, dissected. Dalton suffered from colour blindness and thought that his eyes might yield clues to the condition. He bequeathed them to a physician friend, who failed to find anything significant in them (credit 9.4)

Physically he was big and booming, with a voice that made the timid shrink. Once, when told that Rutherford was about to make a radio broadcast across the Atlantic, a colleague drily asked: “Why use radio?” He also had a huge amount of good-natured confidence. When someone remarked to him that he seemed always to be at the crest of a wave, he responded, “Well, after all, I made the wave, didn’t I?” C. P. Snow recalled how, in a Cambridge tailor’s, he overheard Rutherford remark: “Every day I grow in girth. And in mentality.”

But both girth and fame were far ahead of him in 1895 when he fetched up at the Cavendish.1 It was a singularly eventful period in science. In the year of Rutherford’s arrival in Cambridge, Wilhelm Roentgen discovered X-rays at the University of Würzburg in Germany; the next year, Henri Becquerel discovered radioactivity. And the Cavendish itself was about to embark on a long period of greatness. In 1897, J. J. Thomson and colleagues would discover the electron there, in 1911 C. T. R. Wilson would produce the first particle detector there (as we shall see), and in 1932 James Chadwick would discover the neutron there. Further still in the future, in 1953, James Watson and Francis Crick would discover the structure of DNA at the Cavendish.

In the beginning Rutherford worked on radio waves, and with some distinction—he managed to transmit a crisp signal more than a mile, a very reasonable achievement for the time—but he gave it up when he was persuaded by a senior colleague that radio had little future. On the whole, however, Rutherford didn’t thrive at the Cavendish, and after three years there, feeling he was going nowhere, he took a post at McGill University in Montreal, where he began his long and steady rise to greatness. By the time he received his Nobel Prize (for “investigations into the disintegration of the elements, and the chemistry of radioactive substances,” according to the official citation) he had moved on to Manchester University, and it was there, in fact, that he would do his most important work in determining the structure and nature of the atom.

By the early twentieth century it was known that atoms were made of parts—Thomson’s discovery of the electron had established that—but it wasn’t known how many parts there were or how they fitted together or what shape they took. Some physicists thought that atoms might be cube-shaped, because cubes can be packed together so neatly without any wasted space. The more general view, however, was that an atom was more like a currant bun or a plum pudding: a dense, solid object that carried a positive charge but that was studded with negatively charged electrons, like the currants in a currant bun.

In 1910, Rutherford (assisted by his student Hans Geiger, who would later invent the radiation detector that bears his name) fired ionized helium atoms, or alpha particles, at a sheet of gold foil.2 To Rutherford’s astonishment, some of the particles bounced back. It was as if, he said, he had fired a 15-inch shell at a sheet of paper and it rebounded into his lap. This was just not supposed to happen. After considerable reflection he realized there could be only one possible explanation: the particles that bounced back were striking something small and dense at the heart of the atom, while the other particles sailed through unimpeded. An atom, Rutherford realized, was mostly empty space, with a very dense nucleus at the centre. This was a most gratifying discovery, but it presented one immediate problem. By all the laws of conventional physics, atoms shouldn’t therefore exist.

The New Zealand-born physicist Ernest Rutherford (facing camera) strikes a thoughtful pose in the Cavendish Laboratory at Cambridge in 1926. Rutherford, who won the 1908 Nobel Prize for his investigations of radioactivity and the disintegration of the elements, became one of many Nobel laureates to work at the Cavendish during its years of pre-eminence (credit 9.5)

Let us pause for a moment and consider the structure of the atom as we know it now. Every atom is made from three kinds of elementary particles: protons, which have a positive electrical charge; electrons, which have a negative electrical charge; and neutrons, which have no charge. Protons and neutrons are packed into the nucleus, while electrons spin around outside. The number of protons is what gives an atom its chemical identity. An atom with one proton is an atom of hydrogen, one with two protons is helium, with three protons lithium, and so on up the scale. Each time you add a proton you get a new element. (Because the number of protons in an atom is always balanced by an equal number of electrons, you will sometimes see it written that it is the number of electrons that defines an element; it comes to the same thing. The way it was explained to me is that protons give an atom its identity, electrons its personality.)

A traditional rendering of an atom showing electrons orbiting a nucleus as planets orbit a sun. The image was originally created in 1904 by a Japanese physicist named Hantaro Nagaoka, but in fact is not accurate (credit 9.6)

Neutrons don’t influence an atom’s identity, but they do add to its mass. The number of neutrons is generally about the same as the number of protons, but they can vary up and down slightly. Add or subtract a neutron or two and you get an isotope. The terms you hear in reference to dating techniques in archaeology refer to isotopes—carbon-14, for instance, which is an atom of carbon with six protons and eight neutrons (the fourteen being the sum of the two).

Neutrons and protons occupy the atom’s nucleus. The nucleus of an atom is tiny—only one-millionth of a billionth of the full volume of the atom—but fantastically dense, since it contains virtually all the atom’s mass. As Cropper has put it, if an atom were expanded to the size of a cathedral, the nucleus would be only about the size of a fly—but a fly many thousands of times heavier than the cathedral. It was this spaciousness—this resounding, unexpected roominess—that had Rutherford scratching his head in 1910.
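The width and volume versions of this statement are the same fact seen two ways: a nucleus is typically about a hundred-thousandth the width of its atom (a standard rule-of-thumb ratio, assumed here), and volume shrinks with the cube of that fraction:

$$
\left(\frac{r_{\text{nucleus}}}{r_{\text{atom}}}\right)^{3} \approx \left(10^{-5}\right)^{3} = 10^{-15},
$$

which is one-millionth of one-billionth of the atom’s volume.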

It is still a fairly astounding notion to consider that atoms are mostly empty space, and that the solidity we experience all around us is an illusion. When two objects come together in the real world—billiard balls are most often used for illustration—they don’t actually strike each other. “Rather,” as Timothy Ferris explains, “the negatively charged fields of the two balls repel each other…[W]ere it not for their electrical charges they could, like galaxies, pass right through each other unscathed.” When you sit in a chair, you are not actually sitting there, but levitating above it at a height of one angstrom (a hundred millionth of a centimetre), your electrons and its electrons implacably opposed to any closer intimacy.

The picture of an atom that nearly everybody has in mind is of an electron or two flying around a nucleus, like planets orbiting a sun. This image was created in 1904, based on little more than clever guesswork, by a Japanese physicist named Hantaro Nagaoka. It is completely wrong, but durable just the same. As Isaac Asimov liked to note, it inspired generations of science-fiction writers to create stories of worlds-within-worlds, in which atoms become tiny inhabited solar systems or our solar system turns out to be merely a mote in some much larger scheme. Even now CERN, the European Organization for Nuclear Research, uses Nagaoka’s image as a logo on its website. In fact, as physicists were soon to realize, electrons are not like orbiting planets at all, but more like the blades of a spinning fan, managing to fill every bit of space in their orbits simultaneously (but with the crucial difference that the blades of a fan only seem to be everywhere at once; electrons are).

Needless to say, very little of this was understood in 1910 or for many years afterwards. Rutherford’s finding presented some large and immediate problems, not least that no electron should be able to orbit a nucleus without crashing. Conventional electrodynamic theory demanded that a flying electron should run out of energy very quickly—in only an instant or so—and spiral into the nucleus, with disastrous consequences for both. There was also the problem of how protons, with their positive charges, could bundle together inside the nucleus without blowing themselves and the rest of the atom apart. Clearly, whatever was going on down there in the world of the very small was not governed by the laws that applied in the macro world where our expectations reside.

As physicists began to delve into this subatomic realm, they realized that it wasn’t merely different from anything we knew, but different from anything ever imagined. “Because atomic behaviour is so unlike ordinary experience,” Richard Feynman once observed, “it is very difficult to get used to and it appears peculiar and mysterious to everyone, both to the novice and to the experienced physicist.” When Feynman made that comment, physicists had had half a century to adjust to the strangeness of atomic behaviour. So think how it must have felt to Rutherford and his colleagues in the early 1910s when it was all brand new.

A computer graphic of an atom of helium, one of the commonest elements in the universe, showing a nucleus of two protons and two neutrons surrounded by an electron cloud. The atom’s electrons are able to create such a cloud through their weird ability to be “at once everywhere and nowhere.” (credit 9.7)

One of the people working with Rutherford was a mild and affable young Dane named Niels Bohr. In 1913, while puzzling over the structure of the atom, Bohr had an idea so exciting that he postponed his honeymoon to write what became a landmark paper.

Because physicists couldn’t see anything so small as an atom, they had to try to work out its structure from how it behaved when they did things to it, as Rutherford had done by firing alpha particles at foil. Sometimes, not surprisingly, the results of these experiments were puzzling. One puzzle that had been around for a long time was to do with spectrum readings of the wavelengths of hydrogen. These produced patterns showing that hydrogen atoms emitted energy at certain wavelengths but not others. It was rather as if someone under surveillance kept turning up at particular locations but was never observed travelling between them. No-one could understand why this should be.

It was while puzzling over this problem that Bohr was struck by a solution and dashed off his famous paper. Called “On the Constitution of Atoms and Molecules,” the paper explained how electrons could keep from falling into the nucleus by suggesting that they could occupy only certain well-defined orbits. According to the new theory, an electron moving between orbits would disappear from one and reappear instantaneously in another without visiting the space between. This idea—the famous “quantum leap”—is of course utterly strange, but it was too good not to be true. It not only kept electrons from spiralling catastrophically into the nucleus, it also explained hydrogen’s bewildering wavelengths. The electrons only appeared in certain orbits because they only existed in certain orbits. It was a dazzling insight and it won Bohr the 1922 Nobel Prize in physics, the year after Einstein received his.

The Danish physicist Niels Bohr in 1926, four years after winning a Nobel Prize for working out the mysterious behaviour of electrons (credit 9.8a)

Meanwhile the tireless Rutherford, now back at Cambridge having succeeded J. J. Thomson as head of the Cavendish Laboratory, came up with a model that explained why the nuclei didn’t blow up. He saw that the positive charge of the protons must be offset by some type of neutralizing particles, which he called neutrons. The idea was simple and appealing, but not easy to prove. Rutherford’s associate, James Chadwick, devoted eleven intensive years to hunting for neutrons before finally succeeding in 1932. He, too, was awarded a Nobel Prize in physics, in 1935. As Boorse and his colleagues point out in their history of the subject, the delay in discovery was probably a very good thing, as mastery of the neutron was essential to the development of the atomic bomb. (Because neutrons have no charge, they aren’t repelled by the electrical fields at the heart of an atom and thus could be fired like tiny torpedoes into an atomic nucleus, setting off the destructive process known as fission.) Had the neutron been isolated in the 1920s, they note, it is “very likely the atomic bomb would have been developed first in Europe, undoubtedly by the Germans.”

J. J. Thomson, Rutherford’s predecessor as director of the Cavendish Laboratory, in an undated photograph (credit 9.8b)

James Chadwick’s neutron detector, the device he used to prove the existence of the elusive and long-sought particles in 1932 (credit 9.8c)

Left: James Chadwick, protégé of Ernest Rutherford, who spent eleven years searching devotedly for neutrons. In 1935 he was awarded the Nobel Prize in physics for their discovery. (credit 9.9a) Right: Prince Louis-Victor de Broglie, who suggested that an electron should be regarded as a wave and not as a particle, as this minimized anomalies that had long baffled scientists (credit 9.9b)

As it was, the Europeans had their hands full trying to understand the strange behaviour of the electron. The principal problem they faced was that the electron sometimes behaved like a particle and sometimes like a wave. This impossible duality drove physicists nearly mad. For the next decade all across Europe they furiously thought and scribbled and offered competing hypotheses. In France, Prince Louis-Victor de Broglie, the scion of a ducal family, found that certain anomalies in the behaviour of electrons disappeared when one regarded them as waves. The observation excited the attention of the Austrian Erwin Schrödinger, who made some deft refinements and devised a handy system called wave mechanics. At almost the same time, the German physicist Werner Heisenberg came up with a competing theory called matrix mechanics. This was so mathematically complex that hardly anyone really understood it, including Heisenberg himself (“I do not even know what a matrix is,” Heisenberg despaired to a friend at one point), but it did seem to solve certain problems that Schrödinger’s waves failed to explain.

The upshot is that physics had two theories, based on conflicting premises, that produced the same results. It was an impossible situation.

Finally, in 1926, Heisenberg came up with a celebrated compromise, producing a new discipline that came to be known as quantum mechanics. At the heart of it was Heisenberg’s Uncertainty Principle, which states that the electron is a particle but a particle that can be described in terms of waves. The uncertainty around which the theory is built is that we can know the path an electron takes as it moves through a space or we can know where it is at a given instant, but we cannot know both.3 Any attempt to measure one will unavoidably disturb the other. This isn’t a matter of simply needing more precise instruments; it is an immutable property of the universe.
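In modern textbook form the trade-off is stated for position and momentum rather than for path and location, but the spirit is the same: the more precisely one is pinned down, the less precisely the other can be known. The standard statement is

$$
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},
$$

where \(\Delta x\) is the uncertainty in the electron’s position, \(\Delta p\) the uncertainty in its momentum, and \(\hbar\) is Planck’s constant divided by 2π.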

What this means in practice is that you can never predict where an electron will be at any given moment. You can only list its probability of being there. In a sense, as Dennis Overbye has put it, an electron doesn’t exist until it is observed. Or, put slightly differently, until it is observed an electron must be regarded as being “at once everywhere and nowhere.”

If this seems confusing, you may take some comfort in knowing that it was confusing to physicists, too. Overbye notes: “Bohr once commented that a person who wasn’t outraged on first hearing about quantum theory didn’t understand what had been said.” Heisenberg, when asked how one could envision an atom, replied: “Don’t try.”

Left: Werner Heisenberg, whose Uncertainty Principle became the heart of the new discipline of quantum mechanics. (credit 9.10a) Right: Erwin Schrödinger, who published a series of papers in 1926 that founded the field of quantum wave mechanics. His famous thought experiment linked quantum theory with philosophy by asserting that two possible outcomes of any situation will simultaneously exist until the actual outcome is observed (credit 9.10b)

So the atom turned out to be quite unlike the image that most people had created. The electron doesn’t fly around the nucleus like a planet around its sun, but instead takes on the more amorphous aspect of a cloud. The “shell” of an atom isn’t some hard, shiny casing, as illustrations sometimes encourage us to suppose, but simply the outermost of these fuzzy electron clouds. The cloud itself is essentially just a zone of statistical probability marking the area beyond which the electron only very seldom strays. Thus an atom, if you could see it, would look more like a very fuzzy tennis ball than a hard-edged metallic sphere (but not much like either or, indeed, like anything you’ve ever seen; we are, after all, dealing here with a world very different from the one we see around us).

It seemed as if there was no end of strangeness. For the first time, as James Trefil has put it, scientists had encountered “an area of the universe that our brains just aren’t wired to understand.” Or, as Feynman expressed it, “things on a small scale behave nothing like things on a large scale.” As physicists delved deeper, they realized they had found a world not only where electrons could jump from one orbit to another without travelling across any intervening space, but where matter could pop into existence from nothing at all—“provided,” in the words of Alan Lightman of MIT, “it disappears again with sufficient haste.”

Perhaps the most arresting of quantum improbabilities is the idea, now known as entanglement, that certain pairs of subatomic particles, even when separated by the most considerable distances, can each instantly “know” what the other is doing. Particles have a quality known as spin and, according to quantum theory, the moment you determine the spin of one particle, its sister particle, no matter how far away, will immediately begin spinning in the opposite direction and at the same rate.

It is as if, in the words of the science writer Lawrence Joseph, you had two identical pool balls, one in Ohio and the other in Fiji, and that the instant you sent one spinning the other would immediately spin in a contrary direction at precisely the same speed. Remarkably, the phenomenon was proved in 1997 when physicists at the University of Geneva sent photons seven miles in opposite directions and demonstrated that interfering with one provoked an instantaneous response in the other.

Things reached such a pitch that at one conference Bohr remarked of a new theory that the question was not whether it was crazy, but whether it was crazy enough. To illustrate the non-intuitive nature of the quantum world, Schrödinger offered a famous thought experiment in which a hypothetical cat was placed in a box with one atom of a radioactive substance attached to a vial of hydrocyanic acid. If the atom decayed within an hour, it would trigger a mechanism that would break the vial and poison the cat. If not, the cat would live. But we could not know which was the case, so there was no choice, scientifically, but to regard the cat as 100 per cent alive and 100 per cent dead at the same time. This means, as Stephen Hawking has observed with a touch of understandable excitement, that one cannot “predict future events exactly if one cannot even measure the present state of the universe precisely!”

(credit 9.11)

Because of its oddities, many physicists disliked quantum theory, or at least certain aspects of it, and none more so than Einstein. This was more than a little ironic since it was he, in his annus mirabilis of 1905, who had so persuasively explained how photons of light could sometimes behave like particles and sometimes like waves—the notion at the very heart of the new physics. “Quantum theory is very worthy of regard,” he observed politely, but he really didn’t like it. “God doesn’t play dice,” he said.4

Einstein couldn’t bear the notion that God could create a universe in which some things were for ever unknowable. Moreover, the idea of action at a distance—that one particle could instantaneously influence another trillions of miles away—was a stark violation of the Special Theory of Relativity. Nothing could outrace the speed of light and yet here were physicists insisting that, somehow, at the subatomic level, information could. (No-one, incidentally, has ever explained how the particles achieve this feat. Scientists have dealt with this problem, according to the physicist Yakir Aharonov, “by not thinking about it.”)

Above all, there was the problem that quantum physics introduced a level of untidiness that hadn’t previously existed. Suddenly you needed two sets of laws to explain the behaviour of the universe—quantum theory for the world of the very small and relativity for the larger universe beyond. The gravity of relativity theory was brilliant at explaining why planets orbited suns or why galaxies tended to cluster, but turned out to have no influence at all at the particle level. To explain what kept atoms together, other forces were needed and in the 1930s two were discovered: the strong nuclear force and the weak nuclear force. The strong force binds atomic nuclei together; it’s what allows protons to bed down together in the nucleus. The weak force engages in more miscellaneous tasks, mostly to do with controlling the rates of certain sorts of radioactive decay.

The weak nuclear force, despite its name, is ten billion billion billion times stronger than gravity, and the strong nuclear force is more powerful still—vastly so, in fact—but their influence extends to only the tiniest distances. The grip of the strong force reaches out only to about one-hundred-thousandth of the diameter of an atom. That’s why the nuclei of atoms are so compacted and dense, and why elements with big, crowded nuclei tend to be so unstable: the strong force just can’t hold on to all the protons.

The upshot of all this is that physics ended up with two bodies of laws—one for the world of the very small, one for the universe at large—leading quite separate lives. Einstein disliked that, too. He devoted the rest of his life to searching for a way to tie up these loose ends by finding a Grand Unified Theory, and always failed. From time to time he thought he had it, but it always unravelled on him in the end. As time passed he became increasingly marginalized and even a little pitied. Almost without exception, wrote Snow, “his colleagues thought, and still think, that he wasted the second half of his life.”

Elsewhere, however, real progress was being made. By the mid-1940s scientists had reached a point where they understood the atom at an extremely profound level—as they all too effectively demonstrated in August 1945 by exploding a pair of atomic bombs over Japan.

By this point physicists could be excused for thinking that they had just about conquered the atom. In fact, everything in particle physics was about to get a whole lot more complicated. But before we take up that slightly exhausting story, we must bring another strand of our history up to date by considering an important and salutary tale of avarice, deceit, bad science, several needless deaths and the final determination of the age of the Earth.

The chillingly familiar shape of a mushroom cloud rises above Bikini Atoll in the South Pacific in 1954 during one of the American military’s first tests of hydrogen bombs. The blast shown here had a force of 11 megatons, or more than twice the destructive impact of all the explosives used by all sides in the Second World War (credit 9.12)

1 The name comes from the same Cavendishes who produced Henry. This one was William Cavendish, seventh Duke of Devonshire, who was a gifted mathematician and steel baron in Victorian England. In 1870 he gave the university £6,300 to build an experimental laboratory.

2 Geiger would also later become a loyal Nazi, unhesitatingly betraying Jewish colleagues, including many who had helped him.

3 There is a little uncertainty about the use of the word uncertainty in regard to Heisenberg’s principle. Michael Frayn, in an afterword to his play Copenhagen, notes that several words in German—Unsicherheit, Unschärfe, Ungenauigkeit and Unbestimmtheit—have been used by various translators, but that none quite equates to the English uncertainty. Frayn suggests that indeterminacy would be a better word for the principle and indeterminability would be better still. Heisenberg himself generally used Unbestimmtheit.

4 Or at least, that is how it is nearly always rendered. The actual quote was: “It seems hard to sneak a look at God’s cards. But that He plays dice and uses “telepathic” methods…is something that I cannot believe for a single moment.”

Morning rush hour in Mexico City shows a city choking under a haze of pollution and smog (credit 10.1)