SCIENCE AND NATURE

By 1950 modern industry was already dependent on science and scientists, directly or indirectly, obviously or not, acknowledged or not. Moreover, the transformation of fundamental science into end products was by then often very rapid, and has continued to accelerate in most areas of technology. The widespread adoption of the motor car, once the principle of the internal combustion engine had been grasped, took about half a century; in recent times, the microchip made hand-held computers possible in about ten years. Technological progress is still the only way in which large numbers of people become aware of the importance of science. Yet there have been important changes in the way in which it has come to shape their lives. In the nineteenth century, the practical results of science were still often by-products of scientific curiosity. Sometimes they were even accidental. By 1900 a change was underway. Some scientists had seen that consciously directed and focused research was sensible. Twenty years later, large industrial companies were beginning to see research as a proper call on their investment, albeit a small one. Some industrial research departments were in the end to grow into enormous establishments in their own right as petrochemicals, plastics, electronics and biochemical medicine made their appearance. Nowadays, the ordinary citizen of a developed country cannot lead a life that does not rely on applied science. This all-pervasiveness, coupled with the impressiveness of its most spectacular achievements, was one of the reasons for the ever-growing recognition given to science. Money is one yardstick.

The Cavendish Laboratory at Cambridge, for example, in which some of the fundamental experiments of nuclear physics were carried out before 1914, had then a grant from the university of about £300 a year - roughly $1500 at rates then current. When, during the war of 1939-45, the British and Americans decided that a major effort had to be mounted to produce nuclear weapons, the resulting ‘Manhattan Project’ (as it was called) is estimated to have cost as much as all the scientific research previously conducted by mankind from the beginnings of recorded time.

Such huge sums - and there were to be even larger bills to meet in the post-war world - mark another momentous change, the new importance of science to government. After being for centuries the object of only occasional patronage by the state, it now became a major political concern. Only governments could provide resources on the scale needed for some of the things done since 1945. One benefit they usually sought was better weapons, which explained much of the huge scientific investment of the United States and the Soviet Union. The increasing interest and participation of governments has not, on the other hand, meant that science has grown more national; indeed, the reverse is true. The tradition of international communication among scientists is one of their most splendid inheritances from the first great age of science in the seventeenth century, but even without it, science would jump national frontiers for purely theoretical and technical reasons.

Once again, the historical context is complex and deep. Already before 1914 it was increasingly clear that boundaries between the individual sciences, some of them intelligible and usefully distinct fields of study since the seventeenth century, were tending to blur and then to disappear. The full implications of this have only begun to appear very lately, however. For all the achievements of the great chemists and biologists of the eighteenth and nineteenth centuries, it was the physicists who did most to change the scientific map of the twentieth century. James Clerk Maxwell, the first professor of experimental physics at Cambridge, published in the 1870s the work in electromagnetism which first broke effectively into fields and problems left untouched by Newtonian physics. Maxwell’s theoretical work and its experimental investigation profoundly affected the accepted view that the universe obeyed natural, regular and discoverable laws of a somewhat mechanical kind and that it consisted essentially of indestructible matter in various combinations and arrangements. Into this picture had now to be fitted the newly discovered electromagnetic fields, whose technological possibilities quickly fascinated laymen and scientists alike.

The crucial work that followed and that founded modern physical theory was done between 1895 and 1914: by Röntgen, who discovered X-rays; Becquerel, who discovered radioactivity; Thomson, who identified the electron; the Curies, who isolated radium; and Rutherford, who investigated the structure of the atom. They made it possible to see the physical world in a new way. Instead of lumps of matter, the universe began to look more like an aggregate of atoms, which were tiny solar systems of particles held together by electrical forces in different arrangements. These particles seemed to behave in a way that blurred the distinction between matter and electromagnetic fields. Moreover, such arrangements of particles were not fixed, for in nature one arrangement might give way to another and thus elements could change into other elements. Rutherford’s work, in particular, was decisive, for he established that atoms could be ‘split’ because of their structure as a system of particles. This meant that matter, even at this fundamental level, could be manipulated. Two such particles were soon identified: the proton and the electron; others were not isolated until after 1932, when Chadwick discovered the neutron. The scientific world now had an experimentally validated picture of the atom’s structure as a system of particles. But as late as 1935 Rutherford said that nuclear physics would have no practical implications - and no one rushed to contradict him.

What this radically important experimental work did not at once do was supply a new theoretical framework to replace the Newtonian system. This only came with a long revolution in theory, beginning in the last years of the nineteenth century and culminating in the 1920s. It was focused on two different sets of problems, which gave rise to the work designated by the terms relativity and quantum theory. The pioneers were Max Planck and Albert Einstein. By 1905 they had provided experimental and mathematical demonstration that the Newtonian laws of motion were an inadequate framework for explanation of a fact no longer to be contested: that energy transactions in the material world took place not in an even flow but in discrete jumps - quanta, as they came to be termed. Planck showed that radiant heat (from, for example, the sun) was not, as Newtonian physics required, emitted continuously; he argued that this was true of all energy transactions. Einstein argued that light was propagated not continuously but in particles. Though much important work was to be done in the next twenty or so years, Planck’s contribution had the most profound effect and it was again unsettling. Newton’s views had been found wanting, but there was nothing to put in their place.
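The size of these ‘jumps’ is not arbitrary. In the standard modern notation, the energy of a single quantum is proportional to the frequency of the radiation concerned:

\[
E = h\nu
\]

where \(h\) is Planck’s constant, about \(6.6\times10^{-34}\) joule-seconds, and \(\nu\) is the frequency; energy is emitted or absorbed only in whole multiples of this quantity.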

Meanwhile, after his work on quanta, Einstein had published in 1905 the work for which he was to be most widely, if uncomprehendingly, celebrated, his statement of the theory of relativity. This was essentially a demonstration that the traditional distinctions of space and time, and mass and energy, could not be consistently maintained. Instead of Newton’s three-dimensional physics, he directed men’s attention to a ‘space-time continuum’ in which the interplay of space, time and motion could be understood. This was soon to be corroborated by astronomical observation of facts for which Newtonian cosmology could not properly account, but which could find a place in Einstein’s theory. One strange and unanticipated consequence of the work on which relativity theory was based was his demonstration of the relations of mass and energy, which he formulated as E = mc², where E is energy, m is mass and c is the constant speed of light. The importance and accuracy of this theoretical formulation was not to become clear until much more nuclear physics had been done. It would then be apparent that the relationships observed when mass energy was converted into heat energy in the breaking up of nuclei also corresponded to his formula.
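The scale of this equivalence is easily illustrated in modern units: since c is roughly \(3\times10^{8}\) metres per second,

\[
E = mc^{2} \;\approx\; (1\,\mathrm{kg}) \times (3\times10^{8}\,\mathrm{m/s})^{2} \;=\; 9\times10^{16}\,\mathrm{joules},
\]

so the complete conversion of a single kilogram of matter would release about \(9\times10^{16}\) joules; even the minute fractions of mass lost when nuclei break up therefore yield enormous energies.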

While these advances were absorbed, attempts continued to rewrite physics, but they did not get far until a major theoretical breakthrough in 1926 finally provided a mathematical framework for Planck’s observations and, indeed, for nuclear physics. So sweeping was the achievement of Schrödinger and Heisenberg, the two mathematicians mainly responsible, that it seemed for a time as if quantum mechanics might be of virtually limitless explanatory power in the sciences. The behaviour of particles in the atom observed by Rutherford and Bohr could now be accounted for. Further development of their work led to predictions of the existence of new nuclear particles, notably the positron, which was duly identified in the 1930s. The discovery of new particles continued. Quantum mechanics seemed to have inaugurated a new age of physics.

By mid-century much more had disappeared in science than just a once-accepted set of general laws (and in any case it remained true that, for most everyday purposes, Newtonian physics was still all that was needed). In physics, from which it had spread to other sciences, the whole notion of a general law was being replaced by the concept of statistical probability as the best that could be hoped for. The idea, as well as the content, of science was changing. Furthermore, the boundaries between sciences collapsed under the onrush of new knowledge made accessible by new theories and instrumentation. Any one of the great traditional divisions of science was soon beyond the grasp of a single mind. The conflations involved in importing physical theory into neurology or mathematics into biology put further barriers in the way of attaining that synthesis of knowledge that had been the dream of the nineteenth century, just as the rate of acquisition of new knowledge (some in such quantities that it could only be handled by the newly available computers) became faster than ever. Such considerations did nothing to diminish either the prestige of the scientists or the faith that they were mankind’s best hope for the better management of its future. Doubts, when they came, arose from other sources than their inability to generate an overarching theory as intelligible to lay understanding as Newton’s had been. Meanwhile, the flow of specific advances in the sciences continued.

In a measure, the baton passed after 1945 from the physical to the biological or ‘life’ sciences. Their current success and promise have, once again, deep roots. The seventeenth-century invention of the microscope had first revealed the organization of tissue into discrete units called cells. In the nineteenth century investigators already understood that cells could divide and that they developed individually. Cell theory, widely accepted by 1900, suggested that individual cells, being alive themselves, provided a good approach to the study of life, and the application of chemistry to this became one of the main avenues of biological research. Another mainline advance in nineteenth-century biological science was provided by a new discipline, genetics, the study of the inheritance by offspring of characteristics from parents. Darwin had invoked inheritance as the means of propagation of traits favoured by natural selection. The first steps towards understanding the mechanism that made this possible were those of an Austrian monk, Gregor Mendel, in the 1850s and 1860s. From a meticulous series of breeding experiments on pea plants, Mendel concluded that there existed hereditary units controlling the expression of traits passed from parents to offspring. In 1909 a Dane gave them the name ‘genes’.

Gradually the chemistry of cells became better understood and the physical reality of genes was accepted. In 1873 the presence in the cell nucleus of a substance that might embody the most fundamental determinant of all living matter was already established. Experiments then revealed a visible location for genes in chromosomes, and in the 1940s it was shown that genes controlled the chemical structure of protein, the most important constituent of cells. In 1944 the first step was taken towards identifying the specific effective agent in bringing about changes in certain bacteria, and therefore in controlling protein structure. In the 1950s it was at last identified as ‘DNA’, whose physical structure (the double helix) was established in 1953. The crucial importance of this substance (its full name is deoxyribonucleic acid) is that it is the carrier of the genetic information that determines the synthesis of protein molecules at the basis of life. The chemical mechanisms underlying the diversity of biological phenomena were at last accessible. Physiologically, and perhaps psychologically, this implied a transformation of man’s view of himself unprecedented since the diffusion of Darwinian ideas in the last century.

The identification and analysis of the structure of DNA was the most conspicuous single step towards a new manipulation of nature, the shaping of life forms. Already in 1947, the word ‘biotechnology’ had been coined. Once again, not only more scientific knowledge but also new definitions of fields of study and new applications followed. ‘Molecular biology’ and ‘genetic engineering’, like ‘biotechnology’, quickly became familiar terms. The genes of some organisms could, it was soon shown, be altered so as to give those organisms new and desirable characteristics. By manipulating their growth processes, yeast and other micro-organisms could be made to produce novel substances, too - enzymes, hormones or other chemicals. This was one of the first applications of the new science; the technology and data accumulated empirically and informally over thousands of years in making bread, beer, wine and cheese were at last to be overtaken. Genetically modified bacteria could now be made to produce new compounds. By the end of the twentieth century, three-quarters of the soya grown in the United States was the product of genetically modified seed, while agricultural producers like Canada, Argentina and Brazil were also raising huge genetically modified crops.

More dramatically, by the end of the 1980s there was underway a worldwide collaborative investigation, the Human Genome Project. Its almost unimaginably ambitious aim was the mapping of the human genetic apparatus. The position, structure and function of every human gene - of which there were said to be from 30,000 to 50,000 in every cell, each gene having up to 30,000 pairs of the four basic chemical units that form the genetic code - was to be identified. As the century closed, it was announced that the project had been completed. (Shortly afterwards, the sobering discovery was made that human beings possessed only about twice as many genes as the fruit fly - substantially fewer than had been expected.) The door had been opened to a great future for manipulation of nature at a new level - and what that might mean was already visible in a Scottish laboratory in the form of the first successfully ‘cloned’ sheep. Already, too, screening for the presence of defective genes is a reality and the replacement of some of them is possible. The social and medical implications are tremendous. At a day-to-day level, what is called DNA ‘fingerprinting’ is now a matter of routine in police work in identifying individuals from blood, saliva or semen samples.

Progress in these matters has owed much of its startling rapidity to the availability of new computer power, another instance of the acceleration of scientific advance, which both provides faster applications of new knowledge and challenges more quickly the world of settled assumptions and landmarks with new ideas that laymen must take into account.

Yet it remains as hard as ever to see what such challenges imply or may mean. For all the huge recent advances in the life sciences, it is doubtful that even their approximate importance is sensed by more than tiny minorities.
