Doctor has just returned most enthusiastic and confident that the little boy is as husky as his big brother. The light in his eyes discernible from here to High Hold and I could have heard his screams from here to my farm.

Coded message describing the successful Trinity Test, from George L. Harrison to US Secretary of War Henry L. Stimson, 18 July 1945
The footage is black and white, and silent, but it still has the power to shock: the sudden violent flash of light, so bright that for a second or two the horizon is invisible; the massive pyrocumulus cloud rising up over the arid valley; the way the night sky seems to quiver and throb as the light from the explosion fades. ‘Mushroom cloud’ is the noun phrase of choice, but that scarcely does justice to the scale of the thing, let alone the immensity of the event. Both figuratively and literally, the Trinity Test was earth-shattering.
It was, perhaps, to shield themselves from the existential implications of their work that the scientists of the Manhattan Project nicknamed the first plutonium bomb The Gadget. To place such a device in the same category of objects as, say, an electric can-opener is to rob it of its power to harm; it’s the psychological equivalent of donning a protective mask and gloves before handling hazardous material. But whatever coping mechanisms were in place on that morning in 1945, they must have taken a battering as the blast ripped through the desert air, turning the surrounding sand to glass and sending a cloud of incandescent gas some 12 kilometres into the sky. What did they expect, the scientists at base camp, as they lay down in their shallow trenches in preparation for the Trinity Test? The possibility had occurred to some of them that the explosion would ignite the atmosphere. No doubt a prayer or two was muttered in the forty seconds it took for the sound of the ‘atom bomb’ to reach their ears.
It’s significant that the Trinity Test, conducted seventy-five years ago this month, is often mooted as the starting point of the Anthropocene, or Age of Humans, an unofficial geological designation that recognises the decisive influence humanity has had on planet Earth. The suggestion makes scientific and aesthetic sense. Scientific sense because radioactive fallout from the testing and use of nuclear weaponry now mingles with particles of plastic and concrete, soot from power stations, chemicals from fertilisers, and trillions of animal bones as evidence of that influence. And aesthetic sense because the case for a new epoch is infused with a growing feeling of alarm. The events in New Mexico ushered in a period of existential anxiety. Today we stand in the shadow of a future in which rising sea levels, coastal flooding, devastating wildfire seasons, pollution, the spread of infectious diseases, and disruption to food and water supplies will transform our world to such a degree that even the recent Australian bushfires will seem like the lull before the storm. Our rise to biospheric dominance is inseparable from our talent for destruction. The mushroom cloud is the symbol of that.
But there is an even deeper sense in which the Trinity Test marks a rupture with the past – one that many of Arena’s editors and regular contributors have written about extensively in the last four decades. For the team at Los Alamos was engaged in something very different from its scientific forebears. For hundreds of years the thrust of science had been towards an understanding of nature, while the thrust of technology, which took its lead from science, was towards nature’s conquest and utilisation. By contrast, the Manhattan Project scientists moved beyond conquest to reconstitution. In his seminal essay ‘From Here to Eternity’, Arena’s founder Geoff Sharp wrote of the need to see nuclear power as central to the emergence of an ethos of ‘transformation’ in science and technology, and of the related need to ‘acknowledge the significance of the break in continuity concealed by the belief that technological change is simply more of that same progress that has defined modernity’. Nuclear power is not like wood or coal—a given attribute of the natural world—and even Einstein, who theorised the equivalence of mass and energy in his iconic equation, did not believe that it was a practical possibility. Wrenching that equation from the theoretical realm, the Los Alamos scientists proved him wrong and, in so doing, moved scientific endeavour from understanding to authorship. As William Laurence, the only journalist present at the test, intuited, this represented a radical break. The blast, he wrote, was ‘the first fire ever made on Earth that did not have its origin in the Sun’.
No doubt it’s for this reason that descriptions of the test were frequently couched in religious language. For Laurence, the explosion was ‘the first cry of a newborn world’, while for Major General Thomas Farrell, who observed the test alongside the Manhattan Project’s chief scientist J. Robert Oppenheimer, the ‘sustained, awesome roar…made us feel that we puny things were blasphemous to dare tamper with the forces heretofore reserved to the Almighty’. ‘We knew the world would not be the same’, said Oppenheimer himself in 1965. ‘I remembered the line from the Hindu scripture the Bhagavad-Gita: Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, “Now I am become Death, the destroyer of worlds”.’ With the detonation of the first atomic bomb—at Jornada del Muerto (Journey of the Dead)—science attained the power of a god.
It is this that makes the Trinity Test so relevant to the contemporary world. For the powers evinced in the New Mexico desert three quarters of a century ago raised the curtain on a new era of ‘techno-science’, in which nature was taken as a thing to be remade and not merely ‘harnessed’ or ‘tamed’ or ‘conquered’. From the ‘editing’ of DNA in agriculture and medicine to the suggestion that machine intelligence may become more powerful than the ‘intelligence’ of all human beings combined, we have entered an era in which science and technology have the power to rewrite the book of nature, and to renegotiate the fundamental terms of our existence. Such technologies are properly Promethean, in the sense that they unlock (or unleash) new powers, and with them radical new potentialities: the prospect of a world without work, for example, or of a social life without physical presence, or even of a life without death. The bright young things of Silicon Valley, with their dreams of direct democracy on Mars and digital immortality, are often difficult to take seriously. But their hubris is only the gaudy version of a broader cultural and political belief in the power of science and technology to edit, alter and override the very stuff from which our world is made—in other words, to ‘play God’.
Of course, humanity has at least as much to fear from the rejection of science as it does from its advance. There is no doubt, for example, that climate-change ‘scepticism’ has seriously undermined international efforts to lower greenhouse-gas emissions. But it is important to recognise nonetheless that the techno-sciences carry dangers of their own, and that their spectacular rise to prominence under the rubrics of ‘innovation’ and ‘progress’ has often been at the expense of the planet and the majority of its human inhabitants, not to mention almost all of its non-human inhabitants. Even determined action on climate change might conceivably do more harm than good. Attempts to reengineer the climate through techniques such as ‘solar radiation management’ (SRM), which would involve pumping large quantities of sulphur particles into the stratosphere in an effort to deflect radiation from the sun, could have devastating consequences, from drought in Africa to monsoonal changes to increased acidification of the oceans. Moreover, and crucially, it would fundamentally change the relationship between humanity and the planet, bringing in its wake a perilous new sense of power and possibility. Those who are against SRM and its analogues will often talk about what could go wrong, but surely we should also think about the consequences of such a thing going right. Are we happy to live on a ‘designer planet’? If so, in whose interests should it be designed? And what kinds of moral hazard might emerge as a consequence of such an enterprise?
What goes for so-called ‘geo-engineering’ goes for other technologies as well. Take biotechnology, for example. The discovery of the double-helix structure of DNA in 1953—biochemistry’s ‘atom-splitting’ moment—has transformed our understanding of genetic inheritance, and revolutionised agriculture, forensic science and medicine. But the development of new capacities—from the use of ‘test-tube babies’ in IVF to the artificial cloning of organisms to the ‘editing’ of genomes using CRISPR technology—also raises ethical and moral issues about the role of science in human affairs, as well as political questions about who or what should own the techniques or products derived from that science. Should private companies be allowed to develop and patent new seed and livestock varieties? If so, should they also be permitted to enhance or augment or even design human beings? In 2018 the young Chinese scientist He Jiankui announced that he’d created the first genetically edited human babies, declaring ‘Society will decide what to do next’. But surely the take-up of such techniques will reflect the existing inequalities within and between societies, as has happened in the case of the trade in live organs—a trade whose routes invariably run from the impoverished South to the more affluent North.
More broadly, we should ask how new technologies change our sense of what life is, and the purposes for which it is lived. Perhaps the most controversial technique to have emerged from the discovery of the double helix – one discussed many times in the pages of Arena, by Simon Cooper, Kate Cregan and others – is the use of stem cells to engineer replacement tissues for transplant into humans, a technique that often involves the use of undifferentiated cells harvested from aborted foetuses or from embryos grown in laboratories. One doesn’t need to be religious to see that this process marks a fundamental shift in humanity’s view of what human life is, and thus a shift, or a potential shift, in how human beings view each other. What might be the effects, then, of such a technique becoming more widespread? Could an idea of human life as something ‘special’ survive such a transformative change? Or will we come to see each other as essentially no different from other materials that can be grown, augmented and modified?
A comparable set of questions attends the emergence of computer and information technologies, a process catalysed by the rapid development of microprocessors in the 1970s, as well as by the commercialisation of the World Wide Web in the 1990s. Following the rough trajectory of ‘Moore’s Law’—the observation that the number of transistors on a microchip, and thus, roughly, the processing power of computers, doubles about every two years—the personal computer and internet technology (now combined in the form of the smartphone) have transformed not only the relationship between human beings and information but also the relationship of human beings to each other. As the recent pandemic has reminded us, human beings are social creatures whose sociality evolved in conditions of presence. But infotech makes possible a sociality based on absence, such that we are now able to move through the world enclosed in our own atomised spaces. Increasingly intimate with our PCs and phones, in thrall to social-media platforms, we have social lives that are privatised, ephemeral and performative in ways that may engender intolerance and other antisocial behaviours. We talk about the ‘filter bubble’ in the context of political discourse to describe the tendency to privilege information and opinions that support our own world-pictures. But the metaphor of the bubble can be applied more broadly, implying as it does both delicacy and distortion. The subjectivities of a social animal predisposed to see a human face in a random distribution of wood knots (or, indeed, a mushroom cloud) do not remain unchanged in the event that their social basis changes its character. Cut off from the physical presence of others, the self becomes fragile, defensive and anxious. Connectivity engenders disconnection; and disconnection makes us miserable.
Again, and as with biotechnologies, the increasing dominance of the algorithm invites us to think in different ways about the nature of the human animal. One fear common to much science fiction is that humanity will one day create thinking machines that develop anti-human behaviours: the HAL 9000 computer in 2001: A Space Odyssey and Skynet in The Terminator are different versions of this anxiety. But a more immediate possibility is that human beings come to see themselves as mere flesh-and-blood automata—elaborate systems to be augmented or improved. Implicit in much discussion about artificial intelligence is a view of human intelligence as in some sense machine-like or algorithmic. For Yuval Noah Harari, for example, there is no essential difference between a machine that makes a cup of tea and the person who, by pressing the relevant buttons, sets the tea-making process in motion; both are computers, albeit computers fashioned from radically different stuff. Thus it becomes permissible to think of human beings as meat algorithms whose minds are in some sense detachable from their bodies. Ray Kurzweil, a director of engineering at Google, looks forward to the day when human beings will ‘upload’ their minds to powerful computers. And while such ‘transhumanism’ will strike many as perverse, there is a sense in which we already regard the human brain as a wet computer that can be reconfigured at will, whether through mood-altering pharmaceuticals or so-called ‘deep-brain stimulation’ or even psychological therapies that promise to ‘rewire’ the mind. It is less the danger of algorithmic machines attaining full consciousness that should worry us than the social and political implications of regarding ourselves as no different in kind from algorithmic machines.
In so many ways, then, humanity stands at a crucial point in its development—a point, indeed, at which its development is in its own, uniquely nimble hands. The techno-sciences have made us masters of our own destiny, or made an elite few the masters of such, and so now we must evolve a capacity for reflection equal to that mastery. How will augmented reality and the creation of anthropomorphic robots affect our sexuality, already half in flight from ‘meatspace’ as a consequence of ubiquitous pornography? How will the use of autonomous weapons transform a police and military ethos still notionally attached to personal sacrifice and the cultivation of ‘hearts and minds’? Does the widespread acceptance of antidepressants signal a new techno-political dispensation—one in which it is the individual and not the society that is to be made ‘better’? These are not questions about The Future. They are questions about the kind of creatures we are.
On the morning of 6 August 1945, less than a month after the Trinity Test in New Mexico, three B-29s appeared in the sky above the Japanese city of Hiroshima. The city’s residents paid them little attention: the sight of US reconnaissance planes was not unusual at this stage of the war. Mindful of the firestorms that had ripped through Tokyo and other cities as a result of US aerial attacks, the authorities had set some eight thousand schoolchildren to work preparing firebreaks. No siren sounded, and they returned to work. Only a few of them saw a large parachute unfurl beneath one of the planes as it turned around and pointed its propellers away from the city.
Debate still rages about President Harry S. Truman’s motivation for dropping the atomic bomb on Hiroshima and (three days later) Nagasaki. But the argument that it was militarily necessary has not fared well historically. Even many of the US scientists who accepted the case for the Hiroshima bombing—that such an act was necessary in order to save American lives and demonstrate to the Japanese that further resistance would be suicidal—could not forgive the US president for ordering the second attack. Many historians now take the view that Truman was making a longer-term calculation, attempting to demonstrate the power and ruthlessness of the United States to the Soviet Union, which was poised to declare war on Japan and was looking to gain a foothold in the East. Others stress the role of anti-Japanese racism or, more broadly, the moral degeneration that necessarily occurs in war.
Frankly, we will never know Truman’s precise rationale. But what we can say is that this transformative event was the confluence of two kinds of power: the power of the bombs themselves, which was like no other power on Earth, and the power to deploy them. As thermal energy from the massive blast—several million degrees centigrade at its hottest point—travelled outwards at the speed of light, turning human beings within a half-mile radius into instant carbon statues of themselves, the human species was itself transformed, psychologically and politically. It became the only species on the planet with the ability to end all life on it—potentially, the destroyer of worlds.
It is clear that such Promethean capacities are now a general characteristic of humankind, and that alongside the existential threats of anthropogenic climate change and thermonuclear weaponry other threats are taking shape that deserve much more prominence than they are given. As the global situation grows ever more volatile, it is necessary to name those threats, and to frame the overarching one of which they are a part: that in seeking to transform itself, humanity ceases to be human at all.