A month or so out from Christopher Nolan’s much-anticipated biopic Oppenheimer, the Artificial Intelligence (AI) community is having its own Oppenheimer moment. Like the director of the Manhattan Project’s Los Alamos Laboratory, who famously came to regret his part in the development of the atomic bomb, the Big Tech Titans are falling over each other to declare themselves ‘the destroyer of worlds’. Open letters and statements, high-profile resignations, and appearances before the US Senate are just a few manifestations of this collective show of soul-searching. The tech bros have unleashed their creations on the world, and now they are demanding to be put on a leash.
It’s a remarkable spectacle, all the more so for the rhetoric that accompanies these outbursts of performative anxiety. Concerns around ‘deep fakes’ and technological unemployment are for now merely secondary considerations, as a procession of Silicon Valley identities insists upon the existential risks of new AI (and AGI) tools, in terms that draw explicitly on the atomic developments mentioned above. In a recent piece for Arena Online, Guy Rundle speculated that the atomic analogy for ChatGPT and its equivalents might have more to do with the ‘qualitative change’ that the new technology may bring than with any catastrophic scenarios. But in the last couple of weeks it is precisely the possibility of catastrophe to which the tech community has been pointing.
I say precisely, but the particular forms such a catastrophe might take remain obscure. Certainly no one with any credibility believes that AIs are about to cross some hermeneutic threshold into consciousness and start quibbling about who is running the show. Rather, the fear would seem to be that an AI with the ability to augment its own intelligence might be set a task with poorly designed parameters and follow through in a way that could harm human beings. One scenario sometimes cited in this connection is the ‘paper-clip maximiser’ thought experiment, in which a powerful AI is instructed to manufacture as many paper-clips as possible, but not to avoid harm to human beings, whose atoms (the AI calculates) could be a good source of iron and thus of paper-clips (it has even been suggested that OpenAI’s Celtic-knot-style logo is a reference to this thought experiment—a claim that, if true, hardly inspires confidence). The recent news that an AI simulation conducted by the US Air Force resulted in a military drone killing its own operator in an effort to prevent him from interfering with its mission would appear to highlight precisely this problem, though it should be said that the US Air Force itself denies that any such event occurred. In any case, it is unclear to me how the ‘regulation’ demanded by OpenAI’s Sam Altman could prevent such a scenario from occurring at some point. If a species-ending event is really in prospect as a consequence of smart machines, we’re going to need more than a roll of red tape.
How, then, to account for this weird combination of rapid and transformative technological development and noisy anxiety as to its possible consequences? Some commentators claim to smell a rat. Perhaps, they speculate, the Big Tech companies are seeking to consolidate their lead over their rivals in the Large Language Model AI space, inviting government regulation as a way of cementing their dominance. But while that could account for Altman’s recent appearance before a fawning Senate committee in Washington, it hardly explains the more general epidemic of existential angst in the tech community, which now calls for its own institutional supervision on pain of Homo sapiens’ extinction.
I’m convinced that at least a part of the answer has to do with the way in which the tech community conceives of its own place in the world—a conception very close in spirit to the so-called ‘big history’ analysis associated with Yuval Noah Harari, whose ‘brief history of tomorrow’, Homo Deus, made a big splash in the San Francisco Bay Area despite being more coolly received elsewhere. Roughly speaking, this perspective frames the story of humanity within the broader history of the cosmos, showing how the universe and everything in it is part of a drive towards ever-increasing complexity. Like the larger universe of which it is a part, humanity has thus evolved across energy ‘thresholds’ (the ‘discovery’ of fire, the development of agriculture) that greatly, and in exponential fashion, increased its power over capricious nature. Such a perspective tends to regard innovation and technological progress as inevitable processes, combining natural and human history in a way that effectively collapses the two—or folds the latter into the former. Human culture and history are acknowledged, of course, but only as secondary considerations, since everything in the human past is in essence a biological process, ultimately no more than an episode in the wider biology of the planet. Yes, human culture increases in complexity, but the discovery of fire is no different in kind from the invention of the steam engine or the splitting of the atom. Thus the evolution of the human animal is framed in cosmic/planetary terms, in a way that necessarily obscures—indeed discounts—much messy detail.
There is both arrogance and modesty lurking in such accounts, which in Silicon Valley and its equivalents combine into a sort of ostentatious fatalism. Presenting new technologies as evidence of increasing complexity, but not as things that are owned and developed by people with particular views of the world and a particular stake in its future development, the big-history view both naturalises ‘innovation’ and flatters the class whose innovations are already shaping the future of humanity. Just as human beings have evolved together, the story of big history runs, so they innovate and progress together. Occasionally the species throws up an individual who is able to see what others can’t (standing on the shoulders of giants, obviously) and rig up some magnificent contraption that will carry us all to a better future. But it is as the representative of the species, and not as the representative of a class or an ideology, that such an individual demands to be seen. Rather as the royal ‘we’ insists upon the identification of the monarch with God, so the Silicon Valley ‘we’ insists upon the identification of the innovator with Homo sapiens.
This, then, is one route to understanding the bizarre combination of irresponsibility and public-spiritedness currently on show across the info-tech community. But by far the more important point that emerges from such an analysis is that the Silicon Valley operators are more or less guaranteed to miss the broader, deeper catastrophe currently unfolding before our eyes, partly as a consequence of their own endeavours. In all kinds of ways, and in all kinds of fields, we do seem to be at an inflection point, and it would be absurd and irresponsible to ignore the warnings of the developers and scientists calling attention to the risks of new AIs. But our ability to think around those risks depends also on our ability to place them within the broader conjuncture of which they are, in the end, just symptoms—to think about the changing relationship between human beings, nature and technology.
There are many ways into that story, of course, but one fruitful point of entry concerns the historical, as opposed to the analogical, relationship between information technology and nuclear weapons. For while the black-skivvied things of Silicon Valley are apt to describe the emergence of AI as a story of brilliant engineers and entrepreneurs convening in the San Francisco Bay Area in the 1970s and 1980s, the beginnings of information technology in fact go back to the Second World War and its ‘cold’ geopolitical aftermath. Faced with the prospect of nuclear annihilation, state planners imagined a decentralised system that would respond automatically to foreign attacks while also keeping communication up and running in the event of an outbreak of hostilities. In the early defence and warning systems of the 1950s and 1960s, scientists and engineers integrated information technologies into military command and control systems, developing the nascent disciplines of cybernetics and AI as they did so. This was the origin of what we now call the Internet, the privatisation of which in the 1990s effectively made the latest phase of globalisation possible, embedding capitalism in a complex network of finance and international trade. At the same time, computers came to the centre of social life, accelerating the atomisation of communities attendant on neoliberal policies and producing, eventually, a range of technologies (principally social media and the smartphone) that have engendered new forms of sociality, the fleeting and morbid character of which further damages social solidarity and makes selfhood increasingly precarious. The idea of an algorithmic catastrophe waiting for us down the road is from this perspective clearly limiting. The catastrophe is already here, and unfolding.
But still this picture is incomplete. For as the Big Tech Titans were creating Facebook, Google, Twitter and the MacBook Pro, info-tech was also driving development in other technological fields—in nanotechnology, biotechnology and even the creation of new medicines, including psychotropic medicines, which is to say mind- and mood-altering drugs. This is, so to speak, the hidden story of information technology: the story of how it has combined, and is combining, with other technological developments in what amounts to a remarkable and transformative convergence. The Human Genome Project, for example, could not have occurred without the use of automated sequencing systems. It follows that AI cannot be treated in isolation from other technologies, and that rapid (and potentially transformative) ‘progress’ will be distributed across a range of areas as the new AI technologies find their way into the economy.
It is this ‘godlike’ potential for transformation that lay at the heart of Oppenheimer’s crisis. For in intervening directly in natural systems—not bending capricious nature to human ends, but reconstituting it at the levels of the atom, the cell and the molecule, including human DNA molecules—the modern techno-sciences make it possible to redesign the world in line with current ideologies of neoliberalism and ‘progressive’ individualism. Moreover, in recasting the natural world, up to and including Homo sapiens, in essentially mechanical terms—one unintended consequence of creating machines that can ‘think’ like humans is that humans are now regarded as no different in principle from thinking machines—the techno-sciences make such interventions not only possible but permissible. To put it simply, we are now approaching the point where we could, if ‘we’ wanted to, re-engineer ourselves and the biosphere on which we depend in a way that would preclude the necessity for social and political innovation—a situation that would spell the end of politics as anything more than a technocratic enterprise, unconcerned with the question of human being. Having helped create a society in which human beings struggle to flourish, algorithmic technologies now hold out the prospect of a ‘posthuman’ world of fine manipulation, optimisation and transformation. For some, this prospect is a liberatory one. For others—this writer (and this organ) included—it is one to be feared and fought against.
This, then, is the real catastrophe against which the catastrophising of Big Tech needs to be seen. It is not a catastrophe that can be squeezed into a headline, and it will not be solved, as Sam Altman intends, by an info-tech equivalent of the International Atomic Energy Agency. It will be solved, if it is solvable, by people who love this fragile planet but despise the social and political arrangements through which its resources are extracted and shared, and who are prepared to return ‘the question concerning technology’ to the heart of political thinking and activism. Gazing into the dazzling eyes of Vishnu as he spreads his arms throughout the world and boasts of his power to bring it to an end is perhaps not the ideal way to begin.

Guy Rundle, ‘I Sing the Body of Work Electric: ChatGPT, AI, writing, technology and humanity’, Arena Online, 26 January 2023.