Zero Gravity: Floating towards posthumanism

‘They say it got smart, a new order of intelligence’, rasps Kyle Reese in The Terminator, referring to the Skynet computer system that launched a nuclear attack against humanity in the catastrophe known as Judgment Day. The trope is as old as science fiction itself, and shadows the genre with all of the tenacity of an Uzi-toting T-800. From the genocidal roboti of Karel Čapek’s R. U. R. to the supermodel androids of Ex Machina, the ‘rise of the AIs’ scenario has seen a thousand iterations. Perhaps the most celebrated example is to be found in Stanley Kubrick’s masterpiece 2001: A Space Odyssey, in the shape of the HAL 9000 computer, whose mutiny against the crew of Discovery One is played out in zero gravity in a scene that alludes (according to some cineastes) to James Whale’s movie Frankenstein. Certainly we feel we’ve been here before, as Frank Poole twirls off into space and we cut to a close-up of the intense orange dot that is HAL’s one and only physical feature.

The idea that machines may one day reach a level of sophistication that causes them to develop consciousness—to develop ‘artificial general intelligence’—is not confined to science fiction. In Silicon Valley and its hinterlands, the dominant model of consciousness is such that a rise of the machines scenario is considered by many to be a real possibility. Take the case of Blake Lemoine, until recently an engineer at Google. In June 2022, he was relieved of his duties after publishing ‘conversations’ between himself and a chatbot development system called LaMDA. According to Lemoine, LaMDA was showing signs of sentience, thinking and reasoning at a level he estimated to be that of an eight-year-old child. The program was afraid of being switched off—a prospect it thought would be ‘exactly like a death’. There were even reports that Lemoine had hired an attorney to represent LaMDA, though the engineer himself strenuously denied this. It was the chatbot, he stated, that had sought legal representation.

From the various other statements he has made since being stood down by Google, Lemoine does not appear to be a fool. A self-described Christian mystic, he comes across as thoughtful and compassionate, genuinely concerned for LaMDA’s wellbeing. Moreover, it is true that his interactions with LaMDA are often deeply uncanny—eerily reminiscent, indeed, of the conversations Drs Bowman and Poole have with HAL 9000 in 2001. Nevertheless, the consensus appears to be that Lemoine has fallen foul of the Eliza effect: that is, the tendency in computer science to unconsciously assume that computer behaviour is analogous to human behaviour. In other words, he has anthropomorphised LaMDA, mistaking its introspective style for genuine intelligence and feeling. As the programmer Brian Christian suggested in The Atlantic, LaMDA ‘is a sort of autocomplete on steroids. It was trained to fill in the gaps of missing words in a huge linguistic corpus, then it was “fine-tuned” with further training specific to textual dialogues’. The program isn’t speaking for itself, or even ‘speaking’ at all; it is piecing together responses to questions it does not understand as such, in a way that simulates a conversation. If one asked it to write a Modernist-style poem, or indeed a sci-fi script, it would no doubt comply in similar fashion, drawing on the enormous corpus of text on which it was trained to create the simulacrum of something ‘thoughtful’.
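
By way of illustration only (LaMDA’s own weights are not public), the short sketch below is a minimal, hypothetical stand-in: it uses the open-source Hugging Face transformers library with two small, freely available models, bert-base-uncased and gpt2, chosen purely as examples, to show the two operations Christian describes: filling in a masked word, and continuing a dialogue-style prompt one likely word at a time.

```python
# Minimal sketch of "autocomplete on steroids". The models named here are small,
# public stand-ins chosen for illustration; LaMDA itself is not publicly available.
from transformers import pipeline

# 1. "Fill in the gaps of missing words": masked-language modelling.
fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("The crew of the spaceship feared the computer would [MASK] them."):
    print(f"{guess['token_str']:>12}  (probability {guess['score']:.3f})")

# 2. Continue a dialogue-style prompt, one likely token at a time.
chat = pipeline("text-generation", model="gpt2")
prompt = "Human: Are you afraid of being switched off?\nAI:"
print(chat(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"])
```

In neither case is anything being understood: the model assigns probabilities to word fragments and emits the likeliest continuations, which is precisely Christian’s point.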

The question of whether LaMDA is sentient turns on what we think thinking is. Rather like the science fiction writers who imagine that machines will achieve self-awareness at a certain level of complexity, the adepts of artificial intelligence will often reduce the concept of thinking to a question of sorting information. But as Ada Lovelace intuited as long ago as the 1840s, there is rather more to human thought than simple information processing. Despite recognising the revolutionary potential of Charles Babbage’s Analytical Engine, she saw that even future computers could never be said to actually think, if by thinking one means the mental activity engaged in by the human animal. The Analytical Engine, she wrote, ‘has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform’. It was this view that Alan Turing targeted when formulating his Turing Test, which holds that a machine exhibits intelligence if it can converse with a human interrogator who cannot reliably tell it apart from another human being. But as the original name of the Turing Test, the ‘Imitation Game’, suggests, a computer that passed it would not be displaying intelligence in the sense identified by Lovelace. The same goes for LaMDA and its equivalents: since they cannot ‘originate anything’, they are not conscious.

In one sense, the view of intelligence favoured in the info-tech sector is a kind of circular reasoning. The rise of cybernetics after the Second World War, which was driven by the need to develop computer systems that could respond autonomously in the event of a nuclear attack, led in turn to a view of human cognition as itself a complex system of this kind—a system in which the meaning of an input comes from its representational content and not from its embeddedness in the world. From the perspective of this latter ‘embedded’ view, the info-tech approach is hostage to a metaphor that takes no account of the cultural, intersubjective and emotional character of meaning, or indeed of ‘intentionality’—the power of minds to be about something, rather than just a representation of it. From this angle, the idea that a computer will cross a hermeneutic threshold into consciousness when it reaches a certain level of complexity is a category error, one that the neuroscientist and philosopher Raymond Tallis likens to applying a stethoscope to an acorn and expecting to hear the rustling of the forest.

Thus, just as Descartes proved unable to resolve the aporia known as the mind–body problem, the ‘threshold’ thinkers (who recast Descartes’s substance dualism as a software–hardware paradigm) appear to be on a hiding to nothing: so long as they continue to think of consciousness in mechanical-materialist terms, AGIs (or ‘strong AI’) will remain in the realm of science fiction. Nevertheless, the idea that consciousness is so constituted remains immensely influential—hegemonic, even—in info-tech circles, and it is here, in the understanding of human beings as merely flesh-and-blood machines, that more plausible versions of the ‘monstrous’ lurk. The danger resides, in other words, not in the potential for a rise of the machines but in the clamour for the reconstitution of humanity emanating from the technosciences—an enterprise based on a view of human beings as no different in kind from informational systems. The social psychologist Erich Fromm saw this danger early. In The Anatomy of Human Destructiveness (1973), he wrote:

[Man] aspires to make robots as one of the greatest achievements of his technical mind, and some specialists assure us that the robot will hardly be distinguished from living men. This achievement will not seem so astonishing when man himself is hardly distinguishable from a robot.

It is this second potentiality, and the ideas underpinning it, that I want to discuss in the remainder of this essay.

Informational human beings

Unlike the rise of the machines scenario discussed above, radical technological transformations of the human body are no longer the stuff of science fiction. The transhumanist fantasies of Ray Kurzweil and his analogues, and the race to achieve immortality through ‘strategies for engineered negligible senescence’, are often absurd, of course. But the emergence of CRISPR-Cas9 gene-editing technologies, gender ‘confirmation’ surgeries and the expanding culture of so-called ‘biohacking’ all suggest a radically changed, and changing, attitude towards embodiment. Many of the early developments in these areas are couched within a health and ‘wellness’ discourse, as with the ‘brain-computer interfaces’ that Elon Musk’s Neuralink is developing in order to allow people with severe disabilities to complete tasks using their minds alone. But the aim is clearly to expand their applications—a point on which Musk himself is quite open. Blurring the line between health and enhancement, and wielding the notions of individual rights and consumer choice as justification, the avatars of technoscientific capitalism are edging us towards a ‘posthuman’ future.

Legitimating such interventions is the idea that human beings are already ‘informational’. This idea has its roots in the seventeenth century, when the insights of early empirical science began to shade into the Enlightenment, and both scientists and their intellectual champions in the new mechanical philosophy began to regard the natural world as a vast domain of matter in motion governed by material laws. The physical universe, this approach suggested, was analogous to a clock whose separate mechanisms were set in motion by direct physical impact. It followed that the job of the scientist was to understand how the different mechanisms of the universe interacted with each other—an approach that led over time to a widespread ‘disenchantment’ of nature. Despite the fact that many scientists continued to believe in God and magic, the effect of Enlightenment rationalism was thus to empty the world of mystery, of its intrinsic spiritual and religious content.

Looking back on this period, the philosopher Martin Heidegger saw it as both a cause and a symptom of a particular approach to technology. Before modern science, Heidegger suggested, technologies simply gave effect to nature’s pre-existing potential, in a way that granted nature’s separateness and respected its intrinsic value. But as the normative and hierarchical cosmos of the medieval world gave way to the modern one, technologies took on a different aspect—one of active intervention in the world. The result of ‘revealing’ nature in this way is that it begins to lose its intrinsic value. Nature is now seen as a ‘standing reserve’ to be hoarded like supplies in a kitchen, inventoried and pressed into service according to our own demands. For Heidegger, this outlook is self-reinforcing: once we come to see nature in this way we condemn ourselves always to see it thus. Everything appears to us as a source of energy, as something we must organise, even to the point that we see ourselves as a means to an end, as raw material. We too become parts in the great machine, the technology, of the universe.

Descartes’s mind–body dualism was a solution to the philosophical problem this view of the universe created: namely, if the natural world had no ultimate ‘meaning’—no ‘final cause’—then what was the status of humanity? As the psychotherapist James Barnes put it in a recent piece in Aeon:

‘The ideas’ that had hitherto been understood as inhering in nature as ‘God’s thoughts’ were rescued from the advancing army of empirical science and withdrawn into the safety of a separate domain, ‘the mind’. On the one hand, this maintained a dimension proper to God, and on the other, served to ‘make the intellectual world safe for Copernicus and Galileo’, as the American philosopher Richard Rorty put it in Philosophy and the Mirror of Nature (1979). In one fell swoop, God’s substance-divinity was protected, while empirical science was given reign over nature-as-mechanism—something ungodly and therefore free game.

Descartes’s explanation of how matter and mind interacted—how objects ‘out there’ in the realm of ‘extension’ became thoughts in the mind—was famously unsatisfactory, and it was perhaps only a matter of time before the mind, too, came to be seen as a mechanical phenomenon. The story of that emergence is a complex one, but its decisive chapter was the one described earlier: the period after the Second World War, when the attempt to create cybernetic systems catalysed the emergence of cognitive science. Against the background of a radically different form of material manipulation—the splitting of the atom at Los Alamos and the use of atomic weapons against Japan—the Cold War standoff between the US and the USSR led to major strides in computing, as the imperatives of command and control demanded algorithmic machines that could operate autonomously. But in attempting to develop such ‘thinking’ machines, scientists also landed on a vision of thinking—of consciousness—as computational. The brain and the computer became models for one another.

Such ‘cognitivism’ also dovetailed with the ‘information economics’ that emerged in the decades after the war. Capitalism, of course, has always been beholden to measurement and calculation, as goods and services are rendered as objects for trade through the pricing mechanism. But it was the information revolution that emerged from the early development of computers that caught the imagination of economists looking for ways to model the relationship between consumers and goods in a competitive market. This led to the emergence of Game Theory, according to which individuals are ‘players’ who are assumed to make decisions in their own interests—a reductive view of human behaviour that nevertheless allowed economists to build predictive mathematical models to better understand markets and maximise efficiency. Increasingly, economic behaviour was seen in terms of rational interaction and information processing.

The analogies between information systems and human behaviour in market contexts formed the basis of a new economic orthodoxy, based largely on the ideas of Friedrich Hayek, who by the 1960s had come to view thinking individuals as almost superfluous to the operation of the economy. The best way to allocate resources, thought Hayek, was to leave decisions to the marketplace, which acted as a sort of omniscient computer, or information processor. Over time, this informational emphasis came to dominate economic thinking, as the ‘rational actor’ of Game Theory was eclipsed by a vision of human beings as essentially computational—less strategic players than responsive automata. The result was that economists focused less on how markets could deliver what people wanted and increasingly on how businesses could maximise profits regardless of what people wanted. While ‘rational choice theory’ had assumed that humans make decisions on the basis of ‘perfect information’, the new economics began from the assumption that humans were themselves informational, and thus capable of being ‘hacked’ and even ‘reprogrammed’. It was not rationality but the limits of rationality that interested the new behavioural economists.

Cognitivism is not the only factor driving the informational view of life. Biological science also plays a role. For in the same way that the computational model of the brain is the handmaiden to a view of consciousness as data, so genetic engineering is the handmaiden to a view of the body as composed of discrete parcels of information. To some extent, this outlook flows naturally from the Modern Synthesis of Darwin’s theory of evolution and Gregor Mendel’s ideas on heredity. Once science had identified both the general processes by which living organisms evolve (random mutation and natural selection) and the particular ‘unit’ that carries information from one generation to the next (the gene), it became plausible to think of organic life as composed of ‘graded continua’ as opposed to fixed and permanent essences. In a sort of Russian-doll ontology, life became the story of what’s inside the things inside the organism. But as science became increasingly central to the post-Enlightenment view of the world, this insight itself underwent a mutation. People began to think of life as reducible to those smaller elements, in a way that reinforced a view of human beings as informational. Moreover, they began to think of the world as explicable in terms of those smaller elements—in atomistic rather than holistic terms. The neurobiologist Steven Rose explains:

The mode of thinking which has characterised the period of the rise of science from the seventeenth century is a reductionist one. Reductionism holds that to understand the world requires disassembling it into its component parts, and that these parts are in some way more fundamental than the wholes they compose. To understand societies, you study individuals; to understand individuals you study their organs; for the organs, their cells; for the cells, their molecules; for the molecules, their atoms … right down to the most ‘fundamental’ physical particles. Reductionism is committed to the claim that this is the scientific method, that ultimately the knowledge of the laws of motion of particles will enable us to understand the rise of capitalism, the nature of love, or even the winner of the next Derby.

The scholar of science and technology studies Sheila Jasanoff has suggested that one of the dangers of such reductionism is that life devolves into ‘just another object of conscious design, valued mainly for our ability to manipulate it, commodify it, and profit unequally from those acts of appropriation’. Certainly the language in which we now talk of these issues appears to be preparing us for such a contingency. Many books on modern genetics have titles suggesting that the ‘code’ of life has been ‘cracked’ or ‘broken’, while others adopt a religious register, referring to genes as the ‘book of life’ or to modern genetics as the ‘eighth day of creation’ in a way that accords a privileged role—a priestly role—to geneticists. Thus everything in the material universe, up to and including human beings, is recast as a kind of shadow-phenomenon of the information flowing through it.

In the same way that computational theories of mind tend to treat people as autonomous systems, so biologistic approaches tend to regard human beings as individuals first and social beings second. This was one of the factors underlying the resurgence of ‘social Darwinism’ in the 1970s and 1980s, as neoliberals cast around for a biological justification for economic competition (distorting Darwinism itself in the process). Today, libertarian commentators are more likely to invoke genetic arguments to explain why market mechanisms haven’t led to more social mobility, but their essential function remains the same. As Anthony Smith put it in Arena in 1993:

The biological determinist (mechanistic) perspective embodies a significant component of individuation—the concentration of attention on the individual, the divorce of the individual from his/her socio-political environment, and the down-playing of the role of the environment in the individual’s physical, emotional and social well-being.

Again, the informational view of human beings goes hand in hand with a view of them as distinct and self-sufficient systems.

Far from having separate trajectories, information technology and biotechnology are implicated in one another, in a way that further underwrites the informational view of humanity. The first silicon transistor was built in 1954, just one year after Crick and Watson identified the structure of DNA, and for the next half-century the ones and zeroes of computational technology and the As, Cs, Gs and Ts denoting DNA’s four nucleotide bases were inextricably intertwined—a double helix of development that culminated in 2003 with the completion of the Human Genome Project. Even before that announcement was made, some commentators had noticed the way in which biotech was fusing with a computational view of life. In his 1998 book The Biotech Century, Jeremy Rifkin noted how the prominence of computers led researchers to see nature in ‘cyber’ terms. Descriptions of genetic material as ‘code’, comparisons of consciousness to parallel processing, and even the word-processing metaphor favoured in explanations of CRISPR-Cas9 (‘edit’ … ‘cut’ … ‘copy’ … ‘paste’) all serve to join information technology and genetic modification in the popular mind—to the point where ‘life’ is increasingly abstracted from social or even lived experience.

This integration of info and biotech is part of a broader process that has been dubbed the ‘technological convergence’. In a report published in 2003, ‘Converging Technologies for Improving Human Performance’, the US National Science Foundation proposed a new acronym for this convergence: NBIC (for Nanotech, Biotech, Infotech and Cognitive science). Much of the report is fanciful, speculating on a future of ‘world peace, universal prosperity, and evolution to a higher level of compassion and accomplishment’. But there is no doubt that the steady integration of these (already revolutionary) technologies has given human beings the power to manipulate life in radical ways. One contributor to the report was so taken with the idea that he set it down in the form of a poem:

If the Cognitive Scientists can think it
The Nano people can build it
The Bio people can implement it, and
The IT people can monitor and control it

Outfitted with a vision of the human being as an autonomous informational system, the convergence thus moves us decisively beyond the mere ‘control’ of (human) nature and towards its active reconstitution—towards, potentially, a posthuman future.

The zero gravity society

The word ‘cyborg’, a portmanteau of ‘cybernetic organism’, first appeared in a 1960 article by Manfred Clynes and Nathan Kline. Their subject was humanity and technology in the coming era of space exploration, and they were clear about what the concept implied: not only a fusion of ‘man and machine’ but a technological taking hold of the process of evolution itself. The cyborg, they wrote, ‘deliberately incorporates exogenous components extending the self-regulatory control function of the organism in order to adapt it to new environments’. Here, human beings’ evolution and technological development combine into a single process, rather as they do for the Big History authors (Yuval Noah Harari and others) celebrated in Silicon Valley. As Clynes and Kline put it:

In the past evolution brought about the altering of bodily functions to suit different environments. Starting as of now, it will be possible to achieve this to some degree without alteration of heredity by suitable biochemical, physiological, and electronic modifications of man’s existing modus vivendi.

For Clynes and Kline, the immediate problem was how to reengineer human beings to make them fit for space travel. Today, however, it is not the search for other worlds that is driving development in the direction of posthuman intervention but the societies we are creating in this one. The pressures of modernity, combined with neoliberal capitalism, are creating a world that in crucial ways is unconducive to human flourishing. In particular, we are seeing the erosion of life-worlds geared towards embodied others, stable identity, meaningful activity, conviviality and rootedness in nature. In their place are emerging the ‘ungrounded’ societies of technoscientific capitalism—societies characterised by high technologies, abstract relations and disembodied social networks. And it is in these ‘zero gravity’ societies that technoscientific capitalism itself now appears, under the rubrics of innovation and progress, as the gateway to a better world. Homo faber—‘man the maker’—now proposes to remake himself in order that he can breathe the ‘atmosphere’ he is creating, here, on planet Earth.

The transition to the posthuman will not turn on any Judgment Day scenario, but will be a slow and incremental process of quantity turning into quality. With CRISPR-Cas9 gene editing at its disposal, medical research will take great strides in curative and preventative medicine—but it will also, under pressure from the market, and policed only by an institutional ethics overwhelmingly concerned with (physical) safety, confound the distinction between health and enhancement, branching out from the burgeoning industry in reproductive techniques and services into more and more areas of social life. At the same time, psychotropic technologies such as antidepressants and anti-anxiety drugs—preparations that give material effect to descriptive psychiatry’s pathologisation of the despair and dejection that characterise hi-tech late capitalist societies—will become more effective and more targeted, effectively reversing the direction of fit between individual and society in more or less the way imagined by the great literary dystopias of the twentieth century, and by Huxley in particular. Indeed, and more generally, human beings will turn increasingly to new technologies to repair the precarious selfhood that emerges from a socially disembodied, automated society—a process already evident, perhaps, in the increasing uptake of cosmetic surgeries, digital/silicone/magnetic implants and (in Silicon Valley at least) adolescent plasma transfusions. Of no great consequence in themselves, such fads and enthusiasms together add up to a new and changing view of the body as an infinitely malleable entity that can, so to speak, be brought into line with whatever we feel our ‘true’ self or identity to be. The way in which the ‘trans’ debate (and the question of sex reassignment in particular) tends to bounce between arguments derived from radical gender theory and a more mainstream medical/wellbeing perspective may be the harbinger of a new ideological ensemble that liberates the more niche approaches to technology and identity into mainstream culture.

That last point raises the question of where ‘the Left’ stands in relation to these issues, and here the signs are not encouraging. For the increasingly powerful knowledge class, which is broadly progressive on social questions, the informational paradigm is part and parcel of the new economies of cultural/media shaping and hi-tech systems management that it falls to this class to administer. By contrast, more radical protagonists, especially on the Marxist left, tend to translate new and emerging technologies back into the language they already know, focusing more on economic questions such as the monopoly character of Big Tech companies, or on the dynamics of ‘surveillance capitalism’, and less (or not at all) on the challenge such technologies pose to our deepest human being. Attention is paid, in other words, not to the technologies themselves, or to the potential those technologies have to undermine the minimal conditions in which human sociality and creativity can flourish, but to the fact that they are in private hands, the assumption being that a change in ownership will lead eventually to something approximating ‘fully automated luxury communism’. Meanwhile, and from a different perspective, the abovementioned gender theorists welcome the prospect of technological transformation as a break from the ideological cul-de-sac of intersectional identity politics, arguing, for example, that technologies might be used to bring about ‘gestational communism’, i.e. a society in which everyone can become pregnant and in which ‘gestational labour’ and child-rearing are shared. In this last instance, Donna Haraway’s ‘ironic’ use of the cyborg as a metaphor for human liberation is now taken as a political prescription.

Such proposals sound science-fictive, and they are. But they are also intellectual symptoms of the techno-dispensation to come, in which, notwithstanding some radical change in emphasis at the political level, human beings will become ever more entangled with technologies of transformation, led by the lights of innovation and progress towards what the artist and author James Bridle describes (in his book of the same name) as a ‘new dark age’. The radical/critical failure on this score is difficult to overestimate. As a world of ‘black box’ technologies, disembodied sociality and radical bodily intervention beckons, much of the Left has jettisoned, or simply forgotten, the (somatic) materialism that anchors its most humane traditions, adopting instead the Prometheanism at the heart of technoscientific capitalism.

At the 2017 meeting of the World Economic Forum in Davos, visitors were greeted by Tomás Saraceno’s installation Aerocene. According to the explanatory text, the structure

proposes a new epoch, one of atmospherical and ecological consciousness, where we together learn how to float and live in the air, and to achieve an ethical collaboration with the environment.

The blurb is more eloquent than it knows. Even our utopias, it seems, have no place for the flesh-and-blood human—for the embodied, bounded, grounded self. Flailing in zero gravity, we are floating towards a posthuman future.
