Save Humanity! In 30,000 AD! With Crypto!

Simon Cooper

3 May 2024

‘Effective Altruism’, tech bros, utilitarianism as ideology

More good is more good … It’s not like you did some good, so good doesn’t matter anymore. But how about money? Are you able to donate so much that money doesn’t matter anymore?

Sam Bankman-Fried

Distance in time is like distance in space. People matter even if they live thousands of miles away. Likewise, they matter even if they live thousands of years hence.

William MacAskill, What We Owe the Future

When the cryptocurrency trading platform FTX crashed spectacularly in 2022, and its youthful, shaggy-haired CEO Sam Bankman-Fried was extradited from the Bahamas to the United States on multiple charges of fraud over $8 billion in unaccounted funds, most people assumed it was a case of simple greed that had acquired a megalomaniac cast. Not so, as it turned out. Bankman-Fried and the young coterie of executives he had assembled were devotees of the philosophy of ‘effective altruism’, or EA—the proposition that highly talented people should use their powers to supercharge charitable fundraising and make a real difference in global social life.

At first glance it’s a long way from pondering the moral value of saving a drowning child to being caught up in the largest case of cryptocurrency fraud to date, worrying about AI apocalypse and dreaming of colonising space, but such is the journey of EA. Until recently, EA had remained slightly to the side of the media spotlight, although its rate of growth and influence had already created significant ruptures in the charity and NGO sectors. With the downfall of Bankman-Fried—who was an ambassador of sorts for the movement—EA became known for all the wrong reasons. Despite attempts by key figures in EA to distance themselves from the massive fraud and criminal deceit exposed by the FTX crash, a whiff of corruption about the whole intellectual enterprise remained. This was particularly so when the extent of the wealth of those professing belief in EA became known, and especially as it became clear that the recipients of EA giving were, increasingly, EA organisations rather than NGOs in poor and underdeveloped parts of the world. Simultaneous with that development had been EA’s shift to extreme ‘longtermism’—a viewpoint in which the lives of living people are held to be comparatively unimportant because a giver could have greater impact on the lives of the not-yet-born. Inevitably, one faction of EA began to embrace the radical transhumanist belief that humans should positively meld with machines. This in turn created, or exacerbated, a division between those who argue EA has lost its way and those who are fully signed up to the longtermist and transhumanist beliefs of EA leaders.

Despite its recent PR difficulties, EA remains highly influential, and ought not to be dismissed as a waning trend or simply a charity-washing exercise by the techno-elite. EA organisations have staggering financial resources at their disposal, and have the ear of political and industry leaders. Despite protests to the contrary, the FTX scandal ought to be seen as a consequence of applying the basic assumptions of EA rather than as an aberration. Similarly, the transhuman aims of EA cannot be downplayed as merely the whimsical fancies of a few billionaires, given their lobbying power and capacity to shape the direction of new technologies. Whatever component you choose, from the establishment of EA as a global charitable enterprise to the funding of artificial humans and space colonisation, the movement’s flaw lies not in its extreme manifestations but in its underlying philosophical approach: that of consequentialist moralism and technocratic rationality. The theoretical basis of EA is attractive to the wealthy precisely because it doesn’t engage with the structures and mechanisms that have produced inequality, oppression or social division. EA accommodates, even encourages, the very conditions that have caused the problems it claims to be ‘effectively’ alleviating.

* * *

Inspired by the work of Peter Singer, who considers himself a ‘parent’ of the movement, Oxford University philosophers William MacAskill and Toby Ord developed the concept of effective altruism in the early twenty-first century. Singer’s famous ‘drowning child’ example encapsulates the principles behind the EA movement. Most people, according to Singer, would not hesitate to stop and save a drowning child. But this moral act is limited; it ignores the greater quantity of suffering elsewhere, especially in developing or poor areas. Those invisible sufferers have the same moral value as the drowning child, and so we ought to direct our efforts to achieve the best possible consequences from our actions for the greatest number. Singer claims that ‘the moral point of view requires us to look beyond the interests of our own society’. Taking a phrase from nineteenth-century utilitarian Henry Sidgwick, Singer argues that moral reflection needs to transcend local relations and take up the ‘point of view of the universe’: that is, to be disengaged and not guided by the immediacy of any given situation.

EA combines Singer’s work with quantitative metrics and aims to get the best altruistic ‘return’ on any charitable investment. It is a scaling-up of utilitarian philosophy, using data analysis to maximise the good by comparing how effective various courses of moral action are. Is it better to give money to a hospital in Africa or provide malaria nets? Is it better to donate money to schools to train more doctors or to develop vaccines to prevent the spread of disease? Because EA prioritises the consequences of actions, it needs to be able to identify those consequences readily. The focus is thus on measurable impact as the key criterion for valuing any particular course of action.
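
To make the comparison concrete: EA charity evaluators rank interventions by cost per unit of measurable benefit. The following is a minimal sketch of that calculus, with dollar figures invented purely for illustration (they are not drawn from any actual EA evaluation):

\[
\text{effectiveness} = \frac{\text{lives saved}}{\text{dollars donated}}, \qquad \frac{1}{\$5{,}000} > \frac{1}{\$50{,}000}
\]

If bed nets were measured to save a life per $5,000 donated and a hospital programme a life per $50,000, the calculus would direct the marginal donation to nets: ten times the measurable ‘good’ per dollar, regardless of any wider context.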

Singer’s work remains perennially popular, and the EA movement has—recent scandals notwithstanding—become very wealthy. For instance, the non-profit organisation 80,000 Hours, co-founded by MacAskill, was said by 2021 to have amassed around US$46 billion (a figure that turned out to be wildly exaggerated, with a substantial part assumed to be coming via FTX; the real figure today is in the multi-millions, not billions). There are likely a number of reasons for this wealth. EA’s moral framework is simple and pragmatic, while a philosophy that values strangers on the other side of the globe as much as family and friends aligns with (some) left and neoliberal privileging of open borders and free trade. At another level, the claim that EA can get the best return for your charitable investment appeals to the savvy consumer with a vestigial conscience.

However, what is ‘good’ or of ‘value’ is a complex question, one radically narrowed by the utilitarian, quantitative approach, which assumes that what is good is what can be measured. EA’s calculus for effective moral action radically reduces the range of possible interventions to easily identifiable single acts, thereby ignoring other contexts. At an empirical level, the results of some of the studies EA has relied upon have proven inaccurate. There are also problems with prioritising what is most valuable: Singer’s work has been controversial in relation to the rights of the disabled, given his argument for euthanasia in certain circumstances. Ultimately, what counts as ‘good’ tends to be those actions that promote health and raise income, because these are measurable. Within this narrow calculus, life itself is reduced to the polarities of good and harm, ignoring broader questions of what makes life meaningful or worth living. Whole areas of philosophy concerning these dimensions of life are simply ignored.

Singer’s 2015 book The Most Good You Can Do encapsulates the individualism of EA. Morality is a question of individual choice: the most good that you can do. Collective solutions, whether modest (taxing the rich) or more radical (changing oppressive structures), are invisible to or rejected by EA. Singer is dismissive of Marxism, or indeed of any other anti-capitalist critique. This is not merely a theoretical difference between utilitarianism and more radical philosophy. As Nobel laureate Angus Deaton has observed, ‘the evidence for development effectiveness, for “what works”, mostly comes from the recent wave of randomized experiments, usually done by rich people from the rich world on poor people in the poor world, from which the price lists for children’s lives are constructed’. The wider context of conditions such as poverty and illness is not part of EA’s calculative matrix, which is imbued with an almost sociopathic reductionism. Nor is the possibility of unintended consequences, because only the immediate effect of intervention is measured. For example, Deaton notes that:

In today’s Rwanda, President Paul Kagame has discovered how to use Singer’s utilitarian calculus against his own people. By providing health care for Rwandan mothers and children, he has become one of the darlings of the industry and a favorite recipient of aid. Essentially, he is ‘farming’ Rwandan children, allowing more of them to live in exchange for support for his undemocratic and oppressive rule. Large aid flows to Africa sometimes help the intended beneficiaries, but they also help create dictators and provide them with the means to insulate themselves from the needs and wishes of their people.

EA’s focus on ‘welfarism’ rather than a broader commitment to justice results in precisely these sorts of unintended consequences. By leaving social and economic structures untouched, EA not only ignores what creates inequality but tends to perpetuate the processes that create it.

While having some appeal as a globalist ethic, the ‘point of view of the universe’ adopted by Singer and EA philosophers ignores any question of where moral values or obligations come from, and whether such an ‘inhuman’ standpoint can be sustained. Is doing good simply a matter of conforming to abstract principles and paying attention to the data? Or do values and the spur to action arise as much through concrete encounters, phenomenological experience, lived traditions and the like? Even if such experiences are open to contestation, it is quite another thing to discount their significance entirely. Once we take away the experiential dimensions of social life, all that is left is an abstract calculus that leads to decisions about moral acts based on quantitative measurement. No wonder EA enthusiasts are so fascinated by AI, the disembodied calculations of which represent either the perfection of effective altruism—finally a sufficiently detached point of view from which to judge the ‘most good’—or a complete disaster.

The ‘God-like perspective’ that asks us to ignore what is in front of us and dispassionately apply the metrics has a certain affinity with the interplanetary schemes of Silicon Valley billionaires as they obsess over the fate of AI and dream of escaping the constraints of human embodiment and earthly existence. Such fascination with transcendence is hardly surprising among the techno-elite. Technology creates profits precisely by selling us ‘freedom’ from our natural and social moorings, with communication technologies enabling the transcendence of face-to-face settings and biotech promising to liberate us from human biology. Perhaps the journey from the drowning child to the colonisation of space is not as long as we initially imagined.

The most good … why be an aid worker when you could be a merchant banker?

The close relationship between EA and Sam Bankman-Fried has been well documented. So too has the fact that many in the EA movement were aware that Bankman-Fried and FTX were engaged in increasingly dodgy trades, and that his image as a humble billionaire who drove a battered old Corolla and slept on a bean-bag chair was belied by his mansion in the Bahamas, use of private jets and opulent lifestyle. The key question, however, is whether his actions represented a ‘misuse’ of EA principles. Émile Torres has observed how William MacAskill justifies working for ‘evil organisations’: if an effective altruist works for a petrochemical company, that is morally better than the same job being done by someone else, because at least the effective altruist will give a decent portion of their earnings to charity. Given that there will always be someone willing to work for such companies, it’s better if those who do are effective altruists. Moreover, as a general principle, it’s preferable to work for a petrochemical company rather than a non-profit organisation because the amount of good you can do is quantitatively greater.

In practice, EA organisations are passionate advocates of market capitalism and tech entrepreneurship. Not only do they receive vast sums of money from the founders of Facebook, Skype and so forth, but EA consistently encourages its followers to pursue tech entrepreneurship, and ranks it as the most important career choice. In a 2023 article entitled ‘Effective Altruism and the Dark Side of Entrepreneurship’, Michael Olumekor, Muhammad Mohiuddin, and Zhan Su record how EA organisations provide:

career advice to their adherents, asking them to choose careers that can bring about the most good in the world. However, careers such as working for non-profit organizations are rarely encouraged. Instead, proponents of EA argue that taking up careers in places such as banking and hedge funds can be more impactful, if people donate a portion of their wages to charitable causes.

In an interview with The New Yorker, Peter Singer opined that we will always have billionaires, and that it’s better to have ones like Bill Gates and Warren Buffett, who at least donate significant sums to charity. EA leaders have gone beyond this pragmatic acceptance of a certain state of affairs to active encouragement, exhorting members to become bankers, traders and tech entrepreneurs.

Not long after the mass protests of Occupy Wall Street, and as the notion of the ‘1 per cent’ being the chief cause of global misery became popular, effective altruists came up with the philosophy of ‘earning to give’, redeeming the 1 per cent as charitable givers. Yet the creation of ‘more money’ has done little to relieve the plight of the developing world, and the system that enables such maximalist wealth creation has made things worse. The WHO notes that the cost of ‘simple’ interventions for things like malaria escalated so massively after the GFC that many could no longer afford them. Governments were required to pay increasing amounts off their loans thanks to inflation, meaning less could be spent to alleviate suffering, while climate-change effects—caused primarily by the industrialised world and the selfsame ‘elites’—increased disease through cyclones, flooding and other ‘natural’ disasters. Even on their own narrow terms, EA’s medical and health interventions in the ‘real world’ were dwarfed by the manifold consequences of global capitalism.

If the early period of EA was shot through with elitist assumptions—we use our metrics to decide what’s best for others—EA organisations soon turned to the production of more elites. While this might sit uneasily with what appears to be the flattening, democratic dimension of utilitarian ethics, the cultivation of bankers and entrepreneurs is entirely in keeping with the ‘maximalist’ philosophy of EA. If EA is about the most good you can do, then it follows that wealthy elites are able to do ‘more good’ because they can donate larger sums of money. Given this focus on wealth as the means to do more good, the massive wealth of many EA organisations is not a perversion but a logical continuation of the EA approach. As Sophie McBain observes:

This is how the movement that once agonized over the benefits of distributing $1 de-worming pills to African children ended up owning two large estates: the $3.5m Chateau Hostačov in the Czech Republic, purchased in 2022 by the small EA-affiliated European Summer Program on Rationality with a donation from Bankman-Fried’s FTX Foundation; and Wytham Abbey, a 15th-century manor house near Oxford, bought for £15m to host EA retreats and conferences.

While to outsiders such tales of largesse reveal a hypocrisy within the upper echelons of the movement, this is ultimately less important (and less morally problematic) than EA’s turn to the philosophy of longtermism, and its attempts to wield influence according to longtermist principles. EA is channelling billions of dollars into longtermist research while working hard to lobby politicians, industry and the United Nations about the importance of the longtermist cause. The Economist found that by 2022, 40 per cent of EA’s funding was directed towards such longtermist projects as developing new and safe AI and colonising space. In one sense, this seems a long way from a philosophy of charitable giving. But there is a logic that connects the moral duty towards generations centuries from now with the altruistic desire to save those we have never met in the present, even if the effects are contradictory.

From charity nerds to transhumanists

To the uninitiated, the idea of bearing responsibility for future generations seems a worthy aim, especially as the ravages of anthropogenic climate change begin to take hold. However, the longtermism embraced by many of the key figures in EA points towards something different. For writers like William MacAskill, Toby Ord and Nick Bostrom, longtermism expands the circle of moral obligation towards future generations, possibly millions of years into the future. This 2020 mission statement from the Centre for Effective Altruism reveals how the desire to do ‘more good’ now extends across time as well as geographical space:

Many people believe that we should care about the welfare of others, even if they are separated from us by distance, country, or culture. The argument for the long-term future extends this concern to those who are separated from us through time. Most people who will ever exist, exist in the future.

Longtermism hummed along for some time in the background of EA’s concerns. Recently, however, EA philosophers have made longtermist themes a key element of the movement’s identity. The publication of Ord’s The Precipice (2020) and MacAskill’s What We Owe the Future (2022) received widespread coverage and acclaim, with The New Yorker and The New York Times running features that speculated on longtermism as the future of ethics. Indeed, longtermism seems to capture something of the zeitgeist, most likely through the combination of fears about technologies such as AI, a recognition of intergenerational responsibility precipitated by climate change, and the ceaseless fascination with billionaire prophets such as Elon Musk (who praised MacAskill’s book). This concern for generations that might live millions of years from now created a new tension within the EA movement. If early versions of EA dismissed proximity as a guide to moral responsibility (it is better to be concerned with invisible sufferers elsewhere), the shift to a temporal calculus threatened to subordinate the needs of present humanity to those of unborn future generations. What are the starving millions in Africa when compared to the welfare of trillions of future humans? Once the maximalist ambition of EA, to create the maximum amount of value in the universe, is fully thought through, all kinds of perverse consequences arise.

Key to the work of longtermism is the concept of existential risk, or ‘x-risk’. Bostrom argues that since it is difficult to predict how to trigger good outcomes in the future, one should instead focus on decreasing so-called existential risks, ‘where an adverse outcome [an existential catastrophe] would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential’. Some might find it odd that climate change is not considered an existential risk while rogue AI and engineered pathogens are, but this is because Bostrom, Ord and MacAskill claim it is unlikely that climate change will wipe out all humanity. While significant losses are likely to occur, humanity ought to be able to recover and thrive, and we must remember that they are thinking along a very long timeline. Again, we encounter the ‘point of view of the universe’—the God’s-eye perspective that has underpinned EA ever since Singer’s early work. This perspective allows Bostrom to remark on the degree of suffering in events such as two world wars, the AIDS pandemic and the Chernobyl nuclear meltdown in the following way: ‘tragic as such events are to the people immediately affected, in the big picture of things … even the worst of these catastrophes are mere ripples on the surface of the great sea of life’.

When someone like MacAskill talks about climate change, he resolutely avoids discussing the need to change social or economic arrangements in order to avoid the worst. In What We Owe the Future he mentions things like new technologies, alternative energy sources and carbon trading schemes as means to mitigate suffering, but goes no further. Part of the reason for this is the technophilia endemic to EA thinkers. Their solutions are always techno-scientific, but one cannot help but suspect that longtermist thinking initiates new hierarchies between the wealthy and the poor in the present while assigning different moral value to those living in the present and those living in the future.

We have already seen a privileging of material wealth and ‘elite’ knowledge in EA’s practices. From ‘earning to give’ to the recruitment and career development of financiers and entrepreneurs, from EA’s spiritual home in Silicon Valley to Peter Singer’s endorsement of ‘benign’ billionaires, there is a division between the economically privileged and everyone else. This allows distinctions to creep back into what is ostensibly a ‘neutral’ moral calculus. Sometimes this is made explicit, such as in the statement of longtermist theorist Nick Beckstead that:

Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries … It now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

Such a statement is at once completely against the principles of EA and a logical continuation of them: if consequences are what matter, lives that can have greater ‘ripple effects’ come to matter more.

When longtermists contemplate the future, this is framed around the idea of ‘potential’. Thus ‘existential risk’ denotes any future event that would prevent humanity from reaching or sustaining a state of ‘technological maturity’. For Bostrom this means ‘the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved’. The idea of potential is wrapped in the tropes of modern capitalism, of economic productivity and technological mastery, where the latter is deployed to secure the former against all challenges. Indeed, Bostrom once proposed a global mass surveillance system as the only means of saving humanity from a future technology that might fall into the wrong hands. By saving humanity, he does not mean current populations (so long as some people survive) so much as future generations. He has even argued that present lives are less valuable than future ones:

One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any ‘ordinary’ good, such as the direct benefit of saving 1 billion lives.
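
A rough sketch of the arithmetic behind this claim: the size of the risk reduction below is an assumed figure for illustration, while the 10³⁸ estimate of potential future lives is Bostrom’s own, quoted later in this essay. Even a one-in-a-hundred-billion reduction in existential risk then ‘saves’, in expectation,

\[
10^{-11} \times 10^{38} \text{ lives} = 10^{27} \text{ expected lives} \gg 10^{9} \text{ lives saved with certainty}.
\]

On this calculus the guaranteed rescue of a billion living people is outweighed by a factor of 10¹⁸: hence the claim that even the ‘tiniest’ reduction of existential risk dominates any ordinary good.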

In this ‘strong’ version of longtermism, nothing matters more, ethically speaking, than fulfilling our potential as a species of ‘Earth-originating intelligent life’. When pressed, some longtermists such as MacAskill will retreat to a weaker version that holds that future generations are only one aspect of a broader field of effective altruism. However, there is no doubt that nearly all of EA’s key thinkers privilege longtermism, with all its hierarchies of value—overt or hidden—as a primary concern. We can see this not only in recent public statements and the redirection of EA funding to longtermist causes, but also in their connection to the transhumanist movement.

Transhumanism is the idea that humanity should evolve beyond its biological limits, primarily through technology. This might appear confusing in relation to EA, which also sees technology as the primary source of x-risk. Indeed, MacAskill, Ord and Bostrom all warn of the dangers of such technologies as AI and genetic engineering and publicly advocate their regulation, stressing that humanity must be kept safe from any malignant outcomes of techno-science. So in what sense are they transhumanists? It turns out that EA’s key intellectual figures were all directly or peripherally involved in transhumanism at an earlier time. What has happened since then is that transhumanism, at least publicly, has changed focus. Mollie Gleiberman argues that longtermism is a redirection of transhumanist discourse away from earlier positions, which stressed liberation and celebration through new technologies, towards a discourse of safety and concern. The earlier phase was saturated with techno-optimism; remember talk of uploading our minds into computers, bio-enhancement and living forever through technology? It was also anarchic and hyper-individualistic—an extreme variant of the Silicon Valley ideology that regarded regulation and the state as hindrances to evolution. The ‘new’ transhumanism is, on the surface, more cautious. No longer wildly utopian, it invokes the discourse of responsibility and advocates working with governments and industry to develop new technologies like AI with care. The key point, however, is that they very much want to develop them.

In fact, to not develop technologies for a transhumanist future is regarded as immoral. Bostrom, who once wrote a chapter entitled ‘Why I Want to Be Posthuman When I Grow Up’, suggested that ‘the permanent foreclosure of any possibility of this kind of transformative change of human biological nature may itself constitute an existential catastrophe’. Similarly, Ord asserts that ‘forever preserving humanity as it is now may also squander our legacy, relinquishing the greater part of our potential’.

For this strand of EA thinkers, the potential of humanity lies in the posthuman escape from embodiment. Bostrom claims ‘The mind’s cellars have no ceilings’, whereas ‘your body is a deathtrap’. Similarly, MacAskill refers to the idea of non-biological, digitally instantiated consciousness. Along with artificial minds, longtermists stress the imperative to colonise space. In this view, the continued delay of space colonisation amounts to an ‘astronomical waste’ of value. Bostrom again: ‘The potential for approximately 10³⁸ human lives is lost every century that colonization of our local supercluster is delayed’. In The Precipice, Ord argues that attaining humanity’s long-term potential ‘requires only that [we] eventually travel to a nearby star and establish enough of a foothold to create a new flourishing society from which we could venture further’. In this way future generations could make ‘almost all the stars of our galaxy … reachable’, since ‘each star system, including our own, would need to settle just the few nearest stars [for] the entire galaxy [to] eventually fill with life’.

What are we to make of all this? It’s easy enough to dismiss these visions as the fantasies of a few privileged philosophers working in tandem with the Silicon Valley elite, in which overt technophilia is redeemed through the discourse of morality. It’s a repackaged version of EA’s charity-washing, in which billionaires feel good about themselves developing radically transformative, but safe, technologies, adding ‘responsible visionary’ to their list of achievements. Many left-leaning critiques of EA go beyond this cynicism to situate the movement within the context of neoliberalism, noting the conditions that allow EA to be successful irrespective of the validity, or otherwise, of its arguments. Thus the turn to ‘small’ government, the growth of NGOs to fill the space left behind, the expansion of the free market and the absence of regulation all allow EA to fill a ‘moral gap’ once occupied by the state and various religious and cultural institutions. Additionally, the criticisms that EA does nothing to alter oppressive structures—that it is a ‘capitalist-friendly’ form of altruistic giving and that its rise as a global mega-charity is only possible due to the atrophy of the public sphere and the lack of counter-discourses to the market—are all convincing.

But these objections don’t go far enough, in the sense that they don’t fully take hold of the deeper transformations occurring. There are many on the left/progressive side of politics who are comfortable with some version of the technologised and administered world offered by EA, even if they may be put off by recent developments in the movement. This is largely due to the changed conditions in which many people now work and live, and how they come to manage this world. The expanding category of the intellectually trained—knowledge workers, techno-scientific practitioners, the managerial class, creatives and influencers, many of whom regard their politics as progressive—work within a world largely governed by processes of abstraction, whereby a whole series of more ‘concrete’ settings—the body, primary face-to-face sociality, tangible relations with objects and things—are being reshaped through media and biotechnologies. In this more fleeting and unstable world, where new levels of abstraction permeate all aspects of life, basic features of social life are undermined and reconstituted. Everything from individual identity, to social relationships, to work and career has to be continually made and remade. For a time, towards the end of the twentieth century, this was celebrated as a series of freedoms. More recently, however, the discourses of trauma and safety have emerged as a means to regulate this new paradigm of instability. Such concepts, as used today, manifest precisely through a utilitarian framework, in legislation, or in the development of discursive or technological surveillance that aims to prevent as much harm as possible within the affective flows of techno-capitalist society. For many there seems to be no alternative.

If this argument holds, the broader framework developed by Singer and other utilitarians for deciding how to create good and prevent suffering appeals not so much because of the persuasive power of their arguments but because the resources for developing alternative approaches are shrinking. Such approaches do exist; even within the Anglo-analytic philosophical tradition there is a whole swag of objections to the arguments offered by Singer and other utilitarians. Critics including Alasdair MacIntyre, Bernard Williams and Philippa Foot argue that utilitarian theories construe human beings too narrowly, regarding them less as intrinsic entities than as vessels that can experience quantities of happiness and suffering, and consider this an abstraction from what it is to be a human person. As Williams put it, this new language of ethics has ‘too few feelings and thoughts to match the world as it really is’. There is no acknowledgement that suffering could have other dimensions: for example, that suffering might lead to wisdom or to change, or that in some cases what is judged morally right might necessarily involve suffering. Instead, in the utilitarian frame, suffering must be regarded purely as a negative on life’s balance sheet. The over-reliance on abstract reason means that utilitarianism disavows the very relationships upon which it is parasitic. Instead of seeing reason as a set of principles that derive from and interact with biological or socio-cultural promptings, Singer abolishes the latter, leaving us with abstract reason alone. Categories such as compassion, care, embodiment and ‘being-with’ are regarded as morally suspect. This detached point of view of the universe deprives us of the resources we need to recognise what matters.

This is not simply a matter of philosophical difference, however, as the reconstruction of life under techno-capitalism provides the material complement to Singer’s utilitarian calculus. Globalised capital, information technologies and biotech have combined to erode the older forms and settings through which we might engage with the world. Categories such as embodied being, place and mutual co-presence as a ground for social life have been reconstituted by the work of high-tech markets and the ideologies that accompany them. If these material processes propel us towards the posthuman, and serve as the catalyst for the fantasies of EA longtermists, they are mirrored in the ideas of Singer and other utilitarians. The moral injunction to concentrate on those who need help the most and to bypass the needs of those immediately in front of us is easier to follow when the flows of capital, people and information that characterise the form of life today allow us to ignore those more grounded settings of place, body and community. Glued to your phone, you probably wouldn’t notice the drowning child anyway.

Peter Singer has recently attempted to distance his work from the extremes of longtermist transhumanism with a call for a return to the ‘original’ values of EA. But these attempts remain unconvincing. It is arguable that Singer’s utilitarian framework was already posthuman, since it rejects pretty much everything that would constitute the grounds of human being. Once you dismiss the importance of embodied co-presence as a source for determining what is of value, once you replace embodied—and socialised—reactions as legitimate bases for right action with a universalised, abstract mechanism for quantitatively determining the most good, then on what grounds might you object to the transhumanist ideas of Bostrom and others? By never exploring the importance of the social and cultural settings through which we live and are formed, all we are left with is a series of maxims that we are invited to operationalise. As if AI were already making all the decisions.

About the author

Simon Cooper

Simon Cooper is an Arena Publications editor.
