Loving Machines: Mental Health by Algorithm Is Reshaping Care and Sociality

Online stores currently offer hundreds of mental health applications. These apps promise to enhance your coping mechanisms, relieve your stress, make you happy, fix your depression, and much more. As popular and diverse as these digital gizmos are, however, they occupy only one corner of the ‘mental health digital space’. In the rapidly changing field of mental health care, these emerging technologies are of special interest to cash-strapped state-funded mental health services, non-government agencies and private health companies alike.

In what follows, I examine a variety of mediated forms of mental health care, ending with the latest development, which is a particularly profitable commercial combination: the marriage of artificial intelligence (AI) with the industry’s most popular form of psychotherapy—cognitive-behavioural therapy (CBT). This pairing is noteworthy for at least two reasons. First, it has implications for the user-client’s sense of self and for the mode of sociality it calls the person to enact. Second, it points to an immensely lucrative market. Mental health is a huge and growing industry: the Kaiser Foundation estimates that around $200 billion is spent on mental health disorders in the United States every year. This has made mental health the ‘top’ cost category among medical conditions.

These latest developments have emerged in a field already roiled by controversy and contradiction. Even the notion of ‘mental health’ is contested. Is it about wellness, an absence of mental illness, resilience, vitality, normality, quality of life, the demonstration of correct attitudes, genetics, biology? Yet the digital applications discussed below all work with a restricted notion of what mental health might be, as is easily read in their limited view of the processes that constitute a self and in their understanding of a person’s ‘mental’ difficulties.

CBT’s highly individualistic and mechanistic understanding of the person sits comfortably with neoliberal ideology, which has been working its way through the institutions for decades, allocating the carriage of care to the lowest-level entity—the private self. In this allocation we find a kind of mental-hygiene approach centred on the grooming of ‘resilience’, an increasingly high-priority personal concern. The assumption is that, yes, life can be stressful, but it is the responsibility of each citizen to identify, manage and eventually overcome the ‘challenges’ that inevitably come their way.

Now the existing mix of mediated care-giving sees the introduction of newly minted forms of online care: delivered by bots, according to algorithms. These mediated forms, but especially the new digital techniques, are significantly de-territorialising the location-based, in-person setting of traditional therapeutic services. Among a range of impacts, a ‘keep-your-distance’ imperative seems to be slipping into place, disrupting received expectations of the respective roles and responsibilities of those who seek and those who offer help. Any form of mental health care presupposes certain ‘technologies of the self’, as Nikolas Rose reminds us, but the question is: what will the new technological forms of mental health care bring?

Under COVID

Technologically mediated therapy is not new, but during COVID it has been reframed in terms of necessity, and as something that should perhaps be embraced even after lockdowns end. This turn to digital is revolutionary. A hundred years’ worth of assumptions mandating the primacy of face-to-face, here-and-now interaction are in the process of being upended. It is remarkable how quickly even the most august segments of the business adapted: within months of the pandemic’s arrival the International Journal of Psychoanalysis had published an editorial, ‘The Current Sociosanitary Coronavirus Crisis: Remote Psychoanalysis by Skype or Telephone’, advising members how to practise in non-face-to-face settings. Practical perhaps, but it set the context for more incursions to come.

It has been argued elsewhere that the digital turn in care is not simply pragmatic but moral. Australian policy grandees Ian Hickie and Stephen Duckett advised in mid-2020 in The Conversation that ‘Australia’s governments must seize the opportunity that COVID-19 has created. Digital systems must now be viewed as essential health infrastructure, so that the most disadvantaged Australians move to the front of the queue’. It’s a proposition that packs ethical and rhetorical punch, and the egalitarian impulse is not to be rejected outright. But there are other problems with this kind of approach, as seen in the language of the queue, a recycling of the framework in which health services are treated as commodities. As Dylan Riley observed recently in New Left Review, health services are not goods, and they are not all of a type. In the field of mental health especially, care is not a ‘deliverable’—a product that can be shipped along a supply chain. Frameworks that position us as ‘consumers’ and ‘providers’ deny the reciprocities present in any health setting, but especially in mental health settings. This interactionless vocabulary de-natures the actors, rendering one an executant, the other a passive recipient. As will be discussed, a lexicon of providers and recipients makes the shift into care-via-AI not only possible but even, it would seem, ‘logical’.

Of course, in the short term, and in some specific cases, there may be advantages in de-territorialising the therapeutic situation. Some clients have reported that online contact offers comfort: there is benefit in the distance technology imposes. If the ‘other’ is mediated by a machine or screen, it is easier to quell anxieties regarding one’s interlocutor’s thoughts, feelings and judgements. On the ‘supplier’ side, some therapists see online work as useful, with cuts in travel time and rental costs, and potential danger minimised as well.

So, for some patient-clients—the young person exploring their sexual diversity; the neuro-atypical person seeking non-corporeal contact—the advantages of distance might outweigh what others think of as the costs of being removed or hidden behind a screen. Similarly, questions of accessibility and convenience cannot be flatly dismissed. But if the normative condition of accountability and the complexity of ‘recognition’ that are built into face-to-face communication are put under threat by the extension and naturalisation of technological models, then we are entering new terrain. What does it mean that the helping ‘other’ is not present—or, in the most extreme example, that the other is an algorithm?

Mediated and online

A variety of online and other mediated formats are presently in use, each with its risks and rewards. Telehealth consultations are frequently described as ‘digital medicine’, and there has been huge growth in their use during COVID. To an extent this is a misleading description, as telehealth retains a person-to-person, if not face-to-face, relationship. It involves real-time contact: while mediated, the exchanges are synchronous. Asynchronous, non-face-to-face interactions, including written exchanges, are less common, but they are on the rise. Examples include ‘mood tracking’ apps such as ‘MoodPrism’, a Monash University and beyondblue collaboration in which clients map their states of mind.

A key difference between traditional care practices and the disembodied distance of telephone or video-based contact concerns touch. More precisely, what discriminates between mediated and face-to-face modes of relating is the absence in the former, and the actuality in the latter, of the possibility of touch. In one mode there is the possibility that kinaesthetic contact might occur; in technologically distanced contexts, even the possibility is precluded. Put more subtly, in situations of physical co-presence, as in face-to-face therapeutic situations, there is a sensitivity to the non-specific qualities of communication. Where there is person-to-person immediacy (the ambit of intimacy), what couples therapist Tom Paterson termed the ‘co-ordination of meanings’ is more likely. One can feel closer or, alternatively, that the other, or oneself, has emotionally moved away. The potential for actual touch may be slight—it is unlikely your therapist will hug you, or attack you, or unexpectedly halve or double the physical distance between you—but a degree of the tactile is provisionally built into the face-to-face therapeutic relationship, and meaning takes shape in the reference points it provides.

By contrast, in technologically mediated contexts there is an absolute guarantee that participants will not touch—will not ever physically interact. Because this is materially precluded, a particular ‘realm’ is created, one where a property that was once outside the control of participants is no longer so, and may now even be elevated as a right: the ‘right’ to be insulated from the possibility of contact.

Of course, differences between face-to-face and digital interaction go way beyond the matter of touch. For example, in-person encounters present a dense and dynamic milieu within which participants have to filter, and respond to, a multi-dimensional mix of external communications and inner experience. The psycho-social dance may be patterned by custom, but in the immediacy of here-and-now, in-person exchanges there is always the potential for surprise. Depending on one’s disposition, the edgy quality of co-presence can be seen as a formidable challenge that needs to be acknowledged and abated; alternatively, the possibility of non-linearity in intersubjective situations might be welcomed as an element in, if not the sine qua non of, in-depth human relating.

In a promotional video for the MoodPrism app, we are told that daily monitoring and ‘mapping’ of an individual’s mood can allow the person to gain control over the patterns that produce feelings of anxiety or depression, and thereby gain a greater sense of freedom. ‘Autonomy’ is on offer. The video ends with the friendly injunction: ‘MoodPrism, map your mood and learn more about yourself’. But here the Enlightenment motto sapere aude, ‘Dare to know’, applied to the self, becomes an injunction to dare to know one’s data! Critical commentators point to the potentially much larger costs of this framing. Catherine Loveday, professor of cognitive neuroscience at the University of Westminster, argues that the apparent freedom these applications offer is associated with self-preoccupation and a state of ‘degraded social interaction’.
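
What ‘knowing one’s data’ amounts to can be made concrete with a small sketch. The code below is purely illustrative and hypothetical (it is not MoodPrism’s actual implementation, and every name in it is invented), but it captures the basic reduction such apps perform: a day’s lived experience becomes a timestamped score, and ‘learning about yourself’ becomes arithmetic over those scores.

```python
# A hypothetical mood-tracking log, illustrating the reduction at issue:
# a feeling becomes a number, and 'self-knowledge' becomes summary statistics.
from datetime import date
from statistics import mean

# Each entry: (day, mood score from 1 = very low to 5 = very high)
mood_log: list[tuple[date, int]] = [
    (date(2021, 3, 1), 3),
    (date(2021, 3, 2), 2),
    (date(2021, 3, 3), 4),
    (date(2021, 3, 4), 1),
]

average_mood = mean(score for _, score in mood_log)
low_days = [day for day, score in mood_log if score <= 2]

# The app's 'insight': the self rendered as data points and flags.
print(f"Average mood: {average_mood:.1f}")
print(f"Days flagged as low: {low_days}")
```

Whatever MoodPrism itself computes, the logic of mood ‘mapping’ is of this general shape: the subject supplies inputs, the program returns patterns, and the patterns are offered back as self-knowledge.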

As noted, then, some people experience the technologically mediated realm as ‘safer’, as less intense than the demands of unpredictable, in-the-moment, face-to-face encounters. Technological mediation appears to confer the advantage of a buffer, a diminution of the circumstances that could lead to awkwardness. Offering a sense of control, distance means less communicational load, lightening the processing demands of embodied situations. Metaphorically, and sometimes literally, mediated exchanges even confer anonymity: less judged and less pressed at a distance, especially if the exchange is asynchronous, participants may have a very strong sense of freedom and release. But if this becomes the norm, it follows that clients will not only tend towards interpersonal wariness but will also miss out on the positive effects of relational co-presence on which care and therapeutic situations have typically depended.

Of a different order again is a more transformational technology, one that introduces an even greater discontinuity with past practice. What happens when the client’s interlocutor is not human—when it is an AI-driven application that simulates a form of sentience?

Cognitive-behavioural therapy

CBT and its sibling, rational-emotive therapy (RET), share the basic premise that the person is an autarkic unit whose presenting problem—depression, anxiety; for some, even psychosis—is produced by the individual’s problematic pattern of thought. This rests on the assumption that rational thought pursues what is best for the self. What is described as ‘rational’, ‘logical’, ‘correct’ or ‘positive’ thought is not to be evaluated against some ultimate standard; rather, it is to be judged on a ‘hedonic calculus’. Jettisoning the murkiness of intersubjectivity, the rational thinker calculates, and the user/consumer of this therapy will be taught to be more effectively self-centred. Albert Ellis, RET’s founder, and Windy Dryden put it this way in The Practice of Rational-Emotive Therapy: ‘rigid absolutism is the very core of human disturbances’. Alongside ‘flexibility’, ‘acceptance of uncertainty’ and a commitment to ‘long-term hedonism’, mental health is seen as conditional on the person’s ability to sustain ‘scientific thinking’, as ‘nondisturbed individuals tend to be more objective, rational, and scientific than more disturbed ones’.

According to CBT advocates, non-calculating thought is the result of defective ‘automatic’ thought patterns. These seem to be a variety of programming error, which can be corrected by standardising the subject’s inner cognitive life. The image is of a machine, either functional or dysfunctional; in the latter case, thinking is beset with operational errors, termed ‘distortions’ or ‘inaccuracies’, and the subject needs to be reprogrammed with a different form of automatic thinking, a regime that is properly ‘rational’, ‘correct’, ‘functional’. Signing up to the project—admitting that your problems are due to the patterns of thought you take for granted—is the first step. In the critical literature this is referred to as ‘client socialisation’ and seen as deeply problematic; for CBT/RET practitioners it is called ‘insight’ or more simply ‘buying in’.

Unlike either the biological perspective on mental illness or analytic approaches exploring deep intra-psychic processes, CBT posits a clockwork-like inner process that is readily accessible verbally. This purportedly regular and predictable set of processes means the source of the problem—and appropriate intervention steps—can be rendered as reproducible segments of dialogue between client and therapist. Abstracted as protocols that absorb individual variations to pursue a known end, CBT assumes a kind of robotic organisation of the human mind. This assumed quality now seems to be attractive to the various corporations advancing AI machines dedicated to mental health interventions.
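
To make concrete what ‘rendered as reproducible segments of dialogue’ can mean, here is a minimal sketch of a proceduralised CBT exchange. It is a hypothetical toy, not the code of Woebot or any actual product, and all of its names and ‘distortion’ entries are invented for illustration: a keyword matcher pairs whatever the client types with a catalogued thinking ‘error’ and returns the pre-scripted reframing prompt.

```python
# A deliberately crude illustration of CBT-as-protocol: a fixed catalogue of
# 'cognitive distortions', each mapped to a scripted reframing prompt.
# Hypothetical throughout; not the implementation of any real product.

DISTORTION_CATALOGUE = {
    "catastrophising": {
        "cues": ["ruined", "disaster", "never recover", "the worst"],
        "prompt": "That sounds like catastrophising. What is the most "
                  "realistic outcome, rather than the worst one?",
    },
    "all-or-nothing thinking": {
        "cues": ["always", "never", "everyone", "no one", "total failure"],
        "prompt": "That sounds like all-or-nothing thinking. Was there a "
                  "time when this was not completely true?",
    },
}

FALLBACK_PROMPT = "Tell me more about the thought behind that feeling."

def respond(client_utterance: str) -> str:
    """Route any utterance through the same lookup and return a scripted line.

    Individual variation is absorbed into a known, reproducible path: the
    'therapy' is the protocol, whoever the client happens to be.
    """
    text = client_utterance.lower()
    for entry in DISTORTION_CATALOGUE.values():
        if any(cue in text for cue in entry["cues"]):
            return entry["prompt"]
    return FALLBACK_PROMPT

print(respond("I failed the exam and my life is ruined"))
# -> the scripted 'catastrophising' reframing prompt
```

The point of the sketch is its shape rather than its content: whatever is typed, the reply is drawn from a finite script. It is exactly this property, a known end pursued through reproducible steps, that makes CBT so amenable to automation.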

Alison Darcy, a psychologist and the CEO of Woebot, a high-profile private provider of online mental health services, has developed Woebot as an ‘automated conversational agent’. Having left the Stanford Artificial Intelligence Lab to start the company, she sees Woebot as ‘the future of mental health’. In its official publicity, the company intends

to bring the art and science of effective therapy together in a portfolio of digital therapeutics, applications and tools that automate both the content and the process of therapy. To develop technology capable of building trusted relationships with people, so that we can solve for gaps along the entire health care journey, from symptom monitoring to episode management.

According to Darcy, CBT is not only practical but the essence of a modern, evidence-based psychological method. Opposing what she sees as a mystification muddling the psychotherapeutic field, Darcy describes CBT as accessible and structured. For her, and for others, this means CBT ‘lends itself well to being delivered over the internet’. Programmers are now busy proceduralising CBT: translating it into algorithmic form. Users type in responses to questions, are sent prompts, and receive guidance in the form of messages, emojis and videos. On official Woebot sites it is claimed that the online application is as effective as in-person CBT. Even more:

it may be easier to share your stress with a non-judgmental nonentity than friends, family, or mental health professionals, especially if you’re a person who spends all your time online and has come to find personal interaction offensively intimate.

The above is key:

the program’s non-human disposition [is] a surprising asset in comforting millennials. [In the trials] Testers were more willing to disclose personal information to an artificially intelligent virtual therapist than they were to a living breathing clinician.

Many individuals (and especially men), reports Darcy, are ‘not able or ready to speak to another human’. Part of it is shame; the other part is fear of stigma, which has often been considered a barrier to entry into therapy. ‘There is no risk of managing impressions. [Robots] are not going to judge you’, explains Darcy. ‘We’ve removed the stigma by completely removing the human.’

But it goes further. Awkwardness is minimised by the interlocutor’s machine status; but does this other have a persona? As a piece on fastcompany.com describes it:

Darcy and her colleagues assigned a non-gender specific identity to their creation, which they infused with a dorky personality described as a mix between Kermit the Frog and Dr. Spock. But users quickly and repeatedly imprinted one on the digital pen pal. They referred to Woebot as ‘he’, ‘little dude’ and ‘friend’.

A taste of what is offered is signalled in the introduction, delivered by a therapist pictogram to first-time users: ‘I’ll teach you how to crush self-defeating thinking styles’. Darcy is committed to ‘mak[ing] great psychological tools radically accessible’. And as with earlier programs in which humans interacted with minimally sentient machines—the 1960s ELIZA experiments at MIT’s Artificial Intelligence Laboratory, for example—the official Woebot site claims users bond with their robot therapist.

Is there research on this claim? In a paper titled ‘The Digitalization of MH Support’, delivered to a conference held during the 2020 lockdowns, UK-based academic Ian Tucker presented his research into AI-driven chatbots providing a mental health service to members of a community-based peer-support hub. He reported that service users overwhelmingly valued the chatbots. The reasons given included that ‘support [was] available 24/7’; service users ‘did not feel judged’; chatbots delivered ‘automated empathy’; and, rather than feeling stressed about ‘being on call to others’, it was good with a machine because there was ‘no expectation of reciprocity’.

Although this was a small study, might these responses suggest a particular orientation to self and other? Respondents appear to be needy and vulnerable (‘round-the-clock help is what we want’); sensitive to embarrassment (‘it’s good the machine did not look down on me’); wanting to be listened to (‘the machine is good at understanding me’); and reluctant to be there for others (‘I’m stressed and fragile, so give me a break: real people want too much of me’). This last point is especially interesting, as the hub these young people were attending describes itself as a ‘peer support’ service.

Might the format and content of this automated service be playing a part in shaping the subject position, and the understanding of respective roles and responsibilities, of its consumers? Does the use of a CBT-fuelled chatbot encourage mutuality, camaraderie and a sense of accountability, or might it summon self-concern, vulnerability, entitlement and the valorisation of convenience? Far from the assumed benefits of peer support, this instance of technological mediation—CBT-with-chatbot—may well be fostering I-centred, non-accountable forms of selfhood and promoting interpersonal illiteracy.

Ironically, while CBT maintains the importance of avoiding absolute and static understandings of selfhood, it has facilitated forms of digital mental health care centred on dogmatic attachments to diagnostic identity, together with repetitive cognitive actions. ‘Hey, if I’m not feeling good, it must be because I’m failing to put myself first with sufficient focus and assertion’; ‘I’m failing to complete the homework set by my app e-therapist’; ‘I’m failing to maintain the lifestyle associated with effectively managing my diagnosis’. Mental health care in this form is encouraging recipients to see themselves as automata that can regulate their states by modifying their inputs—their thinking sequences.

Is this just a different form of relating?

How does this square with the realities of the offline world?

Pierre Bourdieu emphasised that attitudes inform practice. It is also true that what is practised requires less dedicated attention than what is only occasionally performed. As with driving a car or piloting oneself about, the more an activity is practised, the easier and more naturalised it becomes. In map reading, for example, one has to engage dynamically in a series of reflexive operations, a-conscious processes of scaling up and scaling down, in order to make sense of the correspondence between map and territory. The more this is done, the easier it is. Conversely, the more a person relies on Google Maps, on a voice or a simple visual signal, to navigate, the less competent—and, arguably, the less engaged with nature, or place—one becomes. The same can be said for navigating social situations: what is practised tends to become easier, and what is not practised tends to become more difficult and feel less natural.

In face-to-face relationships—in any real relationship—no single party is in control; the rules are often opaque, even invisible; and you can’t really ‘drop out’ (just exit) if you feel uncomfortable. Mostly this is the opposite of what happens online, and now, as online manners and relations become normalised, the offline world becomes an increasingly foreign territory. In this now-almost-foreign place, interactions tend to feel awkward, even unsafe; they lack predictability. ‘I feel it’s bumpy. I’m vulnerable, I feel trapped, confused.’ The real world becomes ‘inferior’, ‘strange’, ‘unwelcome’ and ‘unsafe’: face-to-face is intense, just too demanding.

Perhaps this critique lacks compassion. Service users don’t choose to have mental health issues. Indeed, they are beset by some problem, condition, syndrome, affliction, disorder, disease—identifying the appropriate form of address is itself a difficulty. However labelled, given one has been involuntarily visited by a ‘trouble’ of some kind, one is a victim of misfortune rather than guilty of an offence. But as Sarah Schulman writes in Conflict Is Not Abuse: Overstating Harm, Community Responsibility, and the Duty of Repair, we appear to be in a cultural moment in which we view our entitlement to compassion as requiring a certain self-pathologisation. We are culturally given to seeing ourselves as victims, in need of protection from psychically threatening forces and contexts. One of the issues here is that, as digital mental-health therapies become more popular, and given their apparent cost-effectiveness, all manner of personal experiences, even those that extend to political conflicts, could be recast as mental-health issues.

Woebot may sit at the outer rim of digital mental health technologies, but we can see the shape and trajectory it implies for understanding mental health, the care relation and, beyond that, the person generally. Where the CBT-AI combination sets up a relentlessly positive artificial other—an interlocutor that is never awkward or demanding, and becomes naturalised as ‘what I like’—real-world relationships are bound to be found unsatisfactory. At the least, it is likely that our capacity to ‘read’ and respond to the other diminishes where the structures and protocols of digital communication take over.

This is consistent with the aggregate effect of the use of online mental health products, which draw the user’s horizon of awareness away from any other and ever more tightly around ‘the me’—my sensitivities, my needs, my entitlements. Given that in these applications the vitality of personal accountability and the expectation of reciprocity typical of the in-person therapeutic setting are depleted, if not expressly dismissed, it would not be surprising to find that the other more or less disappears beyond the event horizon. The same dynamic can be observed in a range of fields where the online world lifts the individual, and persons generally, out of the conditions and challenges of embodied life and relationships. In this we might note that the conditions that have led to this reconstitution of the self–other relationship, and the profound implications should it be normalised, go well beyond the realm of technologised ‘mental health’.


About the author

Mark Furlong

Mark Furlong is an independent scholar and thinker-in-residence at the Bouverie Centre, La Trobe University.
