Automating the all-about-me: On Apple’s new Mental Health App

Let go. No longer do you have to carry the burden on your own. Progress now allows you—nay, summons you—to delegate responsibility for your mental health to a caring, permanently by-your-side, unbiased companion. Relax. You can be told what moves you, what brings you down, what leaves you feeling positive. Provided with the crucial data only a loving machine can forward to you, your self-care is in safe hands.

This spiel was delivered on behalf of Apple’s new mental health app by Nine News’s ‘lifestyle and health’ reporter Sarah Berry in a recent feature in The Sydney Morning Herald. Cheerfully informing the reader that the app is a ‘natural extension of the health features [Apple] released a decade ago’, Berry assures us that she’d profitably ‘spent the past few weeks testing the features, which allow you to track your state of mind, logging daily moods—from very unpleasant to very pleasant—and considering what might be affecting the way you feel’.

Only a cynic could doubt the warrant of Berry’s first-person verification. That understood, might some hands-off, more objectively placed authority endorse the idea that Apple’s easy-to-use app helps better your mental health? Fortunately, the writer has this possibility in the bag: the feature heavily quotes Dr. Lauren Cheung from Apple’s—I kid you not—‘clinical team’. This expert is quoted as saying that diagnoses of depression and anxiety have increased 25 per cent worldwide in the last three years, and that in response to this alarming rise in pathology, ‘(w)e are designing tools that really help give our users insights into both and the connections between the two’.

So much is problematic in this advertorial that it is difficult to know where to begin. For starters, what is meant when Berry uses the term ‘mental health’ in her feature? A beginning point is that mental health—whatever that term may mean—should not be conflated with an absence of diagnosed mental illness. The feature does not acknowledge the crucial complication that the term ‘mental health’ is highly ambiguous in its current usage, as is clear in the following three cases: in everyday speech many people use it as a synonym for mood, as in ‘how are you feeling right now?’; in policy discussions it has come to be used as a polite euphemism for ‘serious mental illness’, e.g. the primary focus of the recent Mental Health Royal Commission was the crisis in the system designed to serve those with serious mental ill-health; and in philosophic discussions it has a broader ambit, which is closer to larger constructs such as ‘quality of life’ or ‘overall life appraisal’. At a different level of comment, once narrowly technical in its province, the term has become mainstreamed and accrued a certain status and moment, so that it has a cachet imbued with legitimacy, even reverence. Not to be gainsaid or questioned, ‘mental health’ is a private shibboleth and a public policy priority. Whatever it is, it has standing and is a signifier.

Perhaps associated with the significance that Berry attaches to the generic ‘mental health’, the feature pays no attention to the inevitable tension between the care and control dimensions that the app raises. This elision is most apparent in the blithe statement that ‘if someone logs a number of negative moods over the course of a month, the Apple feature prompts them to take an assessment (the Generalised Anxiety Disorder or the Patient Health Questionnaires) and seek help. The app also directs users to local help services (in Australia, it’s Beyond Blue) and provides fact sheets about mental wellbeing, as well as tips for managing it’. Such a practice requires comment, yet none is offered.

Distinct from issues relating to language and coercion, in the feature there is absolutely no discussion of how the app ‘reads’ the user’s mental state. In this version the reader—the potential app user—is assumed to be abjectly credulous. Just as your heart rate and blood pressure are properties that can be objectively measured, so too, it is implied, is the state of your inner self. Take it from us. We know what is at issue and what is best for you.

The somewhat longer online version of Berry’s feature does offer a limited discussion of the app’s method. ‘Mood tracking’ is the term used for the approach it (and many other apps) use to assess the subject’s mental state. In this version, the reader is assured by Dr. Cheung that ‘(w)e spent a lot of time really thinking about the design of the images [corresponding to different moods] because those visuals need to ensure that the unpleasant moods feel just as acceptable to a user as the pleasant ones’. Berry adds, ‘It takes less than a minute to log your mood each day, sliding along the bar and clicking on the relevant mood. Neutral is reflected by emanating blue circles, very unpleasant emanates a violet octagram while very pleasant emanates an orange flower. Prompts then ask you to click on the word that best describes the feeling and consider what, from a list of options, is having the biggest impact’.

The premise is that there is a universal character to human response. This starting point ignores the fact that personal history, family of origin, generational status, gender, class and culture, amongst a larger suite of variables, mediate the complex relationship between what is presented to us and how this event is processed. Narrowing this concern to the app in question, one does not need to be Erasmus to know that there is no linear relationship between feelings, moods and mental health/ill-health. More broadly, in the emerging field of EDR (emotion detection and recognition)—a rubric inclusive of ‘emotion AI’, ‘affective computing’ and ‘tone analysis’—there can be no straightforward, incontrovertible way to interpret voice quality, facial expression, gestures, eye movements, dermal states, or any combination of psycho-behavioural-physiological measures, so as to score the subject against a nominated category of analysis such as honesty or sadness.

Interpreting the connections between meaning and referent may be necessarily problematic, but this has not dissuaded companies such as Dynatrace from claiming its machine can discern seven distinct emotions: frustration, impoliteness, sadness, sympathy, politeness, satisfaction, excitement. Perhaps this might be considered an outlandish claim, but it pales in comparison to the mob that reckons its methodology can ‘(d)ifferentiate 37 kinds of facial movement that are recognized as conveying distinct meanings’. Caveat emptor. The emerging field which sets out to recognise, interpret and simulate human emotions may or may not be a pseudo-science, but, more than any other specific qualm, it is a jostling field that should be critiqued as a commercial rhizome.

Some earlier references come to mind when thinking about mood tracking and the interpretation of emotions by machines. The mythical Voight-Kampff test in the film Blade Runner was said to be able to distinguish humans from replicants; in the movie, measures of heart rate, eye movement and galvanic response are rated in response to certain verbal cues to discern if the subject can experience, or not experience, empathy. Earlier, in the mental health field, ‘high expressed emotion’—a construct made up of critical comments and emotional tone that observers scored with respect to family interaction—was theorised, and sometimes critiqued, as a predictor of relapse in schizophrenia. Currently, what is more at issue is that the interface between the human and the contrived, between the natural and its representation, is increasingly entangled. For example, Amazon recently marketed a feature on its Alexa platform that it termed ‘Hunches’, which sought to track, and then pre-empt, certain user moods. Such practices are far from unproblematic. Judy Wajcman, a noted UK sociologist, and her colleagues are investigating such technologies from a feminist perspective. In a recent piece, Wajcman wrote that Hunches ‘mistake(s) the appearance of care with real empathy and genuine personal interaction’. This comment, more generally, leads into considering the slippery question of what is real, irreal and not real in the realm of the human.

A recent controversy brings this question into focus. In 2018, two advertisements were aired during the high-profile Super Bowl broadcast in the United States. According to the USA Today Ad Meter, one of these—an ad that used celebrity voices to substitute for Amazon’s Alexa assistant—was rated more effective than its rival, an ad that spruiked Diet Coke’s Groove using a model who danced oddly after drinking a can of Diet Coke Twisted Mango. This finding was challenged by Paul Zak, who leads a research team that studies the consumer’s ‘neurologic immersion’, a method that assesses levels of emotional engagement as quantified by changes in oxytocin levels. On that calculus, Zak’s team concluded that the Groove ad had the better outcome. According to Zak, there is ‘zero correlation’ between what people say and how they subconsciously feel. Asked to explain such divergent results, Zak’s reply was emphatic: ‘People lie, their brains don’t’.

This assertion raises the radical contention that machines may know more about us than we know about ourselves. This possibility brings us back to the more prosaic case of Apple’s mood tracker. Could it be that, rather than a presumptuous arrogation, the key claim made by the developers of this app—that it reliably assesses our mood and mental health—is more credible than a critical first pass might suggest? The answer here depends less on the state of sophistication of the machinery being used than it does on how we posit the self and self-awareness, how the unconscious is defined, and a whole heap of other highly theoretical questions. This said, some of us are more ready to hand over responsibility for our inner lives to an ‘it’ than others.

A narrower, and perhaps more topical, concern is the ethics at the centre of mood tracking/mental health apps such as Apple’s. Such technologies exclusively place the individual user at the centre of their calculus: how does this or that circumstance influence my feelings, my mood, my mental health? If the circumstance is pleasing, or if it is antagonistic, this is what is counted. Such an it’s-all-about-me criterion atomises thinking and is anti-social in its operation. Hey, it’s not good for me to be stressed. My self-care is the issue. In so far as this form of logic has hegemony, there is a rationale—a scientifically valid and publicly endorsed alibi—for me to not put myself out for you and your interests. This conclusion can be recycled in one’s inner dialogue, and recited to the other, with a straight face. Who can argue with the truth that mental health is a legitimate concern?

About the author

Mark Furlong

Mark Furlong is an independent scholar, and thinker-in-residence at the Bouverie Centre, La Trobe University.
