
Expert Comment: What should we do about chatbots?
Professor Edward Harcourt, Interim Director of the Institute for Ethics in AI and Professor of Philosophy at the University of Oxford, reflects on what our relationships with chatbots reveal about what is distinctively human. He argues that we must educate users to see these systems not as new members of the human community, but as tools that risk reducing both therapy and relationships to one-dimensional, instrumental exchanges.
Specialist therapy chatbots may help, trained on expert advice rather than on whatever’s out there on the web. But arguably we need more: to educate users of this technology – especially young users – about what it is and, more importantly, what it isn’t.
A striking fact about us – though one we rarely call to mind – is that we don’t see each other as food, and not just in the way most vegetarians don’t see non-human animals as food: as the philosopher Cora Diamond pointed out years ago (Philosophy 53:206 (1978), pp. 465–79), our attitude to people in this connection is of an entirely different order.
And there are so many other facts which, as Diamond put it, ‘go to build our idea of a human being’: the fact that we give human beings names rather than numbers, and not just any old names but names that locate us in further networks of significance (think of ancestry.com). That we treat even dead human bodies with special solemnity; that we have infinitely mobile faces in which we can read second by second changes of mind; or that we bring to everything we do the knowledge that we have all been born, and are going to die.
This complex of natural dispositions comes first, the concept of the human second.
In this complex lies the link to AI ethics. The very term ‘AI’ smuggles in an assumption: that there is one thing – intelligence – possessed by both humans and machines.
One of the most exciting things about AI ethics is the way AI prompts us to bring to awareness what’s unique to ourselves. And isn’t it obvious – as soon as we ask the question – that the complex of natural dispositions that ‘goes to build our idea of a human being’ does not create a fellowship between us and artificial entities, however good they are at producing strings of words, mimicking facial expressions and so on? (Animals, fascinatingly, seem to be half in and half out of this fellowship. But that’s a topic for another day.)
But not so fast: as recent tragic cases involving chatbots remind us, many teenagers seem to find it easier to confide in chatbots than they do in real live human therapists, perhaps even than in friends. Why?
Well, there are no waiting lists for chatbots. You can’t try their patience. You can’t fret that the bot talks to other clients, so maybe you aren’t really that special; fret that it has a whole life – of loved ones, colleagues, outside interests – that is invisible to you, and in which you have no place; that it might cancel on you because something more important has come up; or that it might die, indeed die before your therapy has reached a meaningful conclusion.
And after talking to a chatbot some people may well end up less troubled than before.
Doesn’t that show that Diamond’s observations about our tangle of hard-wired dispositions are all deeply contingent, and that now – thanks to advances in AI – machines are ripe to be admitted to the fellowship we might previously have thought was humans-only?
The answer is no. What’s going on with teenagers and chatbots is instrumentalization: we lift a desire at one end of a human relationship – to be free of anxiety, say – out of a more complex whole, and at the other end substitute something whose whole significance reduces to its power to gratify that desire.
The opportunity to instrumentalize distinctively therapeutic relationships is new. But instrumentalization is not new at all: it’s also, familiarly, what’s going on with pornography, and pornography is as old as the hills.
Now there’s no pretending pornography doesn’t ‘work’, if that’s the word for it: if it didn’t, it would have ceased to exist centuries ago. But users of pornography surely know that they are not on one end of a real human relationship: many turn to it because they can’t enjoy real human relationships, or because what they want is something simpler than real human relationships can readily provide, or both.
That’s a significant contrast with chatbots, where many people seem – so far – unable to see that instrumentalization is what’s going on, so much so that the popularity of chatbots has prompted excited speculation about new, artificial members of the human fellowship, when the very one-dimensionality of our relationship with them shows that that’s exactly what they aren’t.
So what would happen if we routinely asked young people – as part of their school education, perhaps – some questions: Do you think a chatbot is making time to see you? The chatbot doesn’t grab your hand to comfort you – is that because it’s frosty, or (alternatively) because it’s observing proper boundaries between you? When the chatbot talks with you for three hours at a stretch without a change of mood, is that because it is very patient? A chatbot can’t skip a session because one of its children is ill, so can it empathize with you in the same way as a creature that shares your vulnerability?
My guess is that if we designed the questions right, sooner or later the answer we’d get to all of them would be ‘no’.
And if, despite the negative answers, young people still chose chatbots over real people – which they might well do – that would be because instrumentalization is sometimes a convenience, not – thankfully – because they had been taken in by ‘Seemingly Conscious AI’.
Read an extended version of Professor Harcourt's Expert Comment on the Institute for Ethics in AI blog.