
Expert Comment: Ethics-washing and Us-washing
Professor Edward Harcourt, Director of the Institute for Ethics in AI (Interim) and Professor of Philosophy at the University of Oxford, argues that a serious ethical response to AI must do more than challenge superficial ethical labels: it must also recognise how easily criticism of AI can become a way of obscuring our own responsibility for the values and behaviours these systems embody.
Trickling into a linguistic channel already worn smooth by ‘green washing’ and before it of course ‘whitewashing’, ethics-washing means something like this: appearing to take some ethical anxiety about AI seriously and then tactfully dismissing it by applying a meaningless kitemark. It’s the AI ethics equivalent of getting a clean bill of health from a dishonest doctor.
An expression people will likely be less familiar with – or so I hope, because I think I have just invented it – is ‘us-washing’. This too is to be avoided, but because it’s a less familiar idea, some explanation is needed.
Us-washing is like ethics-washing except that what gets ethically whitewashed is not AI, but ourselves. It’s an important notion because it’s important to avoid an implication that’s all too easy to draw from my (justified) disdain for ethics-washing, that the bit of the world occupied by AI – a bit of the world that’s growing all the time – is ethically black and white: on one side big bad AI, and on the other good little us, with only ethics to defend ourselves. A serious AI ethics – indeed the serious ethical examination of anything – should not be in the business of lending respectability to these simplistic oppositions or to the idea that it is always the business of ethics to oppose AI, any more than it should be in the business of ethics-washing.
But why ‘us-washing’? The answer is that the simplistic picture which pits big bad AI against good little us ethically flatters us, and flatters us because to the extent that AI is bad – and surely it sometimes is and sometimes isn’t – its badness is our badness. Wagging our fingers at AI is thus a convenient way of whitewashing ourselves.
The mechanism at work here is the same as in scapegoating: the badness of the community is loaded symbolically onto the sacrificial animal so the community comes out ritually purified. Psychoanalysts find this mechanism all over the place, especially perhaps in practices of blaming: we project disquieting aspects of ourselves onto others and condemn them at a safe distance. That’s a lot easier than facing up to something disagreeable that’s within us.
Here is a more concrete example of what I have in mind. A few years back I was paying for something at a railway station and leant across – this bit of the story shows it must have been pre-COVID – to get my change. It was then that I spotted a little notice taped to the back of the cash register, ‘smile when you give change’. But what is a smile to order really worth? Routinizing smiles in this way nullifies the value of the human presence by reducing it to an instrument – supposedly – of customer satisfaction. But now don’t be surprised that the same species that invented these working practices also invented LLMs, indeed LLMs with human faces, that ‘ask’ you how you are feeling, or ‘commiserate’ with you when you have lost a relative. Humans – that is to say, we – did our very best with the production line to reduce our conspecifics to machines. No surprise, then, that we have now invented actual machines to do the same thing. (Indeed, that might be an improvement, as long as the humans can be found something else to do.)
Nor should this line of thought be mistaken for a counterblast against greedy producers, in which consumers are the innocents. That would be us-washing all over again.
It’s still a puzzle to me why people ‘chat’ to LLMs, because the very fact that LLMs are machines should mean that their ability to ‘listen’ for hours on end isn’t a sign of patience, and so has nothing like the value a patient human being would have.
But our appetite as consumers for instrumentalization – that is, for reducing something to its ability to gratify our desires – which is so vividly on display here, has been well honed already by pre-digital consumerism, starting with people who are all smiles at the checkout but also roping in the Lonely Hearts column – remember those? – in which people recite a checklist of what they’re after in their perfect romantic partner.
The latest AI-driven iterations of these things – whether chatbots, dating websites or some third thing – should be no surprise at all. If we are uncomfortable with those inclinations within us that lead us to instrumentalize our fellow human beings, we’re surely not wrong.
But we should rein in our inclination always to point the finger at AI and own those less flattering features of ourselves.
For more information about this story or republishing this content, please contact [email protected]