Taking Responsibility for AI

Speaker
Dr Max Kiener
Event date
Event time
13:00 - 14:00
Venue
Institute for Ethics in AI (Faculty of Philosophy)
Oxford
OX2 6GG
Venue details

Hybrid format: a small number of guests may attend in person, with everyone else joining online

Event type
Lectures and seminars
Event cost
Free
Disabled access?
Yes
Booking required
Required

Abstract: Artificial intelligence (AI) increasingly executes tasks that previously only humans could do, such as driving a car, fighting in war, or performing a medical operation.

However, because the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for certain AI-caused outcomes, resulting in a 'responsibility gap'.

In this presentation, I assume, for the sake of argument, that at least some of the most sophisticated AI systems do indeed create responsibility gaps, and I ask whether we can bridge these gaps at will, viz. whether certain people can take responsibility for AI-caused harm simply by communicating the intention to do so, just as people can give permission for something (via consent) simply by communicating the intention to do so. So understood, taking responsibility would be a genuine normative power.

I first discuss and reject the proposal of Champagne and Tonkens, who advocate taking prospective liability. On this view, a military commander can and must, ahead of time, accept liability to blame and punishment for any harm caused by autonomous weapon systems under her command. I then defend my own proposal of taking retrospective answerability, viz. the view that people can make themselves morally answerable for harm caused by AI systems, not only ahead of time but also after harm has already occurred.