Making artificial intelligence ethical

Dr Paula Boddington is a research associate at Oxford’s Department of Computer Science, specialising in developing codes of ethics for artificial intelligence.

What drives you in your field?

I find the philosophical and ethical questions posed by developments in artificial intelligence fascinating.

There are visions of the development of AI that press us to ask questions about the limits and basis of our values – if AI radically changes the nature of work, for instance, perhaps even abolishing it for many, we have to reappraise what we do and don’t value about work, which in turn raises questions about why we value any activity. Such questions about extending human intelligence and agency with AI are, in fact, homing in on the most fundamental questions of philosophy: the nature of human beings, our place in the world, and our ultimate sources of value. For me, working in this field is like finding a philosophical Shangri-La.

What are the biggest challenges facing the field?

My work is focused on the implications of the technology, so the challenges include making sure that the power of AI does not simply amplify existing problems – such as our own biases. There’s also a big issue in how we apply AI to problems. AI can be very powerful indeed in narrow areas. Whenever such narrow focus happens, there’s a danger that context will be missed – that we’ll have found a solution and then make all our problems fit it. It’s like Abraham Maslow said: when all you have is a hammer, everything begins to look like a nail.

There are certain tasks at which the AI we have now and in the near future can excel, so we must make sure that as we develop particular applications, we don’t find that our picture of the world starts to mould itself to what we can achieve with AI, especially given the hype that periodically surrounds it. That’s one of the reasons why we need as many people as possible involved in developing and applying AI, and thinking creatively about how it can best be used, and what else we need to achieve real benefits.

Why do you think it is important to encourage more women into the field?

Yes, it’s important that both women and men work in AI, but more than this, it’s important that there are people with diverse experiences and varied opinions and viewpoints in AI, for a number of reasons.

We need to develop technology that actually caters to people’s needs and that, in practical applications, will really benefit human beings. Tailoring such tech is complex, and it needs really good design that is sensitive to the context of a myriad of different circumstances.

What research are you most proud of?

I try not to really ‘do’ being proud of things; I’ve always been taught that ‘pride comes before a fall’. But I’m most pleased to be involved in work that might have a practical impact on people’s lives. For instance, I’m also working right now on a project based at Cardiff University, collaborating with a group of medical sociologists and others, on the care of people living with dementia. This might seem a million miles away from AI and the impact of new technology, but in fact the philosophical and ethical issues overlap considerably – how do we translate abstract ideas such as respect for persons, and humane, dignified care, into making a concrete difference to the lives of those, such as people living with dementia, who face challenges like difficulties in communicating?

This work is aimed at producing practical recommendations to improve lives. We’ve just started a project looking at continence care – a world away from the glamour of AI, but essential work. And I see a great opportunity for technology to address some important and common problems: for example, working towards better detection of pain, which is greatly under-treated in people with dementia, or assisting with access to fluids and to the toilet, which is often a problem on hospital wards. In the end, it’s this kind of careful, detailed ethnographic work that my colleagues in Cardiff are carrying out, examining what’s really going on and what’s needed, that needs to be married up with developments in tech in order to produce technology that will really benefit people.

Are there any AI research developments that excite you or that you are particularly interested in?

I’m particularly interested in the possibilities for AI in medicine, such as helping with disease diagnosis and the interpretation of medical images, and also its deployment in applications such as mobile technologies for health management. With these developments, people are increasingly able to monitor and learn about their own health conditions. These are particularly exciting for use in remote areas or where medical staff are in short supply, but also simply for increasing the knowledge and control that individuals have over their own conditions, and hence over their own wellbeing.

There are, quite understandably, fears that AI will take away jobs, but in the context of medicine, I think that’s unlikely. Think about how overstretched medical staff are at the moment. Helping them to make faster, more accurate diagnoses, tailored to individuals, will not only help patients; it should, if applied thoughtfully, also help to relieve time pressures and other stressors on doctors.

The evidence so far seems to indicate that AI works best as an addition to the skills of medical practitioners, not as a replacement for them. With all these developments, however, we need to keep looking very carefully at how we can get the best out of such technologies. For example, the early diagnosis of disease can be a big advantage in some conditions – but not such an advantage in others. In any context, and medicine is a good example of this, information is just information. It’s not knowledge, and it’s certainly not wisdom. That’s where the human skills of medical practitioners will always have a vital role.

What drew you towards a career in science?

Our whole family was always really excited about science. As children, my siblings and I were always glued to the television whenever Tomorrow’s World was on.
I came to dislike school a lot and used to bunk off and go to the library to read philosophy instead. I was really interested in how the arts, social sciences and STEM worked together.

I’ve always been focused on applying abstract ideas to concrete reality, and on understanding, say, the science behind developments in genomics. From my work on ethical questions in medical technology, it was a short step to working on issues in artificial intelligence.

Who inspires you?

Of the many possible answers, I’d have to say members of my family. My father always told me that I could do anything I wanted in life. His own mother had started out life as the illegitimate daughter of a Victorian barmaid, brought up in Tiger Bay in Cardiff, and she became the headmistress of a girls’ grammar school. So Dad had a great belief in women’s abilities. On my Mum’s side, her grandmother was the first woman in Cardiff to have her own alcohol licence, and she ran her own pub, also in Tiger Bay. She had six children, and during the Depression, when work was hard to find, she started doing pub lunches to provide an income for them – the family always claims that she invented the pub lunch. Whether or not that’s strictly true, ‘get an education, get an education’ was like a mantra breathed in the air: the idea that education was a key to success, that family was crucial too, and that yes, you can get around obstacles and make a go of things.

Dr Boddington is the author of the book Towards a Code of Ethics for Artificial Intelligence.
