Emerging technologies such as artificial intelligence (AI) promise significant benefits, and researchers at the Oxford Internet Institute (OII) are helping to strengthen their governance through research, public engagement, and the development of new tools and approaches.
‘But such technologies can breach privacy in their use of historical data. And algorithms – the rules used to analyse information and make decisions – can make biased or discriminatory calculations,’ explains OII’s Professor Sandra Wachter. ‘We wanted to explore how to strengthen governance of emerging technologies across a range of issues.’
The OII’s multi-disciplinary Governance of Emerging Technologies (GET) research programme investigates these and other innovative technologies from the perspectives of law, ethics, and computer science, asking what protections already exist, what protections people should be entitled to, and how technical solutions can put those protections in place.
In recent years the team has created a new method for explaining how highly complex and opaque AI systems make critical decisions that fundamentally affect people’s lives and opportunities. So-called ‘counterfactual explanations’ provide simple information to people affected by AI about how their cases were decided, explaining what would have needed to be different for them to have received a different decision.
‘If someone has been turned down for a loan or a job, for instance, they want to understand why,’ says Wachter. ‘A counterfactual explanation would indicate, for example: you didn’t get the loan because your income is too low; had your income been £10,000 higher, you would have got the loan. It helps explain how a particular decision was reached and can help create trust and accountability in the decision-making process.’
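The loan example above can be sketched in code. This is a minimal illustration, not the team’s actual method: the decision model, thresholds, and search procedure below are all hypothetical, and real counterfactual-explanation techniques search over many features at once rather than stepping through a single one.

```python
# Hypothetical loan-scoring model; real systems are far more complex and opaque.
def loan_decision(income, debt):
    # Approve if a simple affordability score clears a fixed threshold.
    return income - 2 * debt >= 30_000

def counterfactual_income(income, debt, step=500, max_extra=100_000):
    """Find the smallest income increase that would flip a rejection to approval."""
    if loan_decision(income, debt):
        return 0  # already approved, no counterfactual needed
    extra = step
    while extra <= max_extra:
        if loan_decision(income + extra, debt):
            return extra  # minimal change found: this is the counterfactual
        extra += step
    return None  # no approval within the searched range

# An applicant with £25,000 income and £5,000 debt is rejected; the
# counterfactual tells them what income would have changed the outcome.
needed = counterfactual_income(25_000, 5_000)
print(f"Had your income been £{needed:,} higher, you would have got the loan.")
```

The output mirrors the explanation in the quote: a single, concrete statement of what would have needed to differ, without exposing the model’s internals.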
The method was one of the first technical solutions to offer easily understandable explanations of AI decisions. Google, IBM, Accenture and Vodafone South Africa have all acknowledged the team’s work in the development of their products to provide explanations of automated decisions.
In other work, the team has:
- Found that ‘inferred personal data’, which is the fundamental building block of many online platforms, services and businesses, is given little protection by existing privacy and data protection law. They proposed a ‘right to reasonable inferences’ in response which has been cited in legal commentaries and policy proposals on the EU General Data Protection Regulation and UK Data Protection Act.
- Shown that most existing measures of AI fairness hide and exacerbate pre-existing social biases and inequalities, which has influenced reforms to EU non-discrimination law.
- Developed a legally compliant test to measure discrimination in AI which has influenced the EU’s Artificial Intelligence Act and is now used by Amazon.
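The article does not spell out how the discrimination test works, but a common approach in this family of metrics is to compare how a protected group is represented among rejected versus accepted outcomes, conditioned on a legitimate explanatory factor (so that, say, differences in qualification levels are not mistaken for discrimination). The sketch below illustrates that general idea with invented data; the group names, field names, and formula details are assumptions for illustration, not the team’s exact formulation.

```python
from collections import defaultdict

def demographic_disparity(records, group):
    """Share of `group` among rejections minus its share among acceptances."""
    rejected = [r for r in records if not r["accepted"]]
    accepted = [r for r in records if r["accepted"]]
    if not rejected or not accepted:
        return 0.0  # no comparison possible within this subset
    p_rej = sum(r["group"] == group for r in rejected) / len(rejected)
    p_acc = sum(r["group"] == group for r in accepted) / len(accepted)
    return p_rej - p_acc  # positive: group over-represented among rejections

def conditional_demographic_disparity(records, group, stratum_key):
    """Disparity averaged over strata of a legitimate factor, weighted by size."""
    strata = defaultdict(list)
    for r in records:
        strata[r[stratum_key]].append(r)
    n = len(records)
    return sum(len(s) / n * demographic_disparity(s, group)
               for s in strata.values())

# Invented applicant data: group, qualification level, and decision outcome.
records = [
    {"group": "A", "qualification": "high", "accepted": True},
    {"group": "A", "qualification": "high", "accepted": True},
    {"group": "B", "qualification": "high", "accepted": True},
    {"group": "B", "qualification": "low",  "accepted": False},
    {"group": "A", "qualification": "low",  "accepted": False},
    {"group": "A", "qualification": "low",  "accepted": True},
]
print(conditional_demographic_disparity(records, "B", "qualification"))
```

Conditioning on the legitimate factor is what makes a metric like this defensible under non-discrimination law: it separates disparities that a permissible factor explains from those it does not.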
The OII team has shared its research and demonstrated the practical importance of its work by publishing in prestigious academic journals, responding to calls for evidence from the UK government, and making direct contributions to reports from bodies such as the Centre for Data Ethics and Innovation. They have also delivered numerous talks and presentations and generated significant media coverage in the UK and international press.
‘There is a real appetite for greater accountability and user protections in emerging technologies, and we are delighted to see industry and public bodies responding so positively to our work,’ says OII’s Professor Brent Mittelstadt. ‘Our approaches have already been implemented in a number of public tools and commercial products and we hope our research will continue to influence the development and deployment of AI and algorithmic systems.’
‘The potential of AI and machine learning is huge, from virtual reality to autonomous vehicles and the Internet of Things, and the technologies are changing all the time,’ he continues. ‘But people still expect their rights to be protected. Effective governance of emerging technologies is vital to protect the interests of society, businesses, public bodies and, above all, citizens, and we are delighted that our blue-sky academic research is finding practical application in these areas.’
Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute
Brent Mittelstadt is Director of Research, Associate Professor and Senior Research Fellow at the Oxford Internet Institute
GET is coordinated by Professor Brent Mittelstadt, Professor Sandra Wachter and Dr Chris Russell.
Funders: British Academy, EPSRC, The Luminate Group, The Miami Foundation, Alfred P. Sloan Foundation, Wellcome Trust, Department of Health and Social Care via NHSX.