What happens when new artificial intelligence (AI) tools are integrated into organisations around the world?
For example, digital medicine promises to combine emerging and novel sources of data with new analysis techniques such as AI and machine learning to improve diagnosis, care delivery and condition management. But healthcare workers find themselves on the frontlines of figuring out new ways to care for patients through, with, and sometimes despite, their data. Paradoxically, the new data-intensive tasks required to make AI work are often treated as secondary in importance. Professor Gina Neff calls these tasks data work, and her team studied how data work is changing in Danish and US hospitals (Moller, Bossen, Pine, Nielsen and Neff, forthcoming in ACM Interactions).
Drawing on critical data studies and organisational ethnography, this talk will argue that while advances in AI have sparked scholarly and public attention to the challenges of designing technologies ethically, far less attention has been paid to the requirements for using them ethically. As a result, the hidden talents and secret logics that fuel successful AI projects are undervalued, and such projects continue to be seen as technological, not social, accomplishments.
In this talk we will examine publicly known “failures” of AI systems to show how this gap between design and use creates dangerous oversights, and to develop a framework for predicting where and how these oversights emerge. The resulting framework can help scholars and practitioners query AI tools to reveal whose goals are being achieved or promised, through what structured performance, using what division of labour, under whose control and at whose expense. In this way, data work becomes an analytical lens on the power of social institutions in shaping technologies-in-practice.