Features
In the final part of our women in AI series, Dr Vidya Narayanan, a researcher at the Oxford Internet Institute and post-doctoral researcher on the Computational Propaganda Research Project, discusses her work understanding the effects of technology and social media on political processes in the United States and in the UK.
What is AI (In your own personal view)?
In my view, Artificial Intelligence is the ability of computer programs to make independent decisions with little or no human intervention and to adapt to new situations.
AI has been a subject for intense speculation and rigorous academic study since the 1950s, when Alan Turing asked if machines could think. While the theory of AI has seen continuous development since then, it’s only in the past decade, with our access to vast stores of data and the advent of graphics processing units that can process this data in parallel, that applications of AI seem finally ready to step out of the pages of science fiction books and have a profound impact on our everyday lives.
What are the biggest AI misconceptions that you have encountered?
As with any relatively new and powerful technology, AI has the power to split opinion among technocrats, policy makers and the general public. To me, it feels as though we still lack the evidence to categorise specific notions about AI as misconceptions. There is little consensus among academics and other AI researchers on a timeline for the development of Artificial General Intelligence (AGI) – a level of intelligence that allows a computer to handle any intellectual task that a human can. The onus is on us, as academics, to continually assess the state of the art in AI, communicate these findings in an accessible manner, and build an informed consensus about AI among the general public.
What do you think can be done to encourage more women into AI and what has your own personal experience been in the field?
This is a very important question and one that concerns me vitally, both as a woman and as a mother. The place to start is at school, working towards creating an environment where girls can interact with technology in a peer group setting. It’s vital to encourage them to think of themselves as both consumers and developers of technology. I have been very fortunate to have worked with very supportive colleagues and mentors and have had an extremely positive experience as a woman in AI. On a personal level, I would like to support endeavours to create such conducive work environments for women across the globe, particularly in technology.
What drew you towards a career in AI?
Back in the day when I was a graduate student at Pennsylvania State University in the US, our research team was working on decision-making problems in distributed systems that couldn’t be solved by conventional optimisation techniques. The paradigm of multi-agent systems that use reinforcement learning to make decisions in dynamic and uncertain environments was among the various approaches we considered. This was my introduction to Artificial Intelligence, and I moved to the UK to pursue a PhD in Computer Science in the Intelligence, Agents and Multimedia lab at Southampton, which was doing pioneering work in the area. Since then, I have been acutely aware of the immense potential of AI to kick-start a new technological revolution and change our lives.
As a scientist, I wished to play an active part in creating some of these methods, and this drew me towards a career in AI. More recently, I have been motivated by a need to use AI for social good and to harness its capacity to solve some of the most pressing problems in the world: the equitable sharing of resources across the globe, the impact of social media interactions on democratic processes, and the effect of private companies acquiring vast amounts of personal data, with the potential for this to be misused by political campaigners – particularly in fragile democracies.
What research are you most proud of?
I returned to academia in November 2017 after a career break to care for my young children. Since then I have joined the Computational Propaganda project, exploring the role of social networks in spreading fake news and influencing electoral processes around the world.
My colleagues and I have been studying the effects of technology and social media on political processes both in the US and in the UK. In particular we have looked at bot activity on Twitter during the Brexit referendum and the spread of junk news among audience groups on both Twitter and Facebook. This is a fascinating area that brings together the disciplines of Political Science, Sociology and Computer Science, to strengthen democratic processes. I’m very motivated to extend this study by creating and using state of the art technologies to study political polarisation, junk news spread on social media platforms and misinformation campaigns by state and non-state actors to influence elections around the world.
What are the biggest challenges facing the field?
The biggest challenges for the field are to address the well-documented risks of AI: disruption to jobs, wealth creation for a few individuals that widens social and economic divides, and the fact that most innovation in AI is driven by private companies. I also think there is a need for policymakers to regulate the development of AI, so that we can make algorithms accountable and rid automated decision-making systems of inherent biases against sub-populations. We need to harness the power of AI to create egalitarian societies around the world.
What motivates you most in your research?
I enjoy the challenge of using mathematical techniques, computer science and data sets to find solutions to real-life problems. I’m acutely conscious of the fact that while some parts of the world are on the brink of a ‘Fourth Industrial Revolution’, there are others that haven’t benefited even from the first industrial revolution and lack access to food, water and electricity. My primary motivation, as a computer scientist at Oxford University, is to build AI-powered applications that address these issues by developing fundamental advances in the theory of AI.
Who inspires you?
There are a number of people who inspire me: Ada Lovelace, Bertrand Russell, Martin Luther King and Alan Turing. I also loved the film Hidden Figures; the true story of Katherine Johnson, Dorothy Vaughan and Mary Jackson, three brilliant African-American women who worked at NASA and played a key role in the space race, getting John Glenn to orbit the Earth. It has a brilliant cast (Janelle Monae, Taraji P. Henson and Octavia Spencer) and is incredibly inspirational – particularly for women in STEM.
FIND OUT MORE ABOUT THE COMPUTATIONAL PROPAGANDA RESEARCH PROJECT
Professors Peter Brown and Rafal Bogacz in the Nuffield Department of Clinical Neurosciences describe their research team’s discovery that a certain ‘hold your horses’ function in decision-making occurs in an extremely brief window of time, and involves bursts of a specific type of activity in a brain centre known as the subthalamic nucleus.
Are you a decider or a ditherer? When making decisions, we not only have to decide what to choose, but also how much time to spend making the decision. How long should we spend collecting relevant information to inform our choice?
Imagine, for example, that you are choosing which meal to pick up during a lunch break. Dwelling over this decision might mean that you miss out on valuable time that could be spent chatting with friends, whereas quickly choosing a menu option without proper thought might mean that you overlook a better alternative.
It was already thought that the subthalamic nucleus might play an important role in balancing the opposing demands of speed and accuracy during decision-making. Scientists suspected that it helped us delay decisions for the optimum amount of time, to enable the best choice to be made in any given situation. But our own research reveals that this part of the brain gets involved in adjusting these ‘decision thresholds’ at a very particular and brief moment during the process of deliberation.
The aim of our new study was to probe the mechanisms by which the subthalamic nucleus influences decision-making. We were able to do this using deep brain stimulation in Parkinson’s patients (an intervention which has been shown to be very successful in alleviating some of the movement symptoms related to this condition).
The research team asked ten patients to decide whether a cloud of moving dots appeared to move to the left or to the right on a computer screen. The percentage of dots moving coherently to one direction was either high or low, and participants were instructed to respond as fast or as accurately as possible. If it was difficult to determine the answer (i.e. the percentage of dots moving coherently in one direction was low), the response time was longer.
Participants responded more quickly when deep brain stimulation was applied during the difficult tasks. But this effect was confined to an incredibly brief moment during the time that people were trying to decide how to respond. Remarkably, if stimulation was applied later than 500 milliseconds after the task started, it had no influence at all on response time, even though most responses during difficult tasks were made later than 500 milliseconds into the task.
This result implies that deep brain stimulation interfered with a very particular time-limited process of setting the decision threshold to the required level according to task difficulty. This supports existing hypotheses that the decision threshold is set according to the difficulty of the task in a single abrupt change, and depending on information gathered in an initial period. This raises the possibility that it is this specific time-related mechanism that is dependent on the subthalamic nucleus.
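The speed-accuracy trade-off that such a decision threshold controls can be illustrated with a toy simulation of the drift-diffusion framework commonly used to model dot-motion tasks. This is a minimal sketch, not the study’s analysis: all parameter values are invented for illustration.

```python
import random

def simulate_trial(drift, threshold, dt=0.001, noise=1.0, rng=None):
    """Accumulate noisy evidence until it crosses +threshold (correct
    choice) or -threshold (error); return (choice, decision_time_seconds)."""
    rng = rng or random.Random()
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return (1 if evidence > 0 else -1), t

def summarise(drift, threshold, n=500, seed=0):
    """Mean accuracy and mean decision time over n simulated trials."""
    rng = random.Random(seed)
    trials = [simulate_trial(drift, threshold, rng=rng) for _ in range(n)]
    accuracy = sum(1 for choice, _ in trials if choice == 1) / n
    mean_rt = sum(t for _, t in trials) / n
    return accuracy, mean_rt

# A high threshold buys accuracy at the cost of time; a low threshold
# (one hypothesised effect of stimulation during difficult trials)
# yields faster but less accurate responses.
low_acc, low_rt = summarise(drift=1.0, threshold=0.5)
high_acc, high_rt = summarise(drift=1.0, threshold=1.5)
```

Raising the threshold in this sketch reliably slows responses and raises accuracy, which is the trade-off the subthalamic nucleus is thought to help calibrate.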
Our observations add to the converging evidence that decision thresholds are adjusted through dynamic modulations of cortico-basal ganglia networks.
Words by Jacqueline Pumphrey, Peter Brown and Rafal Bogacz, NDCN.
Dr Paula Boddington is a research associate at Oxford’s Department of Computer Science, specialising in developing codes of ethics for artificial intelligence.
What drives you in your field?
I find the philosophical and ethical questions posed by developments in artificial intelligence fascinating.
There are visions of the development of AI that press us to ask questions about the limits and basis of our values – if AI radically changes the nature of work, for instance, perhaps even abolishing it for many, we have to reappraise what we do and don’t value about work, which raises questions about why we value any activity. Such questions about extending human intelligence and agency with AI are, in fact, homing in on the most fundamental questions of philosophy: the nature of human beings, our place in the world and our ultimate sources of value. For me, working in this field is like finding a philosophical Shangri-La.
What are the biggest challenges facing the field?
My work is focused on the implications of the technology, so the challenges include making sure that the power of AI does not simply amplify existing problems – such as our biases. There’s also a big issue in how we apply AI to problems. AI can be very powerful indeed in narrow areas. Whenever such narrow focus happens, there’s a danger that context will be missed, and that having found a solution, we’ll make all our problems fit it. It’s like Abraham Maslow said: when you have a hammer, everything begins to look like a nail.
There are certain tasks at which the AI we have now and in the near future can excel, so we must make sure that as we develop particular applications, we don’t find that our picture of the world starts to mould itself to what we can achieve with AI, especially given the hype that periodically surrounds it. That’s one of the reasons why we need as many people as possible involved in developing and applying AI, and thinking creatively about how it can best be used, and what else we need to achieve real benefits.
Why do you think it is important to encourage more women into the field?
Yes, it’s important that both women and men work in AI, but more than this, it’s important that people with diverse experiences and varied opinions and viewpoints work in AI, for a number of reasons.
We need to develop technology that actually caters to people’s needs and from which, in practical applications, human beings will really benefit. Tailoring such tech is complex, and it needs really good design that is sensitive to the context of a myriad of different circumstances.
What research are you most proud of?
I try not to really ‘do’ being proud of things; I’ve always been taught that ‘pride comes before a fall’. But I’m most pleased to be involved in work that might have a practical impact in improving people’s lives. For instance, I’m working right now on a project based at Cardiff University, collaborating with a group of medical sociologists and others, on the care of people living with dementia – see storiesofdementia.com. This might seem a million miles away from AI and the impact of new technology, but in fact the philosophical and ethical issues overlap considerably: how do we translate abstract ideas such as respect for persons, and humane, dignified care, into making a concrete difference to the lives of people living with dementia, who face challenges such as difficulties in communicating?
This work is aimed at producing practical recommendations to improve lives. We’ve just started a project looking at continence care. A world away from the glamour of AI, but essential work. And I see a great opportunity for technology to think about some important and common problems, for example, perhaps with working towards better detection of pain, which is greatly under-treated for those with dementia, or assisting with access to fluids and access to the toilet, which is often a problem in hospital wards. In the end, it’s this kind of careful, detailed ethnographic work that my colleagues in Cardiff are carrying out, which examines what’s really going on and what’s needed, that needs to be married up with developments in tech, in order to produce technology that will really benefit people.
Are there any AI research developments that excite you or that you are particularly interested in?
I’m particularly interested in the possibilities for AI in medicine, such as helping with disease diagnosis and the interpretation of medical images, and also its deployment in applications such as in the use of mobile technologies for health management. With these developments people are increasingly able to monitor and learn about their own health conditions. These are particularly exciting for use in remote areas or where medical staff are in short supply, but also simply for increasing the knowledge and control that individuals have over their own conditions and hence over their own wellbeing.
There are, quite understandably, fears that AI will take away jobs, but in the context of medicine, I think that’s unlikely. Think about how overstretched medical staff are at the moment. Helping them to make faster, more accurate diagnoses, tailored to individuals, will not only help patients, it should, hopefully, help to relieve time pressures and other stressors from doctors, if applied thoughtfully.
The evidence so far seems to indicate that AI works best as an addition to the skills of medical practitioners, not as a replacement for them. With all these developments, however, we need to keep looking very carefully at how we can get the best out of such technologies. For example, the early diagnosis of disease can be a big advantage in some conditions – but not such an advantage in others. In any context, and medicine is a good example of this, information is just information. It’s not knowledge, and it’s certainly not wisdom. That’s where the human skills of medical practitioners will always have a vital role.
What drew you towards a career in science?
Our whole family was always really excited about science. As children, my siblings and I were always glued to the television whenever Tomorrow’s World was on.
I came to dislike school a lot and used to bunk off and go to the library and read philosophy instead. I was really interested in how the arts, social sciences and STEM subjects worked together.
I’ve always been focused on applying abstract ideas to concrete reality, and having an understanding of, say, the science behind developments in genomics. From my work in ethical questions in medical technology it was a short step to working in issues in artificial intelligence.
Who inspires you?
Of the many possible answers, I’d have to say members of my family. My father always told me that I could do anything I wanted in life. His own mother had started out in life as the illegitimate daughter of a Victorian barmaid, brought up in Tiger Bay in Cardiff, and she became the headmistress of a girls’ grammar school. So Dad had a great belief in women’s abilities. On my Mum’s side, her grandmother was the first woman in Cardiff to have her own alcohol licence and ran her own pub, also in Tiger Bay. She had six children, and during the Depression, when work was hard to find, she started doing pub lunches to provide an income for them – the family always claim that she invented the pub lunch. Whether that’s strictly true or not, ‘get an education, get an education’ was like a mantra breathed in the air: the idea that education was a key to success, that family was crucial too, and that yes, you can get around obstacles and make a go of things.
Dr Boddington is the author of the book Towards a Code of Ethics for Artificial Intelligence.
Learn more about the research referenced in this article.
Find out more about Dr Boddington and her research interests.
In part three of our women in AI series, Professor Marta Kwiatkowska, a Polish computer scientist at Oxford’s Department of Computer Science, discusses her research specialism in developing modelling and analysis methods for complex systems, including those arising in computational networks (which are applicable to autonomous technology), electronic devices and biological organisms.
Are there any AI research developments that excite you or that you are particularly interested in?
Robotics, including autonomous vehicles, and the potential of neural networks in applications such as image and speech recognition technology. For example, developments like the Amazon Alexa-controlled Echo speaker have inspired me to work on techniques to support the design – and specifically the safety assurance and social trust – of such systems.
What can be done to encourage more women in AI?
I think women should have the same opportunities as men and we should raise awareness of these opportunities, through networking, female role models and the media. AI is embedded in all aspects of our lives and we need all sections of society to contribute to the design and utilisation of AI systems in equal measure, and this includes women as well as men.
What research projects are you currently working on?
I am following several strands of work relevant to autonomous systems, mobile devices and AI. These include developing formal safety guarantees for software based on neural networks, such as those applied in autonomous vehicles; formalising and evaluating social trust between humans and robots (a social trust model is based on the human notion of trust, which is subjective, so making it applicable to technology is a challenge); developing ‘correct by construction’ techniques and tools for safe, efficient and predictable mobile autonomous robots; and building personalised tools for monitoring and regulating affective behaviours through wearable devices.
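One basic building block behind formal safety guarantees for neural networks is interval bound propagation: pushing a whole box of possible inputs through the network and proving bounds that hold for every input in the box, not just sampled test points. The sketch below is purely illustrative; the tiny network and its weights are invented, and real verifiers use much tighter relaxations.

```python
def linear_interval(weights, bias, lows, highs):
    """Propagate an input box through y = w.x + b using interval arithmetic."""
    lo = hi = bias
    for w, l, h in zip(weights, lows, highs):
        lo += w * (l if w >= 0 else h)
        hi += w * (h if w >= 0 else l)
    return lo, hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps an interval to an interval."""
    return max(lo, 0.0), max(hi, 0.0)

# An invented two-input toy network: two ReLU hidden units, scalar output.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [1.0, -2.0], 0.5

def output_bounds(lows, highs):
    """Sound (possibly loose) bounds on the network output for any input
    inside the box [lows, highs]."""
    hidden = [relu_interval(*linear_interval(w, b, lows, highs))
              for w, b in zip(W1, b1)]
    h_lo = [lo for lo, _ in hidden]
    h_hi = [hi for _, hi in hidden]
    return linear_interval(W2, b2, h_lo, h_hi)

lo, hi = output_bounds([0.0, 0.0], [1.0, 1.0])
# If hi stays below a safety limit, the property is proved for every
# input in the box at once.
```

The guarantee is conservative: the true output range may be narrower than [lo, hi], but it can never fall outside it, which is what makes the bound usable as a safety certificate.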
In your opinion, what are the biggest challenges facing the field?
Technological developments present my field with tremendous opportunities, but the speed of progress creates challenges around formal verification and synthesis - particularly the complexity of the systems to be modelled. We therefore need to develop techniques that can be accurate at scale, deal with adaptive behaviour and produce effective results quickly.
What motivates you in your field?
I like working on mathematical foundations and gaining new insight from that, but my main motivation is to make the theoretical work applied through developing algorithms and software tools: I refer to this as a "theory to practice" transfer of the techniques.
What research are you most proud of?
I was involved in the development of a software tool called PRISM (www.prismmodelchecker.org), which is a probabilistic model checker. It is widely used for research and teaching and has been downloaded 65,000 times.
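A probabilistic model checker answers questions such as “what is the probability of eventually reaching a given state?” over models like Markov chains. The core reachability computation can be sketched in a few lines of Python; this is a toy illustration of the idea, not PRISM’s actual implementation, and the four-state chain is invented.

```python
def reachability_probability(transitions, goal, fail, iterations=2000):
    """Iteratively solve p[s] = sum_t P(s,t) * p[t]: the probability of
    eventually reaching `goal` (rather than `fail`) from each state of a
    discrete-time Markov chain given as {state: {successor: probability}}."""
    p = {s: 0.0 for s in transitions}
    p[goal] = 1.0
    for _ in range(iterations):
        for s in transitions:
            if s not in (goal, fail):
                p[s] = sum(pr * p[t] for t, pr in transitions[s].items())
    return p

# An invented four-state chain: from each middle state, step up with
# probability 0.6 and down with probability 0.4; states 0 and 3 absorb.
chain = {
    0: {0: 1.0},
    1: {0: 0.4, 2: 0.6},
    2: {1: 0.4, 3: 0.6},
    3: {3: 1.0},
}
prob = reachability_probability(chain, goal=3, fail=0)
```

The fixed-point iteration converges to the exact reachability probabilities (here 9/19 from state 1 and 15/19 from state 2, matching the gambler’s-ruin formula); a tool like PRISM does the same kind of computation symbolically and at vastly larger scale.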
Who inspires you?
I have been inspired by several leading academics in my career, but one particular female scientist and my fellow countrywoman has been a role model and an inspiration for me throughout, Maria Sklodowska-Curie, because she combined a successful career with family.
Learn more about Professor Kwiatkowska’s research here and here.
In the second of our 'Women in AI' series, Dr. Sandra Wachter, a lawyer and Research Fellow in Data Ethics, AI, robotics and Internet Regulation/cyber-security at the Oxford Internet Institute discusses her work negotiating the legal pitfalls of algorithm-based decision making and an increasingly tech-led society.
What drew you towards a career in AI?
I am a lawyer and I specialise in technology law, which has been a gateway into computer science and science in general.
I’ve always been interested in the relationship between human rights and tech, so a law career was a natural fit for me. I am particularly interested in and driven by a desire to support fairness and transparency in the use of robotics and artificial intelligence in society. As our interest in AI increases I think it is important to design technology that is respectful of human rights and benefits society. I work to ensure balanced regulation in the emerging tech framework.
What research projects are you currently working on?
The development of AI-led technology for healthcare is a key research interest of mine. I’m also very interested in the future of algorithm-based decision-making, which has become increasingly less predictable as the systems become more autonomous and complex. I’m interested in what that means for society.
At the moment I am working on a machine learning and robotics project that addresses the question of algorithmic explainability and auditing: how can we design unbiased, non-discriminatory systems that give explanations for algorithm-led decisions? For example, should individuals have a right to an explanation when an algorithm rejects their loan application? I have reviewed the legal framework for any loopholes in existing legislation that need immediate consideration, and urged policy makers to take action where needed.
What interests you most about the scope of AI?
I am interested in developing research-led solutions that can mitigate the risks that come with an increasingly tech-led society. Supporting transparency, explainability and accountability will help to make machine learning technology something that progresses society rather than damaging it and holding people back.
AI in healthcare has the potential to have a massive positive impact on society, such as the development of products for disease prediction, treatment plans and drug discovery.
It is also an exciting time for healthcare robotics, the emerging fields of using surgical robotics for less invasive surgeries and assisted-living robotics are fascinating.
What are the biggest challenges facing the field?
On a very basic level, an algorithm is a predetermined set of rules that humans can use to learn something about data and make decisions or predictions. AI is a far more complex, more autonomous and less predictable version of a mundane algorithm. It can help us to make more accurate, more consistent, fairer and more efficient decisions. However, we cannot solve all societal problems with technology alone. Technology is about humans and society, and to keep them at the heart of future developments you need a multi-disciplinary approach. To use AI for good you need to collaborate with other sectors and disciplines, such as the social sciences, and consider issues from all angles – particularly ethical and political responsibility – otherwise you get a skewed view.
What research are you most proud of?
I published research on the use of algorithms for decision-making and showed that the law does not guarantee individuals a right to an explanation. It shed light on loopholes and potential problems within the existing structure that will hopefully prevent legal problems in the future. In follow-up work we proposed a new method, ‘counterfactual explanations’, that could give people meaningful explanations even when highly complex systems are used.
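The idea behind a counterfactual explanation is to find the smallest change to an input that would have flipped the decision: “had your income been X, the loan would have been approved.” The sketch below illustrates this with an invented toy loan model and a one-feature search; it is not the method from the paper, which optimises over all features of real models.

```python
def counterfactual(model, applicant, feature, step=1.0, max_steps=1000):
    """Search for the smallest increase (in units of `step`) to a single
    feature that flips the model's decision to 'approve'.
    Returns the modified applicant, or None if no flip is found."""
    candidate = dict(applicant)  # leave the original applicant untouched
    for _ in range(max_steps):
        if model(candidate):
            return candidate
        candidate[feature] += step
    return None

# An invented toy 'loan model': approve when income minus debt exceeds 50.
def toy_model(a):
    return a["income"] - a["debt"] > 50

applicant = {"income": 40.0, "debt": 10.0}
cf = counterfactual(toy_model, applicant, "income")
# cf is the nearest approved variant of the applicant: it tells them
# what income would have sufficed, without exposing the model's internals.
```

The appeal of this style of explanation is that it gives the individual something actionable while revealing nothing about how the underlying system works internally.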
As a woman in science and a woman in law how would you describe your experience?
Law is generally a very male-dominated field, and tech-law even more so. People are often surprised when I go to events and they find out that I am the keynote speaker for the day. The general view of what a tech-lawyer ‘is’ is not very diverse or evolved yet, and there is a lot of work to be done to shift this mind-set.
I think it would help to create more opportunities for women to have more visibility, such as speaking at events. People need to see from a young age that something is as much for one sex as it is for another. I still remember when I was at high school, the Design Technology subjects were split by gender, with boys taking woodwork, while girls learned knitting and sewing. I desperately wanted to do woodwork and build a birdhouse with the boys, but my teacher’s response when I asked was simply that ‘girls don’t do that.’ Young girls need to be supported and encouraged instead of told that they can’t do something.
Who inspires you?
I am very lucky, my grandmother was one of the first women to be admitted to a tech-university, so I grew up with a maths genius as one of my role models. People need to see that gender isn’t a factor in opportunity, it is about passion, dedication, and talent.
It is the University's first AI Expo tomorrow, what would you like the event’s legacy to be?
This event is a very important step forward for the University and I hope that it will inspire more events like it in the future. AI is a rapidly emerging field and it is really important to raise awareness and show the world that Oxford not only takes it seriously, but that we are working to use AI for good and are mindful of the consequences that come with it.
Further information about Dr Wachter and her research interests is available here.
Find out more about our AI Expo showcase
In part three of the series we meet a computational scientist involved in redesigning complex networks with AI




