Features

Decider or ditherer? How we make decisions

Professors Peter Brown and Rafal Bogacz in the Nuffield Department of Clinical Neurosciences describe their research team’s discovery that a certain ‘hold your horses’ function in decision-making occurs in an extremely brief window of time, and involves bursts of a specific type of activity in a brain centre known as the subthalamic nucleus.

Are you a decider or a ditherer? When making decisions, we not only have to decide what to choose, but also how much time to spend making the decision. How long should we spend collecting relevant information to inform our choice?

Imagine, for example, that you are choosing which meal to pick up during a lunch break. Dwelling over this decision might mean that you miss out on valuable time that could be spent chatting with friends, whereas quickly choosing a menu option without proper thought might mean that you overlook a better alternative.

It was already thought that the subthalamic nucleus might play an important role in balancing the opposing demands of speed and accuracy during decision-making. Scientists suspected that it helped us delay decisions for the optimum amount of time, to enable the best choice to be made in any given situation. But our own research reveals that this part of the brain gets involved in adjusting these ‘decision thresholds’ at a very particular and brief moment during the process of deliberation.

The aim of our new study was to probe the mechanisms by which the subthalamic nucleus influences decision-making. We were able to do this using deep brain stimulation in Parkinson’s patients (an intervention which has been shown to be very successful in alleviating some of the movement symptoms related to this condition).

The research team asked ten patients to decide whether a cloud of moving dots appeared to move to the left or to the right on a computer screen. The percentage of dots moving coherently to one direction was either high or low, and participants were instructed to respond as fast or as accurately as possible. If it was difficult to determine the answer (i.e. the percentage of dots moving coherently in one direction was low), the response time was longer.

Participants responded more quickly when deep brain stimulation was applied during the difficult tasks. But this effect was confined to an incredibly brief moment during the time that people were trying to decide how to respond. Remarkably, if stimulation was applied later than 500 milliseconds after the task started, it had no influence at all on response time, even though most responses during difficult tasks were made later than 500 milliseconds into the task.

This result implies that deep brain stimulation interfered with a very particular, time-limited process of setting the decision threshold to the required level according to task difficulty. It supports existing hypotheses that the decision threshold is set in a single abrupt change, based on information gathered in an initial period. This raises the possibility that it is this specific time-limited mechanism that depends on the subthalamic nucleus.

Our observations add to the converging evidence that decision thresholds are adjusted through dynamic modulations of cortico-basal ganglia networks.

Words by Jacqueline Pumphrey, Peter Brown and Rafal Bogacz, NDCN.

Making artificial intelligence ethical

Dr Paula Boddington is a research associate at Oxford’s Department of Computer Science, specialising in developing codes of ethics for artificial intelligence.

What drives you in your field?

I find the philosophical and ethical questions posed by developments in artificial intelligence fascinating.

There are visions of the development of AI that press us to ask questions about the limits and basis of our values – if AI radically changes the nature of work, for instance, perhaps even abolishing it for many, we have to reappraise what we do and don’t value about work, which raises questions about why we value any activity. Such questions about extending human intelligence and agency with AI are, in fact, homing in on the most fundamental questions of philosophy: the nature of human beings, our place in the world and our ultimate sources of value. For me, working in this field is like finding a philosophical Shangri-La.

What are the biggest challenges facing the field?

My work is focused on the implications of the technology, so the challenges include making sure that the power of AI does not simply amplify problems – such as our existing biases. There’s also a big issue in how we apply AI to problems. AI can be very powerful indeed in narrow areas. Whenever such narrow focus happens, there’s a danger that context will be missed, and that having found a solution, we’ll make all our problems fit it. It’s like Abraham Maslow said: when you have a hammer, everything begins to look like a nail.

There are certain tasks at which the AI we have now and in the near future can excel, so we must make sure that as we develop particular applications, we don’t find that our picture of the world starts to mould itself to what we can achieve with AI, especially given the hype that periodically surrounds it. That’s one of the reasons why we need as many people as possible involved in developing and applying AI, and thinking creatively about how it can best be used, and what else we need to achieve real benefits.

Why do you think it is important to encourage more women into the field?

It’s important that both women and men work in AI, but more than this, it’s important that there are people with diverse experiences and varied opinions and viewpoints in AI, for a number of reasons.

We need to develop technology that actually caters to people’s needs and delivers real benefit in practical applications. Tailoring such tech is complex, and it needs really good design that is sensitive to the context of a myriad of different circumstances.

What research are you most proud of?

I try not to really ‘do’ being proud of things; I’ve always been taught that ‘pride comes before a fall’. But I’m most pleased to be involved in work that might have a practical impact on people’s lives. For instance, I’m working right now on a project based at Cardiff University, collaborating with a group of medical sociologists and others, on the care of people living with dementia – see storiesofdementia.com. This might seem a million miles away from AI and the impact of new technology, but in fact the philosophical and ethical issues overlap considerably: how do we translate abstract ideas such as respect for persons, and humane, dignified care, into making a concrete difference to the lives of those, such as people living with dementia, who face challenges such as difficulties in communicating?

This work is aimed at producing practical recommendations to improve lives. We’ve just started a project looking at continence care. A world away from the glamour of AI, but essential work. And I see a great opportunity for technology to address some important and common problems: for example, working towards better detection of pain, which is greatly under-treated in those with dementia, or assisting with access to fluids and to the toilet, which is often a problem in hospital wards. In the end, it’s this kind of careful, detailed ethnographic work that my colleagues in Cardiff are carrying out, examining what’s really going on and what’s needed, that needs to be married up with developments in tech in order to produce technology that will really benefit people.

Are there any AI research developments that excite you or that you are particularly interested in?

I’m particularly interested in the possibilities for AI in medicine, such as helping with disease diagnosis and the interpretation of medical images, and also its deployment in applications such as in the use of mobile technologies for health management. With these developments people are increasingly able to monitor and learn about their own health conditions. These are particularly exciting for use in remote areas or where medical staff are in short supply, but also simply for increasing the knowledge and control that individuals have over their own conditions and hence over their own wellbeing.

There are, quite understandably, fears that AI will take away jobs, but in the context of medicine, I think that’s unlikely. Think about how overstretched medical staff are at the moment. Helping them to make faster, more accurate diagnoses, tailored to individuals, will not only help patients, it should, hopefully, help to relieve time pressures and other stressors from doctors, if applied thoughtfully.

The evidence so far seems to indicate that AI works best as an addition to the skills of medical practitioners, not as a replacement for them. With all these developments, however, we need to keep looking very carefully at how we can get the best out of such technologies. For example, the early diagnosis of disease can be a big advantage in some conditions – but not such an advantage in others. In any context, and medicine is a good example of this, information is just information. It’s not knowledge, and it’s certainly not wisdom. That’s where the human skills of medical practitioners will always have a vital role.

What drew you towards a career in science?

Our whole family was always really excited about science. As children, my siblings and I were always glued to the television whenever Tomorrow’s World was on.
I came to dislike school a lot and used to bunk off and go to the library and read philosophy instead. I was really interested in how the arts, social sciences and STEM subjects worked together.

I’ve always been focused on applying abstract ideas to concrete reality, and having an understanding of, say, the science behind developments in genomics. From my work in ethical questions in medical technology it was a short step to working in issues in artificial intelligence.

Who inspires you?

Of the many possible answers, I’d have to say members of my family. My father always told me that I could do anything I wanted in life. His own mother had started out life as the illegitimate daughter of a Victorian barmaid, brought up in Tiger Bay in Cardiff, and she became the headmistress of a girls’ grammar school. So Dad had a great belief in women’s abilities. On my Mum’s side, her grandmother was the first woman in Cardiff to have her own alcohol licence and ran her own pub, also in Tiger Bay. She had six children, and during the Depression, when work was hard to find, she started doing pub lunches to provide income for them – the family always claim that she invented the pub lunch. Whether that’s strictly true or not, ‘get an education, get an education’ was like a mantra breathed in the air: the idea that education was a key to success, that family was crucial too, and that yes, you can get around obstacles and make a go of things.

Dr Boddington is the author of the book Towards a Code of Ethics for Artificial Intelligence.

Learn more about the research referenced in this article.

Find out more about Dr Boddington and her research interests.

Redesigning complex networks with AI

In part three of our 'Women in AI' series, Professor Marta Kwiatkowska, a Polish computer scientist at Oxford’s Department of Computer Science, discusses her research specialism in developing modelling and analysis methods for complex systems, including those arising in computational networks (which are applicable to autonomous technology), electronic devices and biological organisms.

Are there any AI research developments that excite you or that you are particularly interested in?

Robotics, including autonomous vehicles, and the potential of neural networks in applications such as image and speech recognition technology. For example, developments like the Amazon Alexa-controlled Echo speaker have inspired me to work on techniques to support the design, and specifically the safety assurance and social trust, of such systems.

What can be done to encourage more women in AI?

I think women should have the same opportunities as men and we should raise awareness of these opportunities, through networking, female role models and the media. AI is embedded in all aspects of our lives and we need all sections of society to contribute to the design and utilisation of AI systems in equal measure, and this includes women as well as men.

What research projects are you currently working on?

I am following several strands of work relevant to autonomous systems, mobile devices and AI. These include developing formal safety guarantees for software based on neural networks, such as those applied in autonomous vehicles; formalising and evaluating social trust between humans and robots, where a social trust model is based on the human notion of trust, which is subjective; developing 'correct by construction' techniques and tools for safe, efficient and predictable mobile autonomous robots; and building personalised tools for monitoring and regulating affective behaviours through wearable devices.

Professor Marta Kwiatkowska

In your opinion what are the biggest challenges facing the field?

Technological developments present my field with tremendous opportunities, but the speed of progress creates challenges around formal verification and synthesis - particularly the complexity of the systems to be modelled. We therefore need to develop techniques that can be accurate at scale, deal with adaptive behaviour and produce effective results quickly.

What motivates you in your field?

I like working on mathematical foundations and gaining new insight from that, but my main motivation is to make the theoretical work applied through developing algorithms and software tools: I refer to this as a "theory to practice" transfer of the techniques.

What research are you most proud of?

I was involved in the development of a software tool called PRISM (www.prismmodelchecker.org), which is a probabilistic model checker. It is widely used for research and teaching and has been downloaded 65,000 times.

Who inspires you?

I have been inspired by several leading academics in my career, but one particular female scientist and my fellow countrywoman has been a role model and an inspiration for me throughout, Maria Sklodowska-Curie, because she combined a successful career with family.

Learn more about Professor Kwiatkowska’s research here and here.

Image credit: Shutterstock

The legal challenges of a robot-filled future

Lanisha Butterfield | 26 Mar 2018

In the second of our 'Women in AI' series, Dr Sandra Wachter, a lawyer and Research Fellow in Data Ethics, AI, robotics and Internet Regulation/cyber-security at the Oxford Internet Institute, discusses her work negotiating the legal pitfalls of algorithm-based decision-making and an increasingly tech-led society.

What drew you towards a career in AI?
I am a lawyer and I specialise in technology law, which has been a gateway into computer science and science in general.

I’ve always been interested in the relationship between human rights and tech, so a law career was a natural fit for me. I am particularly interested in and driven by a desire to support fairness and transparency in the use of robotics and artificial intelligence in society. As our interest in AI increases I think it is important to design technology that is respectful of human rights and benefits society. I work to ensure balanced regulation in the emerging tech framework. 

Dr Sandra Wachter, a lawyer and Research Fellow in Data Ethics, AI, robotics and Internet Regulation/cyber-security at the Oxford Internet Institute. Image credit: OU

What research projects are you currently working on?

The development of AI-led technology for healthcare is a key research interest of mine. I’m also very interested in the future of algorithm-based decision-making, where systems have become increasingly autonomous, complex and less predictable. I’m interested in what that means for society.

At the moment I am working on a machine learning and robotics project that addresses the question of algorithmic explainability and auditing. For example, how can we design unbiased, non-discriminatory systems that give explanations for algorithm-led decisions, such as whether individuals should have a right to an explanation when an algorithm rejects their loan application? I have reviewed the legal framework for loopholes in existing legislation that need immediate consideration, and urged policy makers to take action where needed.

What interests you most about the scope of AI?

I am interested in developing research-led solutions that can mitigate the risks that come with an increasingly tech-led society. Supporting transparency, explainability and accountability will help to make machine learning technology something that progresses society rather than damaging it and holding people back.

AI in healthcare has the potential to have a massive positive impact on society, such as the development of products for disease prediction, treatment plans and drug discovery.

It is also an exciting time for healthcare robotics, the emerging fields of using surgical robotics for less invasive surgeries and assisted-living robotics are fascinating.

What are the biggest challenges facing the field?
On a very basic level, an algorithm is a predetermined set of rules that humans can use to learn something about data and make decisions or predictions. AI is a very complex, more autonomous and less predictable version of a mundane algorithm. It can help us to make more accurate, more consistent, fairer and more efficient decisions. However, we cannot solve all societal problems with technology alone. Technology is about humans and society, and to keep them at the heart of future developments you need a multi-disciplinary approach. To use AI for good you need to collaborate with other sectors and disciplines, such as the social sciences, and consider issues from all angles, particularly ethical and political responsibility, otherwise you get a skewed view.

What research are you most proud of?
I published research on the use of algorithms for decision-making and showed that the law does not guarantee individuals a right to an explanation. It shed light on loopholes and potential problems within the existing structure that will hopefully prevent legal problems in the future. In subsequent work we proposed a new method, ‘counterfactual explanations’, that could give people meaningful explanations even when highly complex systems are used.

As a woman in science and a woman in law how would you describe your experience?
Law is generally a very male-dominated field, and tech-law even more so. People are often surprised when I go to events and they find out that I am the keynote speaker for the day. The general view of what a tech-lawyer ‘is’ is not very diverse or evolved yet, and there is a lot of work to be done to shift this mind-set.

I think it would help to create more opportunities for women to have more visibility, such as speaking at events. People need to see from a young age that something is as much for one sex as it is for another. I still remember when I was at high school, the Design Technology subjects were split by gender, with boys taking woodwork, while girls learned knitting and sewing. I desperately wanted to do woodwork and build a birdhouse with the boys, but my teacher’s response when I asked was simply that ‘girls don’t do that.’ Young girls need to be supported and encouraged instead of told that they can’t do something.

Who inspires you?
I am very lucky; my grandmother was one of the first women to be admitted to a tech-university, so I grew up with a maths genius as one of my role models. People need to see that gender isn’t a factor in opportunity; it is about passion, dedication and talent.

It is the University's first AI Expo tomorrow, what would you like the event’s legacy to be?
This event is a very important step forward for the University and I hope that it will inspire more events like it in the future. AI is a rapidly emerging field and it is really important to raise awareness and show the world that Oxford not only takes it seriously, but that we are working to use AI for good and are mindful of the consequences that come with it.

Further information about Dr Wachter and her research interests is available here

Find out more about our AI Expo showcase 

In part three of the series we meet a computational scientist involved in redesigning complex networks with AI

Multilingualism

In a guest post for Arts Blog, Katrin Kohl, Professor of German Literature and Lead Researcher on the Creative Multilingualism research project, writes about recent calls for all British citizens to be able to speak English.

Should we be for or against British citizens having to be able to speak English? What makes a British artist sing in Cornish when she could be communicating so much more usefully in English – not only the lingua franca of England and the British Isles, but a language that's now spoken and sung across the world? Why is the Irish language such a politicised issue when some claim that there are now more Polish than Irish speakers in Northern Ireland? And where do users of sign language fit into these debates?

The fact is this: the UK has always been, and will remain, multilingual. And this is no more incompatible with everyone being able to speak English in the UK than it would be in India. Many people switch between different languages every day, and we all, at the very least, keep different linguistic registers in play as we move between different spheres and groups of people at home, at work or at school.

Louise Casey recently asserted that the UK should set a date for everyone to speak English. She's surely not wrong when she argues that additional funding should be provided for fostering English language skills, or that building linguistic bridges between communities can promote integration. But integration isn't helped by imposing a single language top-down or assuming that diversity is best eradicated. Languages are neither confined to what is useful nor just about what the majority speaks – we need look no further than the establishment of Welsh as an official language of the UK to appreciate this fact.

Languages are about lives, as the production of Gwenno Saunders' Cornish album shows us. While her linguistic heritage may be unique (with a Cornish poet and a Welsh language activist as parents), she's not alone in being able to draw on diverse languages as a personal treasure trove. All across the UK, people cherish the languages that are part of their heritage or that they have come into contact with in other, often very individual ways. Communities pass on their languages in religious practice, supplementary schools and cultural events, and individuals make something linguistically new from cross-cultural marriages and culturally diverse school environments. A language is a special emotional resource, an inner voice that embodies memories of conversations with loved (and hated) ones past and present.

This personal, emotional dimension of languages has been sidelined in the way foreign languages have come to be taught in the UK – if the value of knowing a language is reduced to its practical function, it becomes unclear why we should bother with the hard graft of learning a new language when we can make ourselves understood in English. By the same token, it then seems sufficient to promote English as the sole passport to global success, whatever other languages children might already be familiar with. Many children are made to feel ashamed of knowing another language, and some schools indeed prohibit their speaking anything other than English on the assumption that they are thereby doing the children a favour – English is imposed as part of a lifelong school uniform.

Fortunately, many schools instead embrace the multilingualism of their students, enable them to take qualifications in their home languages, and allow them to discover their own linguistic resources in creative writing that extends beyond linguistic boundaries. Creative Multilingualism has been working with Oxford Spires Academy and with Haggerston School in Hackney to find out how children respond to exploring new language spaces. Modern foreign languages can be taught as part of that process and in interaction with it. This fosters a spirit of community that isn't confined to a single language, but characterised by shared variety and enhanced understanding of the potential that linguistic diversity holds for us all: each language is a subtly different window on to the world and a different link with other groups of people. As a preparation for life in an increasingly global world, this is hard to beat.

The UK rightly takes pride in its exuberantly diverse creative talent, but there's currently little appreciation of the ways in which languages enrich the country's creative identity. The UK music scene isn't just culturally and ethnically tremendously diverse, but linguistically too. Take Punch Records, a company set up to work with emerging Black British and British Asian artists who have grown up in urban contexts where varieties of English routinely mingle with other languages. The Slanguages exhibition project serves to showcase hip-hop, grime and rap as multilingual forms with a political edge. Birmingham school playgrounds have here served as seedbeds for adventurous modes of communication that offer exciting scope for developing new rhythms, speech forms and gestural language.

The UK's extraordinarily varied linguistic heritage is an invaluable national resource. At a time when the country wants to project itself as being more than Little Britain, and more than a country on the edge of Europe, it makes sense to value all those languages that have entered the UK over the decades, centuries and indeed millennia. Each of them has left its audible traces in the population, and together they open up a multitude of living pathways to other parts of the world. We might as well celebrate our flourishing abundance of languages – they're certainly not likely to go away.

Creative Multilingualism is funded by the Arts and Humanities Research Council as part of the Open World Research Initiative.