Features
Professor John Tasioulas has been appointed as the first Director of Oxford University’s Institute for Ethics in AI. You can read about his appointment here. Ahead of starting his new role in October, he sat down with us to explain why he is excited about the job and what he hopes the Institute will achieve.
Professor Tasioulas is currently the inaugural Chair of Politics, Philosophy and Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy & Law at King’s College London. He has strong links to Oxford, having studied as a Rhodes Scholar, completed a doctorate in philosophy and taught philosophy from 1998-2010. He is also a Distinguished Research Fellow of the Oxford Uehiro Centre and Emeritus Fellow of Corpus Christi College, Oxford. He has held visiting appointments at the Australian National University, Harvard University, the University of Chicago, the University of Notre Dame, and the University of Melbourne, and acted as a consultant on human rights to the World Bank.
What role do you envision for the Institute?
'My aim is for the Institute to bring the highest standards of academic rigour to the discussion of AI ethics. The Institute is strongly embedded in philosophy and I do not know of any other centre along those lines. At Oxford, we have the largest Philosophy department in the English-speaking world and it has historically been a very powerful presence in the discipline. We will also draw on other disciplines like literature, medicine, history, music, law, and computer science. This is a radical attempt to bridge the divide between science and humanities in this area and Oxford is uniquely placed to pull it off.'
Why is Oxford a good place for the Institute?
'Oxford is an outstanding environment for the Institute not only because of its great academic strengths generally, and especially in philosophy, but also because in Oxford the study of philosophy at undergraduate level has always been pursued in tandem with other subjects, in joint degrees such as PPE, Physics and Philosophy, and Computer Science and Philosophy. The Institute can reap the benefits of this long historical commitment to the idea that the study of philosophy is enriched by other subjects, and vice versa. Add to this the interdisciplinary connections fostered by the collegiate system, and also the high regard in which Oxford is held throughout the world, and I think we have the ideal setting for an ambitious interdisciplinary project of this kind.'
Why is AI ethics important?
'AI has a transformative potential for many parts of life, from medicine to law to democracy. It raises deep ethical questions – about matters such as privacy, discrimination, and the place of automated decision-making in a fulfilling human life – that we inevitably have to confront both as individuals and societies. I do not want AI ethics to be seen as a narrow specialism, but to become something that anyone seriously concerned with the major challenges confronting humanity has to address. AI ethics is not an optional extra or a luxury, it is absolutely necessary if AI is to advance human flourishing and social justice.
'Given that AI is here to stay, we must raise the level of debate around AI ethics and feed into the wider democratic process among citizens and legislators. AI regulation and policy are ultimately matters for democratic decision-making, but the quality of the deliberative process is enhanced by the arguments and insights of experts working on questions of AI ethics.'
How does COVID-19 make you think about AI ethics and the Institute?
'COVID-19 demonstrates that it is never going to be enough just to 'follow the science'. There are always value judgements that have to be made, about things like the distribution of risk across society and tradeoffs between prosperity and health. Science can tell us the consequences of our actions but it does not tell us which goals we should pursue or what sacrifices are justified to achieve them. In so far as we are going to have AI as part of the technological solution to societal challenges, we inevitably have to address the ethical questions too. AI ethics is a way to get clearer about the value judgements involved and to encourage a more rigorous and inclusive debate.'
What are your priorities for the Institute?
'There are many things I want to get done. I want to embed within Oxford the idea of AI ethics as an important, high quality area of research and discussion that is open to all interested parties. Not everyone has it at the forefront of their minds, but I want people to become aware that there is a lively and rigorous discussion going on about the very pressing questions it raises, one which bears on the topics they are already interested in, such as health care, climate change, migration, and so on. If we can secure this high-quality culture of research and debate, it will be the platform on which we can achieve everything else. Vital to all this is getting serious intellectual buy-in from the broader Oxford community.'
At King’s, you led and developed a centre that was also new when you became the Director. What lessons can you bring from that experience?
'The first challenge is getting people from different disciplines to talk to each other in a productive way. This is not easy because the meanings of words, and the methods adopted, can differ significantly from one discipline to another, so people can talk past each other. And then there is just the inertia of staying in your intellectual comfort zone. We need to generate an environment of goodwill in which people feel comfortable talking about things with those from other disciplines and to learn from each other.
'Another important challenge is that this discussion must not be confined to academics. It is important that whatever we do must also be presented in a way that is accessible to a broader community, whether that is legislators, scientists or ordinary citizens. However profound or sophisticated our research is, we must convey it in a way that can be engaged with by a non-specialist community. Otherwise we will not be fulfilling our task. I want us to hold events where the general public feels very free to come along, engage and make points in the discussions.'
What aims do you have for teaching AI ethics in Oxford?
'It looks like AI will become an inescapable feature of ordinary human life. In so far as an undergraduate degree equips students to cope with life in a critical and intelligent way, it would seem natural that the ethical dimension of AI is one of the aspects of life they should be able to engage with in the course of their degrees. AI ethics can be seen through the lens of any given discipline, whether it is classics or medicine or something else.'
What is your aim for the field of AI ethics as a whole?
'Bioethics is a good example of the role of ethics in tackling major issues facing society, but it is also a cautionary tale. Bioethics has truly outstanding figures with a strong philosophical background who drew on deeper expertise in moral and political philosophy in order to advance that discipline. But at the moment, a lot of what you hear about AI ethics lacks this kind of depth; too much is a rehash of the language of corporate governance, or even just soundbites and buzzwords. A sustainable AI ethics needs to be grounded in something deeper and broader, and that must include philosophy and the humanities more generally. The Institute can serve to channel this intellectual rigour and clarity into the sphere of public debate and decision making.
'In the past, philosophers have played an active role in government reports on matters such as censorship, IVF or gambling, but no philosopher was involved in the recent House of Lords report on AI, for example. This is unfortunate and can lead to an unnecessarily limited perspective. Often what happens is people are tempted to use the law as a framework for evaluating various options in AI. Law is, of course, extremely important as a tool of regulation. But ethics goes deeper than law, because we can always ask whether existing law is acceptable, and because we need more than legal rules in order to live good lives.'
Finally, how do you feel about "returning" to Oxford?
'Although this is a new and exciting challenge, it’s also a homecoming because I have always regarded Oxford as my intellectual home. I have such great admiration for Oxford because it manages to combine a commitment to the highest intellectual standards with a broadly democratic academic culture. In that sense, too, I think Oxford is unique in the world and this combination equips us well to pursue our aims for the Institute.'
You can find more information on the Institute here.
Professor John Tasioulas, the inaugural Director of the Institute for Ethics in AI
This year’s winner of the Premier League Fantasy Football competition and Oxford Mathematician, Joshua Bull, talks to Oxford Science blog about strategy and how maths can help us understand the infinite complexity of the world – from football to cancer.
Joshua Bull
How long have you been playing fantasy football?
I’ve had a team for a few years. We have a family league, called The Bullfight, because of our name.
Did you apply mathematics to your fantasy football team?
I had some strategies that I was trying to follow, but I don’t know if they’re mathematically the right strategies. I’ve since done some analysis, and although I didn’t use maths to get to these conclusions, I’m asking myself: can I use maths to show whether they’re the right conclusions or not? Can I improve on my strategy?
Was your instinct underpinned by mathematical knowledge?
There’s a lot of strategic thinking in the game which mathematicians tend to be quite strong at; the planning ahead with a view to maximising points – not just this week but for the next few weeks. Magnus Carlsen, who’s the world chess champion, was top of the fantasy league at one point this season. He finished within the top 10. People at the time were saying – this is someone who’s famous for thinking strategically, and then when I started doing well they said ‘Hang on, there’s an Oxford mathematician also doing well’, so they tried to make those links. There’s a lot of strategic thinking in it.
The talk that I’m giving on Tuesday 8 September links my day-to-day work to this win: I try to use maths to tackle complex problems. The way that we do that is to break the complex problems down into smaller, simpler examples. By making those simplifications, you try to understand how one specific thing might impact that big system. You can apply the exact same logic to fantasy football. So, you’ve got all of this data out there and you want to know how your team choice is going to impact your points. That’s the kind of thing that you can quite happily model mathematically. These are the things I was thinking about, even if I wasn’t writing down equations.
Are people writing algorithms for fantasy football teams?
Some people certainly do. There are a lot of teams where people train a neural network on data from previous seasons and try to predict the best team. Some of those are more successful than others. But it’s not the case that it’s a game that a computer is better at than a person.
All the teams play once and then you can make transfers. You can only make one or two changes per week. If you want to make any more than that it starts costing you points – you have to pay a forfeit. So, there’s a real optimisation problem where there are players you might want to bring in, but it’s not necessarily easy to say ‘I want them in my team, so I’ll get them in my team.’
Everybody knows at the start of the year which players are going to get the most points, but as a result they’re priced accordingly. So, you have the question: do I pick a few very expensive players but make compromises for the rest of my team? Or do I pick fewer of those expensive players and have a more balanced team? In itself, that’s a classic maths problem.
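That ‘classic maths problem’ is a knapsack-style optimisation: maximise expected points subject to a budget. A minimal sketch in Python, with invented player names, prices and point projections (the real game adds squad-size, position and per-club constraints):

```python
# Toy 0/1 knapsack view of fantasy team selection.
# All player data below are invented for illustration.
from itertools import combinations

players = [
    # (name, price in £m, projected points)
    ("Star Forward", 12.0, 220),
    ("Premium Mid", 11.5, 210),
    ("Solid Mid", 8.0, 160),
    ("Budget Mid", 5.5, 110),
    ("Solid Forward", 7.5, 150),
    ("Budget Forward", 5.0, 100),
]

BUDGET = 25.0   # hypothetical budget for this mini-squad
SQUAD_SIZE = 3  # pick exactly three players

best_squad, best_points = None, -1
for squad in combinations(players, SQUAD_SIZE):
    cost = sum(price for _, price, _ in squad)
    points = sum(pts for _, _, pts in squad)
    if cost <= BUDGET and points > best_points:
        best_squad, best_points = squad, points

print([name for name, _, _ in best_squad], best_points)
```

Brute force is fine for six players but explodes combinatorially at real squad sizes, which is why serious attempts reach for integer programming or heuristics.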
How much time did you spend on the league?
Not a massive amount. I did get into a routine – particularly as I started to do well – checking in with a very active online community. I’d spend probably 10-15 minutes a night checking what I’d missed, and then trying to take all of that information in and make the decision about what transfer to make over the course of the end of one week and start of the next.
Do you think there’s anything that transfers to the real world that might be helpful?
The skills that I was using aren’t directly relevant for managers, but I think that maths itself is very relevant for football managers. We’re starting to see much more focus being put on looking mathematically at how you can improve your football team. The things that footballers traditionally think are important for scoring more goals and winning games are not necessarily the things that are statistically most important. For example – when should you take a shot as a footballer to give you the highest probability of scoring, from certain positions on the field? The best option statistically speaking isn’t necessarily the one that gets the crowd excited! It’s the same in fantasy football as well. You can use maths to find a strategy that improves your odds, but you still need to take those risky shots that might pay off.
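The shot-selection question he raises is the territory of ‘expected goals’ models, which estimate the probability that a shot scores from features such as distance and angle to goal. A toy logistic sketch with invented coefficients (a real model would be fitted to thousands of recorded shots):

```python
# Toy 'expected goals' style model: probability that a shot scores.
# The coefficients are invented for illustration, not fitted to data.
import math

def scoring_probability(distance_m: float, angle_deg: float) -> float:
    """Closer shots and wider visible-goal angles score more often."""
    z = 1.2 - 0.12 * distance_m + 0.03 * angle_deg
    return 1.0 / (1.0 + math.exp(-z))

# The crowd-pleasing long-range effort versus the unglamorous close shot.
print(f"25m screamer:  {scoring_probability(25, 20):.2f}")  # ~0.23
print(f"8m close shot: {scoring_probability(8, 45):.2f}")   # ~0.83
```

Even with made-up numbers, the toy model illustrates his point: the statistically best option is the unglamorous one.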
Could you tell me about your own research – what do you look at?
At the highest level, what I’m looking at is how we can improve cancer research using maths. So that’s a very broad interest. I focus on the locations of immune cells within tumours, which correlate with patient prognosis and have all sorts of effects on treatments – particularly immunotherapies. I collaborate with groups in the Nuffield Department of Medicine and in clinical research as well – it’s very much a two-way street.
What got you interested in this area of research?
It’s a very long story, but ultimately, when I was doing my Masters, I actually got a brain tumour – it was benign and it was OK in the end but it was on my pituitary gland. So, I basically had no working memory for a few months and I had to drop out of my Masters – this was at Durham. They very kindly said: ‘if you get your memory back, come back next year and go again.’ I saw a course on mathematical biology including tumour modelling. I’d never heard of that before but thought I’d give it a go. I ended up applying here at Oxford for the PhD.
You recovered and your memory came back?
It did indeed. I didn’t notice anything at the time but everyone around me was saying – ‘Josh, you really need to go and talk to a doctor, you can’t remember anything.’ I would have conversations with my friends and say ‘What day is it today?’ and a few moments later I’d ask again. People thought I was joking at first and then realised I wasn’t and it was serious. Luckily there was a drug treatment that was able to shrink that particular tumour. From my point of view, it was like a miracle cure – within a few months I was back to normal. It was absolutely amazing. It really got me interested in applying maths to the real world. I never thought that people were doing it in biology, in cancer research. It does feel really good to do things which one day might have an impact on patients. I don’t think we’re there yet, but I think in 10-20 years’ time it will be much more routine that cancer treatment will be personalised based on mathematical predictions. I believe that’s the direction we’re moving in.
Are you committed to this area of research now?
One of the wonderful things about maths is that you can apply similar techniques to all sorts of different fields. I definitely want to keep working in cancer research, but, for example, the techniques I use can be applied to other problems like looking at immune cells in Covid. So, from a biological point of view it’s a completely different problem, but we can apply the same types of mathematics and hopefully understand a completely different system. I love that idea that, with maths, your main focus can be on tumours, but that you can basically do anything else. The world of biology is so big, there are so many things to look at. If you can describe something mathematically, you can understand it better with mathematical models. It’s true for cancer, it’s true for fantasy football.
Watch Joshua Bull deliver his talk ‘Can maths tell us how to win at Fantasy Football?’.
What if your boss was an algorithm? Imagine a world in which artificial intelligence hasn’t come for your job – but that of your manager: whether it’s hiring new staff, managing a large workforce, or even selecting workers for redundancies, big data and sophisticated algorithms are increasingly taking over traditional management tasks. This is not a dystopian vision of the future. According to Professor Jeremias Adams-Prassl, algorithmic management is quickly becoming established in workplaces around the world.
Should we be worried? Last month’s A-level fiasco has shown the potential risks of blindly entrusting life-changing decisions to automation. And yet, the Oxford law professor suggests, we aren’t necessarily defenceless or impotent in the face of machines – and might even want to (cautiously) embrace this revolution. To work out how we should go about regulating AI at work, he has been awarded a prestigious €1.5 million grant by the European Research Council.
This will require a serious rethink of existing structures. Over the course of the next five years, Professor Adams-Prassl’s project will bring together an interdisciplinary team of computer scientists, lawyers, and sociologists to understand what happens when key decisions are no longer taken by your boss, but an inscrutable algorithm.
Employers today can access a wide range of data about their workforce, from phone, email, and calendar logs to daily movements around the office – and your Fitbit. Even the infamous 19th-century management theorist Frederick Taylor could not have dreamt of this degree of monitoring. This trove of information is then processed by a series of algorithms, often relying on machine learning (or ‘artificial intelligence’) to sift data for patterns: what characteristics do current star performers have in common? And which applicants most closely match these profiles?
‘Management automation has been with us for a while’, notes the professor. ‘But what we’re seeing now is a step change: algorithms have long been deployed to manage workers in the gig economy, in warehouses, and similar settings. Today, they’re coming to workplaces across the spectrum, from hospitals and law firms to banks and even universities.’ The Covid-19 pandemic has provided a further boost, with traditional managers struggling to look after their teams. As a result, the algorithmic boss is not just watching us at work: it has come to our living rooms.
That’s not necessarily a bad thing: algorithms have successfully been deployed to catch out insider trading, or help staff plan their careers and find redeployment opportunities in large organisations. At the same time, Professor Adams-Prassl cautions, we have to be careful about the unintended (yet often entirely predictable) negative side effects of entrusting key decisions to machine learning. Video-interviewing software has repeatedly been demonstrated to discriminate against applicants based on their skin tone, rather than skills. And that sophisticated hiring algorithm may well spot the fact that a key pattern amongst your current crop of senior engineers is that they’re all men – and thus ‘learn’ to discard the CVs of promising female applicants. Simply excluding gender, race, or other characteristics won’t cure the problem of algorithmic discrimination, either: there are plenty of other datapoints, from shopping habits to post codes, from which the same information can be inferred. Amidst a burgeoning literature exploring algorithmic fairness and transparency, however, the workplace seems to have received scant attention.
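The proxy problem is easy to reproduce. In the minimal synthetic sketch below, gender never appears as a feature, yet a ‘gender-blind’ score learned from a biased hiring history still ranks men and women differently, because an invented postcode correlates with gender (all data are fabricated for illustration):

```python
# Synthetic demonstration of proxy discrimination: the model never sees
# gender, but an invented postcode correlated with gender leaks it anyway.
import random

random.seed(0)

def make_applicant():
    gender = random.choice(["m", "f"])
    # Invented correlation: 80% of men live in AREA1, 80% of women in AREA2.
    postcode = "AREA1" if (gender == "m") == (random.random() < 0.8) else "AREA2"
    # Biased history: men were hired more often, regardless of merit.
    hired = random.random() < (0.7 if gender == "m" else 0.3)
    return gender, postcode, hired

history = [make_applicant() for _ in range(10_000)]

# 'Train' a gender-blind score: the historical hire rate per postcode.
counts = {}
for _, postcode, hired in history:
    n, k = counts.get(postcode, (0, 0))
    counts[postcode] = (n + 1, k + hired)
score = {pc: k / n for pc, (n, k) in counts.items()}

# The gender-blind score still differs sharply by gender on average.
for g in ("m", "f"):
    group = [score[pc] for gg, pc, _ in history if gg == g]
    print(g, round(sum(group) / len(group), 3))
```

Dropping the protected attribute changes nothing here: the bias rides in on the proxy, which is exactly the point about postcodes and shopping habits.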
Existing legal frameworks, designed for the workplace of the last century, struggle to keep pace: they threaten to stifle innovation – or leave workers unprotected. The GDPR prevents some of the worst instances of people management (no automated sacking by email, as is the case in the US) – but it’s nowhere near fine-grained enough a tool. Understanding the technology is key to solving this conundrum: what information is collected, and how is it processed?
‘There’s nothing inherently bad about the use of big data and AI at work: beware any Luddite fantasies’, the professor insists. But employers should tread carefully: ‘Yes, automating recruitment processes might save significant amounts of time, and if set up properly, could actively encourage hiring the best and most diverse candidates – but you also have to watch out: machine learning algorithms, by their very nature, tend to punish outliers.’
Backed by the recently awarded European Research Council (ERC) grant, his team will come up with a series of toolkits to regulate algorithmic management. The primary goal is to take account of all stakeholders, not least by promoting the importance of social dialogue in reshaping tomorrow’s workplace: the successful introduction of algorithmic management requires cooperation in working out how best to adapt software to individual circumstances, whether in deciding what data should be captured, or which parameters should be prioritised in the recruitment process.
It’s not simply a question of legal regulation: we need to look at the roles of software developers, managers, and workers. There’s little point in introducing ‘AI for AI’s sake’, investing in sophisticated software without a clear use case. Workers will understandably be concerned, and may seek to resist: from ripping out desk activity monitors to investing in clever Fitbit cradles which simulate your workout of choice.
‘There’s no such thing as the future of work’, concludes Professor Adams-Prassl. ‘When faced with the temptation of technological predeterminism, always remember to keep a strong sense of agency: there’s nothing inherent in tech development – it’s our choices today that will ensure that tomorrow’s workplace is innovative, fair, and transparent.’
Jeremias Adams-Prassl is Professor of Law in the University of Oxford, and a Fellow of Magdalen College. He tweets about algorithms, innovation, and the future of work @JeremiasPrassl.
The theory of thermodynamics, commonly associated with the steam engines of the 19th century, is a universal set of laws that governs everything from black holes to the evolution of life. But with modern technologies miniaturising circuits to the atomic scale, thermodynamics has to be put to the test in a completely new realm. In this realm, quantum rather than classical laws apply. In the same way that thermodynamics was key to building classical steam engines, the emergence of quantum circuits is forcing us to reimagine this theory in the quantum case.
Quantum thermodynamics is a rapidly advancing field of physics, but its theoretical development is far ahead of experimental implementations. Rapid breakthroughs in the fabrication and measurement of devices at the nanoscale are now presenting us with the opportunity to explore this new physics in the laboratory.
Whilst experiments are now within reach, they remain extremely challenging due to the sophistication of the devices needed to replicate the operation of a heat engine, and due to the high-level control and measurement sensitivity that are required. Dr Natalia Ares’ group will fabricate devices at nanometre scales, merely a dozen atoms across, and hold them at temperatures far colder than even deepest outer space.
These nanoscale engines will give access to previously inaccessible tests of quantum thermodynamics and they will be a platform to study the efficiency and power of quantum engines, paving the way for quantum nanomachines. Dr Ares will build engines in which the ‘steam’ is one or two electrons, and the piston is a tiny semiconductor wire in the form of a carbon nanotube. She expects that exploring this new territory will have as great a fundamental impact on how we think of machines as previous studies in the classical regime have had.
The main question that Dr Natalia Ares’ recently awarded European Research Council (ERC) project seeks to answer is: what is the efficiency of an engine in which fluctuations are important and quantum effects might arise? The implications of answering this question are far-ranging and could, for example, inform the study of biomotors or the design of efficient on-chip nanomachines. This research could also uncover unique behaviours that open the way for new technologies such as new on-chip refrigeration and sensing techniques or innovative means of harvesting and storing energy. By harnessing fluctuations, the requirements to preserve quantum behaviour might become less demanding.
Dr Ares’ findings will have applications in both classical and quantum computing. In the same way that Joule’s experiment demonstrated that motion and heat were mutually interchangeable, Dr Ares aims to link the motion of a carbon nanotube with the heat and work produced by single electrons. She is excited to exploit devices with unique capabilities to discover the singularities of quantum thermodynamics.
For the last six months, in every country, on every continent, politicians, policymakers and scientists have been convulsed by trying to locate and then do the ‘right thing’ in the face of COVID-19 – and very often, apparently, they have been failing.
For the first time, in a very long time, philosophical considerations have become the stuff of political debate and everyday conversation. Is it right to deprive people of their liberty or not; to dictate personal behaviour or not; to close borders or not; to protect life or the health service or the economy, or not?
The world seems stymied by ethical considerations: is there a right thing and, if so, what is it? These are not everyday questions for most people, and many politicians in particular stand accused of having done the wrong thing, of having taken the wrong decisions. But the Oxford Professor of Medical Ethics, Dominic Wilkinson, is someone for whom these are everyday questions, and he does not rush to judgement. He says, ‘Philosophy can help inform what we ought to do, given what we know.’
The trouble is, Professor Wilkinson says, the ‘facts’ appear to have changed in terms of our understanding of COVID-19 as time has progressed. What we know now, compared with what we knew even three months ago, is vastly different. And, says Professor Wilkinson, ‘You couldn’t make decisions based on what you didn’t know. You can only make decisions [and be judged] on what it was reasonable to do at a particular point in time....You can look back in two, five or ten years and see how things turned out. But even if a decision turns out badly – that doesn’t make it the wrong decision to have made at the time.’
‘Consequentialism’, as it is known in philosophy, commends considering what will follow (the consequences) when you make a decision. You consider what will (or may) happen if you take certain actions. And because of the imperfections of our understanding, Professor Wilkinson says, ‘Sometimes you have to make a decision in good faith.’
Clearly, from the multiplicity of approaches around the world to the pandemic, different governments and policymakers have come to different conclusions – both about the ‘right thing’ to do and the right thing to consider when making those decisions. Most, if not all, will have sought to preserve life. But whose life? A COVID-sufferer’s, a cancer patient’s, a person who loses their job? And mixed in with the question have been other considerations: should we prioritise saving the NHS and flattening the curve over individual liberty – and would this, anyway, achieve the over-arching aim of preserving life?
One canard which has dropped into the debate has been the notion that politicians are merely ‘following the science’. Although beloved by policymakers, Professor Wilkinson insists that science cannot make policy decisions, ‘In some limited instances, it may be ethically obvious what conclusion should follow from ‘following the science’. But with a novel virus, this is not the case....’
He adds, ‘Decisions involve values....There may be an obvious ethical answer to a straightforward question. But when you’re making an ethical and political decision, all sorts of different values are at stake – how to protect the well-being of people with COVID or of the unemployed or someone with cancer.
‘Science cannot tell us what values we should put weight on. These are ethical decisions – not scientific ones...What is more, science is messy and complicated and very often says different things and science will evolve over time.’
So how do we make sense of countries’ attempts to tackle the pandemic? Is anyone doing the right thing? According to Professor Wilkinson, ‘There isn’t a single right answer, it depends how you weigh up your choices. You need to distinguish between a number of things.’
Does this mean, then, that all decisions are equally valid – another philosophical standpoint: ‘relativism’? No, says Professor Wilkinson, ‘Context matters: what might be the right thing in the UK or the US may not be the right thing somewhere else. But that doesn’t mean it is just a matter of opinion. Philosophers justifiably reject the idea of ethical relativism. It might be difficult to work out the reasonable, right approach but there are definitely wrong choices.’
For example, Professor Wilkinson, who is also a qualified doctor, says that recommending non-evidence-based interventions such as chloroquine or bleach could be seen as ‘morally wrong’ choices. But he says, ‘We will all make mistakes. There are some things, however, which are not just a matter of someone’s opinion.’
At some point in the future, when the pandemic and the policy decisions are reviewed and blame is apportioned, it may be possible to look back and say that some decisions were made in good faith, given the knowledge at the time, even though they cost lives – meanwhile, others will look wrong.
Consistency, says Professor Wilkinson, is key to ethical decision-making. Where governments and politicians have failed to show consistency, it becomes difficult to justify decisions. But does that mean, henceforth, that the entire purpose of society should be given to preserving life – our national income should be entirely directed towards curing cancer?
‘No,’ says Professor Wilkinson. ‘We knew COVID was different from influenza [and needed to be approached differently]. But this is a novel epidemic rather than an endemic condition (such as malaria or TB) and so it is justified to treat it in a different way to the way we treat other healthcare threats.’
Key to the treatment of COVID-19, he says, was the fact that many people were going to be unwell at the same time, whereas cancer is a long-standing threat that is not going to go away. But, with fears of a second wave coming, Professor Wilkinson says, policymakers will soon have a different set of decisions, since it ‘may not be possible’ politically to take the same actions again in the face of a renewed virus. With concerns mounting about the impact on the economy and the reluctance of many younger people to be contained, the priority, he says, must be to ‘save lives’. But the mere number of lives saved is not the only thing that matters. ‘You need to consider the length of life and how the lives of the population are diminished [by intervention measures].’
These are hard questions for anyone, politicians included. It is not just a question of ‘following the science’: ‘this is about making an ethical decision about what might happen. And ethical decisions can be wrong’. There has been little time or opportunity for reflection but, says Professor Wilkinson, ‘Politicians have to balance a range of priorities, think seriously about how to act.’
Whether modern politicians are equipped for such considerations is not something on which a good philosopher will venture an opinion. But trust is essential, Professor Wilkinson says, ‘Issues of credibility arise when there is inconsistency. We demand of our politicians a high standard.’
Since the beginning of the crisis there have been frequent comparisons with wartime embattlement. From a philosophical point of view, it raises similar questions, ‘You have to balance costs and face ethical questions in much the same way...There are lots of parallels with the profound and difficult questions that countries face when they are at war.’
When all this is over, will there be the new world, the new normal of which so much is heard? As a doctor, Professor Wilkinson believes there could be, ‘Many people who have faced serious illness reflect on their priorities...it helps to put their life into perspective.’
But, he says, ‘The trickiest time is still ahead. We could be facing something worse than the first wave and we will need to take decisions on things such as who gets the vaccine first...there are many more ethical decisions than just the lockdown. We don’t know yet what people will tolerate – what they will do.’
The blame game has a long way to run – particularly for those whose decisions do not stand up to scrutiny.