The UK Government hosted the UK AI Safety Summit at Bletchley Park on 1 and 2 November 2023. Picture by Marcel Grabowski / UK Government / Identity

Expert comment: Oxford AI experts comment on the outcomes of the UK AI Safety Summit

Over two days, the UK AI Safety Summit brought together approximately 150 representatives from across the globe, including government leaders and ministers and leaders from industry, academia and civil society. Oxford academics comment on the outcomes.

Ciaran Martin, Professor of Practice in the Management of Public Organisations, University of Oxford:

“It’s easy to criticise, but don’t let the perfect be the enemy of the good. This was a good initiative and the British Government deserves credit for its global leadership. The alternative was not a better event – the alternative was nothing at all, and a repeat of the mistakes of a generation ago when we allowed ourselves to become dependent on technology which was built without security in mind.”

“There remains an underlying tension – very much in evidence at Bletchley – between what some are calling AI ‘doomers’, who talk about existential risks, and optimists, who take a more benign view. Personally, I am not a ‘doomer’. Killer robots are not top of my worry list and I think hype is harmful. I think AI technologies give rise to multiple security and safety challenges of varying severity and urgency, from disinformation to public service bias to advanced cyber-attacks to easier access to biochemical weapons. They all require different solutions and a global community working together to secure them. Going forward, we will need to broaden the conversation and make sure it’s not captured by the existing tech giants. But Bletchley was a good start and the Prime Minister deserves credit for doing it.”

Dr Ana Valdivia, Lecturer in AI, Government & Policy, Oxford Internet Institute (OII), University of Oxford:

“The summaries from the eight Roundtables held on 1 November 2023 strongly highlight the necessity for AI regulation in the UK. Despite discussions on AI risks such as misuse, unpredictable advances and loss of control, Rishi Sunak argued it was premature for the UK government to legislate on AI. However, considering the international context, where the EU, US and China, amongst others, are already implementing regulations to mitigate algorithmic risks, it’s imperative for the UK to follow suit. Concretely, Roundtable six emphasised the challenge of balancing risks and opportunities in AI development, advocating for regulation and innovation to work hand in hand. The UK Prime Minister should seriously consider initiating AI regulation within the UK; the time might not be too early but potentially already too late.”

John Tasioulas, Professor of Ethics and Legal Philosophy; Director of the Institute for Ethics in AI, University of Oxford:

“As anticipated, the concept of ‘safety’ is stretched in the Declaration to include not only avoiding catastrophe or threats to life and limb, but also securing human rights, the UN Sustainable Development Goals and so on – pretty much all values under the sun. Hopefully this amounts to a de facto recognition that the doomster ‘existential risk’ framing was unduly restrictive, if not also deeply misleading. Or maybe it was just an attention-grabbing PR move to begin with, or an attempt to find some sufficiently politically neutral notion under which different factions could gather.”

“In the face of all this, the value of the Declaration may be largely symbolic. It signals that political leaders are aware that AI poses serious challenges and opportunities and that they are prepared to take appropriate action. But the heavy lifting of translating those values into effective regulatory structures still needs to be done. In my view, this requires serious democratic participation; it cannot be a top-down process dominated by technocratic elites.”

“Finally, perhaps the biggest achievement of the summit was that China was brought into this discussion. This is absolutely vital, since there cannot be the meaningful global regulation of AI that is needed without the participation of China.”

Associate Professor Carissa Véliz, Faculty of Philosophy and Institute for Ethics in AI, University of Oxford:

“To what extent the AI Summit has been a success or a failure is yet to be seen. Thus far, the event has had a symbolic function. However, sometimes symbolism weighs enough to make a difference. If this event leads to an adequate and binding international agreement on AI ethics, then it will have been a success. For that to happen, the international community needs to do a better job of making the conversation more inclusive. If all we are left with are a few nice photos, a toothless and vague declaration, and well wishes, then it will have been a failure.”

"We already knew that AI is a risky technology, not only because there are future dangers, but also because we are already experiencing harms from AI. Serious researchers have been working on AI ethics for years. Agreeing that AI is risky is no progress. The work needed to make AI respectful of human rights and supportive of fairness and democracy is yet to be done. In that sense, the United States’ White House Executive Order on AI is a much more successful effort to get us closer to where we need to be. If the UK wants to be taken seriously, it would do well to regulate AI within its own borders as a first step. It would also do well in giving less prominence to tech executives who, by definition, cannot regulate themselves—their financial conflict of interest disqualifies them. The interview between Rishi Sunak and Elon Musk was not only bizarre—it was a mistake."

Professor Brent Mittelstadt, Director of Research at the Oxford Internet Institute:

“The main outcomes of the AI Safety Summit were the signing of a declaration by 28 countries to continue meeting and discussing AI risks in the future, the launch of the AI Safety Institute, and a general agreement that more research is needed to make AI safe in the future. At first glance these seem like significant achievements, but in the context of research on AI ethics, regulation, and safety, they feel more like a step in the wrong direction.”

“These topics have been intensively researched over the last decade and a wide range of technical and policy solutions have been developed to tackle the real risks posed by today’s AI systems. The decision to focus the Safety Summit on frontier AI, and to define safety narrowly around long-term existential risks, cybersecurity, and terrorism, means that this exceptional portfolio of research is effectively being ignored.”

“We know the risks that AI poses now, and we’ve developed ways to address them, so why is there such a reluctance to take any steps towards hard regulation? More research will certainly be needed on combatting the risks of generative AI, foundation models, and future AI advances, but what about the risks we already face? Frontier AI should not be used as an excuse to avoid regulating the well-established harms of today’s AI systems.”

Professor Robert Trager, Director, Oxford Martin AI Governance Initiative:

“The UK has shown real leadership with the AI Safety Summit, and has concrete outcomes to show for it. The establishment of the safety institutes is one. The appointment of Yoshua Bengio to lead a ‘state of the science’ consensus report is another – this project can play a role similar to that of the IPCC in the climate area. A key to success there will be producing findings more quickly than the IPCC does, and happily Bengio appreciates this urgency. The UK also deserves credit for bringing China to the Summit and thereby creating a continuing forum for broad engagement on these issues, rather than replicating a G7-like forum. Probably the most important achievement is the consensus on the security risks of the technology alongside other important societal risks. A specific achievement of the second day of the summit in this respect is the recognition of the need for monitoring not just of deployment but of development as well.”

“What the summit has not done is establish a consensus path forward on international standards and oversight of advanced AI. The hard work of building consensus on which areas of standards should be internationalised, and through what institutional mechanisms, remains to be done. It remains unclear which actors will, and which should, take on the challenge of forging this consensus.”

Dr Keegan McBride, Departmental Research Lecturer in AI, Government, and Policy:

“The AI Safety Summit was the UK’s attempt at setting the rules for AI development – trying to position itself as the global leader in AI safety. Unfortunately for the UK, the Summit failed in this respect, undercut both by the G7’s Hiroshima AI Process and the USA’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Supporters of the summit may point to the signing of the Bletchley Declaration as a successful and tangible outcome, but in terms of impact it is unlikely to amount to much.”

Professor Jakob Foerster, Department of Engineering Science, University of Oxford:

“It is great that regulators are taking AI safety seriously and pushing for international coordination on this topic. The internet and social media caught regulators on the back foot and, given the rapid pace of technological improvement in AI, we simply cannot afford to make the same mistakes here.”

“At the same time, it is crucial that regulation is focussed on levelling the playing field between the private sector and the public sphere. Regulation should never be a vehicle that stifles innovation and thus increases the technological lead of a small number of private enterprises.”

“In my opinion, not just research into and application of AI safety procedures, but also research into leading AI technology, should happen within the academic sector rather than being outsourced to tech companies. It is also vital to foster an open-source community, as exists in the area of cyber security. Security through obscurity is simply not a long-term option.”

“We need to drastically increase the support for public institutions in large-scale AI research, as well as AI safety research, to ensure that AI becomes a technology for the common good for centuries to come. Public institutions should not be relegated to the role of referee or commentator but should be a key player in this crucial area.”

Xiaolan Fu, Professor of Technology and International Development, Oxford Department of International Development:

“It is a milestone in human history in our development, use and supervision of AI. I am particularly glad to see the announcement of an £80 million collaboration to accelerate development in the world’s poorest countries using AI. As AI brings transformative power to human society, there is a danger that Africa and other low-income countries are left behind again. This collaboration should be the start of global efforts to make sure that no one is left behind. Another important achievement is the inclusion of China in this important meeting. The UK government and the PM have done something right for the country and for the global community. It has also reinforced the UK’s leadership in not only the creation but also the governance of frontier technologies.”

“The AI Safety Institute is the first international collaboration initiative of this kind in the world. I hope that it also considers AI safety in different contexts, taking into consideration the context of developing countries, to make sure AI is safe and used for good in all countries at different levels of development.”

Dr Heloise Stevance, Schmidt AI in Science Fellow:

“It was reassuring to see that the risks these tools pose with respect to disinformation are being acknowledged and will be tackled. However, I am also disappointed that the recent talks about AI safety at the summit, and the petitions for pausing AI research in previous weeks, focus heavily on hypothetical harms of AI sentience and self-replication that are closer to sci-fi than real life. Currently, one of the major ways that large computational models will affect society is in their ability to harness the work of the many for the benefit of AI tools controlled by the few.”

“Elon Musk worrying about how individuals will find meaning in their lives when AI kick-starts an ‘age of abundance’ is one example of how the drivers of AI funding can be fundamentally disconnected from the reality that most of us experience. Historically, technological advances have not benefited all tranches of society and all countries equally – if we want a better and safer future for everyone, we must ensure that the fruits of the AI revolution are not only ‘safe to eat’ but also shared fairly with humanity.”

Jared Tanner, Professor of the Mathematics of Information, University of Oxford:

“The proposed frontier AI ‘State of the Science’ report will be an invaluable resource for educators, employers, policy makers, and scientists. Having this report written by one of the leading technical experts in AI, Prof. Yoshua Bengio, gives it the accuracy, technical sophistication, and trust that it needs. AI is only at the start of delivering profound improvements to our lives; guiding its application to the most important societal challenges will be one of the essential tasks of the AI Safety Institute.”

Felipe Thomaz, Associate Professor of Marketing, Saïd Business School, University of Oxford:

“It is difficult not to celebrate safety. Customer protections, safeguarding trust in news and information flows, protecting infrastructure, and many areas beyond are all crucial, honourable, and needed. However, the US Executive Order (30 Oct) and this announcement by Mr Sunak represent an incredibly successful year-long lobbying effort by the largest AI players and providers globally, who were deeply concerned about the ease of entry by homebrewed competitors into their arena. By requiring government approval prior to public testing and product releases, these governments have raised very tall barriers to entry to anyone dreaming of entering the AI economy. This process will favour the largest companies who already have inroads into the government (see their celebratory quotes in the announcements), and who already possess the data assets required for development, and who already spent the previous year accelerating their own R&D with the knowledge of this development.”

Mari Sako, Professor of Management Studies, Saïd Business School, University of Oxford:

“It’s natural on this occasion to focus on what countries and companies can do to enhance AI safety for the globe. But safety depends on both formal and informal rules of the game, and soft norms evolve from within the scientific community (which develops AI) and professional associations (whose members both develop and use AI). I am struck by the lack of explicit involvement of these norm-setting institutions. (I research the impact of AI on professions and ventures.)”

Alex Connock, Senior Fellow in Management Practice, Saïd Business School, University of Oxford:

“The use of AI by bad actors is one risk that can indeed be partially mitigated by the kind of AI safety institutes announced this week, by spy agencies checking new models and so forth. Although even that will be hard, as Large Language Models increasingly become something you can tune and run on a laptop, and there are actually countries out there who didn’t make the summit that nonetheless might want to use unregulated models of their own.”

“Human replacement by AI is not just changing the paradigms of entertainment but those of every sector. Here it won’t be GCHQ, or even government regulation at large, that can help, but the ability of the economy to flex at speed to create the kind of new ‘prompt engineer’ roles in every sector that can thrive alongside AI, notwithstanding Elon Musk’s improbable suggestion at the summit that all jobs will be obsolete.”

Dr Lulu P. Shi, Lecturer, Department of Education, University of Oxford:

“The line of argument still has a very strong focus on long-term risks, such as the risk of human extinction. This is dangerous, as it leads the focus away from very real and already existing risks that AI is causing, such as the risks posed by surveillance, which have been punishing people from already marginalised groups. At no point was social justice put at the centre of the AI discussion.”

“I appreciate that the PM noted that ‘Until now the only people testing the safety of new AI models have been the very companies developing it. We shouldn’t rely on them to mark their own homework’. However, the follow-up interview between the PM and Elon Musk demonstrated that the UK may not be in a position, or have the willpower, to distance its policies from big tech. Looking also at the partnership with the big tech companies, it remains doubtful how much sovereignty the UK will push to pursue.”