
Study reveals how conversational AI can exert influence over political beliefs
A new joint study from the Oxford Internet Institute (OII) and the UK AI Security Institute (AISI) uncovers how conversational AI sways political beliefs and why it works.
The paper, 'The Levers of Political Persuasion with Conversational AI', published in Science, examines how large language models (LLMs) influence political attitudes through conversation.
Authored by a team from OII, AISI, the LSE, Stanford University and MIT, the research draws on nearly 77,000 UK participants and 91,000 AI dialogues to provide the most comprehensive evidence to date on the mechanisms of AI persuasion and their implications for democracy and AI governance.
Key findings include:
Model size isn’t the main driver of persuasion - A common fear is that as computing resources grow and models scale, LLMs will become increasingly adept at persuasion, concentrating influence among a few powerful actors. However, the study found that model size plays only a modest role.
Fine-tuning and prompting matter more than scale - Targeted post-training, including supervised fine-tuning and reward modelling, can increase persuasiveness by up to 51%, while specific prompting strategies can boost persuasion by up to 27%. These techniques mean even modest, open-source models could be transformed into highly persuasive agents.
Persuasion comes at a cost to accuracy - The study reveals a troubling trade-off: the more persuasive a model is, the less accurate its information tends to be. This suggests that optimising AI systems for persuasion could undermine truthfulness, posing serious challenges for public trust and information integrity.
AI conversation outperforms static messaging - Conversational AI was found to be significantly more persuasive than one-way, static messages, highlighting a potential shift in how influence may operate online in the years ahead.
Lead author Kobi Hackenburg, DPhil candidate at the OII and Research Scientist at AISI, said: 'Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues. We show that very small, widely-available models can be fine-tuned to be as persuasive as massive, proprietary AI systems.'
Co-author, Helen Margetts, Professor of Society and the Internet at OII, said: 'This paper represents a comprehensive analysis of the various ways in which LLMs are likely to be used for political persuasion. We really need research like this to understand the real-world effects of LLMs on democratic processes.'
The authors note that, while persuasive in controlled settings, real-world impacts may be constrained by users’ willingness to engage in sustained, effortful conversations on political topics.
Read the full paper, 'The Levers of Political Persuasion with Conversational AI', in Science.