
Expert Comment: How and why mathematics will both underpin and lead the next generation of AI
Peter Grindrod CBE, Professor in Oxford University’s Mathematical Institute and Co-Investigator of the Erlangen AI Hub, outlines why mathematics is pivotal for all artificial intelligence technologies, and how future leadership will depend on investing in numerical disciplines.
More recently, we have seen the rise of “agentic” AI: systems that can coordinate many components, tools, and subprocesses to pursue set goals. AI can now perform many high-frequency “grunt” tasks and can both increase bandwidth and free up users to spend more time on the “matters that really matter”.
These approaches have delivered impressive results, but they also share defining weaknesses: opacity and implicit biases. Whether we are discussing a neural network trained on millions of images, or a language model orchestrating external actions or procedures, the internal logic of these systems is often difficult to interrogate, explain, or formally trust. As AI moves from making recommendations to supporting decisions, and then to partial autonomy, this opacity becomes a serious concern.
The challenges are now well rehearsed. Are the data adequate and representative? What biases are embedded in training sets or calibration procedures? Can systems be made fair, and who gets to define “fairness”? What subjectivity and blind spots, if any, are acceptable within distinct applications? How vulnerable are these systems to malicious manipulation? What can we do about hallucinations?
These are not peripheral questions. They strike at the foundations of what it means for an AI system to function reliably in the real world.
It is precisely at this foundational level that mathematics can and must play a leading role. Too often, mathematics is treated as something that can be “bolted on” to AI, as a tool for interpretation, evaluation, or error analysis. This is a profound misunderstanding. Mathematical structure is not an accessory to intelligent systems; it is their scaffolding. Without it, we are left with heuristics, pragmatics, and empiricism alone: powerful but fragile, effective yet difficult to justify when things go wrong.
In addition, mathematics offers deeper concepts and abstractions that may be catalysts for next-generation AI. It provides something very distinctive: a language for hard, provable results, for logic, confidence, and performance bounds, and thus some guarantees of behaviour and performance, both foreseen and unforeseen.
Mathematics gives us principled ways to reason about data, uncertainty, and evidence. Through probability, geometry and topology, it helps us understand the structure and shape of data spaces, why certain representations work, where decision boundaries lie, and how small perturbations can lead to large changes in outcome. Through optimisation, numerical analysis, and dynamical systems, it sheds light on convergence, stability, and failure modes in learning algorithms. Through information theory, it clarifies what can, and cannot, be inferred from finite data.
Equally important, mathematics underpins explanatory and exploratory AI. It allows us not just to build systems that perform well, but to ask why they perform as they do. Explainability, interpretability, and robustness are not purely engineering add-ons; they are mathematical properties that can be analysed, proven, and stress-tested. When AI systems are spoofed, fall prey to adversarial attacks, or hallucinate, we need to understand how and why these failures occur, and to define and justify suitable mitigations. The same is true of operational biases arising from conditioning data sets and methods, as data drifts between calibration and operation. This is the difference between post-hoc explanations and models that are interpretable by design.
There is also a forward-looking dimension. As interest grows in neuromorphic and brain-inspired computing, mathematics becomes even more central. If we want AI systems to exhibit forms of creativity, abstraction, and imagination that remain uniquely human, we will need new mathematical frameworks, drawing on areas such as category theory, stochastic processes, non-classical logics, and the mathematics of learning and adaptation. These are not incremental tweaks to existing architectures; they are conceptual shifts.
This is the intellectual space in which the Erlangen AI Hub is operating.
Our ambition is not merely to make current AI methods safer or more efficient, important though that is, but to solidify their foundations. By bringing powerful abstract ideas from across mathematics into direct engagement with real-world AI challenges, we aim to build systems that are more reliable, more controllable, and more transparent.
Crucially, this work is grounded in practice and responds to national priorities. Our partners include the BBC, Ofcom, Capgemini, and many other companies, large and small, spanning sectors, together with policy specialists and regulators, funders, and national strategic decision-makers. The goal is not mathematics for its own sake, but mathematics that yields actionable insight, mathematics that changes what AI can responsibly do.
This approach matters profoundly for the UK. If a UK Sovereign AI initiative simply replicates the trajectories of the United States, China, India, or the European Union, it will struggle to distinguish itself. Scale alone is not our comparative advantage. Intellectual leadership can be. By developing genuinely new concepts, methods, and guarantees for AI, rooted in deep mathematics, we can lead rather than follow.
There is precedent for this. Many of today’s most transformative technologies rest on mathematical ideas that once seemed abstract or esoteric. Public-key cryptography, post-quantum cryptography, compressed sensing, and modern control theory all began as mathematical insights before becoming industrial necessities. AI will be no different.
Mathematics will be both the innovator and the disruptor in the next phase of AI. It will move us beyond systems that merely correlate toward systems that reason, adapt, and justify their actions within known limits. It will help us replace blind trust with warranted confidence. And it will enable new forms of creativity, not just in generating content, but in solving problems in ways that are rigorous, accountable, and genuinely new.
If we want AI that society can rely on, mathematics must be at its core. That is not a constraint on progress. It is the condition that makes progress sustainable. The Erlangen AI Hub is an asset to Oxford, to its academic, commercial, and institutional collaborators, and to the UK, which must succeed within a global, competitive community.
For more information about this story or republishing this content, please contact [email protected]