Oxford Science Blog | University of Oxford


Image credit: Shutterstock

Mathematicians are known for having a brilliant way with numbers, but to have impact beyond their field they need to have an altogether different skill: the ability to communicate.

The George Pólya Prize for Mathematical Exposition, from the Society for Industrial and Applied Mathematics (SIAM), acknowledges and celebrates academics who are both great thinkers and writers.

This year’s recipient, Professor Nick Trefethen, Head of the Numerical Analysis Group in the Oxford Mathematical Institute, has been celebrated for bridging the communication gap with his publications. The Society highlights the ‘exceptionally well-expressed accumulated insights found in his books, papers, essays, and talks... His enthusiastic approach to his subject, his leadership, and his delight at the enlightenment achieved are unique and inspirational, motivating others to learn and do applied mathematics through the practical combination of deep analysis and algorithmic dexterity.’

Professor Trefethen discusses receiving the honour and why his field is the fastest moving laboratory discipline in STEM.

Congratulations on your award. How did you react when you found out you had won?

I was thrilled. There are many accolades to dream of achieving in an academic career, but I am one of the relatively few mathematicians who love to write. So, to be acknowledged for mathematical exposition is important to me. My mother was a writer and I guess it is in my blood.

What is numerical analysis?

Much of science and engineering involves solving problems in mathematics, but these can rarely be solved on paper. They have to be solved with a computer, and to do this you need algorithms. 

Numerical analysis is the field devoted to developing those algorithms. Its applications are everywhere: weather forecasting and climate modelling, designing airplanes or power plants, creating new materials, studying biological populations. It is simply everywhere.

It is the hands-on exploratory way to do mathematics. I like to think of it as the fastest laboratory discipline. I can conceive an experiment and in the next 10 minutes, I can run it. You get the joy of being a scientist without the months of work setting up the experiment.
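The interview gives no concrete example, but the flavour of such a ten-minute experiment can be sketched in a few lines of Python (the example below, watching Newton's method converge, is my illustration rather than Trefethen's):

```python
# A ten-minute numerical experiment: watch Newton's method for sqrt(2)
# roughly double its number of correct digits at every step.

def newton_sqrt2(iterations):
    """Iterate x -> (x + 2/x) / 2, Newton's method for f(x) = x^2 - 2."""
    x = 1.0
    errors = []
    for _ in range(iterations):
        x = (x + 2.0 / x) / 2.0
        errors.append(abs(x - 2.0 ** 0.5))
    return x, errors

x, errors = newton_sqrt2(5)
print(x)       # converges to 1.41421356...
print(errors)  # each error is roughly the square of the previous one
```

Running the loop a little longer, or swapping in a different function, is exactly the kind of instant follow-up experiment described above.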

How does it work in practice?

Everything I do is exploratory through a computer and focused on solving problems such as differential equations, while still addressing basic issues. In my forthcoming book Exploring ODEs (Ordinary Differential Equations), for example, every concept is illustrated as you go using our automated software system, Chebfun.

How has your research advanced the field?

Most of my own research is not directly tied to applications, more to the development of fundamental algorithms and software.

But, I have been involved in two key physical applications in my career. One was in connection with transition to turbulence of fluid flows, such as flow in a pipe; and recently in explaining how a Faraday cage works, such as the screen on your microwave oven that keeps the microwaves inside the device, while letting the light escape so that you can keep an eye on your food.

You got a lot of attention for your alternative Body Mass Index (BMI) formula. How did you come up with it?

My alternative BMI formula was *not* based on scientific research. But, then again, the original BMI formula wasn’t based on much research either. I actually wrote a letter to The Economist with my theory. They published it and it spread through the media amazingly.

As a mathematician, unless you’re Professor Andrew Wiles or Stephen Hawking for example, you are fortunate to have the opportunity to be well known within the field and invisible to the general public at the same time. The BMI interest was all very uncomfortable and unexpected.

Image credit: Nick Trefethen

Professor Nick Trefethen has won the George Pólya Prize for Mathematical Exposition from the Society for Industrial and Applied Mathematics (SIAM).

Why do you think so few mathematicians are strong communicators?

I don’t think this is necessarily the case. One of the reasons that British universities are so strong academically is the Research Excellence Framework, through which contributions are measured. But on the other hand, the structure has exacerbated the myth that writing books is a waste of time for academic scientists. The irony is that, in any real sense, writing books is what gives you longevity and impact.

At the last REF, the two things that mattered most to me, that I felt had had the most impact, were my latest book and my software project, and neither was mentioned.

In academia we play a very conservative game and try to only talk about our latest research paper. The things that actually give you impact are not always measured.

What are you working on at the moment?

I just finished writing my latest book on ODEs (due to be published later this year), which I am very excited about.

Have you always had a passion for mathematics?

My father was an engineer and I sometimes think of myself as one too - or perhaps a physicist doing maths. Numerical analysis is a combination of mathematics and computer science, so your motivations are slightly different. Like so many in my field, I have studied and held faculty positions in both areas.

What is next for you?

I am due to start a sabbatical in Lyon, France later this year. I'll be working on a new project, but if you don’t mind, I won’t go into detail. A lot of people say that they are driven by solving a certain applied problem, but I am really a curiosity-driven mathematician. I am driven by the way the field and the algorithms are moving. I am going to try and take the next step in a particular area. I just need to work on my French.

What do you think can be done to support public engagement with mathematics?

I think the change may come through technology, almost by accident. You will have noticed over the last few decades, that people have naturally become more comfortable with computers, and I think that may expand in other interesting directions.

The public’s love/hate relationship with mathematics has been pervasive throughout my career. As a Professor, whenever you get to border control you get asked about your title. ‘What are you a Professor of?’ When you reply, the general response is ‘oh, I hated maths.’ But sometimes you'll get ‘I loved maths, it was my best subject’, which is heartening.

What has been your career highlight to date?

Coming to Oxford was a big deal, as was being elected to the Royal Society. It meant a lot to me, especially because I am an American. It represented being accepted by my new country.

Are there any research problems that you wish you had solved first?

I’m actually going to a conference in California, where 60 people will try to prove a particular theorem: Crouzeix’s conjecture. By the end of the week I will probably be kicking myself that I wasn’t the guy to find the final piece of the puzzle.

Image credit: Shutterstock

The End of ET Fees: A Good Day for the Rule of Law

Lanisha Butterfield | 27 Jul 2017

In what was a landmark ruling, the highest court in the UK yesterday declared employment tribunal fees unlawful because they effectively prevent access to justice.

The decision represents a humiliating defeat for the government, which has been forced to scrap the controversial fee system.

The Supreme Court found in favour of the trade union Unison, which argued that fees of up to £1,200 were preventing workers, particularly people on lower incomes, from getting justice.

Associate Professor Abi Adams, from the Oxford Department of Economics, and Associate Professor Jeremias Prassl, from the Oxford Faculty of Law, welcome the ruling and what it could mean for workers who make future claims. In research published earlier this year, the two argued that the 2013 Order introducing employment tribunal fees was a ‘clear violation’ of long-established UK and EU law.

Access to Justice is the bedrock of the Rule of Law. Today’s unanimous Supreme Court judgement vindicates one of the most fundamental principles of our Constitution, dating back to Magna Carta: everyone has the right to be heard before the Courts.

Everyone? Well, at least until 2013. Nearly four years ago, Chris Grayling (one of the most disastrous Lord Chancellors in recent history) introduced fees of up to £1,200 for employment tribunal claims. Even relatively straightforward claims (e.g. for unpaid wages, median value just under £600) cost £390 to bring – with no guarantee of recovery, even for successful claimants.

The impact was swift and brutal: within months, claims had dropped by nearly 80%. And it was entirely predictable: when we crunched the government’s own numbers, it became clear that 35-50% of those who won their case risked losing out financially. Most workers with low-value claims simply gave up.
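The £390 fee and the just-under-£600 median claim are the article's figures; as a purely illustrative sketch of the rational-choice arithmetic (the win and fee-recovery probabilities below are invented, not the authors' estimates):

```python
# Illustrative expected-value calculation for bringing a tribunal claim.
# The fee and claim value come from the article; the probabilities are
# hypothetical inputs, not the authors' figures.

def expected_net_gain(claim_value, fee, p_win, p_fee_back_if_win):
    """Expected financial outcome of paying the fee and bringing the claim."""
    expected_award = p_win * claim_value
    expected_fee_refund = p_win * p_fee_back_if_win * fee
    return expected_award + expected_fee_refund - fee

# A claimant with a 50% chance of winning a £600 claim, who cannot
# recover the £390 fee even on winning, expects to lose £90:
print(expected_net_gain(600, 390, 0.5, 0.0))  # -90.0
```

On numbers like these, a rational low-value claimant simply does not sue, which is the mechanism behind the collapse in claims described above.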

Bad news for workers, of course – but also for employers. Without enforcement, employment rights are meaningless, as Matthew Taylor’s recent review of modern working practices acknowledged. Rogue employers can get away with undercutting those who comply with the law. Both the High Court and the Court of Appeal, however, upheld the government’s fees in a series of challenges brought by Unison, the trade union.

The Supreme Court’s powerful judgement could not have disagreed more strongly: the Fees Order, the Justices unanimously agreed, ‘effectively prevents access to justice, and is therefore unlawful.’ Their conclusion was built on fundamental constitutional theory, ‘elementary economics’, and ‘plain common sense’.

Constitutional theory first: ‘It may be helpful’, Lord Reed politely suggested, ‘to begin by briefly explaining the importance of the rule of law’. There’s little to add to his powerful analysis:

Courts exist in order to ensure that the laws made by Parliament, and the common law created by the courts themselves, are applied and enforced. In order for the courts to perform that role, people must in principle have unimpeded access to them. Without such access, laws are liable to become a dead letter, the work done by Parliament may be rendered nugatory, and the democratic election of Members of Parliament may become a meaningless charade.

Whilst the Lord Chancellor’s aims of deterring vexatious litigants and raising money for the Ministry of Justice may well be legitimate in principle, in practice they had achieved something different altogether: given the financial risks even when successful, ‘no sensible person will pursue the claim’.

There was more elementary economics and plain common sense to come, demolishing the Lord Chancellor’s insistence that ‘the higher the fee, the more effective it is’ and that no public benefits flowed from the justice system, amongst others.

Where does this leave us? In the short term, things will get messy (and expensive) for the Ministry of Justice: in 2013, the Lord Chancellor undertook to repay all fees illegally levied, and there may well be further arguments about extending time limits for claimants who were deterred from bringing claims in the first place. Going forward, our rational choice model furthermore suggests that similar challenges could be brought to fees introduced in other areas of the civil justice system.

For now, it’s time to celebrate: employment tribunals have already begun to scrap the fees, and claimants across the country will once more have access to the ‘easily accessible, speedy, informal and inexpensive’ system first set up nearly 50 years ago.

‘An unenforceable right or claim’, the late Lord Bingham reminded us, ‘is a thing of little value to anyone.’ The Supreme Court did well to heed his words, and restore the Rule of Law.

This article was first published on the Huffington Post

About the authors

Abi Adams (@abicadams) is an Associate Professor in Economics at the University of Oxford, a Research Fellow at the Institute for Fiscal Studies, and a Fellow of New College. She specialises in labour and behavioural economics with an empirical bent.

Jeremias Prassl (@JeremiasPrassl) is an Associate Professor in the Faculty of Law at the University of Oxford and a Fellow of Magdalen College. He writes on UK and European Employment Law, with a particular interest in the future of work in the gig economy.

Breaking boundaries in our DNA

Marieke Oudelaar from the Weatherall Institute of Molecular Medicine explains how complex folding structures formed by DNA enable genetically identical cells to perform different functions.

Our bodies are composed of trillions of cells, each with its own job. Cells in our stomach help digest our food, while cells in our eyes detect light, and our immune cells kill off bugs. To be able to perform these specific jobs, every cell needs a different set of tools, which are formed by the collection of proteins that a cell produces. The instructions for these proteins are written in the approximately 20,000 genes in our DNA.

Despite all these different functions and the need for different tools, all our cells contain the exact same DNA sequence. But one central question remains unanswered – how does a cell know which combination of the 20,000 genes it should activate to produce its specific toolkit?

The answer to this question may be found in the pieces of DNA that lie between our protein-producing genes. Although our cells contain a lot of DNA, only a small part of this is actually composed of genes. We don’t really understand the function of most of this other sequence, but we do know that some of it has a function in regulating the activity of genes. An important class of such regulatory DNA sequences are the enhancers, which act as switches that can turn genes on in the cells where they are required.

However, we still don’t understand how these enhancers know which genes should be activated in which cells. It is becoming clear that the way DNA is folded inside the cell is a crucial factor, as enhancers need to be able to interact physically with genes in order to activate them. It is important to realise that our cells contain an enormous amount of DNA – approximately two metres! – which is compacted in a very complex structure to allow it to fit into our tiny cells. The long strings of DNA are folded into domains, which cluster together to form larger domains, creating an intricate hierarchical structure. This domain organisation prevents DNA from tangling together like it would if it were an unwound ball of wool, and allows specific domains to be unwound and used when they are needed.

Researchers have identified key proteins that appear to define and help organise this domain structure. One such protein is called CTCF, which sticks to a specific sequence of DNA that is frequently found at the boundaries of these domains. To explore the function of these CTCF boundaries in more detail and to investigate what role they may play in connecting enhancers to the right genes, our team studied the domain that contains the α-globin genes, which produce the haemoglobin that our red blood cells use to circulate oxygen in our bodies.

Firstly, as expected from CTCF’s role in defining boundaries, we showed that CTCF boundaries help organise the α-globin genes into a specific domain structure within red blood cells. This allows the enhancers to physically interact with and switch on the α-globin genes in this specific cell type. We then used the gene editing technology of CRISPR/Cas9 to snip out the DNA sequences that normally bind CTCF, and found that the boundaries in these edited cells become blurred and the domain loses its specific shape. The α-globin enhancers now not only activate the α-globin genes, but cross the domain boundaries and switch on genes in the neighbouring domain.

This study provides new insights into how CTCF helps define these domain boundaries, organising our DNA and restricting the regulation of gene activity to the cells where it is needed. This is an important finding that could explain the misregulation of gene activity that contributes to many diseases. For example, in cancer, mutations of these boundary sequences in our DNA could lead to inappropriate activation of the genes that drive tumour growth.

The full study, ‘Tissue-specific CTCF–cohesin-mediated chromatin architecture delimits enhancer interactions and function in vivo’, can be read in the journal Nature Cell Biology.

Aerial view of the British Mulberry in operation.

In a guest blog, Professor Thomas Adcock, Associate Professor in Oxford’s Department of Engineering and a Tutorial Fellow at St Peter’s College, discusses his newly published research, ‘The waves at the Mulberry Harbours’.

Professor Adcock’s research focuses on understanding the ocean environment and how it interacts with infrastructure. He has a passionate interest in engineering history, particularly the Mulberry Harbours, which were used during the Second World War as part of Operation Overlord (the invasion of Normandy). His grandfather was one of the engineers who worked on their design and construction.

Operation Overlord was the invasion of occupied Europe by the Allies in the Second World War. Whilst the Allies' primary enemy was the Axis forces, they also faced another foe — the weather. In particular, given the continuous necessity for personnel and supplies to cross the channel there was serious concern that the sea might cause a breakdown in supply lines. A particular problem was that the enemy would render all ports useless.

The solution was to construct the components of the ports in Britain and take them across the Channel with the invasion force. These temporary harbours were codenamed Mulberry. The plan was ambitious: two harbours, each twice the size of Dover Harbour, to be operational only 14 days after D-Day and to last for 90 days. “Mulberry A” was to serve the American sector and “Mulberry B” the British sector. Various novel breakwaters and roadways were designed, constructed and floated across to Normandy in the days immediately after D-Day. The American Mulberry was finished ahead of schedule, whilst the British harbour was on time.

However, a fortnight after D-Day a severe storm blew up, almost completely destroying the US harbour and doing serious damage to the British Mulberry. 

Yet the British Mulberry survived and was used (after minor adaptations) for more than twice as long as originally planned. Its remains can still be seen at Arromanches today. To an engineer, a number of questions immediately spring to mind: why did one harbour fail while the other survived, and should the engineers have expected the storm that hit them?

The Mulberry Harbours interest me because they were novel and unusual structures deployed as an integral part of one of the most important operations in military history. We can gain technical insights from what worked and what failed, and learn important lessons by analysing the historical decisions that were made. But my interest is also personal. My grandfather, Alan Adcock, was one of the engineers who made the Mulberry Harbours a reality. I doubt I would have become an engineer without his influence.

To answer the question of why the harbours had different fates, we needed to understand how big the waves that hit them were. This was made possible by a wave hindcast carried out by the European Centre for Medium-Range Weather Forecasts.

A hindcast is similar to a weather and wave forecast — point measurements such as atmospheric pressure and wind speed and direction are assimilated into a model run on a supercomputer, which predicts how strong and persistent the winds were and how big the waves were. To model what happens in the English Channel it is necessary to model most of the Atlantic. However, as waves enter shallow water, different physics becomes important, such as wave refraction and breaking. We took the output of the large-scale hindcast and used this to drive a local model of the waves close inshore, allowing us to predict what the waves were like when they hit the harbours.

We found that the waves at the American harbour were significantly larger than those at the British Mulberry — although both experienced waves larger than they were designed to withstand. This goes a long way to explain why the American harbour failed whilst the British one narrowly survived. We also found that a storm of the severity of the 1944 storm would only be expected to occur during the summer once in every 40 years. The Allies were clearly very unlucky to experience a storm this severe only a couple of weeks after D-Day.
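As a back-of-envelope check on just how unlucky that was (the 90-day season length and the assumption that storms strike uniformly and independently are mine, not the paper's):

```python
import math

# If a storm this severe has a 1-in-40-year summer return period, what is
# the chance of seeing one in the fortnight after D-Day?  (Assumes a 90-day
# summer season and storms arriving independently - illustrative only.)

SUMMER_DAYS = 90          # assumed length of the summer season
RETURN_PERIOD_YEARS = 40  # from the hindcast result quoted above
WINDOW_DAYS = 14          # the fortnight after D-Day

# Expected number of such storms per summer day:
storms_per_day = 1.0 / (RETURN_PERIOD_YEARS * SUMMER_DAYS)

# Probability of at least one storm in the window (Poisson approximation):
p = 1.0 - math.exp(-storms_per_day * WINDOW_DAYS)
print(f"{p:.2%}")  # roughly 0.4%
```

Under these assumptions the chance was well under one in two hundred, which bears out the article's "clearly very unlucky".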

The lead author of our recent paper is Zoe Jackson, who completed this work as part of her final-year undergraduate project under my supervision. The work would not have been possible without the technical expertise of HR Wallingford and the European Centre for Medium-Range Weather Forecasts, based in Reading.

As a final coda, this project used technical knowledge and computing power developed over the more than 70 years since D-Day, and took Zoe about eight months to complete. As I am sure my grandfather would have observed were he still with us, this is the same period as the original engineers had to design and construct the harbours.

Alan Adcock giving his grandson early lessons in coastal engineering

Image credit: Shutterstock

Oxford Mathematician Neave O'Clery works with mathematical models to describe the processes behind industrial diversification and economic growth. Here she discusses her work in Oxford and previously at Harvard to explain how network science can help us understand why some cities thrive and grow, and others decline, and how they can offer useful, practical tools for policy-makers looking for the formula for success.

No man is an island. English poet John Donne's words have new meaning in a 21st century context as network and peer effects, often amplified by modern technologies, have been acknowledged as central to understanding human behaviour and development. Network analysis provides a uniquely powerful tool to describe and quantify complex systems, whose dynamics depend not on individual agents but on the underlying interconnection structure. My work focuses on the development of network-based policy tools to describe the economic processes underlying the growth of cities.

Urban centres draw a diverse range of people, attracted by opportunity, amenities, and the energy of crowds. Yet, while benefiting from density and proximity of people, cities also suffer from issues surrounding crime, transport, housing, and education. Fuelled by rapid urbanisation and pressing policy concerns, an unparalleled inter-disciplinary research agenda has emerged that spans the humanities, social and physical sciences. From a quantitative perspective, this agenda embraces the new wave of data emerging from both the private and public sector, and its promise to deliver new insights and transformative detail on how society functions today. The novel application of tools from mathematics, combined with high resolution data, to study social, economic and physical systems transcends traditional domain boundaries and provides opportunities for a uniquely multi-disciplinary and high impact research agenda.

One particular strand of research concerns the fundamental question: how do cities move into new economic activities, providing opportunities for citizens and generating inclusive growth? Cities are naturally constrained by their current resources, and the proximity of their current capabilities to new opportunities. This simple fact gives rise to a notion of path dependence: cities move into new activities that are similar to what they currently produce. In order to describe the similarities between industries, we construct a network model where nodes represent industries and edges represent capability overlap. The capability overlap for industry pairs may be empirically estimated by counting worker transitions between industries. Intuitively, if many workers switch jobs between a pair of industries, then it is likely that these industries share a high degree of know-how.

This network can be seen as modelling the opportunity landscape of cities: where a particular city is located in this network (i.e., which industries it hosts) will determine its future diversification potential. In other words, a city has the skills and know-how to move into neighbouring nodes. A city located in a central, well-connected region has many options, but one with only a few peripheral industries has limited opportunities.
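A minimal sketch of this construction, using invented industries and transition counts rather than real labour-market data or the author's own model, might look like this:

```python
# Toy industry network: nodes are industries, edge weights count workers
# who switched jobs between a pair of industries, and a city's
# diversification options are the neighbours of the industries it already
# hosts.  All names and numbers here are invented for illustration.

from collections import defaultdict

# (industry_a, industry_b, workers observed switching between them)
transitions = [
    ("textiles", "apparel", 120),
    ("apparel", "footwear", 80),
    ("electronics", "machinery", 95),
    ("machinery", "automotive", 60),
]

# Build an undirected weighted network of capability overlap.
network = defaultdict(dict)
for a, b, weight in transitions:
    network[a][b] = weight
    network[b][a] = weight

def diversification_options(city_industries, network):
    """Industries adjacent to a city's current ones but not yet present."""
    options = set()
    for industry in city_industries:
        options.update(network.get(industry, {}))
    return options - set(city_industries)

# A city hosting textiles and apparel is one step from footwear, but has
# no path into the electronics cluster:
print(diversification_options({"textiles", "apparel"}, network))  # {'footwear'}
```

Richer versions weight the options by edge strength and normalise transition counts against a null model, but the path-dependence logic is the same: a city's position in the network fixes which moves are within reach.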

Such models aid policy-makers, planners and investors by providing detailed predictions of what types of new activities are likely to be successful in a particular place, information that typically cannot be gleaned from standard economic models. Metrics derived from such networks are informative about a range of associated questions concerning the overall growth of formal employment and the optimal size of urban commuting zones.