Michaela Ecker, a former doctoral student in the Oxford School of Archaeology, details her recent research at Wonderwerk Cave in South Africa, of which she is the lead author. Conducted in partnership with the University of Toronto, the Hebrew University of Jerusalem, the National Museum in Bloemfontein and the McGregor Museum in Kimberley, South Africa, the findings, published in Nature Ecology & Evolution, shed light on how the first humans and the environment itself may have evolved.
For organisms from the smallest to the largest, environmental context is a key driver of evolution. Understanding the environments in which humans evolved is thus key to improving our knowledge of our species and its development. My recent work at Wonderwerk Cave has demonstrated how humankind existed in multiple different environmental contexts in the past, contexts substantially different from the local environments of the modern day.
My research in South Africa began at the Florisbad Quaternary Research Station (part of the National Museum in Bloemfontein, South Africa) where James Brink has assembled an unparalleled collection of fossil and modern specimens of the bovids of southern Africa. Walking into his lab that is housed in a large building made of corrugated metal is like stepping into a great library, only instead of books the shelves are lined with bones.
By analysing the isotopes in teeth from a spectrum of animals across the different levels at Wonderwerk Cave – which cover a period of close to two million years – I hoped to fill in the picture of the environmental history of the interior of southern Africa, and particularly the ecological context of early human occupations.
Even before I drilled a single tooth for stable isotope analysis, the first thing that caught my attention as we examined the fauna from excavations at Wonderwerk Cave was a tooth of an antelope that most people have never heard of - the Southern Lechwe (Kobus leche). This animal needs permanent standing water as habitat and is nowadays restricted to the extensive wetlands in northern Botswana, Namibia and Angola. It could not survive in the modern environment around Wonderwerk Cave, which is a semi-arid thornbush savanna that receives rainfall only in the summer months. This means the environment must have been so different in the past that it could support this species, assuming it had the same habitat requirements as today.
After studying the collections at Florisbad Quaternary Research Station it was time to visit the site itself, together with Liora Kolska Horwitz, the co-director of overall research at Wonderwerk Cave.
The cave itself is a long tube with a low ceiling, running from a single entrance 140m into the hill, until the visitor is surrounded by complete darkness. The samples for this study were excavated near the cave’s entrance, where daylight still reaches and Holocene rock art adorns the walls. The present-day vegetation at Wonderwerk Cave, at the edge of the Kalahari, is composed of grasses that follow the C4 photosynthetic pathway and trees and bushes that follow the C3 photosynthetic pathway.
The lechwe antelope teeth showed carbon isotope values that are similar to contemporary animals. However, the carbon isotope results for species that consume a grass diet showed a mix of C3 and C4 in their diet. This meant there was not only much more water around, as attested by the very presence of the lechwe, but also a plant community that is not found in a modern African savanna.
I presented these findings in 2016, at the Society of African Archaeologists meeting in Toulouse, where unbeknownst to me, Lloyd Rossouw, an archaeologist from the National Museum in Bloemfontein and another member of the large international Wonderwerk team, was also presenting results from Wonderwerk Cave, this time focused on phytoliths – the mineral particles that form inside plant tissues.
Once we realised our research links, it became apparent that our results evidenced similar trends and conclusions, which we then combined with a whole range of analyses from other specialists. This exchange of knowledge allowed us to build a clearer picture of the environment at Wonderwerk Cave during the occupation of the earliest toolmakers in the region, and their descendants.
These findings challenge the narrative of early Homo adapting to open, arid C4 savannas. They show that these hominins lived in a wide range of environments, some of them different from anything existing in Africa today.
This study highlights the need to review local terrestrial records to reconstruct past climate and environmental conditions, rather than relying on global records alone. While this can be challenging, since some areas do not preserve long sequences well, Wonderwerk Cave is a rare exception in the dry interior of southern Africa and a testament to the importance of considering every aspect of the local ecology – you never know where your first clue might come from.
The full list of people involved in this project is as follows:
Lead author: Michaela Ecker (former doctoral student at Oxford’s School of Archaeology, now postdoctoral fellow at the University of Toronto)
Excavation directors: Michael Chazan (University of Toronto), Liora Horwitz (Hebrew University Jerusalem), Francesco Berna (Simon Fraser University).
Julia Lee-Thorp (Head of Oxford’s School of Archaeology), James Brink and Lloyd Rossouw from the National Museum in Bloemfontein, South Africa.
Professor Ard Louis from Oxford’s Department of Physics explains the maths behind a phenomenon observed in science and engineering termed ‘simplicity bias’, which makes simple outputs far more likely than complex outputs.
Despite the apparent complexity of the world around us, there is an inherent bias towards simplicity. This holds true as much for biological processes as for engineering and mathematics in everyday life.
Consider the familiar analogy of monkeys hitting keys at random on a typewriter. As the story goes, if the monkeys could type long enough, then they would eventually produce the full works of Shakespeare (although it would take much longer than the age of the universe to do so).
To put this into a quantifiable form, for a typewriter with N keys the probability of the monkeys producing a specific sequence of characters of length K is just 1/N^K, since at each stroke, the probability of getting the right character is 1/N, and they have to do this correctly K times in a row. This argument also implies that all sequences of length K are equally likely or unlikely (assuming of course that the monkeys type in a truly random way). In other words, the monkeys are equally likely to produce Hamlet, which has about 100,000 characters, as they are any other sequence of about 100,000 characters.
Now consider a different scenario, where the monkeys are typing not into a typewriter but instead into a computer programming language. Then some sequences of characters can be generated quite simply. For example, a short 22 character program ‘Print “0011” 250 times’ will produce a 1000 character sequence of the repeating form ‘00110011001100110011…..0011’.
The probability of the monkeys accidentally generating this program by typing on a computer is 1/N^22, which is vastly greater than the 1/N^1000 probability that they would produce the full sequence on a typewriter (or a word processor). Interestingly, most sequences don’t have programs to generate them that are much shorter than simply ‘Print “sequence”’, which means the probability of obtaining them by monkeys either typing into a computer program or on a typewriter is more or less the same. But for some sequences, like the one above, the computer program route is exponentially shorter, and so it is much more likely to occur upon random key strokes.
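To make the arithmetic concrete, the two probabilities can be compared on a log scale, since the raw numbers underflow ordinary floating point. This is an illustrative sketch, not code from the paper; the 30-key keyboard is an assumption for the sake of example:

```python
import math

def log10_typing_prob(n_keys: int, length: int) -> float:
    """log10 of the probability 1/N^K that random typing on n_keys keys
    produces one specific sequence of `length` characters."""
    return -length * math.log10(n_keys)

N = 30  # assume a 30-key keyboard for illustration

# Typing the 1000-character sequence directly vs typing the 22-character program:
direct = log10_typing_prob(N, 1000)   # roughly -1477
program = log10_typing_prob(N, 22)    # roughly -32.5

# The program route is about 10^1444 times more likely.
print(program - direct)
```

The same calculation for Hamlet (about 100,000 characters) gives a log-probability of roughly -148,000, which is why the monkeys need far longer than the age of the universe.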
The basic intuition behind monkeys typing into computer programs was formalised over 50 years ago in the field of algorithmic information theory (AIT). In brief, the simplicity or complexity of a sequence is defined in AIT by the length of the shortest program that can generate the sequence on a universal Turing machine, a basic computer device, first hypothesised by Alan Turing, which can perform any possible computation.
Unfortunately, while the results from AIT are mathematically profound and elegant, they are difficult to apply in practice because universal Turing machines are rather special objects, and moreover, for deep reasons linked to the foundations of mathematics, the AIT definition of complexity is formally incomputable.
To make progress, our team at Oxford derived a form of the AIT coding theorem of Solomonoff and Levin, which calculates the probability that a random program will generate a particular output.
While this new coding theorem result only provides an upper bound on that probability, and so is less powerful than the full AIT coding theorem which provides both an upper and a lower bound, it is much easier to use in practice.
We applied our theorem to input-output maps, which are a general class of systems that take some input, and after some calculation, produce an output. The really interesting thing we found was that for many maps we need to know very little about the map. We can simply convert the outputs to binary strings, and then compress them with a simple compression algorithm (not unlike those you have on your computer to zip files).
The new bound predicts that if an output is highly compressible, then it is much more likely to occur upon random inputs to the map. Not needing to know much about the map is analogous to not needing to know what programming language the monkeys are typing into in order to still predict that they are much more likely to produce something like a highly compressible 1000-character sequence that repeats the pattern ‘0011’, than they are to produce some truly random and incompressible sequence of 1000 characters.
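A minimal sketch of this compression proxy, using Python's standard zlib compressor as the stand-in for complexity (the compressor actually used in the study may differ):

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Bytes after zlib compression: a computable stand-in for the
    incomputable shortest-program length of AIT."""
    return len(zlib.compress(s.encode()))

simple_output = "0011" * 250  # highly regular 1000-character sequence
random.seed(0)
random_output = "".join(random.choice("01") for _ in range(1000))  # patternless

# The regular sequence compresses to far fewer bytes, so the bound predicts
# it is far more likely to occur as the output of a random input to the map.
print(compressed_size(simple_output), compressed_size(random_output))
```

The repeating sequence shrinks to a few tens of bytes while the random one stays over a hundred, mirroring the exponential gap between the short program and the full typed-out sequence.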
As in AIT, if a sequence is highly compressible, this implies that there exists a short code to generate it, so we called these outputs simple. Since simple outputs are much more likely to occur, we say that this broad class of maps exhibits ‘simplicity bias’.
Many systems in science and engineering can be analysed as input-output maps. Examples range from the biological mapping from RNA sequence to RNA secondary structures, to systems of coupled differential equations, to simple models from financial mathematics; these all show an exponential bias towards simple outputs.
In other words, we can expect that many different situations both in science and engineering will manifest simplicity bias, and it is very likely that simplicity is all around us.
The full paper, 'Input–output maps are strongly biased towards simple outputs,' can be read in Nature Communications.
Dr Marco J Haenssgen discusses the application of management thinking to solving the growing global problem of antimicrobial resistance.
You may have heard about superbugs, drug-resistant bacteria, or antibiotic and antimicrobial resistance (AMR) – all referring to one of the most pressing health challenges that the world currently faces. AMR is high on public health agendas, it has attracted hundreds of millions of pounds of research funding, and it risks becoming one of the leading causes of death in the world, claiming an estimated 10 million lives annually by 2050. The World Bank argues that this will have an economic impact similar to the 2008 global financial crisis. Poor countries will be hit hardest, but rich countries are by no means safe, because drug resistance is a global problem and drug-resistant bacteria can also be imported from abroad. The UK experienced this very recently, for example.
AMR means that certain types of medicine become less useful. The problem arises when bacteria and other microbes develop a tolerance to antibiotics and other antimicrobial drugs, which happens, for example, if we keep using antibiotics for the wrong purpose, such as to treat flu and colds. At the same time, new medicines to fight superbugs are still far over the horizon, and so diseases like tuberculosis are becoming more difficult to treat, or even life-threatening.
Part of the response to the superbug crisis therefore involves stimulating the supply of new antimicrobials, and reducing the demand for and unnecessary use of antimicrobial drugs. Typical suggestions to reduce the demand for antibiotics among the general population include reducing infections through improved public health and increasing public awareness about superbugs. As a social scientist, I would argue that this is unlikely to solve problematic antimicrobial demand and overuse in the general population. The problem is likely to persist even if everyone in the world were aware of and educated about AMR, because health behaviour is not driven solely by what we know (other determinants include poverty, lack of access to qualified doctors and nurses, fear, different cultural beliefs, and people’s understanding of what “good care” is).
The supply-and-demand definition of a market for medicine does not help to resolve this problem. The definition is common among neoclassical economists, who define markets as an allocation mechanism for products and services. Akin to a “marketplace,” supply and demand depend on the price of these goods and services. According to this simple model, we could lower demand for antimicrobials by changing people’s preferences, by ensuring that they don’t get sick so often, or by offering them other medicines instead. These suggestions are not insensible, but the focus on a single product or family of products is a barrier to understanding the nature of the demand for antibiotics among the general population, and to finding more comprehensive solutions. We can find some impulses for an alternative in strategic management.
Business leaders know that they don’t compete only with comparable products for their customers. An example I was trained with in management school is a construction firm in the Middle East (let this be our “Customer”). The Customer wins a valuable contract from the government to build the next skyscraper; deadlines are tight and stakes are high. The construction firm will therefore have a demand for the best and most reliable construction equipment available (diggers, cranes, and all the other things that get boys excited). Let a manufacturer of such equipment be our “Supplier 1.” Quite obviously, Supplier 1 competes with other manufacturers of equipment, for example on the basis of product quality, price, or other purchase-related services like quick order fulfilment.
Is Supplier 1 right to consider only the product market for construction equipment? What is it that actually matters to the Customer? Certainly no skyscraper construction can happen without equipment, and so considering competing manufacturers of the same product is not absurd. But the important consideration for the Customer is to fulfil the construction contract without delay so as to avoid financial penalties from the government. That is why high-quality and breakdown-proof equipment is important, as it helps to limit the risk of delays. Similarly useful would be insurance to cover the penalties for delays, in which case our Customer could make do with cheaper equipment. Supplier 1 is therefore not only in direct competition with other equipment manufacturers but also with insurance companies, and if the function of avoiding financial penalties for delays can be met by an insurance broker, then suddenly there may no longer be a demand for high-quality and reliable equipment from the Customer. The market therefore comprises not just products but more general functions that the customer aims to fulfil, and different types of solutions or technologies can fulfil these functions.
This is the conceptualisation of strategic market segments following Abell (1980), and one line of management teaching suggests that businesses should not only be concerned with competing producers of similar products, but indeed with solutions from other industries that help customers fulfil their needs.
Though it may appear “off the beaten track,” we can apply this definition of strategic markets quite usefully to AMR and the demand for antimicrobial drugs. If we consider the case of people’s antibiotic use, then the conventional supply-and-demand logic can easily trap us in a focus on prices, different types and brands of antibiotics, or perhaps relative prices with other medicines. A strategic market definition draws attention to other aspects: functions (the ultimate goal of taking medicine), technologies (the range of solutions to reach this goal, including medicine), and consumers (different segments of the general population). We could therefore consider:
• What function(s) do antibiotics fulfil when people demand them? For example, some people might just take medicine in the hope of getting better quickly, especially if they cannot (afford to) take time off and their family depends on their work.
• What other solutions help people to achieve the same function(s) that antibiotics fulfil? If antibiotics provide peace of mind, then this might not be an intrinsic characteristic of antibiotics, but of receiving some form of pharmaceutical treatment more generally. The same peace of mind could be brought about by labour laws that provide paid sick leave, so people don’t have to worry about their income when they get sick.
• Do these functions matter equally to all consumers? Strategic marketing starts from the premise that consumer groups differ in their needs, and the functions of antibiotics may be distributed unevenly across a population.
If we apply this strategic management definition of a market, then we can broaden our understanding of, and response to, people’s antibiotic use. For example, while awareness campaigns might change some people’s behaviour, what we think to be superior knowledge or a better solution may not be deemed superior by the population groups whom we serve, so we need to understand their needs and objectives first. The reason for overusing antibiotics might not be that people did not know what they were taking, but, for example, that they were desperately trying to keep working and sustain their family.
The strategic market logic thereby permits us to formulate new premises for analysing people’s medicine use. A selection of such premises is exemplified below:
1. The landscape of healthcare providers is fragmented and obscure. While access to prescription medicine may be regulated more easily in public healthcare settings, the wide spectrum and number of non-public providers of healthcare (e.g. unregulated pharmacies or grocery stores selling medicine) means that the general population will not automatically be drawn to public healthcare services.
2. Preferences and means to access healthcare vary within the population. Patients may ascribe a higher curative value to private healthcare providers, gaps in public healthcare provision might make private alternatives preferable for logistical reasons, or ethnic minority groups’ experiences with discrimination can bias their treatment-seeking behaviour towards informal local healthcare providers (e.g. local stores) – all of which could increase people’s likelihood of receiving antibiotics for their treatment when it is not clinically necessary.
3. When navigating these obscure health systems, people share a social space within which they collaborate and compete. Treatment seeking and access to medicine do not happen in isolation. Help from others can overcome constraints and shape choices. However, available healthcare resources are often scarce, and the competition for them can crowd out already marginalised groups.
4. New healthcare solutions that target patient behaviour will always have to compete with existing solutions. From a strategic market perspective, antibiotic prescription and use, even if they are deemed “inappropriate,” will always be part of a network of solutions to meet various health-related consumer functions. This network constitutes potential competition for new interventions to reduce antibiotic use.
5. Social, economic, and technological change can affect treatment-seeking behaviours in unforeseen ways. Contextual change alters the constraints that people experience when they seek care and access medicine, which can lead to the emergence of new behaviours. Mobile phone diffusion can, for instance, increase access to healthcare but could also complicate and bias people’s choices towards non-public healthcare providers.
6. Solutions for what is deemed “problematic health behaviour” need not be confined to the health sector; they can plausibly have similarly if not more effective substitutes in other sectors. In the same way that contextual change can influence healthcare choices and constraints, interventions to improve health behaviour and antibiotic use might focus on changing the composition of contextual constraints. For example, health education about “appropriate antibiotic use” may be informative for the general population but unable to alleviate financial hardship as the underlying driver of adverse behaviours – social protection schemes may be more effective in such a case.
As a simple frame of mind, the strategic management definition of a market and this list of premises can be useful for understanding people’s demand for antibiotics, but they can also be applied to other health behaviours and interventions beyond AMR. Interdisciplinary approaches like this – applying social science thinking to global health problems – help us to understand why interventions fall short of expectations, and they can stimulate new ideas for action and interventions beyond awareness-raising and education campaigns.
Haenssgen, M. J., Charoenboon, N., Zanello, G., Mayxay, M., Reed-Tsochas, F., Jones, C. O. H., et al. (2018). Antibiotics and activity spaces: protocol of an exploratory study of behaviour, marginalisation, and knowledge diffusion. BMJ Global Health, 3(e000621). doi: 10.1136/bmjgh-2017-000621
Haenssgen, M. J., Charoenboon, N., Althaus, T., Greer, R. C., Intralawan, D., & Lubell, Y. (2018). The social role of C-reactive protein point-of-care testing to guide antibiotic prescription in Northern Thailand. Social Science & Medicine, 202, 1-12. doi: 10.1016/j.socscimed.2018.02.018
California, Brazil and South Africa have all recently experienced major drought, threatening serious disruption to supplies for major cities (‘Day Zero’ events). How can England prepare for drought without harming the environment or driving up water charges?
Dr Matthew Ives and Mike Simpson of Oxford's Environmental Change Institute, discuss their research on strategic water planning - conducted with Professor Jim Hall and newly published in the Water & Environment Journal.
Many people find it hard to believe that a country so blessed with rain as England would have any need to undertake intensive water conservation measures. But, contrary to popular opinion, the United Kingdom isn’t as wet as some believe. In fact, some parts of England have rainfall per person as low as that of the world’s most arid regions, such as the Middle East.
Convincing people to use less water and investing in long-term leakage reduction solutions will be critical for the avoidance of drought-induced interruptions to water supplies for large numbers of businesses and households in England.
Additional consequences of failure to act would include high costs for new infrastructure, such as for desalination or transfer pumping, while the extra energy this uses may mean additional carbon dioxide emissions. These stark conclusions are the headline results from recently published research into future-proofing England against the spectre of severe drought.
This twin-track approach represents a bold challenge to the water engineering community. Technological and social solutions to address leakage and demand reduction already exist, with many currently implemented in the UK or overseas.
Smart metering, available on a voluntary basis in much of England, can drive down the costs of finding and managing leaks, as well as encouraging reduced use of water. Satellite and remote-sensing technologies pioneered in drier parts of the world, like Israel and California, can be used to identify leakage sites.
The sheer number of people in the relatively small urban areas of England requires an enormous amount of water. Unfortunately, while many of the most densely populated areas are in the South and East, much of the rain falls in the North and West. One regularly proposed answer to this problem is to transport water across the UK, in particular from Wales and Scotland, to supply the Southeast of England during temporary dry conditions. Could this pipeline idea be a solution? Maybe technologies such as desalination could be used? Or the development of a new generation of larger reservoirs? What about increasing the efficiency of our existing water system?
Developing solutions to meet England’s future water needs calls for a national perspective, which can answer strategic questions about our water infrastructure strategy. Using our purpose-built National Infrastructure Systems Model (NISMOD) we assessed all of the different investment options available to England’s water companies for future-proofing the country’s water supplies. With a twist. We included the options available to individual companies, such as reservoir extensions and desalination plants, alongside options requiring a national perspective, such as inter-company transfers and demand management campaigns. And we pitted all such options against the spectre of future uncertainty around climate change and population growth.
We termed this analysis ‘navigating the water trilemma’ as it involved finding solutions that not only provided England with future water security but solutions that were also affordable and did not put too great a strain on the natural environment. This study highlighted the value of the flexible, ‘trilemma-friendly’ options like leakage reductions and demand reductions.
Our analysis points to the unavoidable answer: leakage reduction and demand management are the most cost effective and widely applicable components of future water strategy for England. Early investment in both of these solutions would allow a sensible and frugal culture of water use to be developed without recourse to panic during the inevitable drought events, such as experienced in the summer of 1976.
When we look at the impacts of drought in places which have the resources of England but have not prepared sufficiently, the results are clear.
In Australia, hugely expensive new desalination works were developed in response to an extended drought, with long-term costs to public finances. Over recent years in California, restrictions on water use have been seen as deeply socially disruptive. However, many Californians now see responsible water use as a normal part of daily life.
Our research and new modelling capabilities were used to great effect by the National Infrastructure Commission (NIC) in their assessment of England’s drought preparedness. Their analysis, produced on the basis of our work, proposes a dramatic and ambitious change in approach. The NIC concluded that the equivalent of an extra 4 billion litres of water per day would be needed across England in case of significant drought. The report proposed that two-thirds of this should be made available through developing efficient pipe systems as well as shifting to the lowest household water use rates in the developed world. The NIC recommended that this should be supported by transfers of water between regions and, where appropriate, new water infrastructure including reservoirs and water recycling schemes.
Without improved national co-ordination and large-scale investment in water supply, the NIC’s report suggests that large parts of the country have a one-in-four chance of having their water cut off during a drought. Emergency measures, such as road and ship tankers, could cost up to £40 billion by 2050, while building greater resilience would cost only half this amount.
Improving water resource efficiency is a fascinating challenge with many lessons to be learned from around the world. Technological solutions including sensing and monitoring of water supplies can be complemented by social solutions such as education and identifying the factors that influence people to make better use of water. Organisations such as ECI and the Centre for Ecology & Hydrology are well-placed to influence how such ideas are researched and how this research can become reality.
With some planning and vision, water supply in England can be future-proofed and it doesn’t have to be expensive. Adequate early investment, the development of a culture of water saving and some new technological and social ideas should make our occasional long, dry summers something to look forward to. When the alternative is expensive, environmentally damaging short-term solutions and regularly running out of water, surely the choice is clear?
This article is based on research in the Water and Environment Journal (WEJ), and the National Infrastructure Commission’s report “Preparing for a drier future”
Marina Filip, Postdoctoral Research Assistant, and Feliciano Giustino, Professor of Materials, both in the Department of Materials, explain how elementary geometry and modern data analytics can be combined to predict the existence of thousands of new materials called ‘perovskites’, as shown in their recent publication in PNAS.
Perovskites are a broad family of crystals that share the same structural arrangement as the mineral CaTiO3. The extraordinary appeal of perovskites is their unusual chemical versatility, as they can incorporate almost every element in the Periodic Table. This leads to an incredibly diverse array of functionalities. For example, two major scientific discoveries of our times prominently feature perovskites: high-temperature superconductivity in perovskite cuprates (Bednorz and Müller, Nobel Prize 1987) and the recent discovery of perovskite solar cells (Snaith, University of Oxford, 2012).
In our own study we wanted to understand what makes certain combinations of elements in the Periodic Table arrange as perovskite crystals and others not, and whether we could anticipate how many and which perovskites are yet to be discovered.
It turned out that Norwegian mineralogist Victor Goldschmidt asked exactly the same question in 1926. Based on empirical observations, he proposed that the formability of perovskites follows a simple geometric principle, namely: the number of anions surrounding a cation tends to be as large as possible, subject to the condition that all anions touch the cation. This statement is known as the ‘no-rattling’ hypothesis, and essentially means that if we describe a crystal using a model of rigid spheres, in a perovskite the spheres tend to be tightly packed, so that none can move around freely. Using elementary geometry, Goldschmidt’s hypothesis can be translated into a set of six simple mathematical rules that must be obeyed by the ions of a perovskite.
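The best-known of these geometric criteria is the classic Goldschmidt tolerance factor, which follows directly from the rigid-sphere picture. As an illustrative sketch (the radii below are approximate Shannon ionic radii, and the full study applies a larger set of rules than this single number):

```python
import math

def tolerance_factor(r_a: float, r_b: float, r_x: float) -> float:
    """Goldschmidt tolerance factor t = (rA + rX) / (sqrt(2) * (rB + rX))
    for an ABX3 perovskite; t close to 1 means the rigid spheres pack
    snugly, with no room for any ion to rattle."""
    return (r_a + r_x) / (math.sqrt(2) * (r_b + r_x))

# Approximate ionic radii in angstroms for CaTiO3: Ca2+, Ti4+, O2-
t = tolerance_factor(1.34, 0.605, 1.40)
print(round(t, 2))  # close to 1, as expected for the prototype perovskite
```

Combinations whose tolerance factor falls well outside the window around 1 tend not to crystallise as perovskites, which is the intuition the study tested at scale.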
Goldschmidt’s hypothesis had been used in one form or another in countless studies over the last century to explain the formation of perovskites in qualitative terms, but its predictive power had never been assessed quantitatively. We realized that, unlike in 1926, in 2018 we benefit from a century of research in crystallography, documented in publicly available databases of crystal structures, such as the Inorganic Crystal Structure Database, and more than 50,000 published scientific papers on perovskite compounds. Using internet data-mining and statistical analysis, we were able to collect and study a library of more than 2,000 chemical compounds which are known to form in various crystal structures, and use them to test the predictive power of Goldschmidt’s hypothesis. We found that this very elegant geometric model is actually capable of discriminating between compounds which are perovskites and those which are not with a higher success rate than sophisticated quantum-mechanical approaches.
In our study we used this simple model to screen nearly four million compositions, and predict the existence of more than 90,000 new perovskite materials that have not yet been synthesized. This library of predicted compounds offers the community working on the synthesis and characterization of new materials the exciting challenge of uncovering the functionalities of these novel perovskites. Most importantly, our discovery may lead to the realization of entirely new functional materials for a broad range of technologies, including applications in energy, electronics and medicine.