Louie Fooks, Humanities and Healthcare Policy Officer for Oxford Healthcare Values Partnership, discusses how a new report by Oxford University’s Healthcare Values Partnership and the Royal College of Physicians on ‘advancing medical professionalism’ can help address some of the problems faced by doctors and the NHS
The government’s Long Term Plan for the NHS, published earlier this month, sets out its vision for a quality health service able to cope with an ageing and expanding population. But, as many commentators point out, without the workforce it needs to support it, the plan will not meet its objectives.
More than 100,000 healthcare posts are currently vacant across the NHS and the number is likely to rise after Brexit. Indeed, the difficulty of recruiting, retaining, and ensuring the well-being of doctors has recently been described as a ‘crisis’ – with health organisations warning it’s a greater threat to the NHS than lack of funding.
Nationally, a quarter of doctors in training say they feel burnt out by high workloads and many are planning to reduce their hours or leave the profession early. And doctors report working in a culture of blame and fear which is jeopardising patient safety and discouraging learning and reflection.
Yet all this is set against a background of an ageing population with complex health needs – increasing the demands we put on doctors and making it even more important that they can operate at their best. Healthy life expectancy at birth is currently 63 years (against overall life expectancy of well over 80), with nearly half the population living much of their older years managing one or more chronic health conditions.
Claire Pulford, the incoming Director of Medical Education for Oxford University Hospitals NHS Foundation Trust, explains the situation here: 'Oxford and Thames Valley is lucky not to have some of the recruitment challenges to our medical training programmes which are faced by other parts of the country, but we still see vacancies and rota gaps in many essential specialities such as acute and emergency medicine. In recent years, there has been a marked drop-off after Foundation-level training, with doctors choosing not to move immediately into more senior or specialist training posts. And morale and engagement are adversely affected, with high levels of burn-out increasingly evident.'
Medical professionalism – part of the solution
How then should we prepare and educate students and junior doctors for modern medical practice – and enable doctors to maintain professional satisfaction throughout their careers? Advancing Medical Professionalism (AMP) argues that enabling and supporting doctors to develop their professional identities is an important part of the answer.
AMP took as its starting point the RCP’s 2005 definition of professionalism as the ‘set of values, behaviours and relationships that underpin the trust the public has in doctors’. It built on this with a series of workshops with healthcare staff, patients and other stakeholders to explore what professionalism might mean for doctors in 2018 and beyond.
The RCP’s Dr Jude Tweedie, co-author of AMP, says: 'Medical professionalism is extremely hard to define. As doctors, we recognise immediately when it’s absent and instinctively know that it’s essential to great patient care and physician satisfaction – but it can be very hard to quantify. So, we went out to talk not only to doctors, but to patients, academics, practitioners and others to find out what they thought.
'The process was really fruitful and helped us identify seven key aspects of doctors' working lives essential to professionalism, highlighting the many different roles we expect our modern doctors to fulfil. From this we were then able to develop practical strategies and approaches to promote professional values, skills and attributes in each area.'
Seven key aspects of professionalism. Doctor as:
• Patient partner
• Team worker
• Manager and leader
• Patient advocate
• Learner and teacher
Claire Pulford comments: 'The General Medical Council’s Generic Professional Capabilities have been adopted into the medical curriculum and give a much-needed basis for embedding professionalism in education and training. Advancing Medical Professionalism provides an excellent resource to support this – to start conversations with students, trainees and other colleagues – and help individuals, teams and institutions to reflect on, and develop, their practice. In Oxford University Hospitals NHS Trust we intend to use the AMP report as a toolkit to inform our development programmes for trainees and trainers; and to explicitly reference it in our teaching, training, research, and Quality Improvement initiatives.'
Professor Joshua Hordern, of Oxford University Theology Faculty and the Oxford Healthcare Values Partnership, sits on the RCP Committee for Ethical Issues in Medicine and co-authored the AMP report. He believes passionately that humanities disciplines can provide vital insights into the modern healthcare challenges we face. Hordern says: 'Most doctors go into the profession with a strong sense of vocation and commitment. But heavy workloads and the increasingly complex context in which they practise take their toll. We hope the approaches in AMP can support doctors in sustaining values of compassion, respect and integrity, developing their vocation and professional identity, and refreshing their joy and confidence in the work they do.'
Advancing Medical Professionalism was authored by Dr Jude Tweedie, research fellow to the president, RCP; Professor Dame Jane Dacre, immediate past president of the RCP; and Professor Joshua Hordern, Associate Professor of Christian Ethics, University of Oxford. Professor Hordern leads the Oxford Healthcare Values Partnership and is a member of the RCP’s Committee for Ethical Issues in Medicine. Dr Richard Smith added to, and extensively edited, the report.
Oxford Healthcare Values Partnership is a partnership of University of Oxford researchers and healthcare staff seeking to understand and improve the ethos of healthcare services. Advancing Medical Professionalism was developed as part of the healthcare and humanities programme, generously supported by the Arts and Humanities Research Council.
Researchers at the MRC Weatherall Institute of Molecular Medicine (MRC WIMM) have developed technology that allows scientists to explore the complex 3D structure of DNA in Virtual Reality. In a newly published pre-print, the team describes their tool, which is now freely available to all.
Working out the sequence that makes up genetic code is now routine in medical research, but the sequence is not the whole story; genes are also turned on and off by physical interaction between specific parts of DNA.
Consider chromosome 1, just one of the 23 paired chromosomes we have: An intricately folded chain of 250,000,000 nucleotides containing 4,220 genes which physically interact with each other in three dimensions.
The molecular origami of these interactions needs to be very precise, and mistakes can literally be the difference between life and death. Changes in the folding of DNA are believed to be associated with a range of diseases, including cancer.
All 22,000 of the genes we carry are contained within 2 metres of DNA, which is similarly packaged into complex folds and whorls in the nuclei of every one of the 37 trillion cells of the body.
Visualising in 3D
Working out the linear sequences of nucleotides that make up the genetic code in our DNA is crucial in understanding how genes work, but understanding the physical interactions between the folds of DNA requires a leap into a new dimension.
That’s where Stephen Taylor and Jim Hughes, from the Centre for Computational Biology at the MRC WIMM, come in. They combined their expertise in computational biology and gene regulation with that of experts in real-time computer graphics and human-machine interaction at Goldsmiths, University of London, to produce CSynth: an interactive tool that allows scientists to visualise a whole chromosome of DNA in 3D and track points of physical interaction.
Unlike comparable tools, CSynth combines interactive modelling with the ability to connect what users see in the 3D model with the DNA sequence information freely available online. Users can dynamically change parameters and compare models to see how this might affect genes and other elements in the DNA, such as the switches that turn genes on and off. An additional feature of CSynth is that it combines its state-of-the-art computational model with Virtual Reality. This means that researchers can virtually step inside the DNA structure and explore and manipulate DNA molecules in a new way.
The potential to really visualise DNA also makes CSynth an excellent learning and public engagement tool, especially when combined with Virtual Reality. Thousands of people have experienced CSynth at the Royal Society Summer Exhibition, the Cheltenham Science Festival and many schools and institutes.
The Oxford team has already collaborated with other researchers at the MRC WIMM to examine how the DNA that codes for part of the haemoglobin complex (the molecule that transports oxygen in red blood cells) folds in 3D, and how the folding changes in different cell types.
What’s new is that the software is freely available to anyone who has access to a web browser. Any scientist can now upload their own data to model and explore at http://csynth.org/. It doesn’t need software installation and is extremely fast to run. The researchers hope that this public web interface makes CSynth useful for education and learning too, and that researchers can share their models online.
But perhaps most importantly, CSynth will help scientists at Oxford and beyond identify potential structures and genetic elements associated with disease and to understand the impact of DNA structure on function.
By Amy Hinsley, Department of Zoology
It is widely accepted that many conservation challenges are directly related to human behaviour. Whether it is the over-collection of a rare orchid by harvesters in Southeast Asia, or the decisions by collectors in Europe to buy and smuggle these orchids home, understanding the extent and nature of these behaviours is essential to addressing the threats they might cause. This has led conservation researchers and practitioners to start looking outside of their discipline, to find methods and approaches from across the social sciences to improve our understanding of these complex issues.
Whilst this interdisciplinarity is a positive move for conservation, it is important that we treat these ‘new’ methods carefully and understand their limitations. If we don’t, there is a risk that our new toolbox full of exciting methods that sound great on a funding application may in fact not be making what we do any better – or, in extreme cases, may even be making it worse.
The Unmatched Count Technique
With this in mind, a group of conservation social scientists, led by researchers at the universities of Oxford and Exeter, decided to look in depth into one of these ‘new’ methods, to provide recommendations on when and how it should be used, and when it shouldn’t. The paper, freely available in the journal Methods in Ecology and Evolution this week, looks at the Unmatched Count Technique (UCT - also called the list experiment), which is increasingly being used in conservation to ask questions about sensitive topics.
The method asks questions in an indirect way that allows the respondent to remain protected and anonymous, meaning that it should produce more truthful answers. So far it has mainly been used to investigate topics that people might be tempted to hide their association with, including illegal behaviours (e.g. stealing), but also those that somebody might be embarrassed to admit openly to a researcher, such as socially undesirable (e.g. racist views), or very personal topics (e.g. being HIV positive). It can also be used to find out how many people really support or participate in socially desirable behaviours that might be exaggerated to impress people, such as recycling or turning out to vote.
We reviewed all peer-reviewed studies that had used the UCT and, along with insights from our own experiences using it, developed a set of guidelines. We found that, since the UCT was first developed in 1979, it has been used in more than 50 countries and several disciplines.
Impressed by the potential of the method, conservationists started using the UCT in 2013, and it has been growing in popularity ever since, with five peer-reviewed conservation studies using it in 2017 alone.
How does it work?
One of the biggest draws of the UCT is that it looks so easy – UCT questions consist of a short list of items, and respondents are asked to report how many are true for them. These lists can also include drawings to make them more appealing and easier to understand, especially where literacy levels are low.
A random 50% of respondents are shown a list of only non-sensitive items, whilst the other half see a list with one additional, sensitive item (the paper's worked example concerns the international illegal orchid trade).
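The standard UCT estimator (a general property of the technique, not something specific to this paper) recovers the prevalence of the sensitive behaviour as the difference between the mean count reported by the group that saw the sensitive item and the mean count in the group that did not. A minimal sketch with simulated responses – the function name and all numbers here are hypothetical illustrations:

```python
import random

def uct_prevalence_estimate(treatment_counts, control_counts):
    """Estimate the prevalence of the sensitive item as the difference between
    the mean count in the treatment group (list includes the sensitive item)
    and the mean count in the control group (non-sensitive items only)."""
    mean_treatment = sum(treatment_counts) / len(treatment_counts)
    mean_control = sum(control_counts) / len(control_counts)
    return mean_treatment - mean_control

# Simulate a survey: four non-sensitive items, each true for ~50% of people,
# plus a sensitive behaviour with a true prevalence of 20% (made-up numbers).
random.seed(1)
control = [sum(random.random() < 0.5 for _ in range(4)) for _ in range(5000)]
treatment = [sum(random.random() < 0.5 for _ in range(4)) + (random.random() < 0.20)
             for _ in range(5000)]

estimate = uct_prevalence_estimate(treatment, control)
print(round(estimate, 2))  # recovers a value close to the true 20% prevalence
```

Because each respondent only ever reports a total count, no individual answer reveals whether the sensitive item was true for them – that is where the anonymity protection comes from.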
Our research shows that social science methods from outside of conservation are useful and we should not stop trying to increase the range of techniques available that can improve how we do conservation. However, we also have to accept that, alongside the benefits of using a new method, there is a responsibility for us to investigate its potential limitations and put in the work required to do it well.
Read the full paper: 'Asking sensitive questions using the unmatched count technique: Applications and guidelines for conservation', Methods in Ecology and Evolution.
A new device could enable computers that use optics and electrical signals to interact with data
Researchers from the universities of Oxford, Exeter and Münster have demonstrated a new technique that can store more optical data in a smaller space than was previously possible on-chip. This technique improves upon the phase-change optical memory cell, which uses light to write and read data, and could offer a faster, more power-efficient form of memory for computers.
In Optica, The Optical Society's journal for high impact research, the scientists describe their new technique for all-optical data storage, which could help meet the growing need for more computer data storage.
Rather than using electrical signals to store data in one of two states - a zero or one - like today’s computers, the optical memory cell uses light to store information. The researchers demonstrated optical memory with more than 32 states, or levels, the equivalent of 5 bits. This is an important step toward an all-optical computer, a long-term goal of many research groups in this field.
Research team leader Harish Bhaskaran from Oxford University’s Department of Materials said: ‘Optical fibres bring light-encoded data to our homes and offices, but that information is transformed to electronic signals once inside computers. By bringing the speed of light-based data transmission to the circuit boards that run computers, our all-optical memory could enable a hybrid computer chip that interacts with data both optically and electrically.’
The new work is part of a large project called Fun-COMP, for Functionally-scaled Computing technology, that brings academic and industrial partners together to develop groundbreaking hardware technologies.
Writing data with light
The optical memory cell uses light to encode information in a phase change material, a class of materials used to make re-writable CDs and DVDs. A laser heats portions of a phase change material, which causes it to switch between states where all the atoms are ordered or disordered. Because these two states exhibit different optical indices of refraction, the data can be read using light.
Phase change materials can store data for a long time because they remain in the disordered or ordered state until illuminated again with the specific type of laser light originally used to write the data. Mixing different ratios of ordered and disordered states in an area of the material allows information to be stored in a continuum of levels instead of just a zero and a one as in traditional electronic memory.
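As an illustration of how mixed phase states map to a continuum of read-out values, here is a minimal sketch assuming a simple linear mixing model; the function name, the reflectivity values, and the linearity are illustrative assumptions, not the paper's measured calibration:

```python
def effective_reflectivity(level, n_levels=32, r_amorphous=0.30, r_crystalline=0.60):
    """Map a stored level (0 .. n_levels-1) to a read-out value, assuming a
    simple linear mix of disordered (amorphous) and ordered (crystalline)
    material. The two endpoint reflectivities are illustrative only."""
    fraction_crystalline = level / (n_levels - 1)
    return r_amorphous + fraction_crystalline * (r_crystalline - r_amorphous)

# A fully disordered cell reads at one extreme, a fully ordered cell at the
# other, and intermediate mixes give a ladder of distinguishable read-outs.
readouts = [effective_reflectivity(level) for level in range(32)]
```

The key point the sketch captures is that reading a level back only requires the optical response of each mix to be distinguishable from its neighbours, which is what limits how many levels one cell can hold.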
The researchers accomplished the increased resolution by using a new technique they developed that uses laser light with a single, double-stepped pulse — two pulses put together into a rectangular-shaped pulse — to precisely control the melting and the crystallisation of the material.
Multi-level memory storage
The researchers showed that they could use their approach to reliably encode data on 34 levels, which is more than the 32 levels necessary to achieve 5-bit programming.
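The arithmetic linking levels to bits is straightforward: a cell with N distinguishable levels stores floor(log2 N) whole bits, so 32 levels are the minimum for 5-bit programming. A small sketch (the helper name is hypothetical):

```python
import math

def bits_per_cell(levels):
    """Whole bits storable in one multi-level memory cell: each extra bit
    doubles the number of distinguishable levels required."""
    return math.floor(math.log2(levels))

print(bits_per_cell(2))   # 1 - a conventional binary cell
print(bits_per_cell(32))  # 5 - the threshold for 5-bit programming
print(bits_per_cell(34))  # 5 - the 34 demonstrated levels leave two levels of margin
```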
‘This accomplishment required understanding the interaction between the light and the material perfectly and then sending exactly the right sort of laser pulse necessary to achieve each level,’ said Bhaskaran. ‘We solved an extraordinarily difficult problem.’
The new technique could help overcome one of the bottlenecks limiting the speed of today’s computers: the link between the processor and the memory. ‘A lot of work has gone into improving the communication between these two units using fibre optics,’ said Bhaskaran. ‘However, linking these two units optically still requires expensive electro-optical conversions at both ends. Our memory cell could be used in a hybrid optical-electrical setup to eliminate the need for that conversion on the memory side by allowing data to be stored and retrieved optically.’
Next the researchers want to integrate multiple memory cells and individually program them, which would be required to make a working memory chip for a computer. The research groups have been working closely with Oxford University Innovation, the University’s innovation arm, to develop commercial opportunities arising from their research on photonic memory cells. The researchers say that they can already replicate the devices extremely well but will need to develop light signal processing techniques to integrate multiple optical memory cells.
How could a sugar pill placebo cause harm? A new review of data from 250,726 trial participants has found that 1 in 20 people who took placebos in trials dropped out because of serious adverse events (side effects). Almost half of the participants reported less serious adverse events. The adverse events ranged from abdominal pain and anorexia to burning, chest pain, fatigue, and even death.
The study found that the apparently strange phenomenon of sugar pills producing harm can be explained by misattribution and negative expectations.
Someone in a trial might have a symptom like a stomachache for any number of reasons that are not related to the trial. Because they are in a trial, they think the trial intervention caused the ache. This gets reported as an adverse event when it would have happened anyway.
The way patients are warned about adverse events can sometimes cause an adverse event. Effects of negative expectations are called ‘nocebo’ (‘negative placebo’) effects. ‘Our study provided preliminary data indicating that some trial participants experience nocebo effects,’ reports lead author Jeremy Howick. Other studies provide more definitive evidence that the way patients are warned about adverse events can affect whether they report them. For example, a study found that patients in a randomised trial of aspirin or sulfinpyrazone for treating unstable angina who were warned about gastrointestinal adverse events were six times more likely to withdraw from the study due to reported gastrointestinal adverse events. A more recent study published last year in The Lancet found that patients were more likely to report adverse events when they knew they were taking statins, compared to when they didn’t. This is probably because the belief that statins cause adverse events like muscle pain can actually produce the muscle pain.
Finding ways to reduce adverse events among patients in placebo groups is important for improving trial quality (since fewer participants will drop out), and improving trial ethics (by avoiding harm). The question is: how?
‘Misattribution can be hard to avoid,’ says Jeremy Howick, ‘because it’s hard for someone to know whether a symptom like a stomachache would have occurred anyway or whether it was because of the trial. However, I believe we can reduce the harm caused by negative expectations.’
For example, telling patients that a new treatment is safe for 90% of patients contains the same information as saying it causes adverse events like headaches in 10% of patients. But the second way may be more likely to actually cause the adverse events than the first.
Unfortunately, guidance for informing trial participants about trial intervention harms, in a way that is ethical, understandable, and does not produce nocebo effects, is currently under-researched. A recent study suggested that information provided to trial participants often fails to tell them what they wish to know, and that it is presented in a way that is difficult to understand. Ongoing research at the Universities of Oxford and Cardiff is looking at the best way to provide patients with balanced information about the benefits and harms of participating in trials. Their preliminary research suggests that patients are given more information about trial harms than about trial benefits.
Says co-author Professor Kerry Hood (Director of Cardiff Centre for Trials Research): ‘We believe it is possible to balance the information about trial benefits and harms in a way that is fact-based and that does not cause unnecessary harm. This can be achieved by ensuring that the benefits, as well as the harms, are explained in a way patients understand.’
The full paper, 'Rapid Overview of Systematic Reviews of Nocebo Effects Reported by Patients Taking Placebos in Clinical Trials,' can be read in Trials.