What have we learned from an extraordinary two years when science, politics and government have been inextricably linked? Here I suggest some lessons, drawing on current work on science policy, the projects of the International Public Policy Observatory (which is publishing a series of case studies on science advice in different countries) as well as experience of having been a recipient of science advice in governments.
My main argument is that we are seeing a convergence of the ‘scientisation’ of politics and the ‘politicisation’ of science which is uncomfortable for both sides. And I argue that while it’s rarely feasible or wise to ‘follow the science’, governments will increasingly need to ‘embrace the science’, by which I mean a more dynamic, engaged and reciprocal relationship that transcends the attachment to neutrality and independence that was so valued in the last century.
1. You cannot govern without science even if you can never ‘follow the science’
A first, perhaps obvious, lesson of the last two years is that you can’t govern well in the 21st century without really good scientific analysis on tap. The proportion of prominent issues that require a scientific understanding grows each decade. So governments need capacities to analyse, interpret, model and guide – particularly during events that can’t be understood without science, like pandemics, potential meteorite strikes or climate crises.
Some governments lacked or ignored such advice. Donald Trump and Jair Bolsonaro were the most visible but there were plenty of others, like the President of Tanzania who recommended lemon and ginger as a cure for Coronavirus, Madagascar’s President who developed his own herbal remedy, or Nicaragua’s Daniel Ortega who just saw the pandemic as a message from God. All fared badly.
But having good science advice and listening to it is a necessary condition – albeit one missing in many places – not a sufficient one. As I discuss later, there are few situations where it is sensible simply to ‘follow the science’. There will always be multiple considerations and types of relevant knowledge.
I have yet to see a detailed analysis of any links or correlations between the strength of science advice and the kinds of outcomes measured by the Lancet study on excess mortality (probably the best available assessment of country performance so far). Moreover, plenty of countries like the UK and US showed that even with strong capacities for science advice it’s still possible to perform relatively poorly in terms of outcomes.
Nor is there a clear trend in relation to politics – leaders such as Marine Le Pen, Viktor Orbán and the Taliban can thrive on strongly anti-science rhetoric, and it is too soon to judge the pandemic’s impact on electoral politics (South Australia is one of very few places where mistakes in the pandemic, in this case a premature move to open up, clearly had a decisive impact on electoral outcomes).
2. Breadth – and the risk of gatekeepers becoming bottlenecks
Although governments need good science advice, they also need breadth. Following just one kind of science is risky. In many countries which did well in mobilising classic science advice – epidemiological and biomedical – there was a striking failure to mobilise social sciences and other kinds of knowledge. Governments making decisions in the pandemic needed many sources of knowledge, including statistics, economics, epidemiology, public opinion, politics, practitioner knowledge, legal, ethical, engineering, implementation and other types of knowledge, not to mention skills in handling data from testing, mobile phone, banking and much more.
But while some were mobilised well, others were poorly organised. For example, the UK did well on statistics, including community-level data, but poorly in terms of other kinds of data (particularly from care). Epidemiology was mobilised well; social science less well. Skills in programme management and implementation were weak – as seen in the contrast between the failings of the test and trace system and the later successes of the NHS’s mechanisms for vaccine roll-out.
In many countries the gatekeepers of knowledge turned out to be bottlenecks – blocking out relevant insights, and often reflecting the biases of their own backgrounds.
Taking a broad view of knowledge matters because there is no obvious hierarchy of these kinds of knowledge. In the early stages of a crisis epidemiology is all-important. But that is much less true in the later stages. The failure to properly tap into knowledge of mental health, for example, fuelled an imbalance in responses that took very seriously the visible physical health threats to old people but largely disregarded the less visible mental health threats to young people.
In short, there is not much use mobilising wonderful scientific advice if there isn’t a broader view of intelligence – and an ability to know what kinds of knowledge are needed for different tasks (the question of how governments around the world mobilised intelligence is the subject of a new IPPO project, reporting at the end of this year).
3. Synthesis and judgement are vital to guide action
A crucial related point is that, having mobilised multiple types of knowledge, the key is then knowing what weight to give each in different circumstances – when, for example, it is more or less valid to ‘follow the science’, or when ICU bed numbers cease to be the crucial metric.
This is a skill of synthesis which I’ve written about recently. But in our systems it’s not clear whose job it is. Is this a task for advisers? Is it for civil servants? Or politicians? And, if so, do they have the skills or methods to do it well? Some scientific committees attempted their own syntheses, taking a broad approach to making recommendations, for example on whether to vaccinate children (weighing health risks against educational considerations), and some governments, such as Scotland’s, attempted a framework for handling different kinds of harm.
But in most governments this work of synthesis was done in unsophisticated ways, using simple heuristics or just sitting groups of experts around a table in committees. Yet there are plenty of methods for more systematic synthesis which could have been used, meta methods that help to identify which specific types of knowledge to prioritise at different points.
A related problem was the imbalance in models. Much attention was paid to epidemiological models, and then modelling of pressures on hospitals – rightly so in the early stages of the crisis. Some attention was then paid to economic models.
But no government pulled these together to make the trade-offs easier to understand. A rare example of an attempt at synthesis was the UK statisticians’ attempt to adjust GDP to take account of education losses (albeit done in retrospect, rather than as part of decision-making). I suspect that historians will see these many imbalances as a fairly fundamental failure of science advice.
So, if nothing else, our thinking about science advice needs to shift from a focus on inputs to a focus on how the whole system works, how multiple kinds of knowledge are mobilised, interpreted and then synthesised. Seen in this perspective it becomes vital to improve the skills of decision-makers (who are generally competent in law and economics but often much less competent in understanding data, systemic patterns or exponential trends). And it becomes vital to address the skills gaps in relation to synthesis and integration.
4. The battles between science and anti-science (some within governments) may be intensifying
What about communication? Governments attempted to set the agenda, with daily press conferences, often, as in the UK, combining politicians and scientific advisers. This was broadly successful in explaining the changing rules and helping the public to understand the dynamics of the pandemic.
But the scale of misinformation on social media was extraordinary and showed the limits of traditional science communication. For example, it’s estimated that nearly half of the Twitter accounts spreading messages about the pandemic are likely to have been bots. In response there was unprecedented government influence on media and social media, mainly in the name of science. The big platforms were pressured to suppress or remove anti-vaxxer content, misinformation and conspiracy theories, and all in the end did so, from Facebook to TikTok. This kind of pressure was unprecedented outside of wartime. But governments’ capacities to confront anti-science ideas and misinformation on social media varied greatly. Some were very agile in using humour and responding fast, while others relied on much more traditional methods.
Indeed the irony is that misinformation was being spread through sophisticated algorithms that tailored messages to data on past choices, cognitive styles and preferences whereas governments were responding with 20th century tools, essentially broadcasting to whole populations. I’m not aware of any governments that used 21st century information methods, targeted to different groups.
Despite this, the net effects appeared to be positive. In a Wellcome poll in late 2021, 80 per cent of people across 113 countries said they trusted science either “a lot” or “some”. Many believe that the pandemic made science stronger – and in many ways it has.
But that obscures much. Anti-science sentiment was suppressed but not defeated. A paper in Nature showed that the US states in which people were most likely to consume online misinformation also had higher vaccine hesitancy and lower vaccination rates. A more anecdotal example: my local high street (in Luton) each Saturday morning has a series of stalls (some Christian, some Islamic) explaining that COVID was a punishment from God for various sins committed, and proclaiming that science had failed and only religion could answer the true questions of life. Social media continue to circulate misinformation of all kinds, and the immediacy and amplification of emotion that they bring are as powerful as ever.
The UK government would probably admit it has no real plan to cope with misinformation (a topic for a current IPPO review). But it and other governments will have to decide whether to take a more active role promoting more validated truths over lies, whether to use data-informed tools to target messages, and how much to bring law into this question.
5. Handling argument and ambiguity
As governments communicate, one of their biggest challenges is coping with ambiguity. The old idea of science as a world of stable and solid truths looks ever more anachronistic. Instead, science evolves through discovery and argument. Good decisions are often helped by explicit argument between competing explanations or competing models. That behavioural science often gave divergent advice during the crisis is not in itself a bad thing, so long as there is honesty about this, and then honest learning about who got what wrong. But this kind of honesty may sometimes clash with the communications pressure to offer very simple and consistent messages. We know that much of the public struggles with shifting and nuanced communications. According to the work of Karen Stenner, for example, around a third of the public in much of the world dislike complexity – one reason why they are attracted to simpler, authoritarian messages. So the challenge for governments may be whether they can segment their audiences: again, catching up with the big platforms in using data to target messages rather than relying on the broadcast messages of press conferences and public information campaigns.
They will be helped if they can earn trust. Global evidence confirms a rise in trust in scientists early in the pandemic, but also, in many countries, subsequent patterns of erosion. For example, in Indonesia trust in scientists is about the same as trust in traditional healers. Everywhere there seem to be correlations between trust in government and trust in science. In short, the long war between science and its competitors is far from concluded. Indeed, the more people feel stress in their lives – whether economic or related to security – the more attractive anti-science ideas are likely to become. The implications are complex, but at a minimum they reinforce the importance of sustaining trustworthiness in public institutions and the need to aggressively combat lies and distortions wherever they arise. At a more macro level they confirm what could be called the challenge of meaning: as science becomes more powerful, it doesn’t become any better at answering questions of meaning and value.
SCIENTISATION OF POLITICS AND POLITICISATION OF SCIENCE
6. Scientisation of politics
The pandemic signalled two possibly converging trends. On the one hand politics is becoming more scientific, even if that is sometimes hard to see amidst populism, hysteria and anti-science rhetoric. On balance governments are having to take more notice of science than ever before. There is slowly growing interest in the use of evidence in policy; more use of experiments; more attention to data. Meanwhile, parliaments appear to cite more research than in the past. In this respect politics echoes daily life, where more people try to use science to shape their diet, exercise or lifestyles.
For governments that doesn’t mean always following the science. But it does mean that although politicians can sometimes ignore the science, they should never be ignorant of it (which makes it all the more important that there are effective ways to distil complex scientific messages in ways that ministers or parliamentarians can digest). And it does mean more use of experiment, evidence and data as a matter of course in the business of government, including opening up data as much as possible for secondary analysis.
7. Politicisation of science
The other trend is that science is becoming more political. Scientists often want to be protected from politics. That’s understandable when they face streams of abuse on social media and even physical threats. But as science becomes more powerful, and gains more authority, it’s inevitable it will also be called more to account (and of course science has always been political in many ways).
Scientists will never be wholly independent or interest-free. Even if they don’t have a direct economic interest in an issue, they will have a stake and a position. They probably need to be more open and honest about this, even as they strive for objectivity.
The limits of neutrality are even more obvious when it comes to science in government. Many scientists exercised de facto executive authority at crucial points in the pandemic, even though their de jure authority was unclear. This raises important challenges. Scientists will have to learn to cope with public pressures and the need to communicate (as some did brilliantly). They’ll need to be more honest when they get things wrong (as so many did early on in the pandemic, for example in relation to masks). But the greater prominence for scientists may also require them to be more accountable.
How much were the scientists being co-opted to political agendas which they couldn’t question? Who was accountable for what when these went wrong? How should science advice, and decision-making, connect when there are multiple tiers of government (in the UK at least, the advice was relatively aligned, even though the governmental mechanisms for coordination turned out to be largely missing)?
These questions will matter acutely as public inquiries get underway, particularly in relation to the key mistakes made in the UK in March 2020 and again that September.
There may also need to be more clarity about the boundaries between roles – with advisers accountable for the advice they give, while executives and officers are accountable for the decisions they make and the actions they take (at present these boundaries are blurred). This matters greatly for the proposed reorganisation of science in the UK government. If it really is to become more strategic, and to link scientific knowledge to issues of policy, regulation and trade, it will not be plausible just to bolster existing systems of advice. Instead, government will need more matrix approaches to deal with the big challenges such as net zero and levelling up and it will need coherent ways of handling all the relevant types of knowledge rather than seeing scientific knowledge as unique.
In my book ‘The Art of Public Strategy’, I describe the many ways governments can organise in horizontal as well as vertical ways. The paper ‘Rewiring the Brain’ (published by Nesta in 2014) set out the implications for the centres of government and has been drawn on in many countries and by the European Commission.
These experiences also open up new questions for democracy. For two thousand years political philosophers have debated whether knowledge or citizens provide a sounder basis for legitimacy (with Plato arguing the first, Aristotle the second), and the tension between these was well described by the Israeli political scientist Yaron Ezrahi as ‘an unsettling empty dark space at the foundation of political order’. That uncomfortable tension was made even more evident in the pandemic, as leaders struggled to make sense of their choices and social media amplified misinformation, but the tension is unlikely to disappear.
One legacy of the crisis will be reformed science advisory systems – which were very uneven. Another may be better ways of engaging democracy – including better training for parliaments. I hope some will also look at better methods to involve the public in these decisions. Citizens’ assemblies and juries are quite well designed for topics of this kind and might have given more legitimacy to the difficult judgements needed on trading off threats to physical health against the threats to mental wellbeing or economic prosperity.
8. Generalisations about science advice
So, what have we learned about science advice? Generalisations are risky but I think in the light of the pandemic, and the roles of science in relation to climate change, we can generalise a framework that makes sense for most contexts, and may be applicable to big strategic challenges other than pandemics.
First, knowledge needs to be mobilised, from wherever it can be found, and will usually be global as much as national, and broader than traditional science advice mechanisms acknowledge. Second, it needs to be assessed and validated for reliability – including by committees, secretariats and networks of experts. Third, it needs to be contextualised to the situation where it will be applied, which requires skilled interpreters and intermediaries who understand both the science and its conditions of application. Fourth, it needs to be synthesised to guide action, which requires a capacity to absorb, question and adapt knowledge. Fifth, its use needs to prompt feedback loops of learning, to elicit new sources of knowledge and potentially new approaches to validation and contextualisation.
This implies that the priority for governments and politics is not so much to ‘follow the science’, which won’t often be either possible or wise. Instead, they should embrace it and integrate with it through more systematic approaches to the organisation of knowledge and intelligence. Embracing someone doesn’t mean doing whatever they say or accepting all their views. It is rather a reciprocal relationship that can be intense, dynamic, full of mutual care but also sometimes challenge.
How will this all look in five to ten years’ time? The pandemic has in some respects been a golden age of scientific authority and legitimacy – as populations willingly gave up freedoms in the light of science advice and saw global science move at breakneck speed to develop vaccines and treatments.
But it has also shown up major weaknesses. There were very uneven capacities in different countries to confront misinformation and anti-science; a glaring failure to take account of many dimensions of knowledge or to think and act synthetically; a tendency to privilege harms to the old over harms to the young, the visible over the less visible.
As politics becomes scientised and science becomes politicised we will need leaders on both sides who can grasp this and not run away from its implications.