Geoff Mulgan

Evidence Ecosystems and the Challenge of Humanising and Normalising Evidence


It is reasonable to assume that the work of governments, businesses and civil society goes better if the people making decisions are well-informed, using reliable facts and strong evidence rather than only hunch and anecdote.  The term ‘evidence ecosystem’1  is a useful shorthand for the results of systematic attempts to make this easier, enabling decision makers, particularly in governments, to access the best available evidence, in easily digestible forms and when it’s needed.  


This sounds simple.  But these ecosystems are as varied as ecosystems in nature.  How they work depends on many factors, including how political or technical the issues are; the presence or absence of confident, well-organised professions; the availability of good quality evidence; whether there is a political culture that values research; and much more.


In this short piece, I reflect on the UK’s attempts to create better ecosystems of evidence and what the priorities might be for the next period, both in the UK and elsewhere.    I have been involved in several phases of this history and have attempted to be reflective rather than complacent.   The paper argues that rigorous generation of knowledge, interpretation and synthesis, and clear communication to users, are vital and need to become more systematic and embedded.


However, these are necessary but not sufficient conditions for these systems to work well.  In particular, the paper argues that the next generation of evidence ecosystems needs a sharper understanding of how the supply of evidence meets demand, and of the human dimension of evidence.  That means cultivating lasting relationships rather than relying too much on a linear flow of evidence from researchers to decision-makers; it means using conversation as much as prose reports to ensure evidence is understood and acted on; and it means making use of stories as well as dry analysis.  It depends, in other words, on recognising that the users of evidence are humans.


In terms of prescription the paper emphasises:


  • Sustainability/normalisation: the best approaches are embedded, part of the daily life of decision-making rather than depending on one-off projects and programmes.  This applies both to evidence and to data.  Yet embeddedness is the exception rather than the rule.

  • Multiplicity: multiple types of knowledge, and logics, are relevant to decisions, which is why people and institutions that understand these different logics are so vital.  

  • Credibility and relationships: the intermediaries who connect the supply and demand of knowledge need to be credible, combining depth of knowledge with an ability to interpret it for diverse audiences.  They also need to create and maintain relationships, which will usually be either place or topic based and will take time to develop, with the communication of evidence often done best in conversation.

  • Stories: influencing decision-makers depends on indirect as well as direct communication, since the media in all their forms play a crucial role in validating evidence and evidence travels best with stories, vignettes and anecdotes.


So, while evidence is founded on rigorous analysis, good data and robust methods, it also needs to be humanised – embedded in relationships, brought alive in conversations and vivid, human stories – and normalised, becoming part of everyday work.


The ideal model


Many people involved in evidence have an implicit ideal model in their mind. In it, good quality evidence is provided by academics to rational decision-makers who have a sophisticated understanding of research, and want their policies or actions to work, and to be cost effective.    For this ideal model to work, it’s generally agreed that there needs to be:


  • plenty of research to draw on, including rigorously designed randomised controlled trials (RCTs) and experiments, to feed this stock of knowledge;

  • good data on activities, outputs and outcomes, with suitable granularity and time series;

  • regular meta-reviews and syntheses, and living maps that pull these together;

  • communications channels that distil these into useable forms, including ones that are contextually relevant;

  • presenters of the evidence who are credible, and have deep knowledge of the topics, beyond that available to civil servants, professionals and politicians;

  • working cultures in professions, the civil service and politics which value evidence, and invest in training on how best to use it;

  • a wider public (and media) who both understand and respect science and the scientific method.


Recent UK history

Over the last few decades, the UK has seen serious attempts to move closer to this ideal, complementing parallel work globally, from the Cochrane and Campbell Collaborations to the work of the OECD, the European Union and individual countries (notably the USA with its Foundations for Evidence-Based Policymaking Act).  Many university-based centres were set up in the 1990s, a period when government showed a healthy appetite for evidence.  A lot of money was spent on evaluations, there were a growing number of experiments and RCTs, and there was healthy interaction between academia and policy-makers.


However, although many of the university-based centres did well in creating repositories of evidence, they were less successful in getting it used, and many evaluations had little impact, arriving long after the policy interest had waned.  This led to greater interest in the demand side of evidence, and attention to the ‘evidence on evidence’, which emphasised the importance of who was presenting evidence, how it was presented, and its timeliness.


The National Institute for Health and Care Excellence (NICE) was set up in the late 1990s with a strong focus on the use of evidence, and was followed in the 2010s by a wave of ‘what works’ centres, partly aiming to address this problem through a tighter link between supply and demand.  This report back in 2013 proposed how these would work and how their impact could be maximised (it argued that they should be ‘demand led… useful to their target audiences and those for whom the evidence is relevant, working hard to develop easy–to–understand outputs… completely independent from government, but close enough to have an impact, and complementing rapid progress being made in opening up public data, and administrative data’).


This approach received strong support from the Coalition government2. More recently, it guided the approach of IPPO – linking centres in all the nations of the UK – and other observatories and evidence centres that work closely with policy-makers.

Some of the ‘what works’ centres became very influential.  NICE was powerful thanks to its analysis of the effectiveness of treatments and the link to commissioning in the health service.  Some achieved impact through being tied into training (like the College of Policing).  Some achieved significant influence thanks to their scale and smart ways of working (like the Education Endowment Foundation, EEF, with huge budgets by the normal standards of academia).


Meanwhile there was significant investment in experiments and trials, using increasingly robust experimental methods to ensure that their results were valid, helped by bodies like the Behavioural Insights Team and the Innovation Growth Lab.  Other initiatives supported this ecosystem: the Alliance for Useful Evidence, which covered all the nations of the UK; media programmes such as ‘More or Less’ on the BBC; Full Fact, Our World in Data and other initiatives promoting accuracy; CAPE (Capabilities in Academic Policy Engagement), UPEN (the Universities Policy Engagement Network) and policy fellowships; and initiatives in the devolved nations such as the Wales Centre for Public Policy, and the new network of LPIPs (Local Policy Innovation Partnerships).


Within UK government, the Evaluation Task Force (ETF) was set up in 2021 as a joint Cabinet Office–Treasury team, and promised that by 2025 every department would have published an evaluation strategy; every major project would have robust evaluation plans; and 90% of departments would be compliant with the Treasury’s evaluation conditions set with budgets at spending reviews, all building on an evaluation registry – a record of evidence from policy evaluations to inform future policy making.


This sustained investment has left the UK ecosystem widely admired: institutionally strong, reasonably well connected to governments, better resourced than its counterparts in most other countries, and enjoying healthy cross-party support and a lattice of connections to parliaments (through bodies like the Parliamentary Office of Science and Technology, POST).


Challenges – what didn’t work so well

However, it would be wrong to be complacent.  A more critical look at the UK scene quickly shows some of the challenges.  A recent report from the Institute for Government (Whitehall Monitor 2024) commented that ‘too much policy remains unhelpfully closed to evidence and input, particularly from outside central government. …[civil servants] can feel that outside evidence might point towards proposals misaligned to ministers’ priorities and so may be politically unwelcome. In energy policy, for instance, outreach to external experts has been undermined by officials being tied to a ‘house view’ that limits “what evidence is deemed relevant, what policies are considered, and who is consulted”. Or they can lack the knowledge, networks and capability to bring in that outside view. Where good practice exists, it is usually the result of the skills of particular civil servants, rather than a systematic approach of seeking external expertise.’


This last point highlights a fundamental issue: it remains unclear whether there is an owner of the problem, a department or funder sufficiently committed to the use of evidence to ensure that there is some kind of ecosystem in place.  Recent Prime Ministers and Cabinet Secretaries have given less support than some of their predecessors.   Government also still lacks a coherent intelligence function for domestic policy – instead this is divided not only between functional departments (health, education, industry) but also between different teams and cultures (statistics, research, data, science), and the knowledge management function in government remains relatively low status.  As we have argued before (in the IPPO study on how governments organised intelligence during the pandemic), this leads to multiple inefficiencies.


These structural weaknesses at the centre of government may help to explain some of the other patterns which mean that the UK is some way from an ideal of evidence-informed policy:


Professional practice and analysis – rather than policy: perhaps the most striking feature is that the most impactful centres have avoided policy altogether. One group (NICE, the EEF and others) has primarily targeted professional practice rather than policy, whether in hospitals or schools. These centres aligned with strong professions that invest heavily in training and the systematisation of knowledge, but were careful not to comment on policy choices.


Another group of evidence centres, often the most visible in the media (such as the Institute for Fiscal Studies (IFS) and Full Fact), was funded from outside government but tended to focus on facts, analysis and description, again avoiding much prescription or policy.


Uneven embeddedness: NICE is an example of an embedded evidence infrastructure, closely tied into everyday decision-making.  But it remains an exception and there has been only limited progress in creating others.  In an ideal world evidence is generated automatically, providing feedback about what policies or interventions are working.   This depends on gathering data about what is being done, how much money is being spent, and what outcomes are achieved.  As I show later there are impressive examples – such as New Zealand’s Integrated Data Infrastructure (IDI) – which show what is possible.    But the IDI is an exception, and it remains rare for evidence to be tied to finance (as NICE does), even though there is less value in knowing something is effective if you don’t know how much it costs. 


Interventions rather than systems and institutions: the main methods used by the What Works Centres, and within government (such as cost-benefit analysis), focus on individual interventions.  They are not designed to evaluate whole systems or institutions, which requires a wider range of disciplinary lenses (including history) and methods.  This matters since a high proportion of government activity can only be understood through systems thinking rather than as an aggregation of discrete interventions (the recent attempts at ‘levelling up’ are an obvious example).


A lot of ‘what’, not so much ‘how’:  the traditional evidence model focuses more on ‘what’ works than on ‘how’ it works, with relatively little attention paid to the challenges of implementation and adapting evidence to real contexts.  Yet evidence is only transferable if the conditions of implementation are similar to the conditions the evidence was generated in, which is rare.  Often decision-makers want as much guidance on the practicalities of implementing a new programme or policy as on its design.   This is more of a craft skill than a science.


Patchy connections between demand and supply: in an ideal world there would be close links between the policy-makers who need evidence and the intermediaries and providers who supply it. There would also be regular exercises to identify demands and potential supply.  But despite the introduction of ARIs (Areas of Research Interest), these links remain patchy.  There is no coordinating function within government to gather and triage specific demands, and no systematic channel back to providers of evidence.  UK government still lacks a head of social science (an idea recommended in the past, for example by the House of Lords), despite having many Chief Scientific Advisers (CSAs), a social science strand in GO-Science, and a head of the economics profession.  The definition of CSA roles, for example, still focuses solely on STEM issues (even though most research used to guide policy comes from the social sciences) and the CSA role is usually presented as quite detached (i.e. advice rather than action).


Mismatches with politics: it would be hard to claim that political decisions have become more evidence-based over the last decade.  Some policies that are repeatedly dismissed by evidence synthesisers continue to be popular amongst ministers: enterprise zones are one example.  The interface of politics and evidence also continues to be problematic. Politicians and policy-makers are deluged with information.  They become adept at selecting what will be most useful to them.  This isn’t always or even usually the most rigorous research, but rather the research most likely to resonate with their world, a world in which the media, commentators and other politicians matter much more than academics.  As I show later, this means that influence is often most effective when it’s indirect rather than direct, and when evidence is shaped into stories, vignettes and anecdotes, and combined with striking statistics.


In the next sections I suggest some ways in which the evidence ecosystem could become both more normalised and more humanised, while building on the best of what already exists.


1. Focusing on sustainability and embeddedness: evidence by default

Evidence is most likely to be wanted and used if it is simply part of the everyday processes of decision-making.  This is true to a large extent in medicine, engineering and many sciences.  In these fields it’s hard to practise without some familiarity with the state of knowledge.  The best models are embedded, part of the daily life of decision-making rather than depending on one-off projects and programmes (some have suggested the idea of ‘evidence by default’, an equivalent to moves to ensure ‘digital by default’).


The development of any new policy, or intervention, would begin with a scan of what’s known.   This has sometimes been common for policy units in government, but it’s not automatic, though there are many tools available for doing scans of this kind, from meta-reviews to evidence syntheses, and many new tools using AI that can generate rapid syntheses, from EPPI-Reviewer to Elicit (see our piece surveying these last autumn).  That evidence may not be directly relevant or transferable (see our recent paper on transferability for a more in-depth discussion of this).  But it at least provides a starting point for policy design.


The field of health has addressed the question of embeddedness more than others, and research confirms that this depends not just on knowledge flows but also on much else, including skills, mindsets and organisational capacities (see, for example, this recent piece on health systems).


Embeddedness also involves data.  In principle, every public service would regularly collect data on what it is doing and what is being achieved.  This is a potential direction of travel for public finance, with much more systematic tagging of spending allocations and learning about patterns and surprises.  But not many public services come close to this ideal, in part because of very limited progress in linking financial allocations to impacts achieved.


Probably the world’s best example of a more sophisticated infrastructure of this kind is New Zealand’s Integrated Data Infrastructure, which collects many different kinds of household-level (de-identified) micro-data, from social and health data to tax and jobs, making it possible to discover patterns without experiments or trials.  The UK was on a similar track with Biobank (which is now generating fascinating insights) and the ‘children at risk’ database, ContactPoint (though this was cancelled in 2010).  More recently, LEO, which connects educational data and tax records, has started to generate very useful insights, and hopefully the same will happen with big projects like ‘Our Future Health’ (involving some 5 million citizens).  The ONS’s Integrated Data Service (IDS) could in time become a very significant part of the UK ecosystem.  Australia also has several impressive examples, including the Person Level Integrated Data Asset (PLIDA), the Business Longitudinal Analysis Data Environment (BLADE), and ALife, a de-identified data set on tax and superannuation.
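To make the underlying mechanism concrete, the sketch below is a minimal, purely illustrative example (in Python, using invented field names and toy values rather than the actual IDI or LEO schemas) of how two de-identified administrative extracts can be joined on a hashed identifier and then mined for simple descriptive patterns, without any experiment or trial.

```python
import pandas as pd

# Hypothetical education extract: hashed person ID plus highest qualification level
education = pd.DataFrame({
    "person_hash": ["a1", "b2", "c3", "d4"],
    "qualification_level": [2, 3, 4, 6],
})

# Hypothetical tax extract: hashed person ID plus annual earnings
earnings = pd.DataFrame({
    "person_hash": ["a1", "b2", "c3", "d4"],
    "annual_earnings": [18_000, 22_500, 27_000, 41_000],
})

# Linkage step: the join key is a hash, so no directly identifying fields are shared
linked = education.merge(earnings, on="person_hash", how="inner")

# Descriptive pattern: median earnings by qualification level,
# derived from linked administrative data rather than a trial
pattern = linked.groupby("qualification_level")["annual_earnings"].median()
print(pattern)
```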


Finally, embedding requires governments to be better at learning.  The organisation of collective memory appears to have deteriorated in recent years (a weakness highlighted in the recent Whitehall Monitor), partly thanks to high turnover of staff and weaknesses in the digitisation of records.  When a new policy is being developed there may be no one around who remembers similar initiatives 5, 10 or 15 years ago, no records of lessons learned, and no lists of who was involved in previous initiatives.


High turnover of officials was one of the factors that made the recent Online Safety Bill problematic.  Another example (which IPPO is involved in) is the use of data by local authorities for problem solving. There appears to be no home for the memory of the many initiatives of the last 15 years, from ‘offices of data analytics’ and data stores to ‘open data challenges’ (apart from the individual memory of the people involved).  As a result, wheels are constantly reinvented.


Here there is much to be learned from best practice in other sectors, including the global consultancies and parts of the military, which systematically organise ‘lessons learned’ exercises and directories of who has been involved, so that anyone facing a new task can easily find out what was learned from similar exercises in the past, and who to talk to for the subtler lessons.


2. Multiplicity: there are multiple logics at work

Evidence ecosystems bring together many different players. These are quite diverse in their needs and ways of thinking. They include:


  • Professional practitioners in teaching, medicine, social work, policing or probation

  • Specialist researchers within governments, who are often the most hungry for inputs, but are also sometimes in competition with external bodies

  • Policy makers in the civil service, with varying degrees of specialist knowledge, who have to serve elected politicians

  • Politicians – either in government or opposition – and their various advisers

  • Civil society, thinktanks and others


Anyone who has worked in this field quickly discovers that these groups have different cultures and mindsets.   Indeed, the most basic feature of evidence systems is that they straddle different logics, which are different ways of seeing the world.  These include:


  • The logic of science and research, which tends to be neutral, sceptical, cumulative, peer-based and impersonal, has long time horizons, and tends to see knowledge as inherently good;

  • The logic of politics, which tends to favour narrative, anecdotes and examples; is fluid, pragmatic and concerned with values; can be empathic with lived experience; and is oriented to achievement and action in the present, often short-term;

  • The logic of officials and bureaucracies, which tends to be pragmatic and oriented to problem solving, with a bias to order, rules and representations, attentive to outcomes as well as process, and interested in implementation as well as policy, the how as well as the what;

  • The logic of professions, which tend to have a sense of moral vocation, a commitment to autonomy (and suspicion of politicians and bureaucrats) and an ethos which privileges individual judgement grounded in practice and experience as much as codified knowledge.


These logics3 are necessarily different, though they have overlapping interests.  Part of the task of intermediaries is to be fluent in all of them.


Researchers who genuinely believe that it is only stupidity that makes politicians ignore their findings are bound to be ineffective.  Likewise, bureaucrats who believe that they can simply command professionals to act in a certain way are likely to be resisted, as are politicians who see scientists as self-indulgent and unrealistic.


The existence of these divergent logics explains why individuals and institutions that can bridge them, and are truly multilingual, are particularly valuable.


Facts, evidence, innovation and systems change

A related point concerns what evidence ecosystems produce. It’s sometimes assumed that policy-makers only need to be given better evidence and that this will directly shape their decisions.  But very little policy is, or could be, directly evidence-based.  Rather, evidence is one important guide but not the only one.  In practice, multiple factors influence policy design:

  • The facts – the current data on the issue, problem, or opportunity

  • The evidence on what works and what doesn’t, including cost effectiveness and relevance

  • Innovations and emerging ideas – which may not yet have strong evidence to support them or may come from other fields

  • The direction of travel – where they might want to get to in 5, 10 or 20 years’ time, including transformation of whole systems

  • The politics of the issue – the balance of forces, arguments and public opinion


All of these matter and it’s an error to believe that evidence is automatically more important than the others.   The work of synthesis is usually done within governments (see, for example, this proposal for ‘policy steering rooms’), but evidence synthesisers could also expand their methods to encompass some of the other relevant inputs.


3. Credibility and relationships – either place or topic based

Intermediaries need to bring a mix of specialist knowledge and sensitivity to the context of use if they are to be useful to decision-makers.  That means deep field expertise (ideally from a mix of research and practice) or shorter-term but recent, in-depth immersion in a specific question.


The leaders of the most influential intermediaries are respected for their knowledge and can be relied on in meetings, events or conversations to provide insights and information that go well beyond what was available within government departments from other sources: Kevan Collins and Becky Francis at the EEF, Nancy Hey at What Works Wellbeing, Ligia Teixeira at the Centre for Homelessness Impact, Anand Menon at UK in a Changing Europe, and Paul Johnson at the IFS are all good examples.  Without this credibility it’s hard to persuade politicians or officials to give up time in their busy diaries.  A similar pattern applies to thinktanks and research centres – much depends on the authority and relationships of key individuals.  In addition, much of the absorption of evidence comes from conversation, not just reading prose reports or online materials.


Relationships take time to build up and consolidate: they are rarely strong if spread too thin.  It follows that relationships generally have to be clustered around either topics or places; otherwise they are hard to make and sustain.  So the EEF is effective because there is a pre-existing community of teachers, policy makers and education researchers interested in its work (though even in this case, getting evidence used is hard).  The Wales Centre for Public Policy (WCPP) works well because there is an easily reached community of decision-makers in Cardiff.  The OECD works most effectively when it orchestrates communities of interest – in fields like education or tax policy – with relationships that persist over years or decades, and less well when team leaders lack these relationships.  Other approaches (possibly including IPPO) have struggled with relationships because of too much breadth – it is hard to maintain meaningful relationships with a very wide range of decision-makers.


So, what models might work well in the future for either topics or places?  One interesting place-based model that could be relevant to the UK is ‘Open Research Amsterdam’, which straddles the city administration and universities.  It systematically links supply and demand; engages with communities and municipalities to identify key needs and problems; mobilises research from universities; organises these together in clusters; is linked by 200 or so ‘editors’ whose job it is to orchestrate the knowledge and make it easily available; and is overseen by the City’s Chief Scientist, whose post straddles the city government and the universities.  A similar approach could be taken in the UK’s main cities (and some recent initiatives, including LPIPs, Insights North East, the West Midlands Regional Development Institute, YPERN in Yorkshire, and the NIHR ‘Health Determinants Research Collaborations’, point in this direction).


Thematic areas like education or policing also benefit from continuous curation of the links between users and creators of research, and shared platforms that collect the results and make them available as both living maps and living communities of practice.  


For these to work well, researchers need to be disciplined and selective. It’s a problem if they feel they have to create a policy brief even if they have nothing new or interesting to say (which is often bound to be the case with research projects).   These platforms also need to be open: the downside of the importance of relationships is that particular researchers can become bottlenecks, skewing policy-makers towards particular sources of research and ignoring others (this has sometimes been seen with Chief Scientific Advisers, who inevitably prioritise their own networks, disciplines and colleagues).


An evidence ecosystem entirely based around topics and places still leaves some gaps.  One is methodological innovation.   Over the last six months, we at IPPO convened a series of sessions partly to fill this gap, looking at uses of Generative AI trained on reliable research for synthesis; new data tools to track the links between supply and demand; transferability of evidence; the role of lived experience; systems maps and more.  There is a need for this work to be done more systematically, particularly as technologies advance very rapidly (and as new risks appear – see, for example, this interesting recent piece on AI and the ‘illusions of understanding’).


One important focus for methodological innovation is systems maps, which are a vital part of the evidence toolkit.  They can describe reality in ways that connect different disciplines, from economics and engineering to psychology.  They can provide a common language.  They can provide a framework for seeing where evidence is strong and weak.  And different kinds of systems map can show who is doing what in generating and using new knowledge.


They can also be useful in addressing another potential gap, cross-cutting priorities.  Here, in future the best approaches might explicitly bring together the verticals, the topic-based specialist organisations, along with a light touch horizontal integration function and links into place-based centres.   IPPO has done excellent work on Net Zero under Jeremy Williams and colleagues.  But in retrospect we could have brought together the various bodies working on evidence in this space (our first paper back in 2020 on a ‘what works’ centre for Net Zero listed about 15 in the UK).  We could have discussed with them, and policy makers, the key gaps; and then worked together to fill them, using systems maps as a tool.  Our work on spatial inequality likewise could have been conceived as a programme in its own right, bringing in from the start a mix of organisations who have specialised in the deep analysis of spatial issues, and using systems mapping methods at an earlier stage to identify key gaps and potential priorities.


International collaboration?

One striking feature of both the topic-based issues and the cross-cutting ones is that they are shared by many other countries.  Existing organisations like the WHO, OECD and European Commission already try to mobilise global evidence and communities of peers.  IPPO has worked quite closely with INGSA (the International Network for Government Science Advice) and has run international events, including through IPPO Cities, on topics such as how to run society-wide conversations or Net Zero.


In these we have had speakers and participants from dozens of countries.  But overall, there has been less success, despite many attempts, in creating multi-national collaborations, for example around urban transport, labour markets, schools or tax, or linked sets of experiments in multiple countries (this piece from me and David Halpern back in 2016, which advocated more formal international collaboration, remains relevant).  However, the time might be right to revisit these, since digital platforms now make it a lot easier to coordinate multiple research programmes, sharing de-identified data and orchestrating peer learning.


4. Politics: achieving influence indirectly as well as directly, through stories as well as analysis

It’s often assumed that the best way to achieve influence is to target decision-makers directly with carefully prepared briefs.  But as indicated earlier, this isn’t quite how the civil service and politicians work.  Instead, they seek to track the mood of the field around them, and the primary route is through seeing what the media attend to.  Recent research by Basil Mahfouz4 at UCL/STEaPP, drawing on very large data sets of policy and research, confirms this point – the research that’s most used in policy is often the research that has had the most media exposure, not the research that is ‘best’ in any other sense.


The fact that influence is often achieved indirectly rather than only directly has guided the approach of some intermediaries, in particular the IFS – which provides high quality analysis for the media; thinktanks such as the Resolution Foundation; and, in a very different way, the UK in a Changing Europe, which provides a mix of analysis, commentary and arguments (but generally steers clear of policy).


Their experience suggests that a top priority for any intermediary could be to influence decision-makers indirectly rather than directly.   To do so often requires that evidence is turned into stories: surprising facts, narratives, cases, anecdotes, ideas and arguments rather than evidence on its own.  These often work best if combined with ‘killer facts’, striking statistics that politicians can cite.


Even in more stable times, political interest and attention are episodic, driven by shifting agendas, whereas evidence synthesisers tend to move at a slower pace.  One symptom of this mismatch is that some evidence synthesisers describe a 6–9 month project as a ‘rapid evidence review’, which sounds very slow to people working in government.  Another mismatch is that evidence synthesisers tend to be interested in the methods they use, whereas their audiences are interested in the conclusions and insights.  Documents still vary greatly as to whether they highlight conclusions or methods – if the latter, they are unlikely to prompt much interest from decision-makers. One obvious step is to give researchers more opportunities to spend time with policy teams. This experience may be the only way to get a feel for the pressures and nuances of policy in the real world (and for the different ‘logics’ mentioned earlier).  The American jurist Oliver Wendell Holmes once wrote of the need to find ‘the simplicity on the other side of complexity’, and this is certainly true in relation to evidence.


Some conclusions

Marshalling and mobilising evidence is both essential and sometimes unnatural. John Maynard Keynes once commented that ‘there is nothing a government hates more than to be well-informed; for it makes the process of arriving at decisions much more complicated and difficult.’


The UK evidence ecosystem is quite complex, messy and uneven.   In some fields it is highly structured and institutionalised, in others much more tentative and fragile. If the analysis above is correct, these are some potential options for the future:


  1. Promoting more coherent organisation of the evidence ecosystem as a whole: with a Chief Social Scientist and Chief Intelligence Officers charged with synthesising the key inputs to policy, from data and facts to evidence and innovations, as well as promoting high quality research and granular data.  Top-slicing budgets to fund intelligence organised across departmental boundaries could be the default.

  2. Such a shift could also be helped by stronger coalitions linking the key players inside and outside government – the main departments, research teams, and professions; the arms-length bodies including ONS, UKRI/ESRC and others; and also major foundations, universities and programmes.   Ideally these might commit to a shared ten-year strategy around the issues mentioned in this paper, for example to build up high quality data sources, ensure career progressions and improve the effectiveness of evidence creation and use.

  3. Within such a strategy it would make sense to prioritise topic- and place-based institutions, with leaders who have independent credibility and relationships to mobilise, and ensure good alignment with the dominant political priorities (there are currently some obvious gaps, including housing policy and industrial policy).

  4. Government could also consider adapting models such as Open Research Amsterdam, that further embed and deepen these relationships, ideally as a strategic project for every nation, major city and region, potentially linked to the target of getting 40% of R&D out of London and the south-east, and building on the trailblazer deals for Greater Manchester and the West Midlands.  

  5. This could encourage government to identify needs and demands more systematically, and in more granular ways than the ARIs have done.

  6. The UK should also build on examples like LEO and New Zealand’s IDI, strengthening the ONS Integrated Data Service and linking data sets more systematically, on topics such as crime and migration, labour markets and transport, where possible connecting data and evidence to finance and cost effectiveness (ideally with active collaboration from the Treasury).

  7. Our work at IPPO has shown a particular need to provide better evidence inputs to local government, which has been significantly hollowed out in recent years, along with its umbrella bodies.   Here a demand-led approach, providing high quality syntheses on problems shared by multiple local authorities, could be very valuable.

  8. In parallel, there need to be separate drives for methodological innovation and excellence, particularly focused on new digital tools to support rapid evidence synthesis, and tools, including systems maps, for cross-cutting issues, with some cross-cutting initiatives that integrate the relevant ‘verticals’.

  9. The evidence world also needs to work harder to feed the media in all forms, turning evidence into vignettes, anecdotes and stories, potentially with new intermediaries for social science, learning from the many successes in science communication that have been helped by specialist bodies.

  10. Finally, the UK should seek to grow international collaborations – since most of the topics the UK is concerned with are also live issues in many other countries and there are obvious potential benefits from pooling knowledge and ideas.

*This paper is in draft form at the point of publication and may continue to be updated.

Many thanks to the following who commented on an earlier draft (but are not responsible for any errors of fact or judgement on my part!):  Nick Pearce, Jonathan Breckon, Claire Archbold, Muiris MacCarthaigh, Basil Mahfouz, Sarah Chaytor, Eleanor Williams and Will Moy.


