- Geoff Mulgan
Updated: Mar 29, 2022
This piece looks at the blindspots that have marked recent UK reviews of R&D and innovation
The UK government is, once again, reviewing how research is organised, in a post-Brexit environment, and it’s being done, once again, under the leadership of Sir Paul Nurse. The background is one of relative success for the UK. It retains a fairly buoyant global position in relation to its universities and research. The science community has lobbied effectively to grow funding, particularly the very influential life sciences field. And researchers are feeling fairly confident as a result of the prominence of science during the pandemic.
I’ve been involved in several such reviews in the past. This one has a very familiar make-up in terms of people and the key interests represented though it has a slightly odd stated purpose: to “serve the needs of businesses and places across the UK”, rather than serving the public.
There was lots of good sense in the last review – including the commitment to ‘permeability of research’, integration with action and attempts to encourage research to be more connective and less siloed. I won’t comment on the detailed issues of the balance of strategic versus responsive funding, the make-up of research councils, geographical distribution or the role of ARIA, all of which have been well written about by others.
Instead, I focus here on the blindspots which have been apparent in all past reviews and may once again be apparent in this one. These blindspots are largely functions of past success. Powerful vested interests within the system tend to be defensive and have become adept at seeing off challenges. In other words, the blindspots fit a pattern.
INNOVATION IN INNOVATION TO ADDRESS DECLINING R&D PRODUCTIVITY
The first blindspot is productivity. There is now abundant evidence of the declining productivity of R&D across the world. There is understandable reluctance to talk about this much (and it’s largely ignored in reviews). But there is little doubt it’s happening (in pharmaceuticals it’s known as Eroom’s Law, the mirror of Moore’s Law).
Usually when productivity declines in a sector it prompts attention to innovation, since innovation is the only reliable way to drive up long-term productivity. But there continues to be serious resistance to even accepting the diagnosis let alone acting on it and promoting innovation in how R&D is organised.
At a minimum the R&D world should be trying to experiment. I have in the past suggested how it could experiment with different funding models (as NSF does in the US); different uses of technology to speed innovation (like the use of AI in proteins); or different kinds of testbed and lab. Over many years I've tried to encourage more use of challenges (and have been disappointed how often generic rhetoric about missions substitutes for more serious use of challenge models).
I'm also convinced that you often need new institutions to introduce a new ethos and operating model. That's why on balance I was in favour of the creation of ARIA and support a pluralism of institutions. Bureaucratic neatness can be the enemy of creativity.
But vested interests seem remarkably adept at resisting even modest moves towards more innovation. I hope this review will be more open about the issue.
CONNECTING R&D TO SYSTEMS CHANGE
A second blindspot also matters because it may be key to boosting productivity in the next few years. Many of the key technologies of the next decade are systemic in nature. That means that they depend on many things changing in tandem. Driverless cars for example depend on new rules and connectivity, especially during transition periods, as well as the core technologies (like Lidar). Genomics depends on a host of new arrangements for data and consent. Drones require entirely new ways of governing airspace, safety and liability.
This has big implications for how R&D is organised. It can’t just be a linear flow from labs into products and services. Instead, a more strategic approach is needed that aligns technology development, regulation, policy and testbeds, and engages the public.
A few years ago, colleagues and I advocated new structures to do this well, which we called ‘Advanced Systems Agencies’ – linking spending decisions on R&D with regulation and experimental testbeds, particularly in energy, transport and healthcare, and for technologies such as drones. We also made the case for why any mission-driven approach had to be about more than money (and had to be very different from missions like Apollo half a century ago).
At the time government opted instead for a more generic, funding-led model with the creation of UKRI and other governments have also interpreted missions in a fairly traditional, and narrow way. But the arguments still stand and are likely to become more obvious in this decade, particularly as China proves adept at taking a more systemic approach.
BEIS has done good work in trying to connect regulation to innovation, with the ‘Regulatory Pioneers Fund’ and adopting some of the ideas of ‘anticipatory regulation’. But it’s hard to see any influence this has had on the leaders in charge of R&D.
ORCHESTRATING INTELLIGENCE
The third, perhaps surprising, blindspot concerns the orchestration of intelligence. Business has been transformed in the last generation, with data and knowledge now at the heart of the world’s biggest companies (which have become much more valuable than banks). But governments have yet to make a parallel shift. Instead they always default to the distribution of money as the primary issue and treat the organisation of intelligence as very much second order. The most powerful department in the UK government is the Treasury, and there are still no powerful institutions organised around data and knowledge. The same pattern can be found globally, where we have an IMF and a World Bank but no comparable global public institutions dedicated to intangibles.
That the organisation of science mirrors this – with the great majority of arguments centred on funding rather than intelligence – is already anachronistic and will only become more so. Instead, our whole science and technology ecosystem needs to be looked at through the lens of intelligence: rethinking how it gathers and curates data; how it spots potential links; how it interprets patterns; and how it attempts forecasts of emerging technologies and demands.
This is a task that sits somewhere between ONS, GO-Science, BEIS and UKRI. But because it’s not central to any of them it’s not done very well. When I was at Nesta we commissioned many projects using data tools to map innovation ecosystems, flows and potential partnerships. But there wasn’t an obvious client for this in government. Now at UCL I’m involved in various projects mapping the velocity of ideas – how fast they move into action and how dynamic the loops are between research and practice (which are very dynamic in fields like medicine or AI but missing in others). Some commercial sectors are rich in this kind of intelligence – but it tends to be proprietary. So one task for the review is to recommend the creation of a new public capacity to help the system know itself.
A striking symptom of the problem has been made visible by the STRINGs project, which I’m part of, which is backed by the UN and UKRI and has been mapping global STI and its relationship to the SDGs. It’s a fascinating project and is generating country level maps.
But it’s striking that nothing like this exists for the UK: there is no systematic analysis of where our brainpower goes and how well it does, or does not, align with the UK’s big priorities.
ADOPTION MATTERS – NOT JUST INVENTION
Better orchestration of intelligence would also help with another consistent blindspot in reviews of this kind: the lack of attention to adoption and diffusion. For at least a century it has been clear that the UK is better at upstream research than downstream implementation and adoption. That used to be thought of primarily as an issue of commercialisation – encouraging spin-outs from universities.
But that often misses the point. It’s more often about SMEs (as well as public organisations and charities) adapting or adopting emerging technologies wherever they come from. BEIS has started attending to this more seriously – I was part of the design of the ‘Business Basics Fund’, which used rigorous experimental methods to compare different ways of promoting adoption.
But I still find that the research world sees this as uninteresting and low status. The rhetoric on being a ‘science superpower’ hasn’t quite taken on board that this objective will be less appealing to the public if most businesses remain very far from the frontier in terms of their uses of science and technology.
FUTURE-ORIENTED SOCIAL SCIENCE
Social science will be a minor part of the review and remains relatively weak in the higher echelons of government, despite its obvious importance throughout the pandemic. At a minimum, there needs to be more attention to integrating social science into science advice and decision-making. But social science could also be addressed in two other ways.
One is to recognise the ways in which social science is integrated into the economy and society. The UK does very well in monetising economics, psychology, behavioural science, ethnography, design and other fields, much of this through consultancies and advisory services, or through teams within companies large and small. But little analysis is done of this thriving economy, which sits alongside academic social science, or of the role of start-ups in keeping it dynamic. Cardiff University is about to launch its social science park, SPARC, one route to better connecting social science to the outside world, and there are many small initiatives in this space. But it’s not part of the broader narrative.
A related issue is how social science can do better at helping the UK through the transitions that lie ahead, including the transition to a zero carbon economy. While life sciences and others are strongly focused on the future, equivalent thinking has been largely squeezed out of social science (for many reasons, including healthy emphases on data and empirical research). As a result, social scientists mainly look at the present or the past and see it as career threatening to do too much speculative or design work on the future. Researchers in the life sciences are offered a battery of grants and investments to test out speculative ideas for the future; there is very little comparable for the social sciences.
I’ve written about this extensively for the New Institute in Germany and proposed some remedies – so that social science could do better at exploring the social and political dimensions of, for example, future AI, the Fourth Industrial Revolution, ageing and ecology (I’ll be talking about this at the Academy of Social Sciences annual lecture in June). I have had good engagement with these ideas in the European Commission, UNESCO, Germany and elsewhere and the British Academy hosted a very useful discussion.
But this has yet to translate into serious action to shift the incentives for social scientists. This work sits on the boundaries of science, technology and social science – and is badly needed if research is to serve the UK well.
INCUMBENCY AND SUCCESS
That these issues are generally ignored is a symptom of relative success. When there isn’t an obvious crisis, the natural approach is to leave things as they are, with only modest tinkering and a shuffling of acronyms rather than anything more deep-seated. That's a concern both for the Nurse review and for the parallel moves to create an Office of Science and Technology Strategy in the Cabinet Office under Sir Patrick Vallance. There is nothing inherently wrong with redoing the organograms. But without sharp diagnosis and prescription, bureaucratic rearrangements always disappoint.
When reviews are led by incumbents who have, almost by definition, thrived in the existing system, critical thinking is bound to be difficult. But history tells us again and again that it’s better to attend to challenges when things are going well than to wait for things to go badly wrong.