- Geoff Mulgan
Governing Artificial Intelligence - a wasted decade?
Updated: May 6
The arrival of ChatGPT and GPT-4 has prompted a wave of interest in new ways to govern AI: most of the proposals are very vague, but laws are being passed in many parts of the world, governments are getting into the hard work of regulation, and the White House has called for an AI Bill of Rights.
Ten years ago it was already clear that AI was going to have a huge influence on every aspect of our lives, with algorithms already dominating finance, welfare, the media and much more. A bit over seven years ago I gave a talk and published a paper suggesting how governments might approach the challenge. I made the case for a Machine Intelligence Commission - a public body to help government develop rules and regulations for AI (you can find a copy below). This was intended to be a first step - one that drew on lessons learned from governing other technologies, such as human fertilisation.
It prompted some action in the UK, including the establishment of the Centre for Data Ethics and Innovation within government, which was fairly similar to what I had suggested. It also prompted many discussions with other governments considering similar bodies, from France to New Zealand, Canada to Germany. At Nesta I then helped oversee several public funds to commission AI for education, health and jobs, and we did a lot of work on using data to track AI, on how governments could use AI, on AI ethics and more.
My proposals on the Machine Intelligence Commission aimed to prompt a debate - with critiques and alternative suggestions - since it was obvious we would soon need a battery of new ways to govern AI. Instead there was almost nothing. There were endless vague discussions of AI ethics, but a glaring gap on governance. Huge books like Shoshana Zuboff's 'The Age of Surveillance Capitalism' offered diagnoses but not a single prescription. Gatherings of AI scientists - such as the one in Puerto Rico in 2015 - issued airy proclamations with no sense of how they might be acted on.
Looking back, however, I can't help feeling disappointed and angry that so little serious work was done to grapple with issues that were bound to become more important and more difficult. Despite the vast wealth of the digital industries, they chose not to help with the hard work of thinking through how to guide and govern immensely powerful technologies, preferring instead to intimidate politicians into looking only at how they could support AI rather than at how it could be governed. Essentially a decade was wasted, despite plenty of hand-wringing.
When, a few years later, the European Union and China started developing comprehensive AI laws, they essentially had to make them up as they went along (I was part of one of the EU's advisory mechanisms and saw this at first hand). The bureaucrats and advisers did well in the circumstances. But it's remarkable how little they were helped by academics or public policy experts.
Within the UK, although the CDEI has done excellent work (we at UCL worked with them last year, for example, on household data, ethics and net zero), it never had serious political or ministerial engagement - part of the chaos that surrounded government in general in the years 2017-22 - and it has effectively been an in-house consultancy: far less visible than what I had recommended (I envisaged its head regularly appearing on the evening news, explaining the dilemmas around AI).
When the post of chair became vacant in 2021 I applied, and was apparently recommended by a civil service panel, but overruled by the then minister Oliver Dowden, during a period when the Tories had decided to fill every available post with partisan supporters. As far as I am aware the role is still empty, presumably because they couldn't find anyone both sufficiently qualified and sufficiently partisan (there have been various interim chairs of advisory boards; one of them, well-qualified, was asked to become chair but refused). Meanwhile, the latest UK government statement, promising 'pro-innovation' regulation for AI, remains very thin - as much a symptom of the problems as a solution to them.
Only the arrival of ChatGPT and GPT-4 has at last sparked more serious attention to AI governance, yet even now most of the statements are vague in the extreme, like the recent Open Letter calling for a six-month moratorium on LLMs.
It's too early for historians to assess why this happened. The cynics think that the AI ethics movement - and the vast injection of money into AI ethics by figures like Steve Schwarzman, directed at trolley problems and the dilemmas of the singularity rather than the pressing present-day issues of bias, facial recognition and manipulation - was a deliberate distraction exercise, encouraged by the digital industry. My sense is that in most cases this was well-intentioned naivety rather than malice, though I know some of the funders were fully aware of what they were doing.
We now badly need much more energetic work on specific options for governance - options not defined solely by inherited frameworks of competition and privacy law. I have been working with colleagues on proposals for what we call an IPCC for AI at the global level, to be published shortly - a means to help governments, not to substitute for their power. A few years ago I set out suggestions for the governance of data - essential to more publicly oriented AI, and now beginning to be considered more seriously (though again, it's a shame we have lost several years). The OECD now very helpfully maps AI governance actions globally.
But the fact that so many leading figures publish calls for AI to be regulated, yet can offer almost no detail about how that should be done, is a sign of how far we still have to go. There is still a dearth of centres that combine sufficient knowledge of government with sufficient knowledge of technology to design plausible options. We've wasted a decade. Let's hope we don't repeat the mistakes with the next generations of AI, and with quantum.
