Geoff Mulgan

Governing Artificial Intelligence - a wasted decade?


The arrival of ChatGPT and GPT-4 has prompted a wave of interest in new ways to govern AI, both within nations and globally.


In many parts of the world laws are being passed and governments are getting into the hard work of regulation. The EU has worked on detailed laws, attempting a risk-based framework that is due to begin implementation soon. China has introduced strict rules, for example on deepfakes, and has created a potentially powerful regulator in the Cyberspace Administration of China. It also recently proposed to ban LLMs with “any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity”. The White House has called for an AI Bill of Rights, and detailed work is being done on adapting existing laws - from competition to copyright.


But, as I will argue, in many respects it's remarkable how long it has taken to get here - perhaps a symptom of an era when the tech industry was able to scare governments off creating rules of any kind. For most of the past decade there was a dearth of serious thinking or proposals.


Ten years ago it was already clear that AI was going to have a huge influence on every aspect of our lives, with algorithms already dominating finance, welfare, the media and much more. But very little work was being done on how to respond.


A bit over seven years ago I gave a talk and published a paper suggesting how governments might approach the challenge. I made the case for a Machine Intelligence Commission - a public body to help government develop rules and regulations for AI (you can find a copy below). This was intended as a first step - one that drew on lessons learned from governing other technologies, such as human fertilisation.


It fed into some action in the UK, including the establishment of the Centre for Data Ethics and Innovation (CDEI) within government, a much watered-down version of what I had suggested. It also fed into many discussions with other governments considering similar bodies, from France to New Zealand, Canada to Germany.


My key argument was that there needed to be a meta-regulator, with deep knowledge of AI, that could work with and advise other regulators: i.e. neither leaving regulation solely to existing regulators nor expecting a single new entity to regulate AI in all its different forms. I still think that is right, and that it is where the debate will end up.

I also argued for creating empowered institutions rather than trying to specify in too much detail what regulation should look like, which is impossible given the pace of change. Again, I'm fairly sure that remains the right approach (and one somewhat different from the EU's attempt to create lasting legal frameworks).


My proposals on the Machine Intelligence Commission aimed to prompt a debate - with critiques and alternative suggestions - since it was obvious we would soon need a battery of new ways to govern AI. Instead there was almost nothing. There were endless vague discussions of AI ethics, but a glaring gap on governance. Huge books like Shoshana Zuboff's 'The Age of Surveillance Capitalism' offered diagnoses but not a single prescription. Gatherings of AI scientists - such as the one in Puerto Rico in 2015 - issued airy proclamations with no sense of how they might be acted on (a pattern repeated this year with the open letter calling for a moratorium on large language models).


Looking back, I can't help feeling disappointed that so little serious work was done to grapple with issues that were bound to become more important and more difficult. Despite their vast wealth, the digital industries chose not to help with the hard work of thinking through how to guide and govern immensely powerful technologies, preferring instead to persuade politicians to focus only on how they could support AI rather than on how it could be governed. Essentially a decade was wasted - despite plenty of hand-wringing.


When, a few years later, the European Union and China started developing comprehensive AI laws, they essentially had to make them up as they went along (I was part of one of the EU's advisory mechanisms and saw this at first hand). The bureaucrats and advisers did well in the circumstances. But it's remarkable how little they were helped by academics or public policy experts.


Within the UK, although the CDEI has done excellent work (we at UCL worked with them last year, for example, on household data, ethics and net zero), it never had serious political or ministerial engagement - partly a symptom of the chaos that surrounded government in general in the years 2017-22 - and has effectively been an in-house consultancy: far less visible than what I had recommended (I envisaged its head appearing regularly on the evening news, explaining the dilemmas around AI).


The last ten years have also been largely wasted in terms of shaping AI for the needs of the public sector. In 2015 I persuaded the then UK Cabinet Secretary, Sir Jeremy Heywood, to host an event on the uses of AI for government and public services, from education and health to welfare. It seemed to me crazy that so little was being done to shape the technology for the needs of public value (as opposed to the military, surveillance and business). We held an all-day event at Nesta but - as so often in those years - it was sidetracked by interesting corporate presentations and lost sight of the strategic issue. At the time the digital teams in governments were doing very useful work on simplifying services, but very little with a longer-term focus.


Nevertheless, at Nesta I helped persuade some government departments to let us oversee several public funds to commission AI for education, health and jobs, and for democratising law, and we did a lot of work on using data to track AI, on how governments could use AI, on AI ethics and more. But the funds were very small - a tiny fraction of the money going into military and commercial AI - and there was no mention of this strategic imperative in the various government strategy documents.


There has been a similar pattern around data, despite the arrival of GDPR. A few years ago I set out suggestions for the governance of data - essential to more publicly oriented AI, and now beginning to be considered more seriously. But again there were surprisingly few competing options to debate.


As attention now turns to the practical issues of AI governance we badly need more institutions capable of thinking both about technology and about governance.


We will need answers at multiple levels, from cities to nations, continents to the world. As an example, over the last few months I have worked with colleagues at MIT and Oxford on designs for an IPCC for AI - a global observatory to provide data-driven assessments of the technology and its key opportunities and risks (we will publish details next week). With the G7 having announced a 'Hiroshima AI process', the time is ripe for an initiative of this kind - a support for the much more challenging attempts to create common global rules and standards.


It's too early for historians to assess why AI spread so far with so little attention to governance. The cynics think the vast injection of money into AI ethics by figures like Stephen Schwarzman - funding work on 'trolley problems' and the long-run dilemmas of the singularity rather than the pressing present-day issues of bias, facial recognition and manipulation - was a deliberate distraction exercise. My sense is that in most cases this was well-intentioned naivete rather than malice, though I know some of the funders were fully aware of what they were doing.


I would also like to think it's naivete more than anything else that persuaded Rishi Sunak to put corporates in the lead of his own attempts to contribute to AI governance. It's entirely understandable why they are keen to take the lead; rather less understandable why any government would let the companies that are set to be regulated design the regulations.

We now badly need much more energetic work on specific options for governance, and options that are not defined only by inherited frameworks of competition and privacy law. That so many leading figures publish calls for AI to be regulated, but can offer almost no detail about how that should be done, is a sign of how far we still have to go. There is still a dearth of centres that combine sufficient knowledge of government with sufficient knowledge of technology to design plausible options. Meanwhile the media and politicians remain very easily distracted by sweeping claims about existential risk - while failing to see that algorithms are already present in so many aspects of daily life. We've wasted a decade. Let's hope we don't repeat the mistakes with the next generations of AI, or with quantum.


Download: a_machine_intelligence_commission_for_the_uk_-_geoff_mulgan.pdf (115KB)
