Geoff Mulgan

Governing artificial intelligence - where next?


 

AI governance continues to evolve at a dramatic pace. Last autumn President Biden issued a comprehensive executive order and Prime Minister Rishi Sunak hosted a summit on AI safety, even interviewing Elon Musk. Meanwhile, the European Union is racing to finalise its AI Act, and China has introduced a batch of new laws. A remarkable number of organisations have now been set up to survey governance options and make proposals.

 

But as governments race to catch up, the problem this time is not the anti-science delusions of figures like Ron DeSantis or Donald Trump. Instead, it’s a heady mix of naivety and cynicism that risks messing up one of the most important tasks politics will have to grapple with in the decades ahead.

 

As I’ll show, the challenge of AI governance is like filling in a 1,000-cell matrix, full of complex details to handle specific harms in specific sectors with specific responses – not so dissimilar to the ways societies have handled the car or finance. Yet too much of the debate still assumes that generic solutions will be enough, such as licences for foundation models, or sees AI only through the lens of safety. That this continues to be the tenor of the debate reveals that we still have far too little capacity to shape technology governance effectively.

Here I focus only on some of the broad issues, though the details are of course hugely important.

 

Three big problems keep repeating themselves. The first is allowing the agenda to be dominated by industry rather than by politics or the public interest – rather as if the oil industry had been asked to shape climate change policy, or the big platforms asked to design regulations for the Internet. It’s understandable that every country wants to attract leading AI innovators; but insufficient attention to the public interest will just create problems later, as the many scandals around AI and digital systems show. This is why governments badly need knowledgeable in-house capacity that can understand the detailed issues in different fields, from transport and education to tax and war.

 

The second problem has been the focus on fairly distant doomsday risks while downplaying the many ways in which AI already affects daily life, from decisions on credit and debt to policing – not to mention the many scandals that have accompanied mistakes around AI in welfare (of the kind that nearly led to the downfall of the Dutch government) and abundant evidence of bias and discrimination in fields like criminal justice. There is no doubt that the longer-term risks are important – but too often they’ve been talked up as a deliberate strategy to divert governments from action in the present.

 

Third, much of the debate continues to focus on the scientists, many of them very eminent. This looks at first glance quite enlightened, and some of the scientists, such as Stuart Russell, deserve a lot of credit for filling the vacuum left by government inaction. Unfortunately, scientists have not proven very good at governance. The pattern was set back in 2015, when a windy declaration from figures such as Demis Hassabis, co-founder of DeepMind, Max Tegmark, Jaan Tallinn and Elon Musk committed to preparing for the risks that AI could bring and proposed ‘expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do’.


Yet the declaration said nothing about how this might happen, and nothing did. The pattern repeated in 2023, when another open letter from 1,000 scientists called on ‘all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.’ But there was, once again, nothing on how this might be done. In the US the public apparently supported such a pause (by a margin of roughly five to one), but without a serious plan there was little chance of it happening, and nothing came of it. Since then a flurry of equally windy statements has issued forth. There is, again, nothing wrong with such declarations. But without competent institutions to implement these aspirations, they are unlikely to materialise any time soon.

 

Here we get to the core of the problem. There are many good examples from history of science and politics collaborating to contain risks, often with business involved too: the various treaties and institutions designed to stop nuclear proliferation and the Montreal Protocol on cutting CFCs are just two. In each case the scientists and the politicians both understood the gravity of the threat and their own division of labour. Action on climate change is another good example, as was the creation of new regulators around human fertilisation and embryology.

 

Around AI, however, the politicians have too often been negligent, while the scientists have misjudged their ability to fill the gap, proving as poorly prepared to design governance arrangements as legislators would be to programme algorithms.

 

So, what needs to be done? First, at a global level the long journey of institution-building needs to start. A minimum step is to create a Global AI Observatory, with parallels to the IPCC, to ensure a common body of analysis. I and others have shown how this could work: tracking risks and likely technological developments, and providing a forum for global debate. For climate change, the IPCC provided the analysis – but the decisions on how to act had to be made by governments and politicians. A very similar division of labour will be needed for AI, and it’s good that, at last, some in the industry have come round to this idea. This may then allow for some common standards and some alignment of legislation between different regions.

 

Second, at a national level a new family of institutions will be needed, often working in collaboration with existing regulators. I first outlined how these might work eight years ago, and back in 2015 hosted, along with the Cabinet Secretary, a day-long event on how government should handle AI. I never guessed that governments would take so long to get their act together. Across the EU, national governments are set to create some such institutions as part of the implementation of the new AI Act, and China has set up a powerful Cyberspace Administration.


There are still, however, glaring gaps, for example around procurement. And within particular sectors there are striking gaps in the capability needed to align research, procurement and implementation. Yet without competent institutions we risk more well-intentioned statements that will the ends but not the means.

 

Third, we need much better training for politicians and officials. The failure to handle AI is a symptom of a much bigger problem. Ever more of the issues politicians face involve science – from climate change and pandemics to AI and quantum technologies. Yet they are about the only group exercising serious power who get no training and no preparation. It’s not surprising that they are easily manipulated, and that politicians who don’t understand science or technology glibly promise to ‘follow the science’ or to take an ‘innovation-led’ approach to AI, without grasping what this means. There is also a need to rethink skills for officials. Some good initiatives have started in the US and Europe to introduce officials to AI, but these need to become much more systematic, recognising that while law and economics remain important, public officials need to be just as adept at understanding science and technology.

 

AI in all its many forms has immense power for good and for harm. But in the decades ahead the shape of its governance will be quite different from what is being talked about now. Many of the scientists talk as if generic licensing, or rules on safety, will be enough. This is wrong.

 

A better analogy is with the car. As the car became part of daily life in the 20th century, governments introduced hundreds of different rules: road markings and speed limits, driving tests and emissions standards, speed bumps and seatbelts, drink-driving rules and safety regulations. All were needed to ensure we could get the benefits of the car without too many harms, and they are overseen by dozens of different agencies, not a single one.


Finance is another analogy. Governments don’t regulate finance through generic principles but through a complex array of rules covering everything from pensions to equities, insurance to savings, and mortgages to crypto.

 

Much the same will be true of AI, which will need an even more complex range of rules, norms and prohibitions, in everything from democracy to education, finance to war, media to health. Governments will have to steadily fill in what I call the ‘thousand-cell matrix’ that connects potential risks to contexts and responses.


The Biden administration has recognised this, and, very belatedly, the UK government has moved away from its earlier claim that existing regulators could handle all the issues.

  

We’ve wasted the last decade, which should have been used to develop more sophisticated governance as AI became part of daily life, from the courts to credit, our phones to our homes. Now, belatedly, the world is waking up. But in AI, as elsewhere, we will need much better hybrids of science and politics if wise decisions are to be made.

 

 

Many of these issues are covered in my book ‘When Science Meets Power’, published this winter by Polity Press.


