I had a go in late 2020 at creating a matrix for thinking about algorithmic governance (to feed into discussions at UCL led by my colleague Zeynep Engin). The matrix tries to map the many spaces in which algorithmic governance is happening.
It suggests a simple taxonomy – government with, by, for and of algorithms – to help map the work programme for a potential centre focused on the topic. The idea is then to look cell by cell at where the current stock of ideas and proposals sits, what their ethical and social implications are, and why some cells are rich in options while others remain largely empty, in order to guide a research agenda.
This work would look at who is developing these proposals, whether they have been tested, and whether they fit into broader policy shifts (eg the rise of anticipatory regulation, experimentalism, 'steering by capability', and systems approaches to climate change). The premise is that, despite the explosion of work on AI ethics, remarkably little serious work is being done to flesh out these cells with specific proposals that can be interrogated or implemented on an experimental basis. This reflects a broader shortage of skills and experience that combine deep technology knowledge with deep understanding of policy and politics. One result is that governments - from cities to nations - that want a more comprehensive view of the options aren't well served by universities, which tend to be more comfortable in the (admittedly important) spaces of critique and ethics.
The matrix is very simple, and this is very much a first cut: it distinguishes with, of, by and for, and then the issues that apply in different aspects of governance. I'm sharing it belatedly to encourage feedback, both on the approach and on the examples in the cells.

