
Technology is a question as well as an answer

Technology is often talked about as an answer – a solution to our needs for mobility, health or fun. It often is. But every technology is also a question, or rather a cascade of questions. How could it be improved? Who will it benefit (or harm), and how could it be made more useful (or less harmful)? How could it be combined with other ideas or technologies?

What complementary innovations could help it? What changes to skills, cultures, behaviours are needed to make the most of it? What might be its unintended effects?


That technology prompts questions is even more true of general purpose technologies – like the car or electricity – which affect every sector.


Artificial intelligence is the most obvious example now.  It is often discussed as the answer to any problem, and talked about almost like magic, particularly by media commentators and politicians who understand little of how it works.   But AI is a tool, however powerful, and like other tools it poses questions.


The UK got a sense of this in the first summer of the pandemic, in 2020, when thousands of teenagers marched on the streets of London protesting against the algorithm that had assigned their exam grades, because sitting exams in person was not feasible amidst the lockdown.


The designers of the algorithm, employed by the Department for Education, had come up with a clever solution to the problem of running exams during a pandemic. But they hadn’t involved any children in the design; hadn’t opened the algorithm up to scrutiny; and in retrospect simply hadn’t asked the right questions.


This was a costly and embarrassing debacle for the government. But like so many others it derived from a simple error – imagining the technology as an answer, not a question. There will probably be many more such debacles in the years ahead, but fewer excuses for those responsible.

Take two everyday applications of AI that have come into my own life: the AI chatbots of Babylon, which is now my entry point into the National Health Service, and the Ocado robots that organise my food deliveries. Each has many advantages over what went before.




But each also raises many questions. Babylon is often annoyingly unresponsive – as I found this year when a series of appointments was cancelled. AI in healthcare has so far had surprisingly little impact on everyday diagnoses and treatments – and many predictions turned out to be wildly wrong, like the famous forecast that radiologists would all have been replaced by AI by 2020 (in fact there are more now than a decade ago). I still think AI could reshape every aspect of health, ensuring better decisions – but it will require careful work, detailed redesign of processes, and often many adjustments to the AI itself before it becomes useful.


The Ocado warehouse also raises questions: about just how far automation can go (will it be able to do the deliveries as well as the packing? It is now a long time since Amazon promised drone deliveries, which have yet to arrive); about future jobs; about the evident vulnerability of supply chains; and about the many things – the rare earths, materials and chips – necessary for the robots themselves.


We can best understand the extent to which technology is a question and not just an answer if we learn from other general purpose technologies, or GPTs, like the car. Invented some 140 years ago, it has prompted thousands of design questions ever since.


Some were, and are, about the technology itself – improvements to the internal combustion engine, chassis, tyres, steering, safety, satnav, batteries, audio systems, seats and much more.


Others are about the design of rules and regulations: road markings, speed limits, rules on emissions and alcohol. Some are about skills – like driving schools and tests (in the early decades of the car, few imagined that ordinary people would be allowed to drive). There are design questions about infrastructures – from motorways to charging stations; about complementary innovations like the changing design of cities, with suburbs and supermarkets that weren't feasible before the car. There are questions about social norms, like not letting your friends get into their cars drunk or not idling near a primary school. And there are questions about geopolitics – like strategies for trade and peace in an oil-dependent economy.


With the car, over time these questions became more complex – more ethical, more detailed, more numerous, and with many more institutions to manage them. I am absolutely certain that exactly the same will be true of AI, which poses thousands of similar questions of rules, skills, norms, laws and more.


This should be obvious to anyone with any knowledge of the history of technology. But this perspective is almost entirely missing from the great majority of books about AI; it appears not to be understood by AI scientists themselves; and it isn't obvious in many policy pronouncements.


As I show, this isn't just about safety (in my sceptical view the work underway globally on AI safety is as much a distraction as a necessary step, and more a symptom of the weak state of thinking about AI governance than anything else). Nor is it just a question of bureaucratic regulations stopping innovation, even though they often do (just imagine if governments hadn't introduced any regulations for cars and roads – a fun thought experiment).


Rather (see below), we face the complex challenge of designing not just new technologies but also rules, laws, norms, behaviours and skills – and we are still barely at first base on this.


However, one thing that is different about AI is that it doesn't just generate new questions. It also provides new ways for us to ask and answer them. Already AI is transforming the everyday work of design as well as policy. AI tools built on LLMs can be used to generate multiple personas and run experiments, showing how they might respond to a new product or offering. They can synthesise evidence of all kinds and generate new ideas; and agent-based modelling can be used to think through dynamic patterns of response.
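To make that concrete, here is a minimal sketch of the persona approach in Python, assuming access to an LLM API (I use the OpenAI client for illustration; the personas, the product and the model name are all invented for the example):

```python
# A minimal sketch of persona-based testing with an LLM.
# Illustrative only: personas, product and model name are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = [
    "a 70-year-old retiree who distrusts smartphone apps",
    "a time-poor single parent of two young children",
    "a rural resident with patchy broadband",
]

PRODUCT = "an AI chatbot that triages symptoms before booking a GP appointment"

def simulate_response(persona: str, product: str) -> str:
    """Ask the model to role-play a persona reacting to the product."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Stay in character and be brief."},
            {"role": "user",
             "content": f"Would you use {product}? What would worry you?"},
        ],
    )
    return reply.choices[0].message.content

for persona in PERSONAS:
    print(persona, "->", simulate_response(persona, PRODUCT), "\n")
```

None of this replaces real user research, of course – but it is a cheap way of surfacing the questions a design team should be asking before anything is built.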



A growing body of research is exploring how best to use AI in complex problem solving. For example, one recent study compared AI alone with human-AI combinations: 'human-AI solutions demonstrated superior strategic viability, financial and environmental value, and overall quality. Notably, human-AI solutions co-created through differentiated search, where human-guided prompts instructed the large language model (LLM) to sequentially generate outputs distinct from previous iterations, outperformed solutions generated through independent search'.
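The 'differentiated search' idea can be sketched simply: each new prompt carries the solutions generated so far and explicitly asks the model for something different. A hypothetical illustration (the prompt wording and example problem are my own, not the study's):

```python
# Sketch of 'differentiated search': each iteration feeds the model
# its previous answers and asks for a solution distinct from them.
# Prompt wording is illustrative, not taken from the study.
from openai import OpenAI

client = OpenAI()

def differentiated_search(problem: str, iterations: int = 3) -> list[str]:
    solutions: list[str] = []
    for _ in range(iterations):
        prior = "\n".join(f"- {s}" for s in solutions) or "(none yet)"
        prompt = (
            f"Problem: {problem}\n"
            f"Solutions proposed so far:\n{prior}\n"
            "Propose ONE new solution that is clearly different in "
            "approach from all of the above. Two sentences maximum."
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        solutions.append(reply.choices[0].message.content.strip())
    return solutions

for s in differentiated_search("reduce food waste in supermarket supply chains"):
    print("*", s)
```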


A team at MIT is experimenting with using AI for social science – what they call 'automated social science' – using GPT to generate multiple types of people and then running economic experiments on them.
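A toy version of that idea, hedged heavily (this is not the MIT team's code; the personas, prompts and model name are invented), might generate synthetic participants and run a classic ultimatum game:

```python
# A toy sketch of 'automated social science': synthetic participants
# playing an ultimatum game. Not the MIT team's code; personas,
# prompts and model name are invented for illustration.
from openai import OpenAI

client = OpenAI()

def ask(persona: str, question: str) -> str:
    """Pose a question to a simulated participant; expect a bare number."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Answer with a number only."},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content.strip()

proposer = "a cautious 45-year-old accountant"
responder = "an impulsive 19-year-old student"

offer = ask(proposer,
            "You must split $100 with a stranger. "
            "How many dollars do you offer them?")
accepted = ask(responder,
               f"A stranger offers you ${offer} out of $100, keeping the rest. "
               "Reply 1 to accept or 0 to reject.")
print(f"Offer: ${offer}; accepted: {accepted}")
```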


Before too long these tools should be able to help with the thousands of everyday questions surrounding the design of AI itself. We live surrounded by huge numbers of AIs – whether based on machine learning, computer vision or LLMs – and their designs can be good or bad, efficient or inefficient, life-enhancing or destructive, just as turned out to be the case with social media and cars.


In social policy, for example, we know that AI tools have a mixed record. In principle they can help predict problems and target resources more effectively – for example, identifying emerging mental health problems.

But in the real world they have turned out to be of uneven value, often generating too many false positives or simply embedding biases. So a lot of work has had to be done to tweak and improve the AIs so that they don't deliver foolish results. In some cases this is starting to work well: a recent study showed how, with good design, AI could sharply reduce racial bias in decisions about investigations into child maltreatment.
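Much of that tweaking is unglamorous auditing. A minimal sketch of one common step – measuring false-positive rates by group and recalibrating per-group thresholds – looks like this (the data, thresholds and target rate are all invented for illustration):

```python
# Sketch: auditing a risk model's false-positive rate by group and
# recalibrating per-group thresholds. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # two demographic groups
label = rng.random(n) < 0.1         # 10% genuine cases
# A biased score: group 1 receives systematically inflated risk scores.
score = rng.random(n) * 0.6 + label * 0.3 + group * 0.1

def false_positive_rate(scores, labels, threshold):
    flagged = scores >= threshold
    return (flagged & ~labels).sum() / max((~labels).sum(), 1)

# With a single global threshold, group 1 is over-flagged:
for g in (0, 1):
    m = group == g
    print(f"group {g} FPR at 0.7: "
          f"{false_positive_rate(score[m], label[m], 0.7):.3f}")

# One remedy: set each group's threshold to hit the same target FPR.
target = 0.05
for g in (0, 1):
    m = group == g
    # Threshold = the (1 - target) quantile of that group's negatives.
    t = np.quantile(score[m & ~label], 1 - target)
    print(f"group {g} calibrated threshold: {t:.3f}, "
          f"FPR: {false_positive_rate(score[m], label[m], t):.3f}")
```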



That study is just one of dozens of examples that should be drawn on to guide policy. They push us to ask questions, and not to see AI as magic.

This should be obvious. It is now years since Facebook tried to use an algorithm to spot suicidal tendencies, but without a coherent strategy for what to do next.


It is now four years since the Netherlands government had to resign over bad decisions made by an algorithm used to detect welfare fraud.


And there are now many years of experience from education where, despite the potential of AI to personalise learning or support teachers, overall results have been disappointing (I used to oversee an investment fund commissioning AI for schools, so I have seen this up close).


In each of these cases the AI turned out to be as much a question as an answer – a prompt to design better. Yet far too often in public discussion AI is presented as something complete, or even magical – and the opacity of so many algorithms doesn't help.

For designers the lessons are clear: test in real-world contexts; involve users in the design task; constantly question, improve and rethink.


Only then can some of the practical choices be handled well: when, in the user experience, to move between algorithms and people; how to ensure privacy; and how much choice to give users.


For the world, the challenge will then be to fill out what I call the ‘thousand-cell matrix’ of governance of AI.  Think of all the possible risks (some summarised in the left-hand column); the many different domains where they show up (the middle column); and the many possible responses, some summarised in the right-hand column.


You quickly get a matrix with at least a thousand different cells, each of which requires a different answer. The idea – still sometimes promoted in AI circles – that all we need to worry about is some vague sense of AI safety completely misses the point, and the lessons of history.
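The arithmetic is easy to check. Even with short, purely illustrative lists of risks, domains and responses (my own examples, not a definitive taxonomy), the cells multiply quickly:

```python
# Sketch: even short illustrative lists of risks, domains and
# responses multiply into hundreds of governance cells;
# lists of ten each would pass a thousand.
from itertools import product

risks = ["bias", "privacy loss", "manipulation", "safety failures",
         "job displacement", "concentration of power"]
domains = ["health", "education", "welfare", "policing", "finance",
           "media", "transport"]
responses = ["regulation", "standards", "audits", "liability rules",
             "skills and training", "transparency requirements"]

cells = list(product(risks, domains, responses))
print(f"{len(risks)} risks x {len(domains)} domains x "
      f"{len(responses)} responses = {len(cells)} cells")
# e.g. ('bias', 'policing', 'audits') is one cell needing its own answer
print(cells[0])
```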


AI – like every technology – is a question. This should have been obvious for years, not least because of events like the one pictured below. AI is probably the greatest design challenge in human history, with so many potential positive and negative implications. But we have yet to get our collective heads around how to think about it, too often veering between excessive evangelism and dystopian fear.


There are good reasons why so many of the pioneers have tried to obscure or distract. They believe they are in a winner-takes-all race in which only one firm and one nation will triumph on the road to Artificial General Intelligence. In their eyes this justifies blocking any serious progress on governance – and too many politicians and commentators have been gullible in allowing it to happen.


I’m still surprised how little progress has been made over the last decade, even as AI has been very much part of daily life. But it’s not too late to make a serious start.


