
Human ChatGPTs and the vices of foggy thinking


Since the arrival of ChatGPT there has been much debate about how AI can replace humans or work with them. Here I discuss a slightly different phenomenon, which I increasingly notice and probably wouldn't have without the presence of generative AI: people who act rather like large language models. The phenomenon isn't new. It's just that we now have a new way of understanding it. As I show, it's quite malign, particularly in academia.


The strength of ChatGPT is that it can quickly assemble a plausible answer to a question, drawing on materials out in the world. It sucks them in, synthesises them and mimics, sometimes quite convincingly, someone knowledgeable talking about a subject.


Its weakness, of course, is that it doesn’t understand any of the material and can hallucinate all sorts of errors (see my recent blog on the IPPO website about the use of LLMs in synthesis).


Lots of people now use ChatGPT to help them with first drafts of articles or talks. But I'm more interested in the people who act like an LLM even if they don't actually use one. These are the smart people who absorb ways of talking and framing things and become adept at sounding convincing. The problem is that if you ask a probing question you find they have very little understanding. It’s all surface and no depth. It’s all mimicry rather than thought.


The classic example in academia was Alan Sokal’s piece ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity', which was submitted to and accepted by the journal Social Text. The piece was deliberately written to sound plausible, at least to the academic community served by the journal. Yet it was in fact wholly meaningless. It was a perfect example of vapid mimicry and was bitterly resented by the academic community it mocked.


Sokal’s stunt was an extreme example. But what he was mocking is not so exceptional. Many people in many fields, including quite a few in academia, act rather like a ChatGPT, particularly in disciplines that do little empirical work and rarely deal in facts or testable hypotheses. The more a discipline consists of commenting on texts (as a surprising proportion of the social sciences and humanities does), the greater the risk of such foggy talk.


I suspect we all do it a bit, especially early in our careers, when we have to skate over the gaps in our knowledge and work hard to sound convincing (and to sound as if we fit in with the assumptions of our discipline). But I also notice it in plenty of people over 50.


There are plenty of telltale signs. They include: talking in very general terms, often in clever-sounding formulations laden with jargon and sub-clauses; offering few if any facts; using few examples, or using them casually with no real feel for the actual cases; and saying nothing clear enough to be disagreed with.


I used to worry, when I heard apparently clever people speak in elliptical ways I didn’t understand, that it was my fault. I just assumed they were much smarter than me. That remains true in some fields – I have listened to many lectures on quantum computing but still struggle to grasp it.


But one advantage of age and experience is that I now realise that my not understanding what someone is saying is sometimes a sign that they don’t know what they’re talking about, and that they are essentially acting like an LLM. This would become apparent if they were ever interviewed in the way that politicians are sometimes interviewed in the media, with forensic questioning: ‘What do you actually mean by x? What’s an example of what you just said? What would your best critic say about your comments?’


But this rarely happens. At most you might present a paper at a conference where someone has been asked to be a discussant; but there is never a serious, forensic dissection afterwards, and there is nowhere in the media where this happens in relation to ideas either. (I once pitched this to the BBC as a programme idea – a forensic examination of popular theories and ideas – but the preference is for much lighter, and politer, discussion.)


Once you think about it, you notice the human ChatGPTs a lot, and not just in parts of academia. It’s quite common in the media – some newspapers (in the UK, ones like the Daily Telegraph, Guardian or Daily Mail) have columnists who are essentially human versions of ChatGPT: they distil the worldview of the paper and produce entirely convincing columns with entirely predictable responses to issues in the world, never saying anything original but presumably going down quite well with some of the readers.


A few years ago I wrote a blog on the related phenomenon in leadership, which I called blowing bubbles – the ability to speak convincingly without actually saying anything. It’s an important skill in some fields like politics, and often required of leaders in other fields such as universities.


Now ChatGPT essentially shows us how it is done – how lots of materials can be synthesised into plausible streams of words. Perhaps it’s unavoidable. But in academia it’s a very unhealthy vice – it would be far better for people to get out and about and observe things than just to mimic texts. The longer I spend in academia, the more I value work that gets out into the world and looks at it directly rather than mediated by others' texts (perhaps too I am influenced by being in a department of engineering, a discipline that has little patience for bs).


And just as I would prefer to be surrounded by people who don’t act like robots, I would also prefer to listen to people capable of original thought.



