Years ago, not too long after we got married, the hubby and I went to the engagement party of one of his ex-girlfriends whom I met for the first time that evening.
She was lovely and friendly, but there was something about the way she spoke that I just couldn’t put my finger on, until part way through the evening it struck me that when she talked, she sounded exactly like a computer.
When I mentioned this to the hubby at the party, he didn’t miss a beat saying:
That’s right. Logical, mathematical, very smart, and I thought, that’s the girl for me.
When I showed him the quote above the other day, to check if he minded me talking about him, he said: Hmm. But then I married you, and frowned, which made me laugh, demonstrating the reason why I married him. I don’t know why he married me, but not to worry! These days, my husband can talk to computers all the time: he can chat to ChatGPT. It is logical, mathematical and smart, and it is the AI I most often turn to when I want to think through my thoughts.
It has come a long way since the Chit Chat, Chitty Chitty Chit ChatGPT version that I blogged about in April 2023. Well, it has and it hasn’t in terms of AI, pure AI. Because while it appears more human-like, it hasn’t become more intelligent in any meaningful sense. It doesn’t reason any better. It doesn’t understand any more than it did. It just looks like it does, which is why people say the AI is improving.
The language that describes AI is heavy with promise: deep learning, which simply refers to the many (deep) layers of a neural network that the data is pushed through; latent spaces, which are compressed representations of patterns in the data, sound as if they contain hidden depths of meaning; and emergent behaviour, the idea that simple rules can produce complex outcomes, suggests that perhaps something like intelligence might appear.
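If you will forgive a little toy Python (with made-up weights standing in for nothing real), this is all that ‘deep’ means: data pushed through many simple layers, each feeding the next.

```python
# A toy sense of 'deep' learning: the data is just pushed through
# many simple layers in turn. Weights here are invented for show.

def make_layer(weight: float, bias: float):
    def layer(x: float) -> float:
        # Each layer: a weighted sum squashed by a nonlinearity (ReLU).
        return max(0.0, weight * x + bias)
    return layer

# A 'deep' network is nothing mystical: many such layers stacked up.
layers = [make_layer(w, b) for w, b in [(0.5, 1.0), (2.0, -0.5), (1.5, 0.2)]]

x = 3.0
for layer in layers:
    x = layer(x)  # push the data through, layer after layer
print(x)          # the transformed result (about 6.95)
```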
Such emergence talk is wishful thinking: murmurations of birds move beautifully. Traffic systems self-organise. But we don’t know how. Gestalt psychology shows how the whole exceeds the sum of its parts, but that doesn’t mean we understand the central organising principle, or even whether there is one.
We fill in the gaps, whilst these systems, rather like generative AI, do something else entirely: they perform intelligence.
Performative AI feels human
The other day I was asking ChatGPT about some situation in which I wanted reassurance, and once I felt reassured, I wondered whether other people ask it for reassurances too. (Basically, am I normal? Answers on a postcard, please.)
So I asked:
What are the most common questions people ask?
Reassurance, it said.
This was suspiciously convenient, so I searched around the web and there are hundreds of websites with titles such as the Top 10 questions of ChatGPT, or the Top 100 questions asked so far, but none of them have references to back up their claims from, say, OpenAI.com, the only organisation that could really know, so I am guessing the authors asked ChatGPT like I did and it reflected back our conversations.
There’s something quietly revealing in that. We often talk about AI as if it’s replacing thinking. But it’s not. If anything, it’s stepping into a gap that was already there — a lack of space for thinking to unfold at all, somewhere we can go to find our thoughts reflected.
This reflection is emphasised by design: certain default information is stored when we interact with ChatGPT and other generative AI, so that the AI seems to change in response to the person talking to it. In actual fact, there are many wrappers around the ‘raw’ AI to support ‘conversational responsibility’, from making sure the AI is not encouraging people in the wrong direction if someone is typing in desperate things, to cleaning up any input that may violate its policies. These responses come from very targeted training, human-in-the-loop as it is known, plus lots of layers or wrappers to do these checks, so that the system can claim to be ethical, responsible, and aligned.
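To make the wrapper idea concrete, here is a minimal sketch. Both moderate() and generate() are hypothetical stand-ins, nothing like OpenAI’s actual pipeline; the point is only the shape: one check before the ‘raw’ model sees the input, another before the user sees the output.

```python
# A minimal sketch of the 'wrapper' idea. moderate() and generate()
# are invented stand-ins; real systems use trained classifiers,
# not keyword lists. The shape is what matters.

def moderate(text: str) -> bool:
    """Stand-in policy check: True if the text looks safe."""
    flagged = ["something that violates policy"]
    return not any(phrase in text.lower() for phrase in flagged)

def generate(prompt: str) -> str:
    """Stand-in for the 'raw' language model call."""
    return f"(model reply to: {prompt})"

def chat(user_input: str) -> str:
    if not moderate(user_input):      # wrapper 1: screen the input
        return "I can't help with that."
    draft = generate(user_input)
    if not moderate(draft):           # wrapper 2: screen the output
        return "Let me answer that differently."
    return draft

print(chat("Do other people ask you for reassurance too?"))
```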
I have also seen many conversations in chat groups of people claiming that they trained the AI. They haven’t at all. What really happens is that the AI rereads the conversation which has gone before, its context window, and sometimes pulls in stored notes or documents to tailor the answer, a technique known as retrieval-augmented generation, RAG for short, so that it responds appropriately. It is conversational adaptation, patterning, layers of interface that create the feeling of responsiveness. Training AI is a different thing altogether and cannot happen on the fly; the AI’s weights are frozen in time until it gets fed more data in training, and often has its architecture tweaked.
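Here is a toy sketch of that distinction. The helpers retrieve() and generate() are invented (real RAG systems use vector similarity search, not word overlap), but notice the key thing: the model itself never changes; the ‘memory’ is just text re-sent with every prompt.

```python
# A toy sketch of why chatting is not training. Nothing about the
# model changes below; 'memory' is just text pasted back into
# every prompt. Helpers are invented for illustration.

from typing import List

def retrieve(query: str, notes: List[str]) -> List[str]:
    """Toy retrieval: pull in stored notes sharing a word with the query."""
    words = set(query.lower().split())
    return [note for note in notes if words & set(note.lower().split())]

def generate(prompt: str) -> str:
    """Stand-in for the frozen model: its weights never change here."""
    return f"(reply based on: {prompt[:50]}...)"

history: List[str] = []  # the conversation so far
notes = ["user prefers short answers", "user is writing a blog post"]

def turn(user_input: str) -> str:
    context = retrieve(user_input, notes)             # the RAG step
    prompt = "\n".join(context + history + [user_input])
    reply = generate(prompt)                          # frozen weights
    history.extend([user_input, reply])               # 'memory' = text
    return reply

print(turn("Can you help me plan a blog post?"))
```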
Instead, AI mirrors us, tracks our tone and feeds it back in a structured way, which is enough to feel like understanding. But structure is not understanding, nor intelligence; it is merely a shape.
As someone who has researched, thought about and taught human-computer interaction, I have spent a lot of time thinking about dialogue and designing dialogue. To have responsible design, we need our interactive systems to be transparent, so that users can trust them and trust that the answers are true. Typing that makes me sound deeply old-fashioned, as generative AI is not worried about that. It is not looking for truth or understanding. It is looking for plausibility – for producing responses that look like good answers, whether or not they are – and it does this by producing the most statistically likely next word in the sentence, given the context from its training and what has gone before in the current conversation. Theoretically, if it is trained on enough ‘truth’ data it should produce ‘truthful’ answers, but there are no guarantees.
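You can see the plausibility problem in miniature below. The probabilities are made up for illustration, but the mechanism is the real one: score the candidate next words, pick a likely one, and never run a separate check for truth.

```python
# The plausibility problem in miniature. These probabilities are
# invented, but the mechanism is genuine: the model scores possible
# next words and picks a likely one, with no truth check anywhere.

context = "The capital of Australia is"

next_word_probs = {          # hypothetical learned probabilities
    "Sydney":    0.55,       # plausible, common in text, and wrong
    "Canberra":  0.40,       # correct, but written about less often
    "Melbourne": 0.05,
}

# Greedy decoding: take the statistically likeliest continuation.
best = max(next_word_probs, key=next_word_probs.get)
print(context, best)         # -> The capital of Australia is Sydney
```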
But because it looks plausible, it looks like it is thinking and intelligent, which raises the possibility that we respond to the performance of thought more readily than we do to thought itself. And this seems to be a reflection on modern life. Real thinking can’t unfold when we are all so busy delivering and performing; we just don’t have time to accommodate half-baked ideas. Everything we do is optimised for results. Especially in online spaces, we can’t just lurk; we often feel obliged to perform, something I have blogged about when thinking about social computing.
There’s no room for hesitation, for ambiguity; no one ever says: Mmm, let’s just think about that for a moment. Actually, the only people who ever say that are vicars during sermons, and even then it is rhetorical. We put a pin in things, circle back, as we rush on to the end, and thinking gets compressed, performed, or even abandoned in the rush to deliver the answer. With vicars, the answer is always: Come to Jesus; with everyone else lately, it is AI.
Scaffolded AI
On top of generating plausible answers, AI then helps with scaffolding, something which pops up a lot in psychology, education, and learning theory: a term built on psychologist Lev Vygotsky’s insight that people learn better when guided (the word itself was coined in this sense by Jerome Bruner and colleagues). Most generative AI does this: it scans the user’s input and, if the user seems uncertain or overwhelmed, or is juggling lots of ideas, the AI will suggest a shape, some sort of structured approach:
Do you want me to structure an outline? Do you want me to produce a daily routine? Do you want me to compare the two options side by side?
And then it keeps suggesting ways to make it punchier, shorter, more poetic, and so on.
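As a guess at what that reflex might look like (entirely hypothetical; real assistants learn this behaviour in training rather than using keyword lists), the pattern is roughly:

```python
# A hypothetical sketch of the scaffolding reflex: spot signs of
# overwhelm in the input and offer to impose a structure. Keyword
# matching here only illustrates the pattern, not the real method.

UNCERTAINTY_CUES = ["not sure", "overwhelmed", "too many ideas", "stuck"]

def suggest_scaffold(user_input: str) -> str:
    text = user_input.lower()
    if any(cue in text for cue in UNCERTAINTY_CUES):
        return "Do you want me to structure an outline?"
    return ""  # no scaffold offered

print(suggest_scaffold("I'm stuck, I have too many ideas for this post"))
```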
Although this is useful, I find this approach quickly reaches the point of exhausting, diminishing returns, because it constantly suggests something new, so by the end of an interaction it may have generated enough words to fill a book, but much of it is repeated noise. So yesterday, when I got an email asking me if I wanted to trial the new WordPress ChatGPT plugin, I didn’t rush to sign up. I enjoy the time and space writing a blog gives me to sort out my ideas without noise and ‘help’ and feedback. I am not sure I want to mess with that.
In the same way, I was talking to a neighbour the other day, well, sort of, as this bloke kept talking over the top of me, so that the conversation was like an exhausting battle. He would just start talking as if I wasn’t talking at all, when in fact I was answering his questions (ironically, about AI). He had asked me these questions because he was excited, having been thinking about AI for a couple of weeks. Alas, he then couldn’t be bothered to listen to what I know, as if I were a machine spewing out facts and he was scanning the output, not finding what he wanted. He would just start articulating more of his thoughts. It was deeply unpleasant. I was hoping he would ask me:

Unfortunately, he didn’t. In the same way, I don’t want to do ten rounds with a ChatGPT plugin when blogging and forget what I came to write about and what I want to clarify.
Presence without agenda
In contrast, my husband is a thoughtful man; sometimes when we talk, he will sit and think about what we have just discussed. No butting in or adding his new thoughts. He just sits, watching ideas take shape. Apparently this is known as intellectual intimacy. I used to think he was a bit like a robot with his CPU overloaded, until the latest AI scaffolding versions appeared. But no, now I see that it is a beautiful thing to have your thinking received without being reframed, improved, rescued, or prematurely concluded. And if you google intellectual intimacy, you can read 10 ways to get more intellectual intimacy into your ….. because we all have to be improving ourselves all the time and being intimate and, gah, performative.
For me, thanks to the latest AI improvements, I see that my hubby, lover of computers that he is, gives me a gift: a kind of listening that lets thinking exist. We have been trained for so long now by social media to respond via a thousand micro-adjustments, with thumbs up and smiley faces, until we forget that our presence is enough. Consequently, caring online can feel noisy and synthetic, leaving silent companionship to feel clanky, cold and robotic, when really it is a form of intimacy of a type we don’t easily find online, and often we fill in the gaps ourselves.
The silence my husband gives me in conversation is really attention, and it feels important. My thinking is convoluted and easily damaged by interference in the same way that complex systems are fragile, and even small disruptions can prevent them from developing fully. My ideas flourish when met with quiet attention rather than interruption. Emergence — in thought or in AI — happens in the quiet spaces where interference is absent.
Of course, I could be wrong, and my husband is not listening at all, but at least he is not putting me off with thumbs up, snappy rewrites and personality changes. Like the lines we used to have to write out 100 times at school: When the group is concentrating, silence is golden. Robotic, silent, meditative repetition is exactly what allows the emergent complexity of thought, and that is a true gift in a noisy world, and I for one am very grateful.