In the Mirror of the Averaging Interface?

Contrary to my general policy, the header image was generated by ChatGPT. This is what you get when you ask an AI system to generate a self-portrait. A nebulous installation of stars connected with wires of light. It suggests something we have come to associate with the letters ‘A’ and ‘I’, though I still wonder what it really means.

In this post, I try to organise some of my admittedly conflicted feelings about the use of AI, specifically in the context of serious work. (It goes without saying that it is an utter waste to use any AI tools to create cute cat videos or fake news…)

Over the past year, I have occasionally used some AI tools, mostly ChatGPT and Perplexity. I have used them in three ways:

  • As an advanced search engine. ChatGPT will give good answers to questions in situations where Google only lists websites where the information might be found. The ability to use natural language is very convenient, though the quality of the result depends on the quality of the prompt. Still, the answers are sometimes rather flat or trivial, or plain wrong.
  • As a spelling and grammar checker. Since I am not a native speaker of English, I like to check my spelling and grammar before I publish a text. I have a standard prompt for this job, which has saved me time and spared me mistakes (a sketch of such a prompt follows this list).
  • For pattern recognition. Since ChatGPT has read* (not really) most books, you can ask it to identify patterns. Recently, I facilitated a team workshop in which we asked ChatGPT to predict team interaction based on our Myers-Briggs personality types. It offered us a good starting point for a group discussion.
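
For the curious, here is a minimal sketch of how such a standard proofreading prompt could be wired up through the OpenAI Python SDK. The prompt wording and the model name are illustrative assumptions, not my actual setup:

```python
# A minimal sketch of a reusable proofreading prompt via the OpenAI
# Python SDK. The prompt text and model name are illustrative
# assumptions, not the exact ones referred to above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROOFREAD_PROMPT = (
    "You are a careful copy editor. Correct spelling and grammar only. "
    "Do not change my tone, vocabulary, or argument. "
    "Return the corrected text, followed by a short list of the changes."
)

def proofread(text: str) -> str:
    """Run a draft through the standard proofreading prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PROOFREAD_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(proofread("Their is a problem with this sentense."))
```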

But I also have concerns about my use of AI. These concerns are not just based on my own experience; they reflect points made by others as well. I am concerned about (1) what the use of AI does to my thinking, and (2) the uncanny experience of feeling ‘seen’ by AI.

Whose Intelligence?

AI means ‘artificial intelligence’, but does AI think? No. Artificial intelligence does not think; it calculates. It is the great ‘averaging interface’, which calculates the likelihood that you want this response to that prompt. It draws on a massive store of statistical patterns to come up with answers. But thinking is something else. You can see this when you look at your own thinking as a user of AI.

The process of interacting with ChatGPT is mesmerising. As I go back and forth in our discussion, I keep tweaking the prompt, deciding what to accept and what to dismiss, surprised by a clear observation or annoyed by a silly hallucination. There is something exciting about seeing the new answers roll across the screen. How clever and fast!

But as I started to reread my previous conversations, some of which I had printed out because they seemed so insightful, I noticed something else. On second reading, I realised how stale or trivial they all were. While I still get shivers rereading journal entries I wrote years ago, these exciting chats felt foreign and second-hand. Why do these insightful suggestions have so little purchase on second reading? Is it because I did not find and formulate them myself?

These snippets of artificial insight come to me effortlessly. They disappear just as effortlessly. They leave no trace because they are so easy, so frictionless. Compare this with real wisdom: embodied sense-making, which requires movement, re-thinking, slowing down, and my own unreadable but real writing.

These interactions with ChatGPT have convinced me once again of the need to read, write, and think for myself. They also contain warning signs, even if I am not quite sure what they warn against.

Feeling Seen, but by Whom?

Here is another observation: an answer from ChatGPT can make you feel ‘seen’. People describe it as talking with a ‘friend’, or perhaps a coach or even a therapist. I recognise that feeling from some experiments in which I asked ChatGPT to use a particular psychological model to analyse my own experiences. I felt seen and understood. In the past, I have also had that experience while reading a book, but a book is not personalised the way a chat with ChatGPT is.

I had to remind myself of two important observations:

1. The difference between quantity and quality. An important trick in the culture of capitalism is exploiting confusion about the difference between quantity and quality. More data does not lead to better answers, at least not automatically. The large language models behind AI-powered sites have read, or at least processed, more books than any of us can imagine, let alone read. The answers to our prompts are therefore not the result of intelligence or understanding, but of statistical bets and quantitative overload masquerading as insight. Useful for certain tasks, but not to be confused with thinking.

2. Output mirrors the input. AI reflects or mirrors what you give it as input. You can experiment with this by changing the way you phrase your prompts; a sketch of such an experiment follows below. If your prompt uses commands (do this, change that), the result will use direct language and short sentences as well. However, if you use more nuanced and open language, the responses seem to be more nuanced and multilayered. ChatGPT will even mimic your mood, especially if you use more reflective language in your prompts. Since the output mirrors the input, the question becomes: who do we see in the mirror? Is there any substance to it, or are we merely staring into Narcissus’s mirror?
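
If you want to try this yourself, here is a minimal sketch of the experiment: the same question, asked once in a terse command style and once in a reflective style. The model name and the two prompt wordings are illustrative assumptions:

```python
# A sketch of the mirroring experiment described above: the same
# question in two registers, sent through the OpenAI Python SDK.
# Model name and prompt wordings are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

QUESTION = "What does rereading old journal entries do for a writer?"

PROMPTS = {
    "command": f"Answer briefly. No hedging. {QUESTION}",
    "reflective": (
        "I have been mulling over a question and would value a nuanced, "
        f"thoughtful perspective: {QUESTION}"
    ),
}

# Print the two answers side by side so the difference in register shows.
for style, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {style} ---")
    print(response.choices[0].message.content)
```

In my experience, the first variant comes back clipped and declarative, while the second comes back hedged and multilayered: exactly the mirroring described above.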

AI as Technology

Some will dismiss these questions with a variation on the old ‘it is just a tool’. As long as we manage to use this supposedly neutral tool well, all will be well. But the issue is that AI is not a tool. Following the ‘Standard Critique of Technology’, it can be argued that AI is instead a mirror in which we see how we think about technology, things, and the world. AI discloses information as a resource, ready for value extraction. Extraction destroys, but what, and whom?

The argument goes like this:

We live in a society where technology has become so powerful that it often ends up shaping us more than we shape it. Tools that were meant to serve us can start to guide our behaviour, set our priorities, and influence how we see the world.

For example, social media says it helps people connect, but it can also push people toward outrage and group pressure. Face-recognition systems can help find criminals, but they can also be used to monitor large numbers of ordinary people. Taken together, many modern technologies make life more convenient while quietly encouraging us to accept their rules and fit into their systems.

The answer is not to reject technology. Humans have always used tools, and we always will. The real question is what kind of tools we choose and how we use them. Some tools steer and control us; others support meaningful human goals.

Think of a dinner table that brings a family together, or a musical instrument that creates shared enjoyment when someone plays it well. These are tools that strengthen relationships and add depth to life. Good technologies fit naturally into human life and help people connect, create, and care for one another.

So our task is to pay attention to how different technologies influence us and to choose the ones that support healthy, meaningful ways of living—both as individuals and as a society.

The above summarises part of an article by Alan Jacobs that was published a couple of years ago in The New Atlantis. The summary was generated by ChatGPT.

The Wasteland of Peak AI

There is another AI paradox, namely that the indiscriminate adoption of AI (Peak AI) will look like an information wasteland. This idea is based on the rapid convergence of two trends around AI.

Trend 1: the more AI models develop, the clearer it becomes that they are not intelligent.
The language models might get larger and the statistical analysis might improve, but the difference between a machine and a human is qualitative, not quantitative. It is infinite, not because a human being can make what looks like a (near) infinite number of ‘calculations’, but because the human spirit is not a machine: it is embodied, relational consciousness. AI can asymptotically approach a simulation of it, but it remains dead. At some point, we will understand this.

Trend 2: the more people use AI tools, the less intelligent they become.
This is because they exercise their intelligence muscles less frequently. There is already disturbing research about the impact of AI use on cognitive capabilities. Once people stop thinking because the shiny built-in AI tool in their software suggests what they want to type, they no longer know how to start thinking on their own again.

When these two trends meet, we do not reach a tech paradise; we reach the information wasteland where everything drowns in AI slop.

How AI Is Sold

The metaphor of ‘drowning’ suggests something else. John Willshire made an interesting observation about information and the difference between liquid and light.

We are used to the story of information as liquid. We hear about “drowning in data”, the information deluge, etc. Information is seen as a commodity requiring storage and management focused on quantity. We need to “account for” things.

The alternative story is one of information as light. We talk about “insight”; there are light-bulb moments, perspectives, and points of view. Information is not uniform; every piece is different. The concern is how to filter and select things. Ultimately, we need to “make sense” of things.

With this distinction in mind, we can make sense of how AI is sold to us: it is liquid thinking wrapped up as light thinking.

The implicit or explicit promise is that it gives us insight; the promise is light. But what if that is just vapourware, a simulacrum, a wrapper? What gets delivered is an instant-gratification commodity, a liquid. In short, it is just the next round of hyper-capitalism.

So What?

As something wrote:

AI does not simply give us new tools; it reshapes the conditions under which thinking happens. If its outputs are frictionless, abundant, and plausible, then the scarce goods become attention, judgment, and the slow work by which insight becomes one’s own.

The real question is therefore not whether AI is intelligent, but what forms of intelligence it quietly rewards: speed over patience, fluency over depth, prompt craft over lived understanding. In that sense, the danger is less that AI will make us stupid than that it will make our knowledge feel second-hand — sentences we can use but do not inhabit.

Serious work, then, will increasingly be defined by what resists automation: moral judgment, responsibility, lived experience, and the formation of meaning. Used well, AI can extend our reach, but it cannot replace the inner labor by which thinking becomes wisdom. The task is not to reject AI, but to ensure that, in a world of easy answers, we still practice the disciplines that make answers truly ours.

ChatGPT suggested this as an answer to the So What? question. Do you think it makes sense? AI is a powerful mirror, but you need to do your own thinking. If anything, this proves the need for a well-formed mind. Look away from the mirror for a while, read a book, write something.