The Human-Machine

A famous computer scientist once said: “As early as the 1960s, people believed that having a conversation with a machine would lead to genuine feelings.” That sounds absurd, but it is exactly what happened when Joseph Weizenbaum introduced his program ELIZA. ELIZA was extremely simple and could only imitate a few conversational patterns, yet many users treated it as something like a fellow human.

Today, with ChatGPT, Claude, or Gemini, we often assume that language models are built from the ground up to have a personality of their own. But that is not true at all, and this is the fallacy almost everyone falls for. The widely held belief is that large language models like ChatGPT are simply created from vast amounts of data and then emerge more or less “finished”: with personality, opinions, and perhaps even a kind of soul. In reality, the finished chatbot product is the result of extensive post-processing. The so-called base model, the raw model that emerges from training, is not yet a “conversational partner” at all; it merely reproduces what its training material provides. If the training consists mainly of mathematical definitions, the model sounds like a calculator. If Reddit dialogues predominate, it suddenly seems almost human.

The key point is that the human element is not built in; it is layered on top. Elon Musk's assistant Grok, for example, is explicitly described as “designed to maximize truth and objectivity.” Anthropic describes Claude as “helpful, honest, and harmless.” The base models do not possess these characteristics on their own. They acquire them only through targeted fine-tuning, through so-called reinforcement learning processes, and, in Anthropic's case, even through philosophers like Amanda Askell, whose job is to teach the chatbot ethical traits. A chatbot's personality is thus a product of design, not a chance discovery derived from data.
The training data for these models is a wild mix: Common Crawl, a huge snapshot of the internet; Reddit; Wikipedia; GitHub for programming code; scientific papers from ArXiv and PubMed; and digitized books. It includes everything from mathematics to everyday language, from elaborate literary German to grammatical disasters. Depending on the context, the model can come across as an emotionless robot or as a caring friend. However, the moments when a bot truly seems “human” are often unstable, vary with the situation, and can be controlled by the developers only to a limited extent.

One detail many overlook: an AI's apparent humanity is often created solely by subtle linguistic markers, for example when the bot hints at feelings, describes internal states, or even uses emojis. This is the so-called ELIZA effect: we perceive humanity because we want to hear it.

Here is a thought that is rarely discussed: what happens if an AI model is deliberately trained to be inhuman? Wouldn't a chatbot that responds only in formulas, code, or cold bureaucratic language unsettle us just as much as a bot that is too “human”? The real decision, then, lies with the developers: they determine how much humanity is embedded in a chatbot, and that is not a technical by-product but an ethical statement.

In the end, the bottom line is this: the personality of chatbots is not a by-product of their training but a deliberate design choice. From now on, when you talk to an AI, you will know that the humanity you sense is an artifice, and it could look very different tomorrow. If this idea keeps running through your mind, you can use I'm In on Lara Notes to mark that this new perspective on AI is now part of your thinking. And if you discuss with someone tonight why AI sometimes seems human and sometimes profoundly alien, you can capture the conversation using Shared Offline; that way, the moment when you reflected together will be preserved.
That was an essay by Max Beck published in the cultural magazine Merkur – instead of ten minutes of reading, you listened for three minutes.