A meaningful experiment about the limits of artificial intelligence

Published: Monday, Jan 29th 2024, 10:20


Language models like ChatGPT are stupid. They are trapped in themselves. They make no reference to the real world. This can be seen in a literary experiment by media philosopher and author Hannes Bajohr.

This experiment is the AI novel "(Berlin, Miami)". Although the work shines with linguistic images, the sentences are merely strung together. There is no narrative logic or deeper meaning.

Hannes Bajohr, who teaches at the University of Basel, writes: "One of the more disturbing insights of recent AI research is that language models that are trained on their own output begin to degenerate." Behind this is a problem known as "model collapse".

In order for language models to keep improving, they require vast amounts of human-written text as training data. But precisely this kind of text is becoming scarce, and therefore valuable, because models like GPT have already ingested almost everything circulating freely on the internet. A dispute over copyrighted texts is therefore inevitable.

At the end of December 2023, the New York Times filed a lawsuit against OpenAI, the operator of ChatGPT, for illegally using its journalistic texts. Billions of dollars are at stake, and the New York Times wants to be properly compensated for its copyrighted material. Swiss media companies, too, are hotly debating how to deal with ChatGPT.

In addition, language AI is developing into a black box that swallows everything without allowing us to see inside. As early as 1956, the philosopher Günther Anders wrote that a rift runs through modern man "between our ability to produce and our ability to imagine". In other words: we no longer understand what we produce.


Hannes Bajohr's literary experiment illustrates this black box. He trained an open-source GPT variant to algorithmically generate a novel of 250 pages: "(Berlin, Miami)".

Bajohr himself was amazed by the text: it gets off to a flying start, introduces two independent characters, Kieferling and Teichenkopf, and opens up a world as dazzling as it is strange with convincing vividness. Grammatically, there is nothing to criticize.

However, weaknesses in the narrative logic quickly become apparent. "The words shuffled in, a stage runner was able to guide my hand; I held the first sentence that communicated in front of me to see how it reacted," reads one passage in "(Berlin, Miami)". The scenery seems mysterious, the sentences become entangled, and the events are barely comprehensible.

Bajohr's novel convinces less through its narrative than through its excessive wealth of linguistic imagery. And it demonstrates the weaknesses of language models. Put simply, ChatGPT works much like the simple word completion in WhatsApp: one word leads to the next. The difference is that the "next token prediction" in ChatGPT can draw on an immense amount of text data in a far more subtle way.
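The "one word leads to the next" principle can be sketched as a toy bigram model. This is a deliberately minimal illustration, not Bajohr's actual setup or ChatGPT's architecture; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (invented for illustration).
corpus = "the words shuffled in the words fell out the stage was empty".split()

# Count which word follows which: a bigram table.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_token(word):
    """Return the most frequent continuation: 'next token prediction' in miniature."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

def babble(start, length=6):
    """Chain predictions together: one word leads to the next."""
    out = [start]
    while len(out) < length and next_token(out[-1]) is not None:
        out.append(next_token(out[-1]))
    return " ".join(out)

print(next_token("the"))  # "words" follows "the" most often in this toy corpus
print(babble("the"))
```

A real model replaces the bigram table with a neural network over billions of parameters and a far longer context, but the generation loop, predict the next token, append it, repeat, is the same in spirit.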

Even if such language models may stimulate the imagination, they are unable to tell a story. They can, Bajohr writes in the epilogue, establish correlations between things, but they do not compute causalities. Yet this is precisely what a narrative depends on. What happens, why and how, and in which network of relationships? That is what interests us when we read good stories.

The end of the novel

Once ChatGPT is triggered with a command or a request, text literally gushes out of it. The AI is informative at a high level, yet overall terribly mediocre. The underlying statistical process generates averages and eliminates deviations and special features. Above all, it writes linearly, endlessly.

Bajohr limited himself to minimal input and otherwise hardly intervened in the genesis of the text. Only at the end did he make use of "model collapse" to find an ending for the novel. He fed the language model its own text until it stuttered senselessly and finally fell silent: "Sartrisch Grandios".
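The collapse Bajohr exploits can be illustrated with a toy simulation (an invented sketch, not his actual procedure): a distribution is repeatedly refit on its own samples, and any token that happens never to be sampled in one generation can never return, so the model's vocabulary only shrinks.

```python
import random
from collections import Counter

random.seed(42)  # reproducibility

# Generation 0: a uniform "human-written" distribution over a small vocabulary.
vocab = list("abcdefgh")
dist = {tok: 1 / len(vocab) for tok in vocab}

support_sizes = []
for generation in range(30):
    # Sample a small "training set" from the current model ...
    sample = random.choices(list(dist), weights=list(dist.values()), k=20)
    # ... and refit the model on its own output.
    counts = Counter(sample)
    total = sum(counts.values())
    dist = {tok: n / total for tok, n in counts.items()}
    support_sizes.append(len(dist))

# Tokens that are never sampled vanish for good: the support never grows.
print(support_sizes)
```

In real language models the mechanism is less crisp, but the tendency is the same: training on a model's own output smooths away rare, deviating features until the text degenerates.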

The term "artificial intelligence" hardly contributes to a better understanding of what tools such as GPT achieve, writes philosopher Manuela Lenzen in her book "Der elektronische Spiegel" (2023), which is well worth reading. She therefore suggests no longer viewing AI as a "universal problem solver". It would be more expedient to focus on "alternative intelligent systems" developed to solve specific problems.

What holds the world together at its core "can neither be listed nor put into conceptual drawers and sorted into mental shelves". Real intelligence is not found in AI systems, she notes, but in playgrounds where children discover the world by playing together.*

*This text by Beat Mazenauer, Keystone-SDA, was realized with the help of the Gottlieb and Hans Vogt Foundation.

©Keystone/SDA


the swiss times
A production of UltraSwiss AG, 6340 Baar, Switzerland
Copyright © 2024 UltraSwiss AG. All rights reserved.