02 July 2019 | doi: 10.5281/zenodo.3274594

The cultural factor in the age of AI

Artificial intelligence is on everyone's lips – and evokes both great hopes and dystopian horror scenarios. But how can the enthusiasm for this technology be explained? The technological debate about AI seems to rest on a deep cultural layer in which human longings and fears lie hidden. In this article, Theresa Züger takes a closer look at the cultural factor of AI and shows that the AI debate can also teach us something about ourselves.

Artificial intelligence in culture

Artificial intelligence is a cultural reference more than it is a technological one. This doesn’t question the existence of trained machines labelled as artificial intelligence or the transformative impact that these developments will have on current societies. Artificial intelligence as a cultural myth refers to the narrative of machines overtaking and leading the human world to a higher form of existence. The term artificial intelligence signifies a subconscious collective meaning – a myth in the extended definition of Roland Barthes (Mythologies 1957). 


The myth of AI

In Barthes’ sense, a myth is more than an ancient story known to many. By his definition, potentially any semiotic process can gain a subconscious collective meaning. Artificial intelligence as a myth stands for a human narrative that is both deeply feared and deeply longed for. 

In an increasingly secular society, where religious belief in the afterlife has become rare, powerful personalities like Ray Kurzweil (Director of Engineering at Google), speaking with the authority of a scientist, represent the belief that a singularity will inevitably emerge from AI and transform human life into a higher mode of existence. In this fantasy of redemption, humanity gives the world a superhuman machine, becoming the eternal creator of a superior being.

On the side of human fears, others, like Oxford professor Nick Bostrom, predict that AI will grow out of control. He considers an intelligence explosion of AI very likely. In this scenario, humanity faces a machine dictatorship.

As new as Nick Bostrom’s and Ray Kurzweil’s fears of a non-human entity destroying or saving human life may seem, they represent a very old human fear and longing in a new (robotic) outfit. In this fantasy, AI becomes the placeholder for a human reflection on our own making – they become what has, for a long time, been called a demon.

Today’s demons wear wire

In his book In the Dust of this Planet (2010) Eugene Thacker introduces his understanding of demons. Humans of nearly all cultures have known demons as non-human and supernatural creatures. In many myths, demons play the role of the antagonist to human life and well-being – as seducer and dark power. The demon seems to fulfil an important cultural role of personifying human fears and hopes for divine intervention. In this sense, our projections into AI can be seen as the demons of our times.

In his book, Thacker explains: “The demon functions as a metaphor for the human – both in the sense of the human’s ability to comprehend itself, as well as the relations between one human being and another. The demon is not really a supernatural creature, but an anthropological motif through which we human beings project, externalize, and represent the darker side of the human – ourselves” (p. 26).

The outdatedness of humankind

To better understand our subconscious fears of artificial intelligence, Günther Anders’ idea of Promethean shame is helpful. Anders used the term to describe the human discomfort of realising our own limitations in comparison to the machines we have created.

Science has led to several disappointments for humankind (as Freud already described).

  1. the cosmological disappointment, which occurred with Copernicus and the realisation that the earth is not the centre of the universe,
  2. the biological, with Darwin, when humankind had to recognise that it was not simply made by God but a part of evolution,
  3. the psychological, which Freud saw in his method of psychoanalysis and the discovery of the subconscious, and
  4. the technological, which Anders added with his idea of Promethean shame: the disappointment of humankind realising its inferiority in comparison with its own creations.

This feeling confronts us with a reliance on, and even dependence on, technological objects that usually slips from our consciousness – and in extreme forms it even makes us wish we could function like a machine. Behind this shame lies frustration with humanness as a state of being that can never be fully understood or controlled, that is inevitably painful at times, powerless against many twists of fate, and that eventually ends in death.

In his book The Outdatedness of Humankind (1956), Anders argues that a gap is growing between the human ability to develop technologies that both create and destroy our world, and our capacity to comprehend this power and imagine its consequences. Anders wrote this book under the shadow of the nuclear threat. The prospect of humanity destroying itself with nuclear weapons is no less real today, but we now additionally face an equally urgent threat.

Today we need to face the fact that we are a species that destroys (or hopefully only comes close to destroying) the planetary basis of its own existence. Perhaps this can be seen as a fifth disappointment for humankind – at least in a western worldview in which religion as well as philosophy told us that homo sapiens is the superior being amongst all beings on earth. If any human ego remained after the disappointments described above, the realisation that our own choices and inventions are most likely killing our planet, and potentially most of us, must crush whatever pride is left. Besides living in the age of AI, we are also living with the prospect of an age of realistic existential crisis for humankind and nature.

Why modern myths matter

Why does it matter that myths are an essential part of the discourse on AI? Roland Barthes argued in his theory of myths that a myth is de-politicised speech. De-politicised here means that all human relations, in their structure and their power of making the world, are stripped from the narrative. He argues that by becoming a myth, things lose the memory of the way they were made.

And that is what happens when we mystify AI: we forget how and why it is created, and lose track of the invisible power relations that machine dependence and AI will extend. The myths around AI, as culturally interesting and important as they are, cloud our view of the actual dangers and decisions ahead.

Stripping away the mythical figure of the demon, we can see the myth of AI as a human reflection on our own dark impulses. What we are looking at is a realistic human fear: the fear of creating entities and structures that reproduce our own failures, weaknesses and wrongdoings.

More than anything, AI development is a race for power, since it will be used in the critical infrastructures of economy and governance. We need to look at the men (and few women) who hold this power and ask ourselves whether we trust them to make choices that benefit all and don’t exclude vulnerable groups from their equation. The rightful fear we have should concentrate on the human weaknesses that show in AI today, such as biased data sets and the unreflective use of AI in surveillance and the military.

As with any powerful technology, our question should be how the power to govern AI is distributed, who will benefit, and who will be overlooked and de-humanised by the loving grace of the machines we create.


A slightly different version of this article was first published in Goethe-Institut Australia’s magazine “Kultur”.


Theresa Züger, Dr.

Head of the AI & Society Lab and the Public Interest AI research group
