02 July 2019 | doi: 10.5281/zenodo.3274594

The relevance of culture in the age of AI

Artificial intelligence (AI) is the term on everyone’s lips – and it evokes both great hopes and dystopian horror scenarios. But how can we explain the enthusiasm for this technology? The technological debate about AI seems to be rooted in a deep cultural layer in which human desires and fears are hidden. In this article, Theresa Züger takes a closer look at the cultural dimension of artificial intelligence and shows that we can even learn something about ourselves from the AI debate.

Artificial Intelligence in culture

Artificial Intelligence is a cultural reference more than it is a technological one. This is not to question the existence of trained machines labelled as artificial intelligence, nor the transformative impact these developments will have on current societies. Artificial Intelligence as a cultural myth refers to the narrative of machines overtaking the human world and leading it to a higher form of existence. The term Artificial Intelligence signifies a subconscious collective meaning – a myth in the extended definition of Roland Barthes (Mythologies, 1957).


The myth of AI

In Barthes’ sense, a myth is more than an ancient story known to many. By his definition, potentially any semiotic process can gain a subconscious collective meaning. Artificial intelligence as a myth stands for a human narrative that is both deeply feared and deeply longed for. 

In an increasingly secular society, where religious belief in the afterlife has become rare, powerful personalities like Ray Kurzweil (Director of Engineering at Google), speaking with the authority of a scientist, represent the belief that a singularity will inevitably emerge from AI and transform human life into a higher mode of existence. In this fantasy of redemption, the human becomes the eternal creator of a superior being, giving the world a superhuman machine.

On the side of human fears, others, like Oxford professor Nick Bostrom, predict that AI will grow out of control. He considers an intelligence explosion of AI very likely. In this scenario, humanity faces a machine dictatorship.

As new as Nick Bostrom’s and Ray Kurzweil’s visions of a non-human entity destroying or saving human life may seem, they represent a very old human fear and longing in a new (robotic) outfit. In this fantasy, AI becomes the placeholder for a human reflection on our own making – it becomes what has, for a long time, been called a demon.

Today’s demons wear wire

In his book In the Dust of this Planet (2010) Eugene Thacker introduces his understanding of demons. Humans of nearly all cultures have known demons as non-human and supernatural creatures. In many myths, demons play the role of the antagonist to human life and well-being – as seducer and dark power. The demon seems to fulfil an important cultural role of personifying human fears and hopes for divine intervention. In this sense, our projections into AI can be seen as the demons of our times.

In his book, Thacker explains: “The demon functions as a metaphor for the human – both in the sense of the human’s ability to comprehend itself, as well as the relations between one human being and another. The demon is not really a supernatural creature, but an anthropological motif through which we human beings project, externalize, and represent the darker side of the human to ourselves” (p. 26).

The outdatedness of humankind

To better understand our subconscious fears of artificial intelligence, Günther Anders’ idea of Promethean shame is helpful. Anders used the term to describe the human discomfort of realising our own limitations in comparison with the machines we have created.

Science has led to several disappointments for humankind (as Freud already described):

  1. the cosmological disappointment, which occurred with Copernicus and the realisation that the earth is not the centre of the universe,
  2. the biological, with Darwin, when humankind had to recognise that it was not simply made by God but a part of evolution,
  3. the psychological, which Freud saw in his method of psychoanalysis and the discovery of the subconscious, and
  4. the technological, which Anders added with his idea of Promethean shame: the disappointment of humankind realising its inferiority in comparison with its own creations.

This feeling confronts us with our reliance on, and even dependence upon, technological objects – a dependence that usually slips from our consciousness – and in its extreme form it even makes us wish we could function like a machine. Behind this shame lies the frustration with humanness as a state of being that can never be fully understood or controlled, that is inevitably painful at times, powerless against many twists of fate and that eventually ends in death.

In his book The Outdatedness of Humankind (1956), Anders argues that a gap is growing between the human ability to develop technologies that both create and destroy our world and our capacity to comprehend this power and imagine its consequences. Anders wrote the book under the shadow of the nuclear threat. Even though the prospect of humanity destroying itself with nuclear weapons is no less real today, we are additionally, and just as urgently, facing a different threat.

Today we need to face the fact that we are a species that destroys (or, hopefully, only comes close to destroying) the planetary basis of its own existence. Maybe that can be seen as a fifth disappointment for humankind – at least in a Western worldview in which religion as well as philosophy told us that homo sapiens is the superior being amongst all beings on earth. If any human ego still remained after the disappointments described above, the realisation that our own choices and inventions are most likely killing our planet, and potentially most of us, must crush whatever human pride is left. Besides living in the age of AI, we are also living with the prospect of an age of realistic existential crisis for humankind and nature.

Why modern myths matter

Why does it matter that myths are an essential part of the discourse on AI? Roland Barthes argued in his theory of myths that a myth is de-politicised speech. De-politicised here means that all human relations, in their structure and their power of making the world, are stripped from the narrative. He argues that, by becoming a myth, things lose the memory of how they were made.

And that is what happens when we mystify AI: we forget how and for what purpose it is created, and we lose track of the invisible power relations that machine dependence and AI will extend. The myths around AI – as culturally interesting and important as they are – cloud our view of the actual dangers and decisions ahead.

Stripping away the mythical creature of the demon, we can see the myth of AI as a human reflection on our own dark impulses. We are looking at a realistic human fear: the fear of creating entities and structures that implement our own failures, weaknesses and wrongdoings.

More than anything, AI development is a race for power, since it will be used in the critical infrastructures of economy and governance. We need to look at the men (and few women) who hold this power and ask ourselves whether we trust them to make choices that benefit all and do not exclude vulnerable groups from their equation. Our rightful fear should concentrate on the human weaknesses that already show in AI today, such as biased data sets and the unreflective use of AI in surveillance and the military.

As with any powerful technology, our question should be how the power to govern AI is distributed, who will benefit, and who will be overlooked and de-humanised by the loving grace of the machines we create.


A slightly different version of this article was first published in Goethe-Institut Australia’s magazine “Kultur”.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Theresa Züger, Dr.

Research Group Lead: Public Interest AI | AI & Society Lab
