169 HD – AI is neutral (1)
27 April 2021| doi: 10.5281/zenodo.4719522

Myth: AI will kill us all!

AI won’t kill us in the form of a time-travelling humanoid robot with an Austrian accent. But AI is used in various military applications, supporting new concepts of command and control and enabling autonomous targeting functions. This accelerates warfare and erodes human control, raising legal and ethical challenges.

Myth

AI will kill us all! Killer robots will strive for world domination! And invent time travel! While the sci-fi Terminator trope might be a bit over the top, AI is becoming an integral part of military decision-making all over the world. In that context, AI will help kill people.

Military applications of AI support novel operational concepts and enable autonomous targeting functions. This accelerates warfare and can improve decisions – but also erodes human control.

Watch the talk

Material

Presentation slides
CORE READINGS

Sauer, F. (2020). Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible. International Review of the Red Cross, 102(913), 235–259. Read here.

Dahlmann, A., & Dickow, M. (2019). Preventive regulation of autonomous weapon systems: Need for action by Germany at various levels (Vol. 3/2019). Stiftung Wissenschaft und Politik (SWP) – Deutsches Institut für Internationale Politik und Sicherheit. Read here.

ADDITIONAL READINGS
Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W.W. Norton. Read here.

iPRAW. (2017, November). International Panel on the Regulation of Autonomous Weapons. Read here.

Schörnig, N. (2019). Paul Scharre: Army of None: Autonomous Weapons and the Future of War, London: W.W. Norton 2018. SIRIUS – Zeitschrift Für Strategische Analysen, 3(1), 107–108. Read here.
UNICORN IN THE FIELD
The International Panel on the Regulation of Autonomous Weapons (iPRAW) is an international, interdisciplinary, and independent network of researchers working on the issue of lethal autonomous weapons systems (LAWS). It aims to support the ongoing debate within the UN CCW with scientifically grounded information and recommendations.

About the author

Anja Dahlmann

Stiftung Wissenschaft und Politik – German Institute for International and Security Affairs

Anja Dahlmann holds a master’s degree in Political Science from the University of Göttingen. She works as a researcher at the Berlin-based think tank Stiftung Wissenschaft und Politik and is the head of the International Panel on the Regulation of Autonomous Weapons (iPRAW). There, she focuses on emerging technologies and disarmament, especially so-called lethal autonomous weapon systems.

@adahlma


Why, AI?

This post is part of our project “Why, AI?”, a learning space that helps you find out more about the myths and truths surrounding automation, algorithms, society and ourselves. It is continuously being filled with new contributions.



This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.


You see railway tracks. The many different branches symbolise the decision-making possibilities of artificial intelligence and society. Some go up, down, to the right. Some also end in dead ends.

Artificial intelligence and society

Artificial intelligence operates in diverse societal contexts. What can we learn from its political, social and cultural facets?

