169 HD – AI is neutral (1)
27 April 2021| doi: 10.5281/zenodo.4719522

Myth: AI will kill us all!

AI won’t kill us in the form of a time-travelling humanoid robot with an Austrian accent. But AI is used in various military applications, supporting new concepts of command and control and enabling autonomous targeting functions. This accelerates warfare and erodes human control, raising legal and ethical challenges.

Myth

AI will kill us all! Killer robots will strive for world domination! And invent time travel! While the sci-fi Terminator trope might be a bit over the top, AI is becoming an integral part of military decision-making all over the world. In that context, AI will help kill people.

Military applications of AI support novel operational concepts and enable autonomous targeting functions. This accelerates warfare and can improve decisions – but it also erodes human control.

Watch the talk

Material

Presentation slides
CORE READINGS

Sauer, F. (2020). Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible. International Review of the Red Cross, 102(913), 235–259. Read here.

Dahlmann, A., & Dickow, M. (2019). Preventive regulation of autonomous weapon systems: Need for action by Germany at various levels (Vol. 3/2019). Stiftung Wissenschaft und Politik (SWP) – German Institute for International and Security Affairs. Read here.

ADDITIONAL READINGS
Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W. W. Norton. Read here.

iPRAW. (2017, November). International Panel on the Regulation of Autonomous Weapons. Read here.

Schörnig, N. (2019). Paul Scharre: Army of None: Autonomous Weapons and the Future of War, London: W. W. Norton 2018 [Review]. SIRIUS – Zeitschrift für Strategische Analysen, 3(1), 107–108. Read here.
UNICORN IN THE FIELD
The International Panel on the Regulation of Autonomous Weapons (iPRAW) is an international, interdisciplinary, and independent network of researchers working on the issue of lethal autonomous weapons systems (LAWS). It aims to support the current debate within the UN CCW with scientifically grounded information and recommendations.

About the author

Anja Dahlmann

Stiftung Wissenschaft und Politik – German Institute for International and Security Affairs

Anja Dahlmann holds a master’s degree in Political Science from the University of Göttingen. She works as a researcher at the Berlin-based think tank Stiftung Wissenschaft und Politik and is the head of the International Panel on the Regulation of Autonomous Weapons (iPRAW). In this role, she focuses on emerging technologies and disarmament, especially on so-called lethal autonomous weapon systems.

@adahlma


Why, AI?

This post is part of our project “Why, AI?”. It is a learning space that helps you find out more about the myths and truths surrounding automation, algorithms, society and ourselves. It is continuously being filled with new contributions.

Explore all myths


This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.


