27 April 2021| doi: 10.5281/zenodo.4719522

Myth: AI will kill us all!

AI won’t kill us in the form of a time-travelling humanoid robot with an Austrian accent. But AI is used in various military applications, supporting new concepts of command and control and enabling autonomous targeting functions. This accelerates warfare and erodes human control, raising legal and ethical challenges.

Myth

AI will kill us all! Killer robots will strive for world domination! And invent time travel! While the sci-fi Terminator trope might be a bit over the top, AI is becoming an integral part of military decision-making all over the world. In that context, AI will help kill people.

Military applications of AI support novel operational concepts and enable autonomous targeting functions. This accelerates warfare and can improve decisions – but also erodes human control.

Watch the talk

Material

Presentation slides
CORE READINGS

Sauer, F. (2020). Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible. International Review of the Red Cross, 102(913), 235–259. Read here.

Dahlmann, A., & Dickow, M. (2019). Preventive regulation of autonomous weapon systems: Need for action by Germany at various levels (Vol. 3/2019). Stiftung Wissenschaft und Politik (SWP) – Deutsches Institut für Internationale Politik und Sicherheit. Read here.

ADDITIONAL READINGS
Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W.W. Norton. Read here.

IPRAW. (2017, November). International Panel on the Regulation of Autonomous Weapons. Read here.

Schörnig, N. (2019). Paul Scharre: Army of None: Autonomous Weapons and the Future of War, London: W.W. Norton 2018. SIRIUS – Zeitschrift Für Strategische Analysen, 3(1), 107–108. Read here.
UNICORN IN THE FIELD
The International Panel on the Regulation of Autonomous Weapons (iPRAW) is an international, interdisciplinary, and independent network of researchers working on the issue of lethal autonomous weapons systems (LAWS). It aims to support the current debate within the UN CCW with scientifically grounded information and recommendations.

About the author

Anja Dahlmann

Stiftung Wissenschaft und Politik – German Institute for International and Security Affairs

Anja Dahlmann holds a master’s degree in Political Science from the University of Göttingen. She works as a researcher at the Berlin-based think tank Stiftung Wissenschaft und Politik and is the head of the International Panel on the Regulation of Autonomous Weapons (iPRAW). Her research focuses on emerging technologies and disarmament, especially so-called lethal autonomous weapon systems.

@adahlma


Why, AI?

This post is part of our project “Why, AI?”. It is a learning space that helps you find out more about the myths and truths surrounding automation, algorithms, society and ourselves. It is continuously being filled with new contributions.

Explore all myths


This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

