Forget the “killing machine”: why AI is a question of responsibility, not apocalypse

Author: Kettemann, M. C., & Efferenn, F.
Published in: Digital society blog
Year: 2026
Type: Other publications
DOI: https://doi.org/10.5281/zenodo.19094245

At the end of February, Der Spiegel, one of Germany’s most widely read news magazines, published a cover story that paints a dramatic picture of the AI age. Much like the invention of the atomic bomb, artificial intelligence could alter the fundamental logic of war and deterrence, potentially triggering a global technological arms race. AI is portrayed as a potential “killing machine”: a technology that accelerates military decision-making, coordinates cyberattacks independently, and could one day surpass its creators. The underlying fear is that machines will become an existential threat to humanity rather than remaining mere tools. But does this stand up to scientific scrutiny?

This article offers an alternative perspective. It asks what lies behind the metaphor of the “killing machine”, why the Spiegel piece conflates so many distinct AI systems, and which debates remain invisible as a result. Automated decision-making systems, for instance, are already deployed far beyond the military: in credit lending and in the moderation of content on digital platforms. AI systems are not a force of nature descending on our society from outside; they are developed, deployed and held accountable by people. Is this really about autonomous, lethal super-AI? Or about how we as a society use AI responsibly?


Connected HIIG researchers

Matthias C. Kettemann, Prof. Dr. LL.M. (Harvard)

Head of Research: New Technologies and Future of Law


  • Open Access
