At the end of February, Der Spiegel, one of Germany’s most widely read news magazines, published a cover story that paints a dramatic picture of the AI age. Much like the invention of the atomic bomb, artificial intelligence could alter the fundamental logic of war and deterrence, potentially triggering a global technological arms race. AI is portrayed as a potential “killing machine”: a technology that accelerates military decision-making, coordinates cyberattacks autonomously, and may one day surpass its creators. The underlying fear is that machines will cease to be mere tools and instead become an existential threat to humanity.

But does this picture stand up to scientific scrutiny? This article offers an alternative perspective. It asks what lies behind the metaphor of the “killing machine”, why the Spiegel piece conflates so many distinct AI systems, and which debates remain invisible as a result. One such debate: automated decision-making systems are already deployed far beyond the military, in credit lending and in the moderation of content on digital platforms.

AI systems are not a force of nature descending on our society from outside. They are developed, deployed and held accountable by people. So is this really about an autonomous, lethal super-AI? Or about how we, as a society, use AI responsibly?