
Forget the “killing machine”: why AI is a question of responsibility, not apocalypse
At the end of February, Der Spiegel, one of Germany’s most widely read news magazines, published a cover story that paints a dramatic picture of the AI age. Much like the invention of the atomic bomb, artificial intelligence could alter the fundamental logic of war and deterrence, potentially triggering a global technological arms race. AI is portrayed as a potential “killing machine”. It is presented as a technology that accelerates military decision-making, coordinates cyberattacks independently, and could one day surpass its creators. The underlying fear is that machines will become an existential threat to humanity, rather than remaining mere tools. But does this stand up to scientific scrutiny? This article offers an alternative perspective. It asks what lies behind the metaphor of the “killing machine”, why the Spiegel piece conflates so many distinct AI systems, and which debates remain invisible as a result. Is this really about autonomous, lethal AI, or about how we use it responsibly as a society?
It is one of humanity’s oldest reflexes to fear its tools once they become powerful enough. In the nineteenth century, workers smashed mechanical looms out of fear of losing their livelihoods. A century later, the atomic bomb prompted fears of civilisational collapse. And today a new figure has emerged, as AI systems are deployed across many areas of society: “deadly intelligence”, as the cover of the tenth issue of Der Spiegel in 2026 proclaims (Book, Pfister & Rosenbach, 2026).

This cover, on closer inspection, carries many problematic assumptions. What we see is not a real photograph but an AI-generated image: a humanoid robot head, half human, half metal skull, one eye blue and almost human, the other a glowing red sensor point. This aesthetic is no coincidence. AI is simultaneously humanised and demonised. The robot has a face; it stares back. Only what looks like an agent can be feared like one.
AI as killing machine — a cultural pattern
The article accompanying the cover begins with a horror film and ends with a warning of an apocalypse. Along the way, the authors describe genuinely distinct systems: large language models such as Claude, exploited for cyberattacks; AI-guided drones over Ukraine; autonomous combat jets; and Chinese robots. This is, at first glance, legitimate. These systems exist; their deployment is real and worth scrutinising. The problem lies not in the individual examples, but in the narrative frame the article constructs around them. The authors write of the “killing machine” and deploy the following subtitle: “The rise of artificial intelligence is as dangerous as the invention of the atomic bomb. AI pioneers warn: humanity must rein in the machines while it still can” (Book, Pfister & Rosenbach, 2026).
This framing is no accident but part of a deeply rooted cultural pattern. The idea of an intelligence slipping beyond human control has long been a staple of films, political speeches, and our collective imagination — shaping public perceptions of real technologies in lasting ways (Bareis & Bächle, 2025). It is the computer that starts the war. It is the algorithm that takes control. It is the system that turns against its human creators. This figure is rhetorically powerful, even in journalistic contexts. But it is analytically misleading. Not because AI systems are harmless, but because artificial intelligence is not a singular actor detached from human agency, and because the central problem is not one existential threat but many creeping dangers. AI is simultaneously more dangerous and less dangerous than the killing machine narrative suggests.
There is no single AI
The article conflates heterogeneous technologies and AI systems with the narrative of artificial general intelligence: a hypothetical form of AI capable of understanding, learning and applying knowledge to any intellectual task at a level that matches or surpasses human intelligence. This kind of AI does not exist. Whether it will ever exist is highly contested amongst researchers. Yet the article implicitly conjures precisely this image when it describes AI as an existential threat. Dario Amodei, chief executive of Anthropic, is quoted as saying there is a “25 per cent probability” that AI will destroy humanity. AI pioneer Geoffrey Hinton puts the figure at “10 to 20 per cent” (Book, Pfister & Rosenbach, 2026). Neither figure is contextualised, and there is no scientific way to assess the validity of such estimates.
Behind this narrative lies an equally important story that remains invisible. Automated decision-making systems have long been part of everyday life, far beyond the military. These are programmes or procedures that use data and statistical models to support or structure decision-making processes — sometimes with human involvement, sometimes without. We already use them for credit lending, in the moderation of content on social media platforms, in recruitment processes and in the administration of public services (Crootof et al., 2023). Our financial markets, communications platforms, transport networks and energy systems all depend on the machine processing of information on a scale and at a speed far beyond the cognitive capacity of any individual human.
More than efficiency
These systems are not superintelligences. They are often narrow and domain-specific. Yet we deploy them in decisions that directly affect people’s lives. The reality rarely lies at either extreme of lethal military technology or miraculous super-AI. More often than not, humans and machines collaborate in unspectacular yet highly consequential decision-making loops.
Banks, for instance, use statistical models trained on hundreds of thousands of past credit decisions to identify patterns. Drawing on criteria such as income, employment stability and repayment history, these models assess the likelihood that a person will repay a loan. In doing so, they can render visible the relationships between many variables simultaneously — connections that human analysts would struggle to identify in individual cases. Yet the result is not an automated decision. It is a basis for one. Credit officers then examine the output more closely. For example, has a young self-employed person not yet built up a long credit history, even though their business model is sound? Is a customer facing a short-term financial shortfall that looks bad statistically, but which can be fully explained in context?
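To make this loop concrete, here is a minimal sketch of such a decision-support setup in Python. Everything in it is a hypothetical illustration: the synthetic data, the feature names, the thresholds and the routing rules stand in for whatever a real bank would use, and are not drawn from any actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical applicant features: income, employment stability, repayment history.
X = np.column_stack([
    rng.normal(40_000, 12_000, n),  # annual income
    rng.exponential(5, n),          # years in current employment
    rng.beta(8, 2, n),              # share of past instalments repaid on time
])

# Synthetic "ground truth": repayment depends on all three criteria at once.
logits = 0.00005 * X[:, 0] + 0.2 * X[:, 1] + 4.0 * X[:, 2] - 6.0
repaid = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, repaid)

def assess(applicant: list[float]) -> tuple[float, str]:
    """Return a repayment probability and a routing recommendation.

    The model output is a basis for a decision, not the decision itself:
    borderline cases are routed to a human credit officer for review.
    """
    p = model.predict_proba(np.atleast_2d(applicant))[0, 1]
    if p > 0.85:
        return p, "recommend approval"
    if p < 0.40:
        return p, "recommend rejection"
    return p, "refer to a credit officer"  # human judgement supplies the context

# A young self-employed applicant with a short but clean repayment history.
print(assess([32_000, 0.5, 0.9]))
```

The point of the middle band is exactly the one made above: the model renders statistical patterns visible, but a human decides the cases where statistics alone look worse than the context warrants.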
The human element remains in the decision-making process. This is not a weakness of the system but a deliberate design choice. Machines deliver consistency: they apply the same criteria to every case, regardless of the assessor’s state of mind or implicit assumptions. Humans bring context and judgement. But this, by itself, does not make for a compelling story.
The more successfully these systems are deployed in everyday life, the more invisible they become. We tend to notice them only when something goes wrong. At that moment, it is tempting to ascribe a form of autonomous agency to them. However, this is merely a projection.
Machines do not simply act
Perhaps the most important insight from science and technology studies is also one of the most uncomfortable: machines do not simply act. They are deployed. AI systems have no intentions or interests. What they do is structurally different from human thought and understanding; it is not simply a faster version of it. Humans fundamentally (and constitutionally) have freedom of action, whereas machines do not. AI systems learn from vast datasets, identifying statistical correlations and deriving predictions from them. They can recognise patterns across millions of cases simultaneously and simulate scenarios that exceed human cognitive capacity. The apparent intelligence we perceive in these systems is derived from human data, human decisions and human goals. A killer drone does not simply fly. It has a history, and that history is human.
Those who believe machines are autonomous decision-makers demonise them as “killing machines” (Book, Pfister & Rosenbach, 2026). Those who understand that they are tools embedded in institutional decision-making architectures will instead try to design those architectures responsibly. The difference between these two perspectives is the difference between fear and effective governance.
Autonomous weapons: A debate the article ignores
This gap is most apparent in relation to the topic that dominates the Spiegel article: drones, autonomous combat jets and AI-guided warfare. While the article vividly describes these systems, it never uses the term under which such systems have been negotiated in international politics for years: autonomous weapons systems (AWS), or lethal autonomous weapons systems (LAWS) (Bareis & Bächle, 2025).
This is not an academic technicality. A well-developed international governance process already exists under this term. Since 2014, states have been discussing binding rules under the UN Convention on Certain Conventional Weapons (CCW) for weapons systems capable of selecting and engaging targets without human intervention on a case-by-case basis. In December 2025, 156 states voted in favour of a UN resolution calling for the responsible use of such systems (UN General Assembly, 2025), a development that the article briefly mentions but does not contextualise. The International Committee of the Red Cross, which plays a central role in the development of international humanitarian law, has clearly rejected the acceptability of autonomous weapons systems (ICRC, 2024).
The question is not whether AI will be used in the military. It will be. In practice, a look at current conflicts suggests that the reality is far more sobering than the killing machine narrative implies. AI is not deployed as an autonomous killer, but primarily to accelerate decision-making processes, to analyse intelligence data and to plan operations (Zellinger, 2026). The important question is under what conditions human control remains structurally embedded. This is not science fiction but the subject of ongoing diplomatic negotiations. The discourse of the killing machine distracts from this conversation rather than contributing to it.
The creeping dangers
The transformation we are witnessing is not confined to battlefields. Nor does it consist, as the Spiegel authors warn, in AI “switching off the human factor” (Book, Pfister & Rosenbach, 2026). It is happening wherever AI systems are deployed today. In the workplace, for example, algorithms are used to draw up shift rosters, employing criteria that workers often never see. In everyday life, automated systems determine the news we see first, the products recommended to us, and the music we hear next.
Whether these systems make decisions fairly depends on the data with which they were trained. Technical systems that learn from historical decisions can reproduce social prejudices just as easily as they can reduce them (Mosene & Leifeld, 2025). The corollary is clear: the data underlying AI systems does not represent the world neutrally, but instead reflects existing social relations (Mosene, 2024). Furthermore, some systems develop patterns that even their developers cannot fully explain. We refer to this as a black box. The Spiegel article obscures these well-documented problems — algorithmic discrimination, opaque decision-making, and poorly curated training data — behind the fascination with the doomsday machine.
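A toy experiment illustrates the mechanism. The setup below is entirely synthetic and deliberately simplified: two groups of applicants are equally able to repay, but the historical approval labels are skewed against one group, and a model trained on those labels faithfully reproduces the skew.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Two groups with identical underlying ability to repay.
group = rng.integers(0, 2, n)      # group membership: 0 or 1
ability = rng.normal(0, 1, n)      # same distribution in both groups

# Historical decisions, however, were biased: group 1 faced a higher bar.
hist_approved = ability > np.where(group == 1, 0.5, 0.0)

# A model trained on these historical labels...
X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, hist_approved)

# ...reproduces the bias on new, equally able applicants.
pred = model.predict(X)
for g in (0, 1):
    print(f"approval rate, group {g}: {pred[group == g].mean():.2%}")
# Group 1 is approved less often, although ability is identically distributed.
```

Removing the group column does not necessarily help, since correlated features can act as proxies; this is why auditing outputs, not just inputs, matters.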
The architecture of responsibility
The decisive question is not whether we use AI, but how we use it. It is how we can design the collaboration between humans and machines in a responsible way. This is precisely what the research project Human in the Loop? at the Alexander von Humboldt Institute for Internet and Society (HIIG) in Berlin investigates.
A key finding is that many people are involved in every automated decision-making process: developers who train models, case workers who review outputs, managers who implement systems and users who interact with them. This is, first and foremost, an observation. The real question is whether these individuals can engage with these processes in a responsible, context-sensitive and reliable manner. Simply adding more people to decision-making loops is not the solution. What matters is ensuring that the right people are in the right positions and able to ask the right questions. This requires transparency about system limitations, clear lines of responsibility and institutional structures that not only formally provide for human judgement but actually make it possible. Case studies from credit lending (Züger et al., 2025) and content moderation on social media (Kettemann et al., 2025) bear this out.
Automated decisions are therefore not purely technical but genuinely sociotechnical processes. Their quality and legitimacy depend on how closely human judgement, institutional responsibility and technical decision logic are interwoven. Perhaps the most important property of a well-designed automated system is not its capacity to act. It is its capacity not to act. The best automated systems are humble systems. They are built to recognise when their data is too thin, when a case is too complex, when a human needs to be consulted — and to pause rather than proceed regardless.
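As a design pattern, such humility can be made explicit in code. The sketch below is a hypothetical illustration of the idea, not an implementation from the research project: the system abstains whenever its confidence is low or the case falls outside the data it knows, and refers the case to a human instead of proceeding regardless.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    outcome: Optional[str]  # None means: the system deliberately did not act
    reason: str

def humble_decide(
    case: dict,
    predict: Callable[[dict], tuple[str, float]],
    in_distribution: Callable[[dict], bool],
    confidence_floor: float = 0.8,
) -> Decision:
    """Decide only when the system has standing to decide; otherwise pause.

    `predict` returns (proposed_outcome, confidence) and `in_distribution`
    checks whether the case resembles the training data. Both are assumed
    hooks in this sketch, to be supplied by the concrete system.
    """
    if not in_distribution(case):
        return Decision(None, "case outside known data: refer to a human")
    outcome, confidence = predict(case)
    if confidence < confidence_floor:
        return Decision(None, f"confidence {confidence:.2f} too low: refer to a human")
    return Decision(outcome, f"automated decision at confidence {confidence:.2f}")

# Example with stand-in hooks: a low-confidence case is not decided automatically.
verdict = humble_decide(
    {"income": 32_000, "history_years": 0.5},
    predict=lambda case: ("approve", 0.62),
    in_distribution=lambda case: case["income"] > 0,
)
print(verdict)  # outcome=None: the case goes to a human instead
```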
Responsibility cannot be automated
The real danger does not lie in the existence of AI systems. It lies in our failure to recognise their limitations and in the poor design of their institutional integration. An algorithm that assesses creditworthiness does not discriminate by itself. But it can reproduce discrimination systematically if the training data is biased and the results are not critically examined. Similarly, a predictive policing system — software that uses data to predict where crimes are likely to occur — does not decide who the police should stop. Yet who comes under scrutiny depends entirely on its outputs. Responsibility shifts, but does not disappear.
The concept of the “killing machine” distracts from this reality. It suggests that the danger lies in the technology itself rather than in how we organise its use socially. Artificial intelligence is not a foreign entity. It is a product of, and part of, human society. It reflects our goals, our priorities and our values. The question is not whether machines will become more capable — they will. But more capable machines require better institutional architectures, not less human oversight. The future will not be decided by machines. It will be decided by the people who work alongside them.
References
Bareis, J., & Bächle, T. C. (2025). The realities of autonomous weapons: Hedging a hybrid space of fact and fiction. In T. C. Bächle & J. Bareis (Eds.), The realities of autonomous weapons (pp. 1–32). Bristol University Press. https://bristoluniversitypressdigital.com/edcollchap-oa/book/9781529237191/ch001.xml?tab_body=pdf
Book, S., Pfister, R., & Rosenbach, M. (2026, 3 March). Die Todesmaschine: Gefahren der künstlichen Intelligenz [The death machine: dangers of artificial intelligence]. Der Spiegel, 10/2026. https://www.spiegel.de/ausland/kuenstliche-intelligenz-regulierung-dringend-noetig-um-toedliche-risiken-zu-vermeiden-a-c0a9b62d-5872-40ef-a503-8d3213d21aac?context=issue
Crootof, R., Kaminski, M. E., & Price, W. N., II. (2023). Humans in the loop. Vanderbilt Law Review, 76(2), 429. https://scholarship.law.vanderbilt.edu/vlr/vol76/iss2/2
International Committee of the Red Cross. (2024). Building a responsible humanitarian approach: The ICRC’s policy on artificial intelligence. https://www.icrc.org/en/publication/building-responsible-humanitarian-approach-icrcs-policy-artificial-intelligence
Kettemann, M. C., Mosene, K., Stenzel, M., Mahlow, P., Pothmann, D., & Spitz, S. (2025). Code of conduct on human-machine decision-making in content moderation. Alexander von Humboldt Institute for Internet and Society. https://doi.org/10.5281/zenodo.17650987
Mosene, K., & Leifeld, J. (2025). Identifying bias, taking responsibility: Critical perspectives on AI and data quality in higher education. Digital Society Blog. https://doi.org/10.5281/zenodo.17805277
Mosene, K. (2024). Ein Schritt vor, zwei zurück: Warum Künstliche Intelligenz derzeit vor allem die Vergangenheit vorhersagt [One step forward, two steps back: why artificial intelligence currently predicts above all the past]. Digital Society Blog. https://www.hiig.de/publication/ein-schritt-vor-zwei-zurueck-warum-kuenstliche-intelligenz-derzeit-vor-allem-die-vergangenheit-vorhersagt/
United Nations General Assembly. (2025). Lethal autonomous weapons systems (Resolution A/RES/80/57). https://undocs.org/A/RES/80/57
Zellinger, P. (2026, 5 March). Schneller, tödlicher: Wie Künstliche Intelligenz die „Kill Chain” im Krieg verkürzt [Faster, deadlier: how artificial intelligence shortens the kill chain in war]. Der Standard. https://www.derstandard.at/story/3000000310967/wie-kuenstliche-intelligenz-die-kill-chain-im-krieg-verkuerzt?ref=nl
Züger, T., Mahlow, P., Pothmann, D., Mosene, K., Burmeister, F., Kettemann, M., & Schulz, W. (2025). Crediting humans: A systematic assessment of influencing factors for human-in-the-loop figurations in consumer credit lending decisions. FAccT ’25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 1281–1292. https://doi.org/10.1145/3715275.3732086
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
