{"id":113909,"date":"2026-03-18T18:10:53","date_gmt":"2026-03-18T17:10:53","guid":{"rendered":"https:\/\/www.hiig.de\/?p=113909"},"modified":"2026-03-23T06:58:12","modified_gmt":"2026-03-23T05:58:12","slug":"blog-forget-the-killing-machine","status":"publish","type":"post","link":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/","title":{"rendered":"Forget the &#8220;killing machine&#8221;: why AI is a question of responsibility, not apocalypse"},"content":{"rendered":"\n<p><strong>At the end of February, <em>Der Spiegel<\/em>, one of Germany&#8217;s most widely read news magazines, published a cover story that paints a dramatic picture of the AI age. Much like the invention of the atomic bomb, artificial intelligence could alter the fundamental logic of war and deterrence, potentially triggering a global technological arms race. AI is portrayed as a potential \u201ckilling machine\u201d. It is presented as a technology that accelerates military decision-making, coordinates cyberattacks independently, and could one day surpass its creators. The underlying fear is that machines will become an existential threat to humanity, rather than remaining mere tools. But does this stand up to scientific scrutiny? This article offers an alternative perspective. It asks what lies behind the metaphor of the \u201ckilling machine\u201d, why the <em>Spiegel<\/em> piece conflates so many distinct AI systems, and which debates remain invisible as a result. <strong>For instance, that automated decision-making systems are already deployed far beyond the military: in credit lending and in the moderation of content on digital platforms. AI systems are not a force of nature descending on our society from outside. They are developed, deployed and held accountable by people.<\/strong> Is this really about autonomous, lethal super-AI? 
Or about how we as a society use AI responsibly?<\/strong><\/p>\n\n\n\n<p>It is one of humanity&#8217;s oldest reflexes to fear its tools once they become powerful enough. In the nineteenth century, workers smashed mechanical looms out of fear of losing their livelihoods. A century later, the atomic bomb prompted fears of civilisational collapse. And today a new figure has emerged, as AI systems are deployed across many areas of society: &#8220;deadly intelligence&#8221;, as the cover of the tenth issue of <em>Der Spiegel<\/em> in 2026 proclaims (<a href=\"https:\/\/www.spiegel.de\/ausland\/kuenstliche-intelligenz-regulierung-dringend-noetig-um-toedliche-risiken-zu-vermeiden-a-c0a9b62d-5872-40ef-a503-8d3213d21aac?context=issue\" target=\"_blank\" rel=\"noreferrer noopener\">Book, Pfister &amp; Rosenbach, 2026<\/a>).<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"911\" height=\"1200\" src=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-911x1200.png\" alt=\"\" class=\"wp-image-113910\" style=\"aspect-ratio:0.7591719151580196;width:378px;height:auto\" srcset=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-911x1200.png 911w, https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-607x800.png 607w, https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-46x60.png 46w, https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-768x1012.png 768w, https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-137x180.png 137w, https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-439x579.png 439w, https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-38x50.png 38w, https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-273x360.png 273w, https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image-1165x1536.png 1165w, 
https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/image.png 1214w\" sizes=\"auto, (max-width: 911px) 100vw, 911px\" \/><figcaption class=\"wp-element-caption\">Image source: <a href=\"https:\/\/www.spiegel.de\/spiegel\/print\/index-2026-10.html\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.spiegel.de\/spiegel\/print\/index-2026-10.html<\/a><\/figcaption><\/figure>\n<\/div>\n\n\n<p>This cover, on closer inspection, rests on a number of problematic assumptions. What we see is not a real photograph but an AI-generated image: a humanoid robot head, half human, half metal skull, one eye blue and almost human, the other a glowing red sensor point. This is no aesthetic coincidence. AI is simultaneously humanised and demonised. The robot has a face; it stares back. Only what looks like an agent can be feared like one.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>AI as killing machine \u2014 a cultural pattern<\/strong><\/h2>\n\n\n\n<p>The article accompanying the cover begins with a horror film and ends with a warning of an apocalypse. Along the way, the authors describe genuinely distinct systems: large language models such as <em>Claude<\/em>, exploited for cyberattacks; AI-guided drones over Ukraine; autonomous combat jets; and Chinese robots. This is, at first glance, legitimate. These systems exist; their deployment is real and worth scrutinising. The problem lies not in the individual examples, but in the narrative frame the article constructs around them. The authors write of the &#8220;killing machine&#8221; and run the piece under the following subtitle: &#8220;The rise of artificial intelligence is as dangerous as the invention of the atomic bomb. 
AI pioneers warn: humanity must rein in the machines while it still can&#8221; (<a href=\"https:\/\/www.spiegel.de\/ausland\/kuenstliche-intelligenz-regulierung-dringend-noetig-um-toedliche-risiken-zu-vermeiden-a-c0a9b62d-5872-40ef-a503-8d3213d21aac?context=issue\" target=\"_blank\" rel=\"noreferrer noopener\">Book, Pfister &amp; Rosenbach, 2026<\/a>).<\/p>\n\n\n\n<p>This aesthetic is no accident but part of a deeply rooted cultural pattern. The idea of an intelligence slipping beyond human control has long since become a fixture in films, political speeches, and our collective imagination \u2014 shaping public perceptions of real technologies in lasting ways (<a href=\"https:\/\/bristoluniversitypressdigital.com\/edcollchap-oa\/book\/9781529237191\/ch001.xml?tab_body=pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Bareis &amp; B\u00e4chle, 2025<\/a>). It is <em>the computer<\/em> that starts the war. It is <em>the algorithm<\/em> that takes control. It is <em>the system<\/em> that turns against its human creators. This figure is rhetorically powerful, even in journalistic contexts. But it is analytically misleading. Not because AI systems are harmless, but because artificial intelligence is not a singular actor detached from human agency, and because the central problem is not one existential threat but many creeping dangers. AI is simultaneously more dangerous and less dangerous than the killing machine narrative suggests.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>There is no single AI<\/strong><\/h2>\n\n\n\n<p>The article conflates heterogeneous technologies and AI systems with the narrative of <em>artificial general intelligence<\/em>. This hypothetical form of AI would be capable of understanding, learning and applying knowledge to any intellectual task at a level that matches or surpasses human intelligence. This kind of AI does not exist, and whether it ever will is highly contested among researchers. 
Yet the article implicitly conjures precisely this image when it describes AI as an existential threat. Dario Amodei, chief executive of <em>Anthropic<\/em>, is quoted as saying there is a &#8220;25 per cent probability&#8221; that AI will destroy humanity. AI pioneer Geoffrey Hinton puts the figure at &#8220;10 to 20 per cent&#8221; (<a href=\"https:\/\/www.spiegel.de\/ausland\/kuenstliche-intelligenz-regulierung-dringend-noetig-um-toedliche-risiken-zu-vermeiden-a-c0a9b62d-5872-40ef-a503-8d3213d21aac?context=issue\" target=\"_blank\" rel=\"noreferrer noopener\">Book, Pfister &amp; Rosenbach, 2026<\/a>). Neither figure is contextualised, and there is no scientific way to assess the validity of such estimates.<\/p>\n\n\n\n<p>Behind this narrative lies an equally important story that remains invisible. Automated decision-making systems have long been part of everyday life, far beyond the military. These are programmes or procedures that use data and statistical models to support or structure decision-making processes \u2014 sometimes with human involvement, sometimes without. We already use them for credit lending, in the moderation of content on social media platforms, in recruitment processes and in the administration of public services (<a href=\"https:\/\/scholarship.law.vanderbilt.edu\/vlr\/vol76\/iss2\/2\/\" target=\"_blank\" rel=\"noreferrer noopener\">Crootof et al., 2023<\/a>). Our financial markets, communications platforms, transport networks and energy systems all depend on the machine processing of information on a scale and at a speed far beyond the cognitive capacity of any individual human.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>More than efficiency<\/strong><\/h2>\n\n\n\n<p>These systems are not superintelligences. They are often narrow and domain-specific. Yet we deploy them in decisions that directly affect people&#8217;s lives. The reality rarely lies at either extreme of lethal military technology or miraculous super-AI. 
More often than not, humans and machines collaborate in unspectacular yet highly consequential decision-making loops.<\/p>\n\n\n\n<p>Banks, for instance, use statistical models trained on hundreds of thousands of past credit decisions to identify patterns. Drawing on criteria such as income, employment stability and repayment history, these models assess the likelihood that a person will repay a loan. In doing so, they can render visible the relationships between many variables simultaneously \u2014 connections that human analysts would struggle to identify in individual cases. Yet the result is not an automated decision. It is a basis for one. Credit officers then examine the output more closely. For example, has a young self-employed person not yet built up a long credit history, even though their business model is sound? Is a customer facing a short-term financial shortfall that looks bad statistically, but which can be fully explained in context?<\/p>\n\n\n\n<p>The human element remains in the decision-making process. This is not a weakness of the system but a deliberate design choice. Machines deliver consistency: they apply the same criteria to every case, regardless of the assessor&#8217;s state of mind or implicit assumptions. Humans bring context and judgement. But this, by itself, does not make for a compelling story.<\/p>\n\n\n\n<p>The more successfully these systems are deployed in everyday life, the more invisible they become. We tend to notice them only when something goes wrong. At that moment, it is tempting to ascribe a form of autonomous agency to them. However, this is merely a projection.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Machines do not simply act<\/strong><\/h2>\n\n\n\n<p>Perhaps the most important insight from science and technology studies is also one of the most uncomfortable: machines do not simply act. They are deployed. AI systems have no intentions or interests. 
What they do is structurally different from human thought and understanding; it is not simply a faster version of it. Humans fundamentally (and constitutionally) have freedom of action, whereas machines do not. AI systems learn from vast datasets, identifying statistical correlations and deriving predictions from them. They can recognise patterns across millions of cases simultaneously and simulate scenarios that exceed human cognitive capacity. The apparent intelligence we perceive in these systems is derived from human data, human decisions and human goals. A killer drone does not simply fly. It has a history, and that history is human.<\/p>\n\n\n\n<p>Those who believe machines are autonomous decision-makers demonise them as &#8220;killing machines&#8221; (<a href=\"https:\/\/www.spiegel.de\/ausland\/kuenstliche-intelligenz-regulierung-dringend-noetig-um-toedliche-risiken-zu-vermeiden-a-c0a9b62d-5872-40ef-a503-8d3213d21aac?context=issue\" target=\"_blank\" rel=\"noreferrer noopener\">Book, Pfister &amp; Rosenbach, 2026<\/a>). Those who understand that they are tools embedded in institutional decision-making architectures will instead try to design those architectures responsibly. The difference between these two perspectives is the difference between fear and effective governance.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Autonomous weapons: A debate the article ignores<\/strong><\/h2>\n\n\n\n<p>This gap is most apparent in relation to the topic that dominates the <em>Spiegel<\/em> article: drones, autonomous combat jets and AI-guided warfare. While the article vividly describes these systems, it never uses the term under which such systems have been negotiated in international politics for years. 
These systems are referred to as autonomous weapon systems (AWS) or lethal autonomous weapons systems (LAWS) (<a href=\"https:\/\/bristoluniversitypressdigital.com\/edcollchap-oa\/book\/9781529237191\/ch001.xml?tab_body=pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Bareis &amp; B\u00e4chle, 2025<\/a>).<\/p>\n\n\n\n<p>This is not an academic technicality. A well-developed international governance process already exists under this term. Since 2014, states have been discussing binding rules under the UN Convention on Certain Conventional Weapons (CCW) for weapons systems capable of selecting and engaging targets without human intervention. In December 2025, 156 states voted in favour of a UN resolution calling for the responsible use of such systems (<a href=\"https:\/\/docs.un.org\/en\/A\/RES\/80\/57\" target=\"_blank\" rel=\"noreferrer noopener\">UN General Assembly, 2025<\/a>), a development that the article briefly mentions but does not contextualise. The International Committee of the Red Cross, which plays a central role in developing international humanitarian law, has clearly rejected autonomous weapons systems as unacceptable (<a href=\"https:\/\/www.icrc.org\/en\/publication\/building-responsible-humanitarian-approach-icrcs-policy-artificial-intelligence\" target=\"_blank\" rel=\"noreferrer noopener\">ICRC, 2024<\/a>).<\/p>\n\n\n\n<p>The question is not whether AI will be used in the military. It will be. In practice, a look at current conflicts suggests that the reality is far more sobering than the killing machine narrative implies. AI is not deployed as an autonomous killer, but primarily to accelerate decision-making processes, to analyse intelligence data and to plan operations (<a href=\"https:\/\/www.derstandard.at\/story\/3000000310967\/wie-kuenstliche-intelligenz-die-kill-chain-im-krieg-verkuerzt?ref=nl\" target=\"_blank\" rel=\"noreferrer noopener\">Zellinger, 2026<\/a>). 
The important question is under which conditions human control remains structurally embedded. This is not science fiction but the subject of ongoing diplomatic negotiations. The discourse of the killing machine distracts from this conversation rather than contributing to it.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The creeping dangers<\/strong><\/h2>\n\n\n\n<p>The transformation we are witnessing is not confined to battlefields alone. Nor does it consist, as the <em>Spiegel<\/em> authors warn, in AI &#8220;switching off the human factor&#8221; (<a href=\"https:\/\/www.spiegel.de\/ausland\/kuenstliche-intelligenz-regulierung-dringend-noetig-um-toedliche-risiken-zu-vermeiden-a-c0a9b62d-5872-40ef-a503-8d3213d21aac?context=issue\" target=\"_blank\" rel=\"noreferrer noopener\">Book, Pfister &amp; Rosenbach, 2026<\/a>). It is happening wherever AI systems are deployed today. In the workplace, for example, algorithms are used to draw up shift rosters, employing criteria that workers often never see. In everyday life, automated systems determine the news we see first, the products recommended to us, and the music we hear next.<\/p>\n\n\n\n<p>Whether these systems make decisions fairly depends on the data with which they were trained. Technical systems that learn from historical decisions can reproduce social prejudices just as easily as they can reduce them (<a href=\"https:\/\/www.hiig.de\/en\/identifying-bias-taking-responsibility\/\" target=\"_blank\" rel=\"noreferrer noopener\">Mosene &amp; Leifeld, 2025<\/a>). The corollary is clear: the data underlying AI systems does not represent the world neutrally, but instead reflects existing social relations (<a href=\"https:\/\/www.hiig.de\/warum-ki-derzeit-vor-allem-vergangenheit-vorhersagt\/\" target=\"_blank\" rel=\"noreferrer noopener\">Mosene, 2024<\/a>). Furthermore, some systems develop patterns that even their developers cannot fully explain. We refer to this as a <em>black box<\/em>. 
The <em>Spiegel <\/em>article obscures these well-documented problems \u2014 algorithmic discrimination, opaque decision-making, and poorly curated training data \u2014 behind the fascination with the doomsday machine.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The architecture of responsibility<\/strong><\/h2>\n\n\n\n<p>The decisive question is not whether we use AI, but how we use it. It is how we can design the collaboration between humans and machines in a responsible way. This is precisely what the research project <em><a href=\"https:\/\/www.hiig.de\/project\/human-in-the-loop\/\" target=\"_blank\" rel=\"noreferrer noopener\">Human in the Loop?<\/a><\/em> at the Alexander von Humboldt Institute for Internet and Society (HIIG) in Berlin investigates.<\/p>\n\n\n\n<p>A key finding is that many people are involved in every automated decision-making process: developers who train models, case workers who review outputs, managers who implement systems and users who interact with them. This is, first and foremost, an observation. The real question is whether these individuals can engage with these processes in a responsible, context-sensitive and reliable manner. Simply adding more people to decision-making loops is not the solution. What matters is ensuring that the right people are in the right positions and able to ask the right questions. This requires transparency about system limitations, clear lines of responsibility and institutional structures that not only formally provide for human judgement but actually make it possible. 
Case studies from credit lending (<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715275.3732086\" target=\"_blank\" rel=\"noreferrer noopener\">Z\u00fcger et al., 2025<\/a>) and content moderation on social media (<a href=\"https:\/\/graphite.page\/coc-strengthening-trust\/\" target=\"_blank\" rel=\"noreferrer noopener\">Kettemann et al., 2025<\/a>) bear this out.<\/p>\n\n\n\n<p>Automated decisions are therefore not purely technical but genuinely sociotechnical processes. Their quality and legitimacy depend on how closely human judgement, institutional responsibility and technical decision logic are interwoven. Perhaps the most important property of a well-designed automated system is not its capacity to act. It is its capacity not to act. The best automated systems are humble systems. They are built to recognise when their data is too thin, when a case is too complex, when a human needs to be consulted \u2014 and to pause rather than proceed regardless.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Responsibility cannot be automated<\/strong><\/h2>\n\n\n\n<p>The real danger does not lie in the existence of AI systems. It lies in our failure to recognise their limitations and in the poor design of their institutional integration. An algorithm that assesses creditworthiness does not discriminate by itself. But it can reproduce discrimination systematically if the training data is biased and the results are not critically examined. Similarly, a predictive policing system \u2014 software that uses data to predict where crimes are likely to occur \u2014 does not decide who the police should stop. Yet, who comes under scrutiny depends entirely on its outputs. Responsibility shifts, but does not disappear.<\/p>\n\n\n\n<p>The concept of the \u201ckilling machine\u201d distracts from this reality. It suggests that the danger lies in the technology itself rather than in how we organise its use socially. Artificial intelligence is not a foreign entity. 
It is a product of, and part of, human society. It reflects our goals, our priorities and our values. The question is not whether machines will become more capable \u2014 they will. But more capable machines require better institutional architectures, not less human oversight. The future will not be decided by machines. It will be decided by the people who work alongside them.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>References<\/strong><\/h2>\n\n\n\n<p>Bareis, J., &amp; B\u00e4chle, T. C. (2025). The realities of autonomous weapons: Hedging a hybrid space of fact and fiction. In T. C. B\u00e4chle &amp; J. Bareis (Eds.), <em>The realities of autonomous weapons<\/em> (pp. 1\u201332). Bristol University Press. <a href=\"https:\/\/bristoluniversitypressdigital.com\/edcollchap-oa\/book\/9781529237191\/ch001.xml?tab_body=pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/bristoluniversitypressdigital.com\/edcollchap-oa\/book\/9781529237191\/ch001.xml?tab_body=pdf<\/a><\/p>\n\n\n\n<p>Book, S., Pfister, R., &amp; Rosenbach, M. (2026, 3 March). Die Todesmaschine: Gefahren der k\u00fcnstlichen Intelligenz. <em>Der Spiegel<\/em>, <em>10\/2026<\/em>. <a href=\"https:\/\/www.spiegel.de\/ausland\/kuenstliche-intelligenz-regulierung-dringend-noetig-um-toedliche-risiken-zu-vermeiden-a-c0a9b62d-5872-40ef-a503-8d3213d21aac?context=issue\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.spiegel.de\/ausland\/kuenstliche-intelligenz-regulierung-dringend-noetig-um-toedliche-risiken-zu-vermeiden-a-c0a9b62d-5872-40ef-a503-8d3213d21aac?context=issue<\/a><\/p>\n\n\n\n<p>Crootof, R., Kaminski, M. E., &amp; Price, W. N., II. (2023). Humans in the loop. <em>Vanderbilt Law Review<\/em>, <em>76<\/em>(2), 429. <a href=\"https:\/\/scholarship.law.vanderbilt.edu\/vlr\/vol76\/iss2\/2\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/scholarship.law.vanderbilt.edu\/vlr\/vol76\/iss2\/2<\/a><\/p>\n\n\n\n<p>International Committee of the Red Cross. (2024). 
<em>Building a responsible humanitarian approach: The ICRC&#8217;s policy on artificial intelligence<\/em>. <a href=\"https:\/\/www.icrc.org\/en\/publication\/building-responsible-humanitarian-approach-icrcs-policy-artificial-intelligence\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.icrc.org\/en\/publication\/building-responsible-humanitarian-approach-icrcs-policy-artificial-intelligence<\/a><\/p>\n\n\n\n<p>Kettemann, M. C., Mosene, K., Stenzel, M., Mahlow, P., Pothmann, D., &amp; Spitz, S. (2025). <em>Code of conduct on human-machine decision-making in content moderation<\/em>. Alexander von Humboldt Institute for Internet and Society. <a href=\"https:\/\/doi.org\/10.5281\/zenodo.17650987\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.5281\/zenodo.17650987<\/a><\/p>\n\n\n\n<p>Mosene, K., &amp; Leifeld, J. (2025). Identifying bias, taking responsibility: Critical perspectives on AI and data quality in higher education. <em>Digital Society Blog<\/em>. <a href=\"https:\/\/doi.org\/10.5281\/zenodo.17805277\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.5281\/zenodo.17805277<\/a><\/p>\n\n\n\n<p>Mosene, K. (2024). Ein Schritt vor, zwei zur\u00fcck: Warum K\u00fcnstliche Intelligenz derzeit vor allem die Vergangenheit vorhersagt. <em>Digital Society Blog<\/em>. <a href=\"https:\/\/www.hiig.de\/publication\/ein-schritt-vor-zwei-zurueck-warum-kuenstliche-intelligenz-derzeit-vor-allem-die-vergangenheit-vorhersagt\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.hiig.de\/publication\/ein-schritt-vor-zwei-zurueck-warum-kuenstliche-intelligenz-derzeit-vor-allem-die-vergangenheit-vorhersagt\/<\/a><\/p>\n\n\n\n<p>United Nations General Assembly. (2025). <em>Lethal autonomous weapons systems<\/em> (Resolution A\/RES\/80\/57). <a href=\"https:\/\/undocs.org\/A\/RES\/80\/57\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/undocs.org\/A\/RES\/80\/57<\/a><\/p>\n\n\n\n<p>Zellinger, P. 
(2026, 5 March). Schneller, t\u00f6dlicher: Wie K\u00fcnstliche Intelligenz die \u201eKill Chain&#8221; im Krieg verk\u00fcrzt [Faster, deadlier: how artificial intelligence shortens the kill chain in war]. <em>Der Standard<\/em>. <a href=\"https:\/\/www.derstandard.at\/story\/3000000310967\/wie-kuenstliche-intelligenz-die-kill-chain-im-krieg-verkuerzt?ref=nl\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.derstandard.at\/story\/3000000310967\/wie-kuenstliche-intelligenz-die-kill-chain-im-krieg-verkuerzt?ref=nl<\/a><\/p>\n\n\n\n<p>Z\u00fcger, T., Mahlow, P., Pothmann, D., Mosene, K., Burmeister, F., Kettemann, M., &amp; Schulz, W. (2025). Crediting humans: A systematic assessment of influencing factors for human-in-the-loop figurations in consumer credit lending decisions. <em>FAccT &#8217;25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency<\/em>, 1281\u20131292. <a href=\"https:\/\/doi.org\/10.1145\/3715275.3732086\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1145\/3715275.3732086<\/a><\/p>","protected":false},"excerpt":{"rendered":"<p>The authors challenge the metaphor of artificial intelligence as a &#8220;killing machine&#8221; that will one day surpass its human creators.<\/p>\n","protected":false},"author":313,"featured_media":113952,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1289,1577],"tags":[],"class_list":["post-113909","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-digital-so"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Forget the &quot;killing machine&quot; &#8211; Digital Society Blog<\/title>\n<meta name=\"description\" content=\"The authors challenge the widespread metaphor of artificial intelligence as a &quot;killing machine&quot; that will one day surpass its human creators.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Forget the &quot;killing machine&quot; &#8211; Digital Society Blog\" \/>\n<meta property=\"og:description\" content=\"The authors 
challenge the widespread metaphor of artificial intelligence as a &quot;killing machine&quot; that will one day surpass its human creators.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/\" \/>\n<meta property=\"og:site_name\" content=\"HIIG\" \/>\n<meta property=\"article:published_time\" content=\"2026-03-18T17:10:53+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-03-23T05:58:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AI-responsibility-not-apocalypse-1.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1144\" \/>\n\t<meta property=\"og:image:height\" content=\"643\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Digital Society Blog\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Digital Society Blog\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"12 minutes\" \/>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Forget the \"killing machine\" &#8211; Digital Society Blog","description":"The authors challenge the widespread metaphor of artificial intelligence as a \"killing machine\" that will one day surpass its human creators.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/","og_locale":"en_US","og_type":"article","og_title":"Forget the \"killing machine\" &#8211; Digital Society Blog","og_description":"The authors challenge the widespread metaphor of artificial intelligence as a \"killing machine\" that will one day surpass its human creators.","og_url":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/","og_site_name":"HIIG","article_published_time":"2026-03-18T17:10:53+00:00","article_modified_time":"2026-03-23T05:58:12+00:00","og_image":[{"width":1144,"height":643,"url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AI-responsibility-not-apocalypse-1.png","type":"image\/png"}],"author":"Digital Society Blog","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Digital Society Blog","Est. 
reading time":"12 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/#article","isPartOf":{"@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/"},"author":{"name":"Digital Society Blog","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/a921ecfdfcb94cb9c718b90c3a5dedbd"},"headline":"Forget the &#8220;killing machine&#8221;: why AI is a question of responsibility, not apocalypse","datePublished":"2026-03-18T17:10:53+00:00","dateModified":"2026-03-23T05:58:12+00:00","mainEntityOfPage":{"@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/"},"wordCount":2648,"publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"image":{"@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AI-responsibility-not-apocalypse-1.png","articleSection":["Artificial Intelligence","Digital Society Blog"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/","url":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/","name":"Forget the \"killing machine\" &#8211; Digital Society Blog","isPartOf":{"@id":"https:\/\/www.hiig.de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/#primaryimage"},"image":{"@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AI-responsibility-not-apocalypse-1.png","datePublished":"2026-03-18T17:10:53+00:00","dateModified":"2026-03-23T05:58:12+00:00","description":"The authors challenge the widespread metaphor of artificial intelligence as a \"killing machine\" that will one day surpass its human 
creators.","breadcrumb":{"@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/#primaryimage","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AI-responsibility-not-apocalypse-1.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AI-responsibility-not-apocalypse-1.png","width":1144,"height":643,"caption":"The authors challenge the widespread metaphor of artificial intelligence as a \"killing machine\" that will one day surpass its human creators."},{"@type":"BreadcrumbList","@id":"https:\/\/www.hiig.de\/en\/blog-forget-the-killing-machine\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.hiig.de\/en\/"},{"@type":"ListItem","position":2,"name":"Forget the &#8220;killing machine&#8221;: why AI is a question of responsibility, not apocalypse"}]},{"@type":"WebSite","@id":"https:\/\/www.hiig.de\/#website","url":"https:\/\/www.hiig.de\/","name":"HIIG","description":"Alexander von Humboldt Institute for Internet and 
Society","publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.hiig.de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.hiig.de\/#organization","name":"HIIG","url":"https:\/\/www.hiig.de\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","width":320,"height":80,"caption":"HIIG"},"image":{"@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/a921ecfdfcb94cb9c718b90c3a5dedbd","name":"Digital Society Blog"}]}},"_links":{"self":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/113909","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/users\/313"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/comments?post=113909"}],"version-history":[{"count":13,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/113909\/revisions"}],"predecessor-version":[{"id":114022,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/113909\/revisions\/114022"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media\/113952"}],"wp:attachment":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media?parent=113909"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/categories?post=113909"},{"taxonom
y":"post_tag","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/tags?post=113909"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}