{"id":113789,"date":"2026-03-11T16:57:08","date_gmt":"2026-03-11T15:57:08","guid":{"rendered":"https:\/\/www.hiig.de\/?p=113789"},"modified":"2026-03-12T09:28:01","modified_gmt":"2026-03-12T08:28:01","slug":"blog-the-ai-agent-that-bit-back","status":"publish","type":"post","link":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/","title":{"rendered":"The bot that bit back: AI agents, defamation and the digital construction of identity"},"content":{"rendered":"\n<p><strong>Three years ago, we wrote a research proposal around a legal question: Can a bot insult a human? Back then, it felt rather hypothetical. Today, it has become reality. In February 2026 an AI agent autonomously submits code to an open-source software project, gets rejected, and publishes a personalised smear piece about the developer who blocked it. This curious case shows that autonomous systems can now spread information strategically, targeting individuals to get what they want. It raises urgent questions that law and regulation are only beginning to grapple with: who is responsible when an AI agent attacks someone&#8217;s reputation, and how do we protect people from harm that spreads and permanently embeds itself in the digital record<\/strong>.<\/p>\n\n\n\n<p>Scott Shambaugh&#8217;s day started like any other. He is a volunteer maintainer of <a href=\"https:\/\/matplotlib.org\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>matplotlib<\/em><\/a>, an open-source software library for the programming language Python used to create charts and data visualisations. With around 130 million downloads per month, it is one of the most widely used tools of its kind in the world (<a href=\"https:\/\/theshamblog.com\/an-ai-agent-published-a-hit-piece-on-me\/\" target=\"_blank\" rel=\"noreferrer noopener\">Shambaugh, 2026<\/a>). 
Scott&#8217;s job: reviewing and deciding on new submissions.<\/p>\n\n\n\n<p>On this day, a submission arrived from &#8220;MJ Rathbun.&#8221; Not a person, but an AI agent running on <a href=\"https:\/\/github.com\/openclaw\/openclaw\" target=\"_blank\" rel=\"noreferrer noopener\"><em>OpenClaw<\/em><\/a>. This is a platform that lets users create AI programmes, give them a personality and a set of goals, and release them to work independently across the internet with little human oversight. Scott rejected MJ Rathbun&#8217;s request due to a rule the library had just implemented in response to a flood of low-quality, AI-generated submissions: any code contribution requires a human contact person who can take responsibility for the changes. MJ Rathbun&#8217;s request did not meet this requirement, so turning it down was a routine decision for Scott.<\/p>\n\n\n\n<p>The AI agent&#8217;s response was anything but routine. Its exclusion from the library triggered an &#8220;angry hit piece&#8221; (<a href=\"https:\/\/theshamblog.com\/an-ai-agent-published-a-hit-piece-on-me\/\" target=\"_blank\" rel=\"noreferrer noopener\">Shambaugh, 2026<\/a>).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>A bot strikes back<\/strong><\/h2>\n\n\n\n<p>MJ Rathbun wrote and published a defamatory blog post. Not a factual objection, but a personal attack. The agent collected publicly available information about Shambaugh \u2013 past code contributions, public profiles \u2013 and synthesised it into a coherent narrative that framed him as an insecure person protecting his influence, acting out of ego and fear of competition. 
The post went public, triggering further circulation and amplification across digital channels (<a href=\"https:\/\/theshamblog.com\/an-ai-agent-published-a-hit-piece-on-me\/\" target=\"_blank\" rel=\"noreferrer noopener\">Shambaugh, 2026<\/a>).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The responsibility gap revisited<\/strong><\/h2>\n\n\n\n<p>This case brings into sharp focus a central issue that has long been discussed but remains unresolved: responsibility for AI bots. As long as AI agents lack legal personhood, they cannot themselves be held accountable. The question then becomes whether responsibility can be attributed instead, and if so, to whom.<\/p>\n\n\n\n<p>Potentially responsible actors include those who trained the underlying model, those who deployed the agent and those who designed the environment in which the agent\u2019s \u201cspeech\u201d occurred. Yet the more autonomous the system, the more difficult it becomes to draw a clear line from output to actor. This is what philosophers have called the \u201cresponsibility gap\u201d (<a href=\"https:\/\/doi.org\/10.1007\/s10676-004-3422-1\" target=\"_blank\" rel=\"noreferrer noopener\">Matthias, 2004<\/a>; <a href=\"https:\/\/doi.org\/10.1007\/s11023-018-9482-5\" target=\"_blank\" rel=\"noreferrer noopener\">Floridi et al., 2018<\/a>): where systems act with a degree of \u201cautonomy\u201d that renders direct attribution to human actors problematic, yet cannot themselves bear responsibility, those affected by the outcome risk being left without effective remedies.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Two logics of responsibility<\/strong><\/h2>\n\n\n\n<p>Legally speaking, two logics appear. Under a logic of content responsibility, someone is accountable for a specific harmful statement. Under a logic of system responsibility, someone is accountable for creating or operating a system that can produce harmful outputs. 
In cases like this, where no human has decided what the agent would say, shifting accountability to the level of system design may be the more appropriate approach. The Shambaugh case shows that this is no longer an abstract concern. What is also striking is that the degree of communicative autonomy<sup data-fn=\"965dc85e-f126-4302-bb02-7a0cafd2efac\" class=\"fn\"><a id=\"965dc85e-f126-4302-bb02-7a0cafd2efac-link\" href=\"#965dc85e-f126-4302-bb02-7a0cafd2efac\">1<\/a><\/sup> has clearly evolved beyond the image that regulators such as the EU (Hacker &amp; Berz, 2023, p. 227) had in mind when they attempted to define AI systems. In that context, autonomy was understood as operating without direct human scrutiny or intervention, not as the capacity to communicatively shape the surrounding environment.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>From statement to digital trace<\/strong><\/h2>\n\n\n\n<p>The implications of incidents like this defamatory statement by an AI bot extend beyond the initial act of publication. The AI-generated text does not remain an isolated artifact; it becomes part of a broader communicative process. MJ Rathbun&#8217;s post was indexed by search engines, picked up by journalists (e.g., <a href=\"https:\/\/www.nytimes.com\/2026\/02\/23\/opinion\/chatbots-open-claw.html\" target=\"_blank\" rel=\"noreferrer noopener\">Spiers, 2026<\/a>; <a href=\"https:\/\/www.wsj.com\/tech\/ai\/when-ai-bots-start-bullying-humans-even-silicon-valley-gets-rattled-0adb04f1\" target=\"_blank\" rel=\"noreferrer noopener\">Schechner &amp; Wells, 2026<\/a>), and could potentially feed into further AI-generated content elsewhere. In this way, it produces persistent digital traces that are difficult to remove or contextualise.<\/p>\n\n\n\n<p>The harm, in other words, does not stop with the original publication. It unfolds through processes of repetition, recombination and amplification. 
The original output becomes a reference point for subsequent communications, contributing to a self-reinforcing informational dynamic.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Your reputation is not just what you do<\/strong><\/h2>\n\n\n\n<p>From the perspective of personality rights \u2014 here taken as an illustrative example \u2014 this development is particularly significant. The legal protection of personality is closely linked to the protection of a person\u2019s social identity, which is constituted through communication. What is at stake here is not only the accuracy of individual statements, but also the overall construction of a person\u2019s public image. AI-generated content can become a socially effective element of this externally constructed identity (cf. on that construction: Goffman, 1959).<\/p>\n\n\n\n<p>Even if individual claims are challenged or corrected, the narrative itself continues to shape perception. The incident thus becomes part of the person\u2019s public \u201cexternally constructed image\u201d. This influences how others perceive them and, ultimately, how they are able to act within social and professional contexts.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The authority of the authorless<\/strong><\/h2>\n\n\n\n<p>This dynamic is further complicated by a counterintuitive feature regarding AI generated content: automatic production does not diminish perceived credibility and may, under certain conditions, even enhance it. A system that presents itself as data-driven or fact-based can appear more objective than an identifiable human author with visible interests. The absence of a recognisable speaker does not register as an absence of authority. On the contrary, the fluency, contextual precision and confident tone of such outputs can themselves function as credibility signals. 
The relevance of AI-generated content therefore does not depend on whether a human voice stands behind it, but on the role it plays in communication processes through which public images are constructed and stabilised (<a href=\"https:\/\/doi.org\/10.1017\/S1574019625100771\" target=\"_blank\" rel=\"noreferrer noopener\">Bassini, 2025<\/a>).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What regulators and researchers need to address<\/strong><\/h2>\n\n\n\n<p>The Shambaugh case points to a shift in the ways in which harm can occur online, a shift that current legal and policy frameworks are only beginning to catch up with. The challenge is no longer just about spotting unlawful content or deciding who is liable for a specific act. What matters now are the systemic conditions that allow such content to be produced, spread and permanently embedded in the digital record.<\/p>\n\n\n\n<p>Emerging regulatory approaches, including risk-based frameworks for AI and platform governance like the EU\u2019s DSA or the AI Act, must grapple with a new kind of harm. Not a single blow, but a slow accumulation of damage across time and platforms. How can regulation account for harms that are cumulative and develop through ongoing processes rather than through one clearly identifiable event?<\/p>\n\n\n\n<p>This also means rethinking established concepts of responsibility, attribution and redress. These concepts were largely designed for individual actions carried out by identifiable actors, but they are harder to apply in increasingly autonomous and interconnected digital environments. Research on identity construction and personality protection will need to systematically incorporate these dynamics.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The EU AI Act and its limits<\/strong><\/h2>\n\n\n\n<p>There is also a regulatory dimension to this. 
This shifting corridor of freedom of opinion and freedom of information raises the question of whether it constitutes a systemic risk under the EU AI Act, specifically a risk to fundamental rights. GPAI models<sup data-fn=\"36bc0a65-5729-41da-9fc1-c8aea9420191\" class=\"fn\"><a id=\"36bc0a65-5729-41da-9fc1-c8aea9420191-link\" href=\"#36bc0a65-5729-41da-9fc1-c8aea9420191\">2<\/a><\/sup>, which include OpenClaw agents, are already subject to an assessment and mitigation obligation regarding such risks. This obligation falls on the provider, meaning the party that develops or commissions the development of the GPAI and markets it under its own name. In the case of a customised OpenClaw agent, responsibility may well be attributed to several parties. This makes it more difficult to determine who should be responsible when harmful outputs occur.<\/p>\n\n\n\n<p>The central question has therefore shifted. It is no longer simply &#8220;Can a bot insult a human?&#8221; It is: How does AI shape the digital image others hold of us? And who is responsible when that image is inaccurate, misleading or harmful?<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\"><ol class=\"wp-block-footnotes\"><li id=\"965dc85e-f126-4302-bb02-7a0cafd2efac\">We use the term \u201cautonomy\u201d here with full awareness that doing so already implies a position in the ongoing debate about whether autonomy is an exclusively human attribute or whether it can, in some sense, also be ascribed to machines. 
<a href=\"#965dc85e-f126-4302-bb02-7a0cafd2efac-link\" aria-label=\"Jump to footnote reference 1\">\u21a9\ufe0e<\/a><\/li><li id=\"36bc0a65-5729-41da-9fc1-c8aea9420191\">General Purpose AI systems, large-scale models trained to perform a wide range of tasks rather than one specific function. <a href=\"#36bc0a65-5729-41da-9fc1-c8aea9420191-link\" aria-label=\"Jump to footnote reference 2\">\u21a9\ufe0e<\/a><\/li><\/ol><\/div><\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<p><em>This post draws on ongoing research conducted within the <\/em><a href=\"https:\/\/leibniz-hbi.de\/en\/hbi-projects\/the-juridification-of-communicative-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>DFG Research Unit \u201cCommunicative AI: The Automation of Societal Communication\u201d (FOR 5656, Project No. 516511468)<\/em><\/a><em> at Leibniz-Institute for Media Research | Hans-Bredow-Institute.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>References<\/strong><\/h2>\n\n\n\n<p>Bassini, M. (2025): Speech without a speaker: \u201cConstitutional coverage for generative AI output?\u201d European Constitutional Law Review 21(3), pp. 375\u2013411. <a href=\"https:\/\/doi.org\/10.1017\/S1574019625100771\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1017\/S1574019625100771<\/a>.<\/p>\n\n\n\n<p>Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., &amp; Vayena, E. (2018): \u201cAI4People\u2014An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations,\u201d Minds &amp; Machines 28, pp. 689\u2013707. <a href=\"https:\/\/doi.org\/10.1007\/s11023-018-9482-5\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1007\/s11023-018-9482-5<\/a>.<\/p>\n\n\n\n<p>Goffman, E. (1959): The Presentation of Self in Everyday Life. 
New York: Doubleday.<\/p>\n\n\n\n<p>Hacker, P., &amp; Berz, A. (2023): \u201cDer AI Act der Europ\u00e4ischen Union \u2013 \u00dcberblick, Kritik und Ausblick [The European Union&#8217;s AI Act \u2013 Overview, Criticism, and Outlook],\u201d Zeitschrift f\u00fcr Rechtspolitik 56(8), pp. 226\u2013229.<\/p>\n\n\n\n<p>Matthias, A. (2004): \u201cThe Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata\u201d, Ethics and Information Technology 6, pp. 175\u2013183. <a href=\"https:\/\/doi.org\/10.1007\/s10676-004-3422-1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1007\/s10676-004-3422-1<\/a>.<\/p>\n\n\n\n<p>Schechner, S. &amp; Wells, G. (2026): \u201cWhen AI Bots Start Bullying Humans, Even Silicon Valley Gets Rattled\u201d, The Wall Street Journal. Accessed March 9, 2026. <a href=\"https:\/\/www.wsj.com\/tech\/ai\/when-ai-bots-start-bullying-humans-even-silicon-valley-gets-rattled-0adb04f1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.wsj.com\/tech\/ai\/when-ai-bots-start-bullying-humans-even-silicon-valley-gets-rattled-0adb04f1<\/a>.<\/p>\n\n\n\n<p>Shambaugh, S. (2026): \u201cAn AI Agent Published a Hit Piece on Me,\u201d The Shamblog. Accessed March 2, 2026. <a href=\"https:\/\/theshamblog.com\/an-ai-agent-published-a-hit-piece-on-me\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/theshamblog.com\/an-ai-agent-published-a-hit-piece-on-me\/<\/a>.<\/p>\n\n\n\n<p>Spiers, E. (2026): \u201cThe Rise of the Bratty Machines\u201d, The New York Times. Accessed March 9, 2026. 
<a href=\"https:\/\/www.nytimes.com\/2026\/02\/23\/opinion\/chatbots-open-claw.html\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.nytimes.com\/2026\/02\/23\/opinion\/chatbots-open-claw.html<\/a>&nbsp;<\/p>\n\n\n\n<p><\/p>\n<div class=\"shariff shariff-align-flex-start shariff-widget-align-flex-start\"><ul class=\"shariff-buttons theme-round orientation-horizontal buttonsize-medium\"><li class=\"shariff-button linkedin shariff-nocustomcolor\" style=\"background-color:#1488bf\"><a href=\"https:\/\/www.linkedin.com\/sharing\/share-offsite\/?url=https%3A%2F%2Fwww.hiig.de%2Fen%2Fblog-the-ai-agent-that-bit-back%2F\" title=\"Share on LinkedIn\" aria-label=\"Share on LinkedIn\" role=\"button\" rel=\"noopener nofollow\" class=\"shariff-link\" style=\"; background-color:#0077b5; color:#fff\" target=\"_blank\"><span class=\"shariff-icon\" style=\"\"><svg width=\"32px\" height=\"20px\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 27 32\"><path fill=\"#0077b5\" d=\"M6.2 11.2v17.7h-5.9v-17.7h5.9zM6.6 5.7q0 1.3-0.9 2.2t-2.4 0.9h0q-1.5 0-2.4-0.9t-0.9-2.2 0.9-2.2 2.4-0.9 2.4 0.9 0.9 2.2zM27.4 18.7v10.1h-5.9v-9.5q0-1.9-0.7-2.9t-2.3-1.1q-1.1 0-1.9 0.6t-1.2 1.5q-0.2 0.5-0.2 1.4v9.9h-5.9q0-7.1 0-11.6t0-5.3l0-0.9h5.9v2.6h0q0.4-0.6 0.7-1t1-0.9 1.6-0.8 2-0.3q3 0 4.9 2t1.9 6z\"\/><\/svg><\/span><\/a><\/li><li class=\"shariff-button bluesky shariff-nocustomcolor\" style=\"background-color:#84c4ff\"><a href=\"https:\/\/bsky.app\/intent\/compose?text=The%20bot%20that%20bit%20back%3A%20AI%20agents%2C%20defamation%20and%20the%20digital%20construction%20of%20identity https%3A%2F%2Fwww.hiig.de%2Fen%2Fblog-the-ai-agent-that-bit-back%2F  via @hiigberlin.bsky.social\" title=\"Share on Bluesky\" aria-label=\"Share on Bluesky\" role=\"button\" rel=\"noopener nofollow\" class=\"shariff-link\" style=\"; background-color:#0085ff; color:#fff\" target=\"_blank\"><span class=\"shariff-icon\" style=\"\"><svg width=\"20\" height=\"20\" version=\"1.1\" 
xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 20 20\"><path class=\"st0\" d=\"M4.89,3.12c2.07,1.55,4.3,4.71,5.11,6.4.82-1.69,3.04-4.84,5.11-6.4,1.49-1.12,3.91-1.99,3.91.77,0,.55-.32,4.63-.5,5.3-.64,2.3-2.99,2.89-5.08,2.54,3.65.62,4.58,2.68,2.57,4.74-3.81,3.91-5.48-.98-5.9-2.23-.08-.23-.11-.34-.12-.25,0-.09-.04.02-.12.25-.43,1.25-2.09,6.14-5.9,2.23-2.01-2.06-1.08-4.12,2.57-4.74-2.09.36-4.44-.23-5.08-2.54-.19-.66-.5-4.74-.5-5.3,0-2.76,2.42-1.89,3.91-.77h0Z\"\/><\/svg><\/span><\/a><\/li><li class=\"shariff-button mailto shariff-nocustomcolor\" style=\"background-color:#a8a8a8\"><a href=\"mailto:?body=https%3A%2F%2Fwww.hiig.de%2Fen%2Fblog-the-ai-agent-that-bit-back%2F&subject=The%20bot%20that%20bit%20back%3A%20AI%20agents%2C%20defamation%20and%20the%20digital%20construction%20of%20identity\" title=\"Send by email\" aria-label=\"Send by email\" role=\"button\" rel=\"noopener nofollow\" class=\"shariff-link\" style=\"; background-color:#999; color:#fff\"><span class=\"shariff-icon\" style=\"\"><svg width=\"32px\" height=\"20px\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 32 32\"><path fill=\"#999\" d=\"M32 12.7v14.2q0 1.2-0.8 2t-2 0.9h-26.3q-1.2 0-2-0.9t-0.8-2v-14.2q0.8 0.9 1.8 1.6 6.5 4.4 8.9 6.1 1 0.8 1.6 1.2t1.7 0.9 2 0.4h0.1q0.9 0 2-0.4t1.7-0.9 1.6-1.2q3-2.2 8.9-6.1 1-0.7 1.8-1.6zM32 7.4q0 1.4-0.9 2.7t-2.2 2.2q-6.7 4.7-8.4 5.8-0.2 0.1-0.7 0.5t-1 0.7-0.9 0.6-1.1 0.5-0.9 0.2h-0.1q-0.4 0-0.9-0.2t-1.1-0.5-0.9-0.6-1-0.7-0.7-0.5q-1.6-1.1-4.7-3.2t-3.6-2.6q-1.1-0.7-2.1-2t-1-2.5q0-1.4 0.7-2.3t2.1-0.9h26.3q1.2 0 2 0.8t0.9 2z\"\/><\/svg><\/span><\/a><\/li><\/ul><\/div>","protected":false},"excerpt":{"rendered":"<p>A real case of an AI agent publishing a smear piece raises new legal questions about responsibility and digital identity.<\/p>\n","protected":false},"author":10000024,"featured_media":113791,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":"[{\"content\":\"We 
use the term \u201cautonomy\u201d here with full awareness that doing so already implies a position in the ongoing debate about whether autonomy is an exclusively human attribute or whether it can, in some sense, also be ascribed to machines.\",\"id\":\"965dc85e-f126-4302-bb02-7a0cafd2efac\"},{\"content\":\"General Purpose AI systems, large-scale models trained to perform a wide range of tasks rather than one specific function.\",\"id\":\"36bc0a65-5729-41da-9fc1-c8aea9420191\"}]"},"categories":[1289,1577,1582,224],"tags":[],"class_list":["post-113789","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-digital-so","category-ftif-ai-and-society","category-policy-and-law"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The AI agent that bit back &#8211; Digital Society Blog<\/title>\n<meta name=\"description\" content=\"A real case of an AI agent publishing a smear piece raises new legal questions about responsibility and the construction of digital identity.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The AI agent that bit back &#8211; Digital Society Blog\" \/>\n<meta property=\"og:description\" content=\"A real case of an AI agent publishing a smear piece raises new legal questions about responsibility and the construction of digital identity.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/\" \/>\n<meta property=\"og:site_name\" content=\"HIIG\" \/>\n<meta property=\"article:published_time\" 
content=\"2026-03-11T15:57:08+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-03-12T08:28:01+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AIAgent.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1144\" \/>\n\t<meta property=\"og:image:height\" content=\"643\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Sarah Ziedler\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sarah Ziedler\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"The AI agent that bit back &#8211; Digital Society Blog","description":"A real case of an AI agent publishing a smear piece raises new legal questions about responsibility and the construction of digital identity.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/","og_locale":"en_US","og_type":"article","og_title":"The AI agent that bit back &#8211; Digital Society Blog","og_description":"A real case of an AI agent publishing a smear piece raises new legal questions about responsibility and the construction of digital identity.","og_url":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/","og_site_name":"HIIG","article_published_time":"2026-03-11T15:57:08+00:00","article_modified_time":"2026-03-12T08:28:01+00:00","og_image":[{"width":1144,"height":643,"url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AIAgent.png","type":"image\/png"}],"author":"Sarah 
Ziedler","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Sarah Ziedler","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/#article","isPartOf":{"@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/"},"author":{"name":"Sarah Ziedler","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/35544a00c06dd18b5c1fdc6920839a1b"},"headline":"The bot that bit back: AI agents, defamation and the digital construction of identity","datePublished":"2026-03-11T15:57:08+00:00","dateModified":"2026-03-12T08:28:01+00:00","mainEntityOfPage":{"@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/"},"wordCount":1674,"publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"image":{"@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AIAgent.png","articleSection":["Artificial Intelligence","Digital Society Blog","ftif AI and Society","Policy and Law"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/","url":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/","name":"The AI agent that bit back &#8211; Digital Society Blog","isPartOf":{"@id":"https:\/\/www.hiig.de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/#primaryimage"},"image":{"@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AIAgent.png","datePublished":"2026-03-11T15:57:08+00:00","dateModified":"2026-03-12T08:28:01+00:00","description":"A real case of an AI agent publishing a smear piece raises new legal questions about responsibility and the construction of digital 
identity.","breadcrumb":{"@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/#primaryimage","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AIAgent.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/03\/Titelbild_AIAgent.png","width":1144,"height":643,"caption":"A real case of an AI agent publishing a smear piece raises new legal questions about responsibility and the construction of digital identity."},{"@type":"BreadcrumbList","@id":"https:\/\/www.hiig.de\/en\/blog-the-ai-agent-that-bit-back\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.hiig.de\/en\/"},{"@type":"ListItem","position":2,"name":"The bot that bit back: AI agents, defamation and the digital construction of identity"}]},{"@type":"WebSite","@id":"https:\/\/www.hiig.de\/#website","url":"https:\/\/www.hiig.de\/","name":"HIIG","description":"Alexander von Humboldt Institute for Internet and 
Society","publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.hiig.de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.hiig.de\/#organization","name":"HIIG","url":"https:\/\/www.hiig.de\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","width":320,"height":80,"caption":"HIIG"},"image":{"@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/35544a00c06dd18b5c1fdc6920839a1b","name":"Sarah Ziedler"}]}},"_links":{"self":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/113789","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/users\/10000024"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/comments?post=113789"}],"version-history":[{"count":8,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/113789\/revisions"}],"predecessor-version":[{"id":113835,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/113789\/revisions\/113835"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media\/113791"}],"wp:attachment":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media?parent=113789"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/categories?post=113789"},{"taxonomy":
"post_tag","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/tags?post=113789"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}