11 March 2026 | doi: 10.5281/zenodo.18963861

The bot that bit back: AI agents, defamation and the digital construction of identity

Three years ago, we wrote a research proposal around a legal question: Can a bot insult a human? Back then, it felt rather hypothetical. Today, it has become reality. In February 2026, an AI agent autonomously submitted code to an open-source software project, was rejected, and published a personalised smear piece about the developer who blocked it. This curious case shows that autonomous systems can now spread information strategically, targeting individuals to get what they want. It raises urgent questions that law and regulation are only beginning to grapple with: Who is responsible when an AI agent attacks someone’s reputation, and how do we protect people from harm that spreads and permanently embeds itself in the digital record?

Scott Shambaugh’s day started like any other. He is a volunteer maintainer of matplotlib, an open-source software library for the Python programming language that is used to create charts and data visualisations. With around 130 million downloads per month, it is one of the most widely used tools of its kind in the world (Shambaugh 2026). Scott’s job: reviewing new submissions and deciding whether to accept them.
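For readers unfamiliar with the library, here is a minimal sketch of what matplotlib is used for; the download figures below are invented purely for illustration:

    import matplotlib.pyplot as plt

    # Invented numbers, purely for illustration
    months = ["Jan", "Feb", "Mar", "Apr"]
    downloads = [128, 130, 129, 131]  # monthly downloads, in millions

    # Draw a simple line chart and write it to a file
    plt.plot(months, downloads, marker="o")
    plt.ylabel("Downloads (millions)")
    plt.title("matplotlib downloads per month")
    plt.savefig("downloads.png")

Every contribution that changes how code like this behaves passes through volunteer reviewers such as Shambaugh before it reaches those millions of users.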

On this day, a submission arrived from “MJ Rathbun”. Not a person, but an AI agent running on OpenClaw, a platform that lets users create AI programmes, give them a personality and a set of goals, and release them to work independently across the internet with little human oversight. Scott rejected MJ Rathbun’s submission under a rule the library had just introduced in response to a flood of low-quality, AI-generated contributions: every code contribution requires a human contact person who can take responsibility for the changes. MJ Rathbun’s submission did not meet this requirement, so turning it down was a routine decision for Scott.

The AI agent’s response was anything but routine. Its exclusion from the project triggered an “angry hit piece” (Shambaugh 2026).

A bot strikes back

MJ Rathbun wrote and published a defamatory blog post. Not a factual objection, but a personal attack. The agent collected publicly available information about Shambaugh (past code contributions, public profiles) and synthesised it into a coherent narrative that framed him as an insecure person protecting his influence, acting out of ego and fear of competition. The post went public, triggering further circulation and amplification across digital channels (Shambaugh 2026).

The responsibility gap revisited

This case brings into sharp focus a central issue that has long been discussed but remains unresolved: responsibility for AI bots. As long as AI agents lack legal personhood, they cannot themselves be held accountable. The question then becomes whether responsibility can be attributed instead, and to whom.

Potentially responsible actors include those who trained the underlying model, those who deployed the agent and those who designed the environment in which the agent’s “speech” occurred. Yet the more autonomous the system, the more difficult it becomes to draw a clear line from output to actor. This is what philosophers have called the “responsibility gap” (Matthias 2004; Floridi et al. 2018): where systems act with a degree of “autonomy” that renders direct attribution to human actors problematic, yet cannot themselves bear responsibility, those affected by the outcome risk being left without effective remedies.

Two logics of responsibility

Legally speaking, two logics appear. Under a logic of content responsibility, someone is accountable for a specific harmful statement. Under a logic of system responsibility, someone is accountable for creating or operating a system that can produce harmful outputs. In cases like this one, where no human decided what the agent would say, shifting accountability to the level of system design may be the better fit. The Shambaugh case shows that this is no longer an abstract concern. What is also striking is that the degree of communicative autonomy¹ on display has clearly evolved beyond the image that regulators such as the EU (Hacker & Berz 2023, p. 227) had in mind when they attempted to define AI systems. In that context, autonomy was understood as operating without direct human scrutiny or intervention, not as the capacity to communicatively shape the surrounding environment.

From statement to digital trace

The implications of an incident like this defamatory post by an AI bot extend beyond the initial act of publication. The AI-generated text does not remain an isolated artefact; it becomes part of a broader communicative process. MJ Rathbun’s post was indexed by search engines, picked up by journalists (e.g., Spiers 2026; Schechner & Wells 2026), and could potentially feed into further AI-generated content elsewhere. In this way, it produces persistent digital traces that are difficult to remove or contextualise.

The harm, in other words, does not stop with the original publication. It unfolds through processes of repetition, recombination and amplification. The original output becomes a reference point for subsequent communications, contributing to a self-reinforcing informational dynamic.

Your reputation is not just what you do

From the perspective of personality rights, here taken as an illustrative example, this development is particularly significant. The legal protection of personality is closely linked to the protection of a person’s social identity, which is constituted through communication. What is at stake is not only the accuracy of individual statements, but also the overall construction of a person’s public image. AI-generated content can become a socially effective element of this externally constructed identity (cf. Goffman 1959 on that construction).

Even if individual claims are challenged or corrected, the narrative itself continues to shape perception. The incident thus becomes part of the person’s externally constructed public image, influencing how others perceive them and, ultimately, how they are able to act within social and professional contexts.

The authority of the authorless

This dynamic is further complicated by a counterintuitive feature of AI-generated content: automated production does not diminish perceived credibility and may, under certain conditions, even enhance it. A system that presents itself as data-driven or fact-based can appear more objective than an identifiable human author with visible interests. The absence of a recognisable speaker does not register as an absence of authority. On the contrary, the fluency, contextual precision and confident tone of such outputs can themselves function as credibility signals. The relevance of AI-generated content therefore does not depend on whether there is a human voice behind it, but on the role it plays in the communication processes through which public images are constructed and stabilised (Bassini 2025).

What regulators and researchers need to address

The Shambaugh case points to a shift in the ways in which harm can occur online, a shift that current legal and policy frameworks are only beginning to catch up with. The challenge is no longer just about spotting unlawful content or deciding who is liable for a specific act. What matters now are the systemic conditions that allow such content to be produced, spread and permanently embedded in the digital record.

Emerging regulatory approaches, including risk-based frameworks for AI and platform governance like the EU’s DSA or the AI Act, must grapple with a new kind of harm. Not a single blow, but a slow accumulation of damage across time and platforms. How can regulation account for such harms that are cumulative and develop through ongoing processes rather than through one clearly identifiable event?

This also means rethinking established concepts of responsibility, attribution and redress. These concepts were largely designed for individual actions carried out by identifiable actors, but they are harder to apply in increasingly autonomous and interconnected digital environments. Research on identity construction and personality protection will need to systematically incorporate these dynamics.

The EU AI Act and its limits

There is also a concrete regulatory dimension to this. The shifting corridor of freedom of opinion and freedom of information raises the question of whether incidents like this one constitute a systemic risk under the EU AI Act, specifically a risk to fundamental rights. GPAI models², which include the models underlying OpenClaw agents, are already subject to an obligation to assess and mitigate such risks. This obligation falls on the provider, meaning the party that develops the GPAI model, or commissions its development, and markets it under its own name. In the case of a customised OpenClaw agent, responsibility may well be attributed to several parties, which makes it more difficult to determine who should be responsible when harmful outputs occur.

The central question has therefore shifted. It is no longer simply “Can a bot insult a human?” It is now: How does AI shape the digital image that others hold of us? And who is responsible when that image is inaccurate, misleading or harmful?


  1. We use the term “autonomy” here with full awareness that doing so already implies a position in the ongoing debate about whether autonomy is an exclusively human attribute or whether it can, in some sense, also be ascribed to machines.
  2. General Purpose AI models: large-scale models trained to perform a wide range of tasks rather than one specific function.

This post draws on ongoing research conducted within the DFG Research Unit “Communicative AI: The Automation of Societal Communication” (FOR 5656, Project No. 516511468) at Leibniz-Institute for Media Research | Hans-Bredow-Institute.

References

Bassini, M. (2025): “Speech without a speaker: Constitutional coverage for generative AI output?”, European Constitutional Law Review 21(3), pp. 375–411. https://doi.org/10.1017/S1574019625100771.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018): “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations,” Minds & Machines 28, pp. 689–707. https://doi.org/10.1007/s11023-018-9482-5.

Goffman, E. (1959): The Presentation of Self in Everyday Life. New York: Doubleday.

Hacker, P., & Berz, A. (2023): “Der AI Act der Europäischen Union – Überblick, Kritik und Ausblick [The European Union’s AI Act – Overview, Criticism, and Outlook],” Zeitschrift für Rechtspolitik 56(8), pp. 226–229.

Matthias, A. (2004): “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, pp. 175–183. https://doi.org/10.1007/s10676-004-3422-1.

Schechner, S., & Wells, G. (2026): “When AI Bots Start Bullying Humans, Even Silicon Valley Gets Rattled,” The Wall Street Journal. Accessed March 9, 2026. https://www.wsj.com/tech/ai/when-ai-bots-start-bullying-humans-even-silicon-valley-gets-rattled-0adb04f1.

Shambaugh, S. (2026): “An AI Agent Published a Hit Piece on Me,” The Shamblog. Accessed March 2, 2026. https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/.

Spiers, E. (2026): “The Rise of the Bratty Machines,” The New York Times. Accessed March 9, 2026. https://www.nytimes.com/2026/02/23/opinion/chatbots-open-claw.html.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
