Making sense of our connected world

AI resistance: Who says no to AI and why?
A poisoned dataset. A writers’ strike that froze Hollywood for 148 days. Street protests against data centres. Behind each of these acts lies a growing global pushback against artificial intelligence. Drawing on the recent report, “From Rejection to Regulation: Mapping the Landscape of AI Resistance,” by Can Simsek and Ayse Gizem Yasar, this article examines how artists, workers, activists, and scholars challenge the design, deployment, and governance of AI systems. It explores the drivers behind AI resistance and outlines a research agenda that treats these acts not as obstacles, but as vital contributions to democratic AI governance.
Artificial intelligence is catalysing a radical sociotechnical transformation, reshaping not only our technological infrastructures but also the institutions that organise society. In the midst of this shift, crucial questions arise: Who determines the direction of this change and the future we want to build? Who remains unheard in the conversation? Are we passive observers as ever more powerful algorithms are deployed, or do we have the agency and responsibility to challenge and reshape them?
Acts of pushback are already unfolding across diverse domains and geographies. While heterogeneous in form and motivation, these interventions share a critical orientation towards the pace, purpose, and underlying power structures of contemporary AI development. Rather than isolated incidents, they constitute elements of a broader landscape of AI resistance that demands closer attention.
AI resistance: From looms to learning machines
To see today’s pushback against AI in context, it helps to remember that resistance to new technology is nothing new. Technological paradigm shifts have consistently triggered societal concern and resistance, from the 19th-century Luddites, who opposed textile machinery because it displaced their labour, to current debates on digital surveillance and algorithmic bias. As artificial intelligence emerges as a major transformative force, public reactions continue to alternate between optimism and concern. On the one hand, governments and private firms are committing unprecedented levels of investment to AI development; on the other, a growing current of “AI resistance” raises fundamental objections to how these technologies are being designed, produced, deployed, and governed. But what exactly is AI resistance?
What is AI resistance?
The concept of “resistance” in the context of AI encompasses a wide spectrum of actions and discourses that may be overt or subtle, organised or diffuse, individual or collective, oppositional or reformist. Drawing on insights from critical theory and science and technology studies, resistance to artificial intelligence can be understood as a form of agency exercised within existing systems of power. In this framing, the object of resistance is not technology per se, but the sociotechnical arrangements and asymmetries that both shape and are shaped by the development and application of AI.
Such resistance can manifest in diverse forms, including public protest, legal action, digital subversion, scholarly critique, and grassroots advocacy. Comparable to civil disobedience, these practices reflect a principled commitment to ethical, legal, or democratic norms perceived to be undermined by the development or deployment of certain AI systems. The term “AI resistance” therefore covers a broad range of actions and is open to multiple interpretations, given that both “resistance” and “artificial intelligence” are expansive and inherently abstract concepts. But what does AI resistance look like in practice?
Instances of AI resistance
Historical forms of resistance find new analogues in the digital age. In early 20th-century France, “sabotage” referred to deliberate acts of disrupting industrial machinery; the term is said to derive from workers throwing their wooden shoes (sabots) into machines to halt production. A similar tactic appears in today’s digital context as “data poisoning”: artists and other creators subtly alter their work — for example, by adding imperceptible changes to images or text — so that AI models trained on these materials are misled or degraded. This is more than a technical attack. It is used deliberately to resist the unauthorised use of creative work in AI training, effectively sabotaging the algorithms and transforming data poisoning from a technical vulnerability into a form of resistance.
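To make the idea concrete, the following is a deliberately simplified sketch of the principle behind such perturbations: each pixel of an image is shifted by a tiny, bounded amount that a human viewer would not notice, but that changes the data a model ingests if the work is scraped for training. Real poisoning tools compute optimised, model-aware perturbations rather than random noise; the function name and parameters here are illustrative assumptions, not any actual tool’s API.

```python
import random

def poison_pixels(pixels, strength=2, seed=0):
    """Return a copy of grayscale pixel values (0-255) with a small,
    bounded pseudo-random shift added to each one.

    No pixel moves by more than `strength`, so the change is near-
    invisible to a human viewer, yet the image is no longer identical
    to the original the artist published.
    """
    rng = random.Random(seed)  # fixed seed keeps the example reproducible
    return [
        min(255, max(0, p + rng.randint(-strength, strength)))
        for p in pixels
    ]

# A toy 8x8 grayscale "artwork": a flat mid-grey image.
artwork = [128] * 64
poisoned = poison_pixels(artwork)

# Every shift stays within the chosen bound...
assert all(abs(a - b) <= 2 for a, b in zip(artwork, poisoned))
# ...but the poisoned copy differs from the original.
assert poisoned != artwork
```

The key design point is the bound on the perturbation: it is what keeps the alteration imperceptible to people while still degrading the statistics a scraper collects, which is why such techniques work as resistance rather than vandalism of the artwork itself.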
Besides using such tools, workers in creative industries also relied on traditional forms of resistance, most notably going on strike to assert their rights and influence how new technologies are integrated into their work. A prominent example is the 2023 Hollywood writers’ strike, during which the Writers Guild of America demanded contractual protections against the use of AI to write or rewrite scripts or to use AI-generated material as source content. Lasting 148 days and joined by other unions, the strike brought much of the US film and television industry to a standstill and marked a pivotal moment in collective resistance to the unregulated adoption of AI in creative industries.
From protests to policy
In the report, we recorded numerous instances of AI resistance, including protests against the environmental impacts of data centres, opposition from big tech employees over military applications of AI, public outcry over the UK’s A-level grading fiasco, and various other forms of public or institutional resistance. While not intending to be exhaustive, we surveyed six key areas where such resistance has been particularly active:
- (i) creative industries
- (ii) migration and border control
- (iii) medical AI
- (iv) higher education
- (v) defence and security sectors, and
- (vi) environmental activism.
In doing so, we highlighted key actors in AI resistance, with particular emphasis on the role of civil society in mobilising public opposition. The report also examines how governments have translated some forms of resistance into law. One example is the EU AI Act, which prohibits certain AI practices, such as systems that deploy deliberately manipulative techniques.
Why do people resist AI?
The report also points to five main reasons why people push back against AI, each illustrated with real-world examples:
- (i) First, there are socio-economic concerns, visible for example in the creative industries, where the 2023 Writers Guild of America strike took aim at AI’s potential to replace human jobs.
- (ii) Second, ethical issues arise when AI systems are opaque or biased, such as migration risk-assessment tools that can unfairly influence decisions about people’s futures.
- (iii) Third, safety risks are a concern, especially in healthcare, where flawed AI diagnostic results have led medical professionals to speak out.
- (iv) Fourth, there are threats to democracy and sovereignty, including the use of AI for large-scale societal manipulation.
- (v) And finally, there’s the environmental impact: climate-focused NGOs have highlighted research showing the significant carbon footprint of training large AI models.
Outlining a research roadmap on AI resistance
Advancing research on AI resistance is crucial for shedding light on these widespread societal concerns that are often overlooked in technical and policy-oriented discourse. These voices of resistance should not be dismissed as mere dissent; rather, they serve as vital guides toward governance frameworks that better reflect democratic values and the public interest. Mapping the motivations and practices underlying AI resistance reveals the breadth and depth of civic engagement. Workers, artists, clinicians, educators, activists, and scholars are all asserting the need for participation in shaping how AI technologies are designed, deployed, and regulated. By recognising these interventions as meaningful contributions rather than obstacles, we aim to open up space for more pluralistic and socially grounded conversations about technological futures. The question is not whether resistance will shape AI, but how.
This research therefore marks the beginning of a wider dialogue. It invites academics, developers, policymakers, and civil society actors to collaborate in building an interdisciplinary knowledge base. To that end, we organised a first dedicated workshop at the HIIG, where we presented our report and facilitated a discussion on our findings. We now seek to continue this work with colleagues from diverse disciplines. Through sharing case studies, co-developing frameworks, and exchanging best practices, we can work collectively to ensure that AI advances in ways that protect human dignity, promote social justice, and support ecological sustainability.
References
Şimşek, C. & Yasar, A. G. (2025). From Rejection to Regulation: Mapping the Landscape of AI Resistance. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5287068
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
