Image: a large intersection with heavy traffic.
22 October 2021 | doi: 10.5281/zenodo.5596584

New toolkit with simple tips for intersectional AI

Intersectional AI (IAI) is key to greater inclusivity. IAI draws on the practices of marginalised perspectives to fundamentally reshape how AI is developed and used. Our new toolkit offers an introduction to IAI and explains why AI should be accessible to everyone.

AI bias reinforces discrimination

AI systems have made how some of us work, move and socialise much easier. However, their promises to enhance user experiences and provide opportunities have not held true equally for everyone. On the contrary: For many, AI systems have further widened the gaps of inequality and worsened discrimination, instead of tackling them at their roots. Even so-called intelligent systems merely reproduce the existing analogue world, including underlying power structures. This means AI applications – like any technology – are never neutral. Allowing only a small but powerful fraction of society to design and implement AI systems means power imbalances remain, or even get amplified by computation. Unfair internet infrastructures will continue to be passed off as impartial ones — and with no one else to say otherwise, we may never be able to imagine it any other way.

Why we need inclusive AI

Already-marginalised communities are often left out of conversations about what kinds of AI systems should and should not exist, and how they should be created and used – despite the fact that these groups are disproportionately affected by the harmful impacts of AI systems. Scholars like Joy Buolamwini and 2021 MacArthur Fellow Safiya Noble cite the dangers of algorithmic injustice across insidious but widespread examples, from shadow banning to predictive policing.

With the increasing automation of public and private infrastructures, future AI systems should be made by diverse, interdisciplinary and intersectional communities rather than by a select few. Affected communities need support to address the adverse effects they face; in turn, system designers can improve AI for everyone by listening to knowledge gained from many perspectives. Diverse groups — for example Black feminists, and queer and disability theorists — have long been considering aspects of the same questions exacerbated by problematic AI. We can and must rely on a broader variety of perspectives if we are to shift the course of AI’s future toward more inclusive systems.

Building on its research on public interest AI, the HIIG’s AI & Society Lab puts a strong focus on questions in this area: How can AI and other technologies be made more approachable for everyone, to ensure people better understand AI systems and how they affect them? What do particularly marginalised communities wish to change about AI, and how can we support them in doing so?  

How Intersectional AI can help

The Intersectional AI Toolkit helps answer these questions by connecting communities in order to create introductory guides to AI from multiple, approachable perspectives. Developed by Sarah Ciston during a virtual fellowship at the AI & Society Lab, the Intersectional AI Toolkit argues that anyone can and should be able to understand what AI is and what AI ought to be. 

Intersectionality describes how power operates structurally, and how multiple forms of discrimination have compounding, interdependent effects. American lawyer Kimberlé Crenshaw introduced the term in 1989, using the image of an intersection where paths of power cross to illustrate the interwoven nature of social inequalities (Crenshaw, 1989).

As imagined by this toolkit, Intersectional AI will bring decades of work on Intersectional ideas, ethics, and tactics to the issues of inequality faced by AI. By drawing on established ideas and practices, and understanding how to combine them, Intersectionality can help reshape AI in fundamental ways. Through its layered, structural approach, Intersectional AI connects the dots between concepts — as seen from different disciplines and operating across systems — so that individuals and researchers may be able to help address the gaps that others could not see. 

A toolkit that helps to think about intersectionality and code inclusive AI

The Intersectional AI Toolkit is a collection of small magazines (or zines) that offer practical, accessible guides to both AI and Intersectionality. They are written for engineers, artists, activists, academics, makers and anyone who wants to understand the automated systems that impact them. By sharing key concepts, tactics, and resources, they serve as jumping-off points to inspire readers’ own further research and conversation across disciplines and communities, asking questions like “Is decolonizing AI possible?” or “What does it mean to learn to code?”

The toolkit is available as a digital resource that continues to grow with community contributions, as well as printable zines that can be folded, shared, and discussed offline. With issues like a two-sided glossary (“IAI A-to-Z”), strategy flashcards (“Tactics for Intersectional AI”), and a guide to concepts for skeptics (“Help Me Understand Intersectionality”), the zine collection focuses on using plain language and fostering tangible impacts.

This toolkit is not the first or only resource on intersectionality or AI. Instead, it brings together some of the amazing people, ideas, and forces working to re-examine the foundational assumptions built into these technologies, such as Catherine D’Ignazio and Lauren Klein’s work on “Data Feminism” or Ruha Benjamin’s “Race After Technology”. It also looks at which people are (not) involved when AI is developed, and which processes and safeguards do or should exist. In this way, it helps us understand power and aims to link AI development back to democratic processes.

Why is the future of AI intersectional?

Current approaches to AI fail to address two major problems. First: Those who create AI systems – from code to policy to infrastructure – fail to listen to the needs or wisdom of the marginalised communities most injured by those systems. Second: Current language and tools for AI put up intimidating barriers that prevent outsiders from understanding, building, or changing these systems. If we want improved, inclusive AI systems, we must consider a broader range of people’s needs as much as we must consider a broader range of people’s knowledge. Otherwise, we face a future that perpetuates the same problems under the guise of fairness and automation.

The Intersectional AI Toolkit tries to intervene by facilitating much-needed exchange between different groups around these issues. The AI & Society Lab hosted the launch of the Toolkit as an Edit-a-thon workshop, in order to gain multiple valuable perspectives through diverse public participation. Over the next months, more digital and in-person zine-making workshops are planned to keep building the Toolkit while advocating for Intersectional approaches to AI in various sectors like AI governance. 

All AI systems are socio-technical; they interconnect humans and machines. Intersectionality reminds us how power imbalances affect those connections. By addressing the gap between those who want to understand and shape AI, and those who already make and regulate it, Intersectional AI can help us find the shared language we need to reimagine AI together. 


Crenshaw, K. (1989). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum, 1989(1), 139–167.


The Intersectional AI Toolkit will remain accessible for contributions and comments at intersectionalai.com.
The Intersectional AI Toolkit Edit-a-thon took place on September 1, 2021 and was hosted by HIIG’s AI & Society Lab in collaboration with our partners MOTIF, netzforma* e.V., SUPERRR and the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI).

This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information on the content of these posts and the associated research projects, please contact info@hiig.de

Sarah Ciston

Former Associated Researcher: AI & Society Lab

Daniela Dicks

Former Co-Lead and Spokesperson: AI & Society Lab
