A large intersection with heavy traffic.
26 October 2021 | doi: 10.5281/zenodo.5596584

New toolkit collects easy tips for intersectional AI

By drawing on marginalised practices to fundamentally reshape how AI technologies are developed and used, intersectional approaches to AI (IAI) are key to making AI more inclusive. Our new toolkit provides an introductory guide to IAI and argues that anyone should be able to understand what AI is and what AI ought to be.

AI bias reinforces discrimination

AI systems have made the ways some of us work, move and socialise much easier. However, their promise to enhance user experiences and provide opportunities has not held true equally for everyone. On the contrary: for many, AI systems have further widened the gaps of inequality and worsened discrimination instead of tackling them at their roots. Even so-called intelligent systems merely reproduce the existing analogue world, including its underlying power structures. This means AI applications, like any technology, are never neutral. Allowing only a small but powerful fraction of society to design and implement AI systems means power imbalances remain, or are even amplified by computation. Unfair internet infrastructures will continue to be passed off as impartial ones, and with no one else to say otherwise, we may never be able to imagine them any other way.
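To see how this plays out in practice, consider the minimal sketch below. It is not part of the toolkit; it is a hypothetical Python illustration on entirely synthetic data, in which a standard classifier is trained on invented hiring records where equally qualified members of a marginalised group were hired less often. Although the model is never told to discriminate, it reproduces the historical disparity in its predictions.

```python
# Hypothetical sketch (not from the Intersectional AI Toolkit): a model
# trained on biased historical data reproduces that bias. All data is
# synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = majority, 1 = marginalised group
skill = rng.normal(0.0, 1.0, n)      # true qualification, identical across groups

# Historical hiring decisions encode a biased process: at equal skill,
# members of group 1 were hired less often (the -0.8 penalty).
hired = skill + rng.normal(0.0, 0.5, n) - 0.8 * group > 0

# Train an off-the-shelf classifier on the biased labels.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
predicted = model.predict(X)

# Selection rate per group: the learned model mirrors the historical gap.
for g in (0, 1):
    print(f"group {g}: predicted hiring rate = {predicted[group == g].mean():.2f}")
```

In this toy setup the model’s predicted hiring rates differ sharply by group because the disparity was baked into its training labels: a small, concrete instance of the point that so-called intelligent systems reproduce the analogue world, power structures included.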

Why we need inclusive AI

Communities that are already marginalised are often left out of conversations about what kinds of AI systems should and should not exist, and how they should be created and used, despite the fact that these groups are disproportionately affected by the harmful impacts of AI systems. Scholars like Joy Buolamwini and 2021 MacArthur Fellow Safiya Noble document the dangers of algorithmic injustice across insidious but widespread examples, from shadow banning to predictive policing.

With the increasing automation of public and private infrastructures, future AI systems should be made by diverse, interdisciplinary and intersectional communities rather than by a select few. These communities need support to address the adverse effects they face; at the same time, system designers can improve AI for everyone by listening to the knowledge gained from many perspectives. Diverse groups, for example Black feminists and queer and disability theorists, have long been considering aspects of the same questions exacerbated by problematic AI. We can and must rely on a broader variety of perspectives if we are to shift the course of AI’s future toward more inclusive systems.

Building on its research on public interest AI, the HIIG’s AI & Society Lab puts a strong focus on questions in this area: How can AI and other technologies be made more approachable for everyone, to ensure people better understand AI systems and how they affect them? What do particularly marginalised communities wish to change about AI, and how can we support them in doing so?  

How Intersectional AI can help

The Intersectional AI Toolkit helps answer these questions by connecting communities in order to create introductory guides to AI from multiple, approachable perspectives. Developed by Sarah Ciston during a virtual fellowship at the AI & Society Lab, the Intersectional AI Toolkit argues that anyone can and should be able to understand what AI is and what AI ought to be. 

Intersectionality describes how power operates structurally, and how multiple forms of discrimination have compounding, interdependent effects. American lawyer Kimberlé Crenshaw (1989) introduced the term, using the image of an intersection where paths of power cross to illustrate the interwoven nature of social inequalities.

As imagined by this toolkit, Intersectional AI will bring decades of work on intersectional ideas, ethics, and tactics to the issues of inequality faced by AI. By drawing on established ideas and practices, and understanding how to combine them, intersectionality can help reshape AI in fundamental ways. Through its layered, structural approach, Intersectional AI connects the dots between concepts, as seen from different disciplines and operating across systems, so that individuals and researchers may be able to help address the gaps that others could not see.

A toolkit that helps us think about intersectionality and code inclusive AI

The Intersectional AI Toolkit is a collection of small magazines (or zines) that offer practical, accessible guides to both AI and intersectionality. They are written for engineers, artists, activists, academics, makers and anyone who wants to understand the automated systems that affect them. By sharing key concepts, tactics, and resources, they serve as jumping-off points to inspire readers’ own further research and conversation across disciplines and communities, asking questions like “Is decolonizing AI possible?” or “What does it mean to learn to code?”

The toolkit is available as a digital resource that continues to grow with community contributions, as well as printable zines that can be folded, shared, and discussed offline. With issues like the two-sided glossary “IAI A-to-Z”, the strategy flashcards “Tactics for Intersectional AI”, and the skeptics’ guide “Help Me Understand Intersectionality”, the zine collection focuses on using plain language and fostering tangible impact.

This toolkit is not the first or only resource on intersectionality or AI. Instead, it gathers some of the amazing people, ideas, and forces working to re-examine the foundational assumptions built into these technologies, such as Catherine D’Ignazio and Lauren Klein’s work on “Data Feminism” or Ruha Benjamin’s “Race After Technology”. It also looks at which people are (not) involved when AI is developed, and which processes and safeguards do or should exist. In this way, it helps us understand power and aims to link AI development back to democratic processes.

Why is the future of AI intersectional?

Current approaches to AI fail to address two major problems. First: those who create AI systems, from code to policy to infrastructure, fail to listen to the needs or wisdom of the marginalised communities most injured by those systems. Second: the current language and tools of AI put up intimidating barriers that prevent outsiders from understanding, building, or changing these systems. If we want improved, inclusive AI systems, we must consider a broader range of people’s needs as well as a broader range of people’s knowledge. Otherwise we face a future that perpetuates the same problems under the guise of fairness and automation.

The Intersectional AI Toolkit tries to intervene by facilitating much-needed exchange between different groups around these issues. The AI & Society Lab hosted the launch of the Toolkit as an Edit-a-thon workshop, in order to gain multiple valuable perspectives through diverse public participation. Over the coming months, more digital and in-person zine-making workshops are planned to keep building the Toolkit while advocating for intersectional approaches to AI in sectors like AI governance.

All AI systems are socio-technical; they interconnect humans and machines. Intersectionality reminds us how power imbalances affect those connections. By addressing the gap between those who want to understand and shape AI, and those who already make and regulate it, Intersectional AI can help us find the shared language we need to reimagine AI together. 

References

Crenshaw, K. (1989). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum, 1989(1), 139–167.

tl;dr

The Intersectional AI Toolkit will remain accessible for contributions and comments at intersectionalai.com.
The Intersectional AI Toolkit Edit-a-thon took place on 1 September 2021 and was hosted by HIIG’s AI & Society Lab in collaboration with our partners MOTIF, netzforma* e.V., SUPERRR and the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI).

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Sarah Ciston

Former Associated Researcher: AI & Society Lab

Daniela Dicks

Former Co-Lead & spokesperson: AI & Society Lab
