169 HD – AI is neutral – 1
10 May 2021| doi: 10.5281/zenodo.4745653

Myth: AI will end discrimination

There are hopes that AI, as an allegedly objective, state-of-the-art technology, may overcome human weaknesses. Some people believe that AI might gain privileged access to knowledge, free of human biases and errors, and thus end discrimination by making decisions that are, on the whole, fair and objective.
We approach the de-mystification of this claim by looking at concrete examples of how AI (re)produces inequalities and by connecting them to several aspects that illustrate socio-technical entanglements. Drawing on a range of critical scholars, we argue that this simplifying myth can even be dangerous, and we point out what to do about it.

Myth

AI will end discrimination (or is at least less discriminatory than fallible and unfair human beings).

AI is deeply rooted in society and therefore not separable from its structures of discrimination. Due to this socio-technical embeddedness, AI cannot make discrimination disappear by itself.
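To make this mechanism concrete, consider a minimal, purely illustrative sketch (not taken from the talk; the data, group names and numbers below are all synthetic and hypothetical). A standard classifier is trained on data in which one group is underrepresented and drawn from a different distribution; measuring accuracy per group then reveals the kind of disparity Buolamwini and Gebru (2018) document for commercial gender classifiers.

```python
# Illustrative sketch only: a model trained on imbalanced data can show
# systematically worse accuracy for the underrepresented group, even though
# group membership never appears in the code. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; `shift` moves the group's distribution, so a model
    # fitted mostly to the majority group generalises worse to this one.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# The majority group dominates the training data (9:1 imbalance).
X_a, y_a = make_group(900, shift=0.0)   # hypothetical "group A"
X_b, y_b = make_group(100, shift=1.5)   # hypothetical "group B"
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

# Evaluate on fresh samples from each group separately.
for name, (X, y) in [("group A", make_group(1000, 0.0)),
                     ("group B", make_group(1000, 1.5))]:
    accuracy = (model.predict(X) == y).mean()
    print(f"{name}: accuracy {accuracy:.2f}")
```

The specific numbers do not matter; the pattern does. Nothing in the training code refers to group membership, yet the model's error rates differ systematically because the imbalance is already encoded in the data it learns from.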

Watch the talk

Material

Presentation slides
CORE READINGS

Benjamin, R. (Ed.) (2019a): Captivating Technology. Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life. Durham: Duke University Press.

Benjamin, R. (2019b): Race after technology: abolitionist tools for the new Jim code. Cambridge, UK: Polity.

Criado-Perez, C. (2020): Unsichtbare Frauen. Wie eine von Daten beherrschte Welt die Hälfte der Bevölkerung ignoriert [German edition of Invisible Women: Exposing Data Bias in a World Designed for Men]. München: btb Verlag.

D’Ignazio, C.; Klein, L. F. (2020): Data Feminism. Strong Ideas series. Cambridge, MA / London: The MIT Press.

Buolamwini, J.; Gebru, T. (2018): Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In: Proceedings of Machine Learning Research 81. Paper presented at the Conference on Fairness, Accountability, and Transparency, 1–15.

ADDITIONAL READINGS

Eubanks, V. (2017): Automating inequality. How high-tech tools profile, police, and punish the poor. First Edition. New York, NY: St. Martin’s Press.

O’Neil, C. (2016): Weapons of math destruction. How big data increases inequality and threatens democracy. First edition. New York: Crown.

Zuboff, S. (2020): The Age of Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power. First Trade Paperback Edition. New York: PublicAffairs.

Cave, S.; Dihal, K. (2020): The Whiteness of AI. In: Philosophy & Technology 33(4), 685–703.
UNICORN IN THE FIELD

Epicenter.works
AlgorithmWatch
netzforma* e.V.

About the authors

Miriam Fahimi, Digital Age Research Center (D!ARC), University of Klagenfurt

Miriam, MA BSc, is a Marie Skłodowska-Curie Fellow within the ITN-ETN Marie Curie Training Network “NoBIAS – Artificial Intelligence without Bias”, funded by the EU through Horizon 2020, at the Digital Age Research Center (D!ARC), University of Klagenfurt. She is also a PhD candidate in Science and Technology Studies at the University of Klagenfurt, supervised by Katharina Kinder-Kurlanda. Her research interests include algorithmic fairness, philosophy of science, science and technology studies, and feminist theory.

@feminasmus

Phillip Lücking, Gender/Diversity in Informatics Systems (GeDIS), University of Kassel

Phillip is a research associate and PhD candidate at the University of Kassel. He graduated from Bielefeld University with an MSc in Intelligent Systems. His research interests encompass machine learning and robotics in relation to their societal impacts, as well as the question of how these technologies can be used for social good.


Why, AI?

This post is part of our project “Why, AI?”. It is a learning space that helps you find out more about the myths and truths surrounding automation, algorithms, society and ourselves. It is continuously being filled with new contributions.

Explore all myths


This post represents the views of the authors and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
