02 June 2021| doi: 10.5281/zenodo.4811512

Myth: AI understands me, but I can’t understand it

Everyone can and should understand how AI works, so that – rather than be intimidated or misled by algorithmic decision-making – we can contribute multiple perspectives to designing and implementing the systems that impact us all differently.

Myth

AI understands me, but I can’t understand it.

AI is NOT smarter than us. AI should be understandable and accessible.

Watch the talk

Material

Presentation slides
KEY LITERATURE

Crawford, K. & Paglen, T. (2019, September 19). Excavating AI: The Politics of Images in Machine Learning Training Sets.

Timnit Gebru. (2021, April 14). The Hierarchy of Knowledge in Machine Learning & Related Fields and Its Consequences.

Zubarev, V. (2018, November 21). Machine Learning for Everyone.

ADDITIONAL LITERATURE

Griffith, C. (2017). Visualizing Algorithms.

Kogan, G. (n.d.). Neural networks. Retrieved 18 May 2021.

McPherson, T., & Parham, M. (2019, October 24). ‘What is a Feminist Lab?’ Symposium.
UNICORN IN THE FIELD

Algorithmic Justice League
Color Coded LA
Data Nutrition Project
School of Machines, Making, & Make-Believe

About the author

Sarah Ciston, Fellow | HIIG

Sarah Ciston (she/they) is a Virtual Fellow at the Humboldt Institute for Internet and Society, and a Mellon Fellow and PhD Candidate in Media Arts + Practice at University of Southern California. Their research investigates how to bring intersectionality to artificial intelligence by employing queer, feminist, and anti-racist ethics and tactics. They lead Creative Code Collective—a student community for co-learning programming using approachable, interdisciplinary strategies. Their projects include a machine-learning interface that ‘rewrites’ the inner critic and a chatbot that explains feminism to online misogynists. They are currently developing a library of digital-print zines on Intersectional AI.

@sarahciston


Why, AI?

This post is part of our project “Why, AI?”, a learning space that helps you find out more about the myths and truths surrounding automation, algorithms, society and ourselves. It is continuously being filled with new contributions.

Explore all myths


This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.


