Myth: AI treats everyone equally and makes predictions fairly
Algorithmic power accumulates in the hands of data companies that reinforce new colonial dynamics. By looking at the Cambridge Analytica scandal and the so-called content managers based in Manila, this talk tests the myth of equal treatment in AI and predictive analytics. I argue that data companies aim to control and change the political and social climates of distant geographies by using both analogue and digital infrastructures.
Myth
AI treats everyone equally and makes predictions fairly.
AI is expanding political hierarchies between the global north and the global south.
Watch the talk
Materials
Presentation slides
CORE READINGS
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Benjamin, R. (2020). Race After Technology: Abolitionist Tools for the New Jim Code. Social Forces, 98(4), 1–3. Read here.
Tuzcu, P. (2016). “Allow access to location?”: Digital feminist geographies. Feminist Media Studies, 16(1), 150–163. Read here.
Tuzcu, P. (2020). Cyberkolonialismus und dekoloniale feministische Applikationen. In B. Hoffarth, E. Reuter, & S. Richter (Eds.), Geschlecht und Medien – Räume, Deutungen, Repräsentationen (pp. 126–148). Campus Verlag.
About the Author
Pinar Tuzcu
Postdoctoral fellow at the Department of Sociology of Diversity at the University of Kassel, Germany
Pinar Tuzcu completed her PhD in 2015 in the Sociology of Diversity department at the University of Kassel. Between 2015 and 2019 she worked as a postdoctoral researcher on various projects, and from October 2019 to April 2020 she held an interim professorship in General Sociology at Justus-Liebig-Universität Giessen. Since April 2020 she has worked, as one of the main applicants, as coordinator of the project “Re:coding Algorithmic Culture”, funded by the Volkswagen Foundation.
Why, AI?
This post is part of our project “Why, AI?”, a learning space that helps you find out more about the myths and truths surrounding automation, algorithms, society and ourselves. It is continuously being filled with new contributions.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.