29 October 2019

Busted: AI will fix it

There is a strong belief on the internet that AI will solve basically all of future society’s problems, if we just give it enough time. Christian Katzenbach took a close look at this myth to determine whether there is truth to it.


In time for this year’s Internet Governance Forum (IGF), Matthias C. Kettemann (HIIG) and Stephan Dreyer (Leibniz-Institut für Medienforschung | Hans-Bredow-Institut (HBI)) will be publishing a volume called “Busted! The Truth About the 50 Most Common Internet Myths”. As an exclusive sneak peek, we are publishing a selection of these myths here on our blog – some of them busted by HIIG’s own researchers and associates.

The entire volume will be accessible soon at internetmyths.eu.


Myth

“Artificial intelligence” (AI) is the key technological development of our time. AI will not only change how we live, communicate, work and travel tomorrow, AI-based solutions will fix the fundamental problems of our societies from the detection of illnesses and misinformation to online hate speech and urban mobility.

Busted

The current hype about AI is strongly connected to the myth that AI will by itself solve key problems of our societies. In the 2018 US congressional hearings, Facebook’s CEO Mark Zuckerberg used phrases such as “AI will fix this” and “in the future we will have technology that addresses these issues” more than a dozen times when pressed on issues of misinformation, hate speech and privacy. In other sectors, businesses and technologists promise that AI-powered technologies and products will detect cancer at early stages, identify tax fraud patterns, guide vehicles efficiently through urban areas and identify antisocial and criminal behaviour in public spaces.

The narrative that technology will fix social problems is a recurrent theme in the history of technology and society. The “technological fix” (Rudi Volti) seeks functional solutions for problems that are social and political in nature: autonomous vehicles might drive more safely through the city (by some criteria), but will not provide urban mobility to broad segments of the population. Filtering software might get better at identifying misinformation and hate speech, but it will not eradicate them, and it will always be unable to strike the perfect (and widely accepted) balance between freedom of expression and harmful speech. These problems are fundamentally social in nature, so there is no single right answer that can be technologically implemented.

Talk about ‘AI fixing things’ is also misleading because it obfuscates the human labour and the social relations that the seemingly autonomously operating technologies build upon. AI-based products don’t just appear; they are man-made. Typical AI-powered devices and services such as autonomous vehicles and image-detection solutions are products of companies with commercial interests and normative assumptions – and these are inscribed into the products themselves. What is more, AI products are the result of immense amounts of human labour, ranging from developing complex mathematical models to mundane activities such as training image-recognition AIs picture by picture.

Consequently, even if AI-powered services and devices one day perform perfectly according to preset criteria, the phrase “AI will fix this” will still be utterly misleading. Many of these problems are fundamentally social in nature and do not yield to a functional solution. AI technology is not an autonomous agent but is constructed by humans and society.

Truth

While AI cannot fix everything, humans using AI might fix some things. Rapid developments in AI technologies provide opportunities for many stakeholders to be more responsive to societal challenges. These technologies will contribute to innovations across many societal sectors and change the way we live, communicate, work and travel – not automatically for the public good, though.


Sources

Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: PublicAffairs, 2013).
Julia Powles and Helen Nissenbaum, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence, Medium, 8 December 2018, https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Christian Katzenbach, Prof. Dr.

Associated researcher: The evolving digital society
