20 March 2018 | doi: 10.5281/zenodo.1204395

Omens and Algorithms: A Response to Elena Esposito

Can algorithms really predict the future? Would that make them the gods of our modern society? In her lecture ‘Future and Uncertainty in the Digital Society’, Elena Esposito questions these assumptions and warns against placing too much trust in algorithms and their predictions. HIIG researcher Rebecca Kahn responds to the lecture: if we create algorithms in our own image, we risk bringing forth monstrosities.

Faith in algorithms

Are algorithms a substitute for god? Do they know things that people don’t and can’t know? And if so, then who are their priests – which figures have the knowledge to interpret their predictions? These were some of the provocations posed by Elena Esposito in her lecture ‘Future and Uncertainty in the Digital Society’.


While religious terms such as ‘god’ and ‘priest’ may have made some of us uncomfortable, they were entirely appropriate in this context. Many people are more likely to put faith in an algorithm than in the traditional idea of an omnipotent god. Esposito’s lecture explored the relationship between algorithmic prediction and the ancient art of divination, both practices which claim to make predictions about the future based on the processing of data or information gleaned from the present.

In the era of boundless data and unlimited computing capacity, algorithmic prediction offers the possibility of a certainty free of subjectivity: correlations computed at scale, without the uncertainties created by sampling and generalisation. Rather than providing a broad view of the overall picture, algorithmic prediction offers a specific ‘truth’ tailored to the individual on the basis of ‘their’ data, regardless of context.

Revival of a divinatory tradition

In the ancient world, divination was a mechanism for seeing into a future which was unknowable to most humans, but which was pre-existing, determined and, most significantly, known to the gods. From the Latin divinare, meaning ‘to foresee’ or ‘to be inspired by a god’, divination was (and in many places still is) practiced by priests, oracles and soothsayers who read and interpret certain omens and signs.

Esposito argues that algorithmic prediction revives many of the characteristics of the divinatory tradition. Unlike science, which is interested in explaining why a phenomenon occurs, divination and algorithmic prediction have no interest in explaining the ‘why’ – they focus on the ‘what’. They are invoked in response to a particular reality, but do not try to understand how it has come about. Rather, both mechanisms share the goal of producing a response which can be coordinated with the cosmic or algorithmic order, and of producing a future which optimises the use of available resources. In the ancient world, this may have meant knowing when to plant crops or when to go to war. In the present, it may be automated fraud detection, pre-emptive illness prevention or predictive policing.

In this context, it is easy to conflate the idea of algorithmic prediction with the idea of an all-knowing god. However, Esposito pointed to one critical difference between the results of algorithmic prediction and divination – namely, the context in which they take place and the temporal aspect of that context. In the ancient world, divination depended on the unavoidability of its outcomes. These were essential for preserving the existence of an invisible higher order and a pre-established, already existent (although unknown) future. Algorithms, on the other hand, cannot predict anything more than a present-future, based only on the data used to power them. They are unable to know what might happen in a slightly more distant future, in which their predictions are acted upon. Put another way, while divination needed to produce true outcomes in order to justify the practice, algorithms are not required to be true to prove their value – they just have to be accurate.

In the ancient world, the inevitability of the prediction proved the existence of a higher order. In our time, the accuracy of the prediction is not a reflection of the all-encompassing ability of the algorithm, but proof only that it knows its own data. And here lies the critical issue, which Esposito touched upon and which is increasingly causing unease among scholars and researchers: we know that data is not, and can never be, neutral[1].


The AI bias

Esposito’s anxieties dovetail with other red flags raised by those who work on the theoretical and practical implications of predictive algorithms, Big Data and AI for our society. Just as successful divination depended on balancing accurate predictions with just the right amount of mystique about the methods of prediction, the black-box nature of algorithmic prediction and deep machine learning depends on the majority of people accepting the results without questioning too closely the mechanics which created them. However, issues such as algorithmic bias, which may already be prevalent in some AI systems[2], are a reminder that if machines are given biased data, they will produce biased results. These biases may not be intentional, or even visible, but they affect the accuracy of the prediction in significant ways.
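To make this dynamic concrete, here is a deliberately crude sketch in Python. The groups, labels and proportions are entirely invented for illustration; the ‘model’ is nothing more than the most frequent label seen per group, but the same logic scales up to real machine learning systems: a skew in the training data becomes a skew in the predictions.

```python
from collections import Counter

# Hypothetical training data as (group, label) pairs. By construction,
# group "A" is heavily over-represented among the "flagged" labels.
training_data = (
    [("A", "flagged")] * 80 + [("A", "not flagged")] * 20 +
    [("B", "flagged")] * 20 + [("B", "not flagged")] * 80
)

# "Train" the crudest possible model: count labels per group...
counts = {}
for group, label in training_data:
    counts.setdefault(group, Counter())[label] += 1

# ...and always predict the most frequent label for each group.
model = {group: c.most_common(1)[0][0] for group, c in counts.items()}

# The skew in the data is now the skew in the prediction: everyone in
# group "A" is flagged, regardless of individual circumstances.
print(model)  # {'A': 'flagged', 'B': 'not flagged'}
```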

Many people of colour who uploaded selfies to the recent Google Arts & Culture selfie-matching service noticed that the results were heavily skewed towards images of non-white people represented in exoticized ways, and some reported having their race misread by the algorithm[3]. This example illustrates the complex nature of the problem: the dataset of cultural heritage materials used by Google is heavily Eurocentric to begin with, and the creators of the algorithm may have been unaware of that bias (or failed to account for it) before releasing the tool to the public. The algorithm itself is not capable of responding to the contextual complexities it highlighted, resulting in a reinforcement of the representative bias in the results.

A less benign example of this opacity, about which researchers and civil society groups are increasingly concerned, is the use of algorithms in predictive policing. A study by ProPublica in 2016[4] showed that algorithmic prediction, as well as being less than accurate when it came to predicting whether individuals classed as ‘high-risk’ were in fact likely to commit certain crimes, also falsely flagged individuals of colour as likely future criminals at almost twice the rate of white individuals.
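The disparity the study described can be expressed as a simple statistic: the false positive rate per group, i.e. the share of people who did not go on to reoffend but were nevertheless flagged as high-risk. The minimal sketch below uses toy records invented to mimic the roughly two-to-one ratio ProPublica reported – not the actual data from the study.

```python
# Toy records: (group, predicted_high_risk, actually_reoffended).
records = [
    ("black", True,  False), ("black", True,  False),
    ("black", False, False), ("black", True,  True),
    ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", True,  True),
]

def false_positive_rate(group):
    # Among people in the group who did NOT reoffend, what share
    # was wrongly flagged as high-risk?
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# black 0.67
# white 0.33  -- one group wrongly flagged at twice the rate of the other
```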

Algorithmic bias, and the overall lack of will on the part of tech companies to address the risk this poses in real-world applications[5], is a real cause for concern. The influence of algorithms on our day-to-day knowledge-gathering practices means that their bias has the potential to subtly reinforce existing stereotypes, as explored by Dr Safiya Umoja Noble in her book Algorithms of Oppression (NYU Press, 2018). As Esposito put it: ‘About the future they produce, algorithms are blind.’ And it is in this blindness, and in society’s blindness to it, that the risk is located. If we don’t spend time considering the ‘how’ of algorithms, and critically questioning the ways in which we deploy them, they risk duplicating and mirroring our worst traits.

References

[1] Boyd, Keller & Tijerina (2016). Supporting Ethical Data Research: An Exploratory Study of Emerging Issues in Big Data and Technical Research. Working paper, Data & Society. https://www.datasociety.net/pubs/sedr/SupportingEthicsDataResearch_Sept2016.pdf

[2] https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/

[3] https://mashable.com/2018/01/16/google-arts-culture-app-race-problem-racist/#1htlxqJqpsqR

[4] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[5] https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/


Rebecca Kahn completed her PhD in the Department of Digital Humanities at King’s College London in 2017. Her research examines the impact of digital transformation on cultural heritage institutions, their documentation, data models and internal ontologies. It also examines how the identity of an institution can be traced and observed through its digital assets.


This article is a response to Elena Esposito’s lecture in our lecture series Making Sense of the Digital Society.


This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.


Dr Rebecca Kahn

Associated Researcher: Knowledge & Society
