20 March 2018 | doi: 10.5281/zenodo.1204395

Omens and algorithms: A response to Elena Esposito

Can algorithms actually predict the future? And if so, does this make them the gods of our modern society? In her lecture ‘Future and uncertainty in the digital society’, Elena Esposito questions these assumptions and gives us reason to be concerned about over-reliance on prediction. HIIG researcher Rebecca Kahn responds to the lecture, and argues that by creating algorithms in our own image, we risk creating monstrosities.

Faith in algorithms

Are algorithms a substitute for god? Do they know things that people don’t and can’t know? And if so, then who are their priests – which figures have the knowledge to interpret their predictions? These were some of the provocations posed by Elena Esposito in her lecture ‘Future and Uncertainty in the Digital Society’.


While the use of religious terms such as ‘god’ and ‘priest’ may have made some of us uncomfortable, they were entirely appropriate in the context. Many people are more likely to put faith in an algorithm than in the traditional idea of an omnipotent god. Esposito’s lecture explored the relationships between algorithmic prediction and the ancient art of divination, both practices which claim to make predictions about the future based on the processing of data or information gleaned from the present day.

In the era of boundless data and unlimited computing capacity, algorithmic prediction offers the promise of a certainty free of subjectivity, with correlations computed at scale and without the uncertainties created by sampling and generalisation. Rather than providing a broad view of the overall picture, algorithmic prediction offers a specific ‘truth’ tailored to the individual as a result of ‘their’ data, and regardless of context.

Revival of a divinatory tradition

In the ancient world, divination was a mechanism for seeing into a future which was unknowable to most humans, but was pre-existing and determined, and most significantly, known to the gods. From the Latin divinare, meaning “to foresee” or “to be inspired by a god”, divination was (and in many places still is) practiced by priests, oracles and soothsayers who read and interpret certain omens and signs.

Esposito argues that algorithmic prediction revives many of the characteristics of the divinatory tradition. Unlike science, which is interested in explaining why a phenomenon occurs, divination and algorithmic prediction have no interest in explaining ‘why’ – they focus on the ‘what’. They are invoked in response to a particular reality, but do not try to understand how it has come about. Rather, both mechanisms share the goal of producing a response which can be coordinated with the cosmic or algorithmic order, and of producing a future which optimises the use of available resources. In the ancient world, this may have been knowing when to plant crops or when to go to war. In the present time, it may be automated fraud detection, pre-emptive illness prevention or predictive policing.

In this context, it is easy to conflate the idea of the algorithmic prediction and the idea of an all-knowing god. However, Esposito pointed to one critical difference between the results of algorithmic prediction and divination – namely the context in which they take place, and the temporal aspect of this context. In the ancient world, divination depended on the unavoidability of its outcomes. These were essential for preserving the existence of an invisible higher order and a pre-established, already existent (although unknown) future. Algorithms, on the other hand, cannot predict anything more than a present-future, based only on the data which is used to power them. They are unable to know what might happen in a slightly more distant future, in which their predictions are acted upon. Put another way, while divination needed to produce true outcomes in order to justify the practice, algorithms aren’t required to be true to prove their value – they just have to be accurate.

In the ancient world, the inevitability of the prediction proved the existence of a higher order. In our time, the accuracy of the prediction is not a reflection of the all-encompassing ability of the algorithm, but proof only that it knows its own data. And here is the critical issue, which Esposito touched upon, and which is increasingly causing unease among scholars and researchers: we know that data is not, and can never be, neutral[1].


AI bias

Esposito’s anxieties dovetail with other red flags raised by those who work on the theoretical and practical implications of predictive algorithms, Big Data and AI for our society. Just as successful divination depended on balancing accurate predictions with just the right amount of mystique about the methods of prediction, the black-box nature of algorithmic prediction and deep machine learning depends on the majority of people accepting the results without questioning too closely the mechanics which created them. However, issues such as algorithmic bias, which may already be prevalent in some AI systems[2], are a reminder that if machines are given biased data, they will produce biased results. These biases may not be intentional, or even visible, but they affect the accuracy of the prediction in significant ways.
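To make this ‘bias in, bias out’ dynamic concrete, here is a minimal sketch in Python. It is purely illustrative – the groups, numbers and threshold are invented, and the ‘model’ is nothing more than learned base rates – but it shows how a skew in the training sample passes straight through into the predictions.

```python
from collections import Counter

# Hypothetical training data: (group, flagged) pairs. Group B is
# flagged far more often in this sample than group A.
training_data = ([("A", False)] * 90 + [("A", True)] * 10
                 + [("B", False)] * 50 + [("B", True)] * 50)

def fit_base_rates(data):
    """Learn P(flagged | group) from the training sample."""
    totals, flagged = Counter(), Counter()
    for group, label in data:
        totals[group] += 1
        flagged[group] += label  # True counts as 1
    return {g: flagged[g] / totals[g] for g in totals}

rates = fit_base_rates(training_data)

def predict(group, threshold=0.3):
    """Flag anyone whose group's learned rate exceeds the threshold."""
    return rates[group] > threshold

print(rates)                       # {'A': 0.1, 'B': 0.5}
print(predict("A"), predict("B"))  # False True: the sample's skew is reproduced
```

The model is ‘accurate’ in the narrow sense Esposito describes – it faithfully reports its own data – yet if the training sample over-represents one group, every prediction inherits that distortion.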

Many people of colour who uploaded selfies to the recent Google Arts & Culture selfie-matching service noticed that the results were heavily skewed towards images of non-white people represented in exoticised ways, and some reported having their race misread by the algorithm[3]. This example illustrates the complex nature of the problem: the dataset of cultural heritage materials used by Google is heavily Eurocentric to begin with, and the creators of the algorithm may have been unaware of that bias (or may not have accounted for it) before releasing the tool to the public. The algorithm itself is not capable of responding to the contextual complexities it highlighted, resulting in a reinforcement of the representative bias in the results.

A less benign example of this opacity, which researchers and civil society groups are increasingly concerned about, is the use of algorithms in predictive policing. A 2016 study by ProPublica[4] showed that algorithmic prediction, as well as being less than accurate when it came to predicting whether individuals classed as “high-risk” were in fact likely to commit certain crimes, also falsely flagged individuals of colour as likely future criminals at almost twice the rate of white individuals.
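To see what that disparity means in practice, the sketch below reproduces the kind of audit ProPublica performed: comparing false positive rates – people flagged as high-risk who did not go on to reoffend – across groups. The records are made up for illustration and are not the study’s actual data.

```python
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("white", True, False), ("white", False, False), ("white", True, True),
    ("white", False, False), ("black", True, False), ("black", True, False),
    ("black", True, True), ("black", False, False),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were flagged high-risk."""
    negatives = [r for r in rows if not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for group in ("white", "black"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
# white 0.33
# black 0.67  -- the same model, twice the false positive rate
```

A model can look ‘accurate’ on aggregate while distributing its errors very unevenly – which is exactly why overall accuracy is the wrong lens for judging these systems.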

Algorithmic bias, and the overall lack of will on the part of tech companies to address the risk this poses in real-world applications[5], is a real cause for concern. The influence of algorithms on our day-to-day knowledge-gathering practices means that their bias has the potential to subtly reinforce existing stereotypes, as explored by Dr Safiya Umoja Noble in her book Algorithms of Oppression (NYU Press, 2018). As Esposito put it: “About the future they produce, algorithms are blind.” And it is in this blindness, and in society’s blindness to it, that the risk is located. If we don’t spend time considering the ‘how’ of algorithms, and critically questioning the ways in which we deploy them, they risk duplicating and mirroring our worst traits.

References

[1] Boyd, Keller & Tijerina (2016). Supporting Ethical Data Research: An Exploratory Study of Emerging Issues in Big Data and Technical Research. Working Paper, Data & Society.
https://www.datasociety.net/pubs/sedr/SupportingEthicsDataResearch_Sept2016.pdf

[2] https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/

[3] https://mashable.com/2018/01/16/google-arts-culture-app-race-problem-racist/#1htlxqJqpsqR

[4] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[5] https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/


Rebecca Kahn completed her PhD in the Department of Digital Humanities at King’s College London in 2017. Her research examines the impact and effect of digital transformation on cultural heritage institutions, their documentation, data models and internal ontologies, as well as how the identity of an institution can be traced and observed throughout its digital assets.


This article is a response to Elena Esposito’s lecture in our lecture series Making Sense of the Digital Society.


This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.


Rebecca Kahn, Dr.

Associated Researcher: Knowledge & Society
