
How (un-)intelligent is our collaboration with AI?

02 May 2019 | doi: 10.5281/zenodo.3087912

Discussing whether a machine is intelligent is relevant, but more urgent is the question of whether our collaborations with machines are intelligent. An intelligent collaboration with Artificial Intelligence (AI) requires complementary traits, since there is no point in teamwork when all actors possess similar qualities. Consciousness, common sense, intuition, intentionality, imagination, morality, emotional intelligence and phenomenological experience are capacities that computers don’t have. Contrary to computers, humans can learn things that natural selection did not pre-program us with. Siri Beerends stresses in her article that these human capacities are important to complement artificial intelligence and to guide further developments in AI.

Unintelligent collaborations with AI

In her book The Age of Surveillance Capitalism, philosopher Shoshana Zuboff shows how the monetisation of data, captured by monitoring and predicting people’s behaviour on- and offline, is shaping our environments, behaviours and choices without us being aware of it. The moment we switch on our devices, the algorithms of the ‘Big Five’ are in charge: gluing us to our screens and keeping us clicking, liking and swiping, generating the data fuel that trains their artificial intelligence.

Based on our clicks, algorithms pin us down in categories, for example ‘white-xenophobic-heterosexual-cat lover’, which form the basis for all recommendations we are shown. As a result, our capacity to explore alternative routes and redefine our identity is narrowed down. Of course, algorithms can do great things. Stanford researchers, for example, have developed an algorithm that diagnoses pneumonia better than radiologists. But when it comes to predicting human behaviour, there are many proven pitfalls. Mathematician Cathy O’Neil has put these on the international agenda with her bestseller Weapons of Math Destruction. Algorithms mix up correlation and causality, and they can suffer from biased datasets and feedback loops. The models are based on majorities and averages, excluding minority perspectives and automating inequality. By cramming non-measurable aspects into quantified models, we lose sight of ambiguity and diversity. These problems have not stopped governments from implementing algorithmic predictions in smart cities, predictive policing and social welfare. Instead of judging us on the basis of what we do, governments and companies judge us on the basis of what we might do. Although these predictions do not reflect our behaviour, they guide how we are approached in the on- and offline world.
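The feedback loop mentioned above can be made concrete with a toy simulation (a hypothetical sketch for illustration, not taken from O’Neil’s book or the original article): a predictive system sends attention to wherever the most incidents have been recorded, and the added attention produces more records there, confirming its own prediction.

```python
# Toy model of a prediction feedback loop (illustrative only):
# patrols are sent to the district with the most *recorded* incidents,
# and observation creates new records only where patrols are.
def run_feedback_loop(recorded, patrols_per_round=10, rounds=5):
    """recorded: initial incident counts per district."""
    recorded = list(recorded)
    for _ in range(rounds):
        # The model allocates all patrols to the "riskiest" district...
        target = recorded.index(max(recorded))
        # ...so new incidents are recorded only there.
        recorded[target] += patrols_per_round
    return recorded

# Two districts with identical true crime rates; district 0 merely starts
# with slightly more recorded incidents (e.g. historical over-policing).
print(run_feedback_loop([12, 10]))  # → [62, 10]
```

The prediction looks ever more accurate to the system, even though the gap between the districts is an artefact of where it chose to look.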

Machine-like humans

According to Zuboff, humans are slowly turning into automatons: becoming just as predictable and programmable as machines. Alongside Zuboff, a growing number of scientists problematise data-driven technologies. Their advice is to stop worrying about a superintelligence that will replace us, and to start discussing the devices and algorithms that are already replacing human decision making without proper understanding, interpretation or social intelligence. Smartphones, health apps, wearables, digital assistants and smart toys are not neutral devices. They represent social and emotional regimes that eliminate irrational behaviour and encourage us to behave in accordance with the moral standards programmed into them: reduce your sugar intake, call a friend, etc., implying that we can continuously control our behaviour in a rational manner. To determine which neighbourhoods are unsafe, what we want to watch and listen to, or who we want to date or hire for a job, we no longer have to think or rely on our senses: we outsource these decisions to algorithms that guide our choices and confirm our own worldviews. Why opt out if you can spend your entire life in a warm bath of filter bubbles and quantified simplifications?

Intelligent collaborations with AI

The answer: because it makes us more machine-like and less able to establish intelligent, complementary collaborations with AI. The field of AI derives mainly from a mathematical approach, yet AI is now used in many non-mathematical domains. Engineers therefore need to focus more on divergent cultural logics, respecting the ambiguity of our everyday worlds.

A complementary collaboration requires a better understanding of our own intelligence as well as of artificial intelligence. We need more nuanced understandings of what AI is and of how our perceptions of AI are shaped by marketing messages from Silicon Valley and the commercial tech industry. It is important to have interdisciplinary discussions about what we think of as intelligence and consciousness, recognising that surveillance capitalism and data-driven technologies are also changing our intelligence and consciousness. Provided we stop optimising people for data markets and start investing in machine as well as human learning, AI will enable us to outsource repetitive tasks, creating more room for value and significance in our own work.

Lastly, and this is rarely mentioned in the context of AI: we need to consider the impact of our expanding digital ecosystem on the environment. IoT devices contribute to energy savings, but the total sum of data storage centres, satellite pollution and high-tech consumerism adds to the world’s energy bill. Experts are divided on whether AI applications spell doom or salvation for the environment, but they agree that we can’t wait too long to find out.

Read the full article here!

Siri Beerends is a cultural sociologist, writer and researcher at medialab SETUP, and is preparing a PhD at the University of Twente.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact Siri Beerends.



