Knowledge about our networked world




How intelligent is our use of AI?

2 May 2019

The debate over whether a machine is intelligent is relevant. But the more pressing question is whether our collaboration with machines is intelligent. An intelligent collaboration with artificial intelligence (AI) requires complementary traits, since there is no benefit in teamwork when all actors bring similar qualities. Consciousness, common sense, intuition, intentionality, imagination, morality, emotional intelligence and phenomenological experience are capacities that computers lack. Unlike computers, humans can learn things that were not pre-programmed by natural selection. In her article, Siri Beerends emphasises that these human capacities are important for complementing artificial intelligence and for shaping its further development.

Unintelligent Collaborations with AI

In her book The Age of Surveillance Capitalism, philosopher Shoshana Zuboff shows how the monetisation of data, captured by monitoring and predicting people’s behaviour on- and offline, is shaping our environments, behaviours and choices without our awareness. The moment we switch on our devices, the algorithms of the ‘Big Five’ are in charge: gluing us to our screens, keeping us clicking, liking and swiping, generating the data fuel that trains their artificial intelligence.

Based on our clicks, algorithms pin us down to categories, for example ‘white-xenophobic-heterosexual-catlover’, which then form the basis for every recommendation we are shown. As a result, our capacity to explore alternative routes and redefine our identity narrows. Of course, algorithms can do great things. Stanford researchers, for example, have developed an algorithm that diagnoses pneumonia better than radiologists. But when it comes to predicting human behaviour there are many proven pitfalls. Mathematician Cathy O’Neil has put these on the international agenda with her bestseller Weapons of Math Destruction. Algorithms mix up correlation and causality, and they can suffer from biased datasets and feedback loops. The models are based on majorities and averages, excluding minority perspectives and automating inequality. By cramming non-measurable aspects into quantified models, we lose sight of ambiguity and diversity. These problems have not stopped governments from implementing algorithmic predictions in smart cities, predictive policing and social welfare. Instead of judging us on the basis of what we do, governments and companies judge us on the basis of what we might do. Although these predictions do not reflect our actual behaviour, they guide how we are approached in the on- and offline world.
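The feedback-loop pitfall described above can be made concrete with a toy simulation (the district labels, numbers and allocation rule below are invented purely for illustration, not taken from any real system): two districts share the same underlying crime rate, but one starts with a few more recorded incidents, and the single daily patrol is always sent where past records are highest. Since crime is only recorded where officers are present, the initial gap feeds on itself.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.3    # identical underlying rate in both districts
recorded = [12, 10]      # historical incident counts; district 0 starts slightly ahead

for day in range(200):
    # The "predictive" rule: patrol wherever past records are highest.
    target = 0 if recorded[0] >= recorded[1] else 1
    # Crime is only *recorded* where the patrol happens to be.
    if random.random() < TRUE_CRIME_RATE:
        recorded[target] += 1

print(f"Recorded incidents after 200 days: {recorded}")
```

Because the model never sends a patrol to district 1, its record count stays frozen while district 0 accumulates ever more incidents, and the data appear to confirm the model’s own prediction, even though the true crime rates are identical. This is the kind of runaway feedback loop researchers have warned about in predictive policing.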

Machine-like humans

According to Zuboff, humans are slowly transforming into automatons, becoming just as predictable and programmable as machines. Alongside Zuboff, a growing number of scientists problematise data-driven technologies. Their advice: stop worrying about a superintelligence that will replace us, and start discussing the devices and algorithms that are already replacing human decision making without proper understanding, interpretation or social intelligence. Smartphones, health apps, wearables, digital assistants and smart toys are not neutral devices. They represent social and emotional regimes that eliminate irrational behaviour and encourage us to act according to the moral standards programmed into them: reduce your sugar intake, call a friend, and so on, implying that we can continuously control our behaviour in a rational manner. To determine which neighbourhoods are unsafe, what we want to watch and listen to, or whom we want to date or hire, we no longer have to think or rely on our senses; we outsource these decisions to algorithms that guide our choices and confirm our own worldviews. Why opt out if you can spend your entire life in a warm bath of filter bubbles and quantified simplifications?

Intelligent Collaborations with AI

The answer: because it makes us more machine-like and less able to establish intelligent, complementary collaborations with AI. The field of AI derives mainly from a mathematical approach. Now that AI is used in many non-mathematical domains, technological engineers need to pay more attention to divergent cultural logics, respecting the ambiguity of our everyday contexts.

A complementary collaboration requires a better understanding of our own intelligence as well as artificial intelligence. We need more nuanced understandings of what AI is and how our perceptions of AI are shaped by marketing messages from Silicon Valley and the commercial tech industry. It is important to have interdisciplinary discussions about what we consider intelligence and consciousness, recognising that surveillance capitalism and data-driven technologies are also changing our intelligence and consciousness. Provided we stop optimising people for data markets and start investing in machine learning as well as human learning, AI will enable us to outsource repetitive tasks, creating more room for value and significance in our own work.

Lastly, and this is rarely mentioned in the context of AI: we need to consider the impact of our expanding digital ecosystem on the environment. IoT devices contribute to energy savings, but the total sum of data storage centres, satellite pollution and high-tech consumerism adds to the world’s energy bill. Experts are divided on whether AI applications spell doom or salvation for the environment, but they agree that we can’t wait too long to find out.

Read the full article here!

Siri Beerends is a cultural sociologist, writer and researcher at medialab SETUP, and is preparing a PhD at the University of Twente.

This contribution reflects the opinion of its authors and neither necessarily nor exclusively the opinion of the institute. For more information about the content of these contributions and the associated research projects, please contact

Siri Beerends


