
Research Clinic: “Explainable AI”

30 July 2021

Bridging explanation gaps in automated decision-making from the perspectives of governance, technology and design

Artificial Intelligence (AI) changes how we think about deciding – and about thinking. It challenges economic dependencies, enables new business models and intensifies the datafication of our economies. Yet the use of AI entails risks on an individual as well as on a societal level, especially for marginalized groups. AI systems are trained with data that treat people only as members of groups rather than as individuals. This group-based treatment can lead to the objectification of a person, which amounts to a violation of human dignity. But it is not only the outcomes of AI-based decision-making that can pose challenges and lead to discriminatory results. The opacity of machine-learning algorithms and the functioning of (deep) neural networks make it difficult to adequately explain how AI systems reach their results (the ‘black box phenomenon’). Calls for more insight into how automated decisions are made have grown increasingly louder over the past couple of years.

The solution seems clear: We need to know enough about automated decision-making processes to be able to provide the reasons for a decision to those affected by it – in a way they understand (explainability). The explanation must be simple enough to be understood, yet sufficiently complex that the AI’s complexity is not glossed over. Explainability is the necessary first step in a chain of conditions that lead to a decision being perceived as legitimate: Decisions that can be justified are perceived as legitimate. But only what is questioned is justified, and only what is understood is questioned. And to be understood, the decision has to be explained. Thus, explainability is a precondition for a decision to be perceived as legitimate (justifiability).

Given these circumstances, it is not easy to ensure that we can harness the power of AI for good and make it explain to us how decisions were reached – even though this is a requirement under European law, such as the GDPR.

In our Clinic “Explainable AI” we aim to tackle these challenges and explore the following key questions from an interdisciplinary perspective:

  • governance perspective: What are the requirements regarding explainability of the GDPR and what must be explained in order to meet these requirements?
  • technical perspective: What can be explained?
  • design perspective: What should explanations look like in order to be meaningful to affected users?

We will invite around 10 international researchers from law, computer science, and UX design to participate in an impact-driven, interdisciplinary Clinic focused on specific use cases. The Clinic will span five intense days (8 – 12 September 2021) and is hosted by the Alexander von Humboldt Institute for Internet and Society (HIIG). It is a joint initiative of the Ethics of Digitalisation project and the AI & Society Lab, a research structure within the institute that explores new formats and perspectives on AI.

About the research project

The clinic is part of the NoC research project “The Ethics of Digitalisation – From Principles to Practices”, which aims to develop viable answers to challenges at the intersection of ethics and digitalisation. Innovative formats facilitate interdisciplinary scientific work on application- and practice-oriented questions and achieve outputs of high societal relevance and impact. Previous formats included a research sprint on AI in content moderation and a clinic on fairness in online advertising. The project promotes an active exchange between science, politics and society and thus contributes to a global dialogue on the ethics of digitalisation.

Besides the HIIG, the main project partners are the Berkman Klein Center at Harvard University, the Digital Asia Hub, and the Leibniz Institute for Media Research | Hans-Bredow-Institut.

This post represents the view of the authors and does not necessarily or exclusively reflect the institute’s opinion. For more information about the content of these posts and the associated research projects, please contact info@hiig.de

Nadine Birner

Coordinator: Ethics of Digitalisation | NoC

