
What to explain when explaining is difficult. An interdisciplinary primer on XAI and meaningful information in automated decision-making

Authors: Asghari, H., Birner, N., Burchardt, A., Dicks, D., Fassbender, J., Feldhus, N., Hewett, F., Hofmann, V., Kettemann, M. C., Schulz, W., Simon, J., Stolberg-Larsen, J., & Züger, T.
Published in: HIIG Impact Publication Series
Year: 2022
Type: Other publications
DOI: 10.5281/zenodo.6375784

Explanations of how automated decision-making (ADM) systems reach their decisions (explainable AI, or XAI) are a promising way to mitigate the negative effects of such systems. The EU GDPR provides a legal framework for explaining ADM systems: “meaningful information about the logic involved” has to be provided. However, neither the text of the GDPR itself nor the commentaries on it specify what this means precisely. This report approaches these terms from a legal, technical and design perspective.

Legally, the explanation has to enable users to appeal the decision made by the ADM system and to balance the power of the ADM developer against that of the user. “The logic” can be understood as “the structure and sequence of the data processing”. The GDPR focuses on individual rather than collective rights. We therefore recommend putting the individual at the centre of the explanation as a first step towards complying with the GDPR.

From a technical perspective, the term “logic involved” is at best misleading. ADM systems are complex and dynamic socio-technical ecosystems, and understanding “the logic” of such diverse systems requires action from different actors and at numerous stages, from conception to deployment. Transparency at the input level is a core requirement for mitigating potential bias, as post-hoc interpretations are widely perceived as too problematic to tackle the root cause. The focus should therefore shift to making the underlying rationale, design and development process transparent, documenting the input data as part of the “logic involved”. The explanation of an ADM system should also be part of the development process from the very beginning.

When it comes to the target group of an explanation, public or community advocates should play a bigger role. These advocate groups support individuals confronted with an automated decision. Their interest lies more in understanding the models and their limitations as a whole than in the result of a single individual decision.
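The contrast between a purely post-hoc, individual-level explanation and the broader documentation of inputs and design that the report calls for can be made concrete with a small illustration. The sketch below is not from the publication: the credit-style scenario, feature names, synthetic data and logistic-regression model are hypothetical assumptions. It shows the kind of per-decision feature contribution a post-hoc explanation typically yields, which on its own says little about the data and development choices that would also need to be documented.

# Illustrative sketch only -- not code from the report. The scenario
# (credit-style approval), feature names, synthetic data and model are
# hypothetical assumptions used to show what a simple post-hoc,
# per-individual explanation looks like.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]

# Synthetic, already-standardised input data standing in for the documented
# training data the report treats as part of the "logic involved".
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Post-hoc explanation of one individual decision: each feature's
# contribution to the log-odds is its coefficient times its value.
individual = X[0]
contributions = model.coef_[0] * individual

print(f"Predicted approval probability: {model.predict_proba(individual.reshape(1, -1))[0, 1]:.2f}")
for name, value, contrib in zip(feature_names, individual, contributions):
    print(f"{name:>15}: value = {value:+.2f}, contribution to log-odds = {contrib:+.2f}")

Such an output tells one person which features pushed their score up or down, but it reveals nothing about how the training data were collected or which design decisions shaped the model, which is why the report argues for documenting these aspects as well.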



Connected HIIG researchers

Vincent Hofmann

Researcher: AI & Society Lab

Jakob Stolberg-Larsen

Former Researcher: AI & Society Lab

Hadi Asghari, Dr.

Researcher: AI & Society Lab

Freya Hewett

Researcher: AI & Society Lab

Judith Faßbender

Researcher: AI & Society Lab

Nadine Birner

Former Coordinator: Ethics of Digitalisation | NoC

Daniela Dicks

Former Co-Director and Spokesperson: AI & Society Lab

Matthias C. Kettemann, Prof. Dr. LL.M. (Harvard)

Research Group Leader and Associate Researcher: Global Constitutionalism and the Internet

Theresa Züger, Dr.

Head of the AI & Society Lab & Research Group Public Interest AI

Wolfgang Schulz, Prof. Dr.

Research Director

Research topic in focus


Artificial intelligence and society

The future of artificial intelligence unfolds in different social contexts. What can we learn from its political, social and cultural facets?