Explaining AI – How to explain the unexplainable?

Author: Hofmann, V.
Published in: Digital society blog
Year: 2022
Type: Other publications
DOI: 10.5281/zenodo.6397649

Automated decision-making (ADM) systems have become ubiquitous in our everyday lives, yet they are difficult to understand. So should we even care how AI-based decisions are made? Most definitely, since the use of these systems entails risks on an individual as well as on a societal level, such as perpetuated stereotypes or incorrect results due to flawed input data. These risks have long been discussed. Nonetheless, there is strong evidence that humans working with automated decisions tend to follow the systems' output in almost 100% of cases. So how can we empower people to think with AI and to question or challenge it, instead of blindly trusting its correctness? The solution is meaningful explanations for lay users, by and about the system and its decision-making procedure. But how? This blog article shows which criteria such explanations should fulfil and how they can meet the requirements of the General Data Protection Regulation (GDPR).

Connected HIIG researchers

Vincent Hofmann

Research Associate: AI & Society Lab
