
How should an explanation be? A mapping of technical and legal desiderata of explanations for machine learning models

Authors: Bringas Colmenarejo, A., State, L., & Comandé, G.
Published in: International Review of Law, Computers & Technology
Year: 2025
Type: Academic article
DOI: https://doi.org/10.1080/13600869.2025.2497633

Machine learning (ML) systems are abundant in our world. However, most of these systems are not understandable, which poses challenges to their safety, proper functioning, and accountability. Further, ML models are susceptible to social biases, which can lead to unjust and discriminatory outcomes. The field of eXplainable Artificial Intelligence (XAI) attempts to answer these challenges by providing explanation methods for ML models. However, there is still an open debate about the necessary desiderata of such methods, a debate in which the legal side of explanation desiderata is often missing. In this work, we put forward a set of five technical and five legal desiderata of XAI and develop a multi-layered mapping that encompasses the dynamics among and between the two sets and links them to actual requirements. From the legal standpoint, we rely on the European requirements of explainability and justifiability. In our mapping, we draw the interdependencies and intersections between the technical and legal desiderata, creating an image that visualises the assessment of the technical and legal driving forces (desiderata matching requirements) in the design and provision of explanations. Ultimately, explainability and justifiability desiderata must be addressed systematically, understood as a dynamic, circular and iterative process.
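The paper maps desiderata for explanation methods rather than proposing one. For readers unfamiliar with what such methods produce, below is a minimal sketch of one standard model-agnostic XAI technique, permutation feature importance; the dataset, model, and resulting scores are hypothetical stand-ins, not taken from the paper.

```python
# Minimal sketch of a standard model-agnostic XAI technique:
# permutation feature importance. It scores each input feature by how
# much randomly shuffling that feature degrades model performance.
# All data and the model below are hypothetical stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a real decision-making task.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque model whose behaviour we want to explain.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```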



Connected HIIG researchers

Laura State, PhD

Postdoctoral Researcher: Impact AI
