From Explaining to Diagnosing: A Justice-Oriented Framework of Explainable AI for Bias Detection

Authors: Fahimi, M., State, L., & Kasirzadeh, A.
Published in: Proceedings of the Eighth AAAI/ACM Conference on AI, Ethics, and Society (AIES-25) - Main Track I, 8(1)
Year: 2025
Type: Academic article
DOI: https://doi.org/10.1609/aies.v8i1.36597

Explainable AI (XAI) methods can support the identification of biases in automated decision-making (ADM) systems. However, existing research does not sufficiently address whether these biases originate from the ADM system itself or mirror underlying societal inequalities. This distinction matters because it has major implications for how to act upon an explanation: while socio-technical bias produced by the ADM system can be algorithmically fixed, societal inequalities demand societal action. To address this gap, we propose the RR-XAI framework (recognition-redistribution through XAI), which builds on a distinction between socio-technical and societal bias and on Nancy Fraser's justice theory of recognition and redistribution. In our framework, explanations can play two distinct roles: as a socio-technical diagnosis when they reveal biases produced by the ADM system itself, or as a societal diagnosis when they expose biases that reflect broader societal inequalities. We then outline how to operationalize the framework and discuss its applicability to cases in algorithmic hiring and credit scoring. Based on our findings, we argue that the diagnostic functions of XAI are contingent on the provision of such explanations, on the resources of their audiences, and on the current limits of XAI techniques.
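To make the diagnostic idea concrete, here is a minimal, hypothetical sketch, not the authors' method: it uses a simple linear attribution scheme on synthetic credit-scoring-style data to surface the kind of group-level attribution gap an XAI-based bias diagnosis might start from. The data, feature names (income, zipcode_risk), and thresholds are all invented for illustration.

```python
# Minimal sketch: compare mean feature attributions across a protected group.
# Assumes numpy and scikit-learn; everything else is synthetic/illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                # protected attribute (0/1)
income = rng.normal(50 + 10 * group, 8, n)   # disparity present in the data itself
zipcode_risk = rng.normal(group, 0.5, n)     # engineered proxy correlated with group
X = np.column_stack([income, zipcode_risk])
y = (income + 5 * zipcode_risk + rng.normal(0, 5, n) > 55).astype(int)

model = LogisticRegression().fit(X, y)

# Per-instance linear attribution of each feature: coef * (x - mean(x)).
attributions = model.coef_[0] * (X - X.mean(axis=0))

# Gap in mean attribution between groups, per feature: a crude bias signal.
for j, name in enumerate(["income", "zipcode_risk"]):
    gap = attributions[group == 1, j].mean() - attributions[group == 0, j].mean()
    print(f"{name}: mean attribution gap between groups = {gap:+.3f}")
```

In the RR-XAI framing, a large gap on an engineered proxy such as the hypothetical zipcode_risk would point toward a socio-technical diagnosis (fixable in the pipeline), whereas a gap traceable to a real-world disparity such as income would point toward a societal diagnosis, which the explanation can expose but not repair.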


Connected HIIG researchers

Laura State, PhD

Postdoctoral researcher: Impact AI
