
Explaining short text classification with diverse synthetic exemplars and counter-exemplars

Authors: Lampridis, O., State, L., Guidotti, R., & Ruggieri, S.
Published in: Machine Learning, 112, 4289-4322
Year: 2023
Type: Academic articles
DOI: https://doi.org/10.1007/s10994-022-06150-7

We present XSPELLS, a model-agnostic local approach for explaining the decisions of black box models in the classification of short texts. The explanations provided consist of a set of exemplar sentences and a set of counter-exemplar sentences. The former are examples classified by the black box with the same label as the text to explain. The latter are examples classified with a different label (a form of counterfactuals). Both are close in meaning to the text to explain, and both are meaningful sentences, albeit synthetically generated. XSPELLS generates neighbors of the text to explain in a latent space using Variational Autoencoders for encoding text and decoding latent instances. A decision tree is learned from randomly generated neighbors and used to drive the selection of the exemplars and counter-exemplars. Moreover, diversity of counter-exemplars is modeled as an optimization problem, solved by a greedy algorithm with a theoretical guarantee. We report experiments on three datasets showing that XSPELLS outperforms the well-known LIME method in terms of quality of explanations, fidelity, diversity, and usefulness, and that it is comparable to LIME in terms of stability.
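The abstract outlines a pipeline: encode the text to explain into a VAE latent space, sample random neighbors, decode them into synthetic sentences, label them with the black box, fit a surrogate decision tree, and select exemplars and diverse counter-exemplars. The sketch below is a minimal, illustrative rendering of that idea, not the authors' implementation: the encoder, decoder, and black-box classifier are toy stand-ins, the greedy farthest-point selection only approximates the paper's diversity-aware greedy algorithm, and exemplars are picked directly by black-box label rather than being driven by the surrogate tree as in the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
LATENT_DIM = 8

def encode(text):
    # Toy stand-in for the VAE encoder: sentence -> latent vector.
    return np.random.default_rng(abs(hash(text)) % (2**32)).normal(size=LATENT_DIM)

def decode(z):
    # Toy stand-in for the VAE decoder: latent vector -> synthetic sentence.
    return "synthetic sentence #%d" % (abs(hash(z.tobytes())) % 1000)

def black_box(texts):
    # Toy stand-in for the opaque classifier whose decision is being explained.
    return np.array([abs(hash(t)) % 2 for t in texts])

def greedy_diverse(candidates, k):
    # Greedy farthest-point selection over latent vectors: a simple proxy for the
    # paper's diversity-aware greedy selection of counter-exemplars.
    texts, vecs = zip(*candidates)
    vecs = np.array(vecs)
    chosen = [0]
    while len(chosen) < min(k, len(texts)):
        dists = np.linalg.norm(vecs[:, None, :] - vecs[chosen][None, :, :], axis=2).min(axis=1)
        chosen.append(int(dists.argmax()))
    return [texts[i] for i in chosen]

def explain(text_to_explain, n_neighbors=500, k=5):
    # 1. Encode the instance and sample random neighbors in the latent space.
    z0 = encode(text_to_explain)
    Z = z0 + rng.normal(scale=0.5, size=(n_neighbors, LATENT_DIM))
    # 2. Decode neighbors into sentences and label them with the black box.
    sentences = [decode(z) for z in Z]
    labels = black_box(sentences)
    y0 = black_box([text_to_explain])[0]
    # 3. Fit a surrogate decision tree on the labelled latent neighborhood.
    surrogate = DecisionTreeClassifier(max_depth=4).fit(Z, labels)
    # 4. Exemplars share the instance's label; counter-exemplars carry a different one.
    exemplars = [s for s, y in zip(sentences, labels) if y == y0][:k]
    counter_candidates = [(s, z) for s, z, y in zip(sentences, Z, labels) if y != y0]
    counter_exemplars = greedy_diverse(counter_candidates, k) if counter_candidates else []
    return exemplars, counter_exemplars, surrogate

exemplars, counter_exemplars, tree = explain("the movie was great")
print(exemplars)
print(counter_exemplars)

In a real setting, the toy encode, decode, and black_box functions would be replaced by the trained sentence VAE and the actual classifier under explanation.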



Connected HIIG researchers

Laura State, PhD

Postdoctoral researcher: Impact AI


  • Open Access
  • Peer Reviewed
