
ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. Machine Learning and Artificial Intelligence

This article is part of the Research Topic: Frontiers in Explainable AI: Positioning XAI for Action, Human-Centered Design, Ethics, and Usability

A Framework for Causal Concept-based Model Explanations

Provisionally accepted
  • Norwegian University of Science and Technology, Trondheim, Norway

The final, formatted version of the article will be published soon.

This work presents a conceptual framework for causal concept-based post-hoc Explainable Artificial Intelligence (XAI), grounded in the requirement that explanations for non-interpretable models should be both understandable and faithful to the model being explained. Local and global explanations are generated by calculating the probability of sufficiency of concept interventions. Example explanations are presented, generated with a proof-of-concept model built to explain classifiers trained on the CelebA dataset. Understandability is demonstrated through a clear concept-based vocabulary, subject to an implicit causal interpretation. Fidelity is addressed by highlighting important framework assumptions, stressing that the context in which an explanation is interpreted must align with the context in which it was generated.
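As a rough illustration of the core quantity the abstract refers to, the probability of sufficiency of a concept intervention can be estimated empirically: among instances where a concept is absent and the model's prediction is negative, intervene to set the concept present and measure how often the prediction flips. The sketch below uses an entirely hypothetical toy classifier and synthetic data (the paper's actual framework, concepts, and CelebA classifiers are not reproduced here); it is a minimal Monte Carlo estimate, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained black-box classifier: predicts 1
# when a weighted sum of a binary "concept" feature (column 0) and an
# unrelated continuous feature (column 1) exceeds a threshold.
def model(X):
    return (0.8 * X[:, 0] + 0.3 * X[:, 1] > 1.0).astype(int)

# Synthetic sample: column 0 is a binary concept indicator,
# column 1 is an unrelated feature drawn uniformly from [0, 1).
X = np.column_stack([rng.integers(0, 2, 1000), rng.random(1000)])
y = model(X)

# Probability of sufficiency of do(concept = 1) for the positive class:
# restrict to instances where the concept is absent AND the prediction
# is negative, apply the intervention, and measure the flip rate.
mask = (X[:, 0] == 0) & (y == 0)
X_do = X[mask].copy()
X_do[:, 0] = 1  # intervention: force the concept to be present
ps = (model(X_do) == 1).mean()
print(f"Estimated probability of sufficiency: {ps:.2f}")
```

In this toy setup the estimate is simply the fraction of intervened instances whose prediction flips; a global explanation would report such scores per concept, while a local explanation asks the analogous counterfactual question for a single instance.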

Keywords: causal explanation, concept attribution, counterfactual explanation, post-hoc XAI, probability of sufficiency

Received: 02 Dec 2025; Accepted: 29 Dec 2025.

Copyright: © 2025 Bjøru, Lysnæs-Larsen, Jørgensen, Strümke and Langseth. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Anna Rodum Bjøru

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.