
ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. AI in Finance

Designing Effective Explainable AI: A Human-Centered Evaluation of Explanation Formats in Financial Decision-Making

Provisionally accepted
Henry Maathuis1,2*, Marcel Stalenhoef1, Sieuwert Van Otterloo1, Raymond Zwaal3, Kees Van Montfort3, Danielle Sent1,2
  • 1HU University of Applied Sciences Utrecht, Utrecht, Netherlands
  • 2Tilburg University Jheronimus Academy of Data Science, 's Hertogenbosch, Netherlands
  • 3Hogeschool van Amsterdam, Amsterdam, Netherlands

The final, formatted version of the article will be published soon.

As artificial intelligence (AI) systems are increasingly deployed in high-risk financial decision-making contexts, the demand for transparency and interpretability becomes critical. Explainable AI (XAI) has emerged as a key research domain addressing these needs. While most existing XAI studies emphasize objective quality measures such as the correctness and completeness of explanations, they often overlook end-user requirements and the broader ecosystem of stakeholders. This study presents a human-centered evaluation of the effectiveness of different visual explanation designs in financial AI applications. A two-phase mixed-method evaluation, combining user studies with end-users and a stakeholder workshop, was conducted to rank visual prototypes across four explanation types: feature importance, counterfactuals, contrastive/similar examples, and rule-based explanations. A key finding is the divergence between end-users and other stakeholders, including compliance officers, XAI consultants, and developers: end-users preferred concise, contextually visual explanations (e.g., small sets of decision rules or risk plots relative to similar cases), whereas the other stakeholders often favored more complete, technically detailed representations. This divergence highlights a critical trade-off between interpretability and completeness and suggests that visual encoding choices may affect the effectiveness of AI explanations across different stakeholder groups.
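To make the compared explanation formats concrete, the following minimal Python sketch (not part of the study and not the authors' prototypes) illustrates how feature-importance, counterfactual, and rule-based explanations might be produced for a hypothetical linear credit-risk scorer; all feature names, weights, and thresholds are illustrative assumptions.

    # Illustrative sketch only: a toy linear credit-risk scorer with three of the
    # explanation formats discussed in the abstract (feature importance, a
    # counterfactual, and a rule-based explanation). All feature names, weights,
    # and thresholds are hypothetical and are not taken from the article.

    WEIGHTS = {"income_k": -0.04, "debt_ratio": 3.0, "late_payments": 0.8}
    BIAS = -0.5
    THRESHOLD = 0.0  # score >= THRESHOLD -> classified as high risk


    def score(applicant: dict) -> float:
        """Linear risk score: higher means riskier."""
        return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)


    def feature_importance(applicant: dict) -> dict:
        """Per-feature contribution to the score (weight * value)."""
        return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}


    def counterfactual_income(applicant: dict) -> float:
        """Minimal extra income (in k) that would flip a high-risk decision."""
        gap = score(applicant) - THRESHOLD
        return gap / -WEIGHTS["income_k"]


    def rule_based(applicant: dict) -> list:
        """Small set of human-readable decision rules (hypothetical cut-offs)."""
        rules = []
        if applicant["debt_ratio"] > 0.4:
            rules.append("debt ratio above 40%")
        if applicant["late_payments"] >= 2:
            rules.append("two or more late payments in the last year")
        return rules


    applicant = {"income_k": 35, "debt_ratio": 0.55, "late_payments": 2}
    print("risk score:", round(score(applicant), 2))
    print("feature contributions:", feature_importance(applicant))
    print("extra income (k) to flip decision:", round(counterfactual_income(applicant), 1))
    print("rules triggered:", rule_based(applicant))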

Keywords: explainable AI, explanation formats, finance, graphical design, human-centered evaluation

Received: 17 Jul 2025; Accepted: 30 Jan 2026.

Copyright: © 2026 Maathuis, Stalenhoef, Van Otterloo, Zwaal, Van Montfort and Sent. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Henry Maathuis

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.