The rapid evolution of artificial intelligence (AI) is redefining industries, decision-making processes, and human-AI interactions. As AI-powered systems integrate into critical domains such as healthcare, finance, law, and autonomous systems, they bring both transformative potential and significant challenges. Chief among these challenges is the opacity of AI decision-making, which hinders adoption, trust, and ethical oversight. Explainable AI (XAI) has emerged as a critical response to these concerns, aiming to make AI more transparent, interpretable, and accountable. However, while XAI research has made theoretical strides, its real-world usability remains limited. Practitioners, regulators, and end-users often find AI explanations either too technical to be actionable or too simplistic to provide meaningful insights.
This Research Topic, "Frontiers in Explainable AI: Positioning XAI for Action, Human-Centered Design, Ethics, and Usability," seeks to explore practical, ethical, and user-centered advancements in XAI. Moving beyond algorithmic transparency, the issue will focus on:
• Usability of XAI methods and their alignment with diverse stakeholder needs.
• Ethical challenges, including fairness, bias mitigation, and responsible AI deployment.
• Human-AI collaboration, ensuring explanations foster trust, understanding, and actionable insights.
• Regulatory and governance implications, as frameworks like the EU AI Act mandate transparency in automated decision-making.
• Explainability as a means to enhance robustness, and robustness as a means to improve explainability.
• The role of XAI in large language models and generative AI.
Explainability must be more than a compliance requirement; it should be an integral part of AI design that supports decision confidence, human oversight, and trust calibration. As AI systems increasingly impact lives, XAI must transition from a technical ambition to a practical necessity, shaping AI that is not only interpretable but also useful, ethical, and adaptable to real-world complexities.
This article collection aims to position XAI as a fundamental technology for real-world decision-making rather than merely an academic concept. By focusing on usability, ethics, human-centered design, and governance, this issue will help define the next frontier in deployable, trustworthy, and responsible XAI. We invite contributions from researchers, industry practitioners, and policymakers to push the boundaries of explainability and ensure AI is not only interpretable but also actionable, ethical, and user-aligned.
This Research Topic invites position papers, deep reflections, and critical studies that challenge, refine, and advance the practical implementation of XAI. It welcomes research that bridges XAI theory and real-world application, ensuring that AI remains a trusted, transparent, and ethically responsible partner in decision-making.
This article collection seeks contributions that critically examine the usability, ethics, and governance of XAI, including but not limited to:
1. Bridging the Gap Between XAI Research and Real-world Applications
• How can explainability methods be designed to effectively address the practical requirements of professionals across various industries?
• What steps are necessary to mature XAI approaches for widespread industry adoption?
• The explainability-performance trade-off: is transparency always the right choice?
• Should lower-performing but interpretable models be favoured over high-performing black-box models in safety-critical applications?
• What is the role of data explanations?
2. Human-Centered XAI: Designing for Different Stakeholder Needs
• How should XAI be adapted for regulators, decision-makers, end-users, and data subjects?
• What are the broader ethical responsibilities of XAI, particularly concerning bias detection, fairness, and societal impact?
• How can we prevent users from blindly trusting AI explanations, even when flawed or misleading?
• Can interactive explanations (where users refine, question, and challenge AI outputs) lead to better decision-making than static explanations?
• How do different explanation styles affect user trust, decision confidence, and cognitive load?
3. The Role of XAI in AI Governance and Compliance
• How can explainability be effectively embedded into AI regulation, ensuring transparency is meaningful rather than a bureaucratic checkbox?
4. Explainability and Robustness
• How can enhancing explainability contribute to improved robustness in AI models, especially under adversarial conditions?
• In what ways can robust machine learning techniques facilitate clearer and more trustworthy explanations of model predictions?
5. Explainability in Generative AI
• How can generative models explain outputs that are open-ended, subjective, or probabilistic?
• What kinds of explanations do humans expect from creative AI (e.g., for writing, image generation, or reasoning)?
• Can LLMs themselves be explanation agents for other generative models?
Article types and fees
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Clinical Trial
Community Case Study
Conceptual Analysis
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.