ORIGINAL RESEARCH article

Front. Psychiatry

Sec. Digital Mental Health

This article is part of the Research Topic: Advances in Artificial Intelligence Applications that Support Psychosocial Health

Integrating Explainable AI with Clinical Features to Enhance ADHD Diagnostic Understanding

Provisionally accepted
  • 1. Leeds Beckett University, School for Built Environment, Engineering and Computing, Leeds, United Kingdom
  • 2. South West Yorkshire Partnership NHS Foundation Trust, Wakefield, United Kingdom

The final, formatted version of the article will be published soon.

Attention Deficit Hyperactivity Disorder (ADHD) is frequently under- or over-diagnosed due to reliance on subjective assessments, leading to delayed treatment and poor outcomes. Although machine learning (ML) classifiers achieve high accuracy, their "black-box" nature limits clinical trust and hampers the integration of demographic and comorbidity data. Existing studies rarely integrate comprehensive clinical, substance-use, and quality-of-life measures in a single predictive framework, nor do they systematically compare explainable artificial intelligence (XAI) outputs against traditional exploratory analyses. We retrospectively analysed 786 anonymised adult assessments (January 2019–December 2024) from a UK mental health service, incorporating features spanning demographics, standardised symptom scales (MDQ, GAD-7, PHQ-9, CAARS, DIVA), substance-use screens (AUDIT, DAST), and EQ-5D-3L quality-of-life indices. An XGBoost classifier was trained and interpreted using SHapley Additive exPlanations (SHAP). SHAP attributions were compared with correlation and t-test results to validate feature relevance and reveal interaction effects. The model achieved 77% accuracy and an AUC-ROC of 0.82 on the test set. SHAP analyses identified CAARS ADHD Raw scores and DIVA adulthood inattentiveness as the strongest predictors, with PHQ-9 interactions modulating risk. Age and gender emerged as significant moderators of symptom expression. Traditional EDA confirmed these features' statistical significance, while SHAP uncovered non-linear dependencies such as elevated depressive symptoms amplifying CAARS contributions. By fusing clinical data with explainable AI, our framework offers transparent, patient-centered insights into ADHD diagnosis. Clinicians can leverage identified thresholds and interaction patterns to enhance screening accuracy and tailor interventions.
Future work should validate this approach in diverse settings and explore longitudinal dynamics of feature influences.
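The SHAP attributions described above rest on the Shapley value from cooperative game theory: each feature's contribution is its weighted average marginal effect over all feature subsets, and the contributions sum exactly to the model's output. The toy sketch below illustrates this with an exact Shapley computation over a hypothetical additive risk score containing a CAARS x PHQ-9 interaction term, echoing (but not reproducing) the reported finding that depressive symptoms amplify CAARS contributions. The feature names, coefficients, and `risk` function are illustrative assumptions, not values from the study; the paper itself uses the SHAP library on an XGBoost model.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley attribution for a small feature set.

    value_fn maps a set of 'present' features to a scalar score;
    phi[f] is f's weighted average marginal contribution.
    """
    n = len(features)
    phi = {}
    for f in features:
        rest = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(rest, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                s = set(subset)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

def risk(present):
    # Hypothetical additive score with one interaction term; the
    # coefficients are made up for illustration only.
    score = 0.0
    if "CAARS" in present:
        score += 2.0
    if "PHQ9" in present:
        score += 0.5
    if {"CAARS", "PHQ9"} <= present:
        score += 1.0  # PHQ-9 amplifies the CAARS contribution
    if "AUDIT" in present:
        score += 0.3
    return score

phi = shapley_values(["CAARS", "PHQ9", "AUDIT"], risk)
# The interaction term is split evenly between CAARS and PHQ9,
# and the attributions sum to risk(all) - risk(none) (efficiency).
```

Note how the efficiency property guarantees the per-feature attributions decompose the full prediction, which is what lets SHAP summary plots be read as an additive explanation of each patient's risk score.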

Keywords: ADHD, CAARS, DIVA, mental health, explainability, machine learning, model interpretability

Received: 15 Sep 2025; Accepted: 31 Oct 2025.

Copyright: © 2025 Shakeel, Antoniou and Adamou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Hafiz Muhammad Shakeel, hmshakeel.567@gmail.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.