SYSTEMATIC REVIEW article
Front. Bioeng. Biotechnol.
Sec. Biomechanics
Volume 13 - 2025 | doi: 10.3389/fbioe.2025.1671344
This article is part of the Research Topic: Revolutionizing sports science: Biomechanical models, wearable tech, and AI
Explainable Artificial Intelligence for Gait Analysis: Advances, Pitfalls, and Challenges - A Systematic Review
Provisionally accepted
- 1 KTH Royal Institute of Technology, Stockholm, Sweden
- 2 University of Calgary, Calgary, Canada
- 3 The University of Auckland, Auckland, New Zealand
- 4 Ningbo University, Ningbo, China
Machine learning (ML) has emerged as a powerful tool for analyzing gait data, yet the "black-box" nature of many ML models hinders their clinical application. Explainable artificial intelligence (XAI) promises to enhance the interpretability and transparency of ML models, making them more suitable for clinical decision-making. This systematic review, registered on PROSPERO (CRD42024622752), assessed the application of XAI in gait analysis by examining its methods, performance, and potential for clinical utility. A comprehensive search across four electronic databases yielded 3676 unique records, of which 31 studies met the inclusion criteria. These studies were categorized into model-agnostic (n = 16), model-specific (n = 12), and hybrid (n = 3) interpretability approaches. Most studies applied local interpretation methods such as SHAP and LIME, while others used Grad-CAM, attention mechanisms, and Layer-wise Relevance Propagation. Clinical populations studied included Parkinson's disease, stroke, sarcopenia, cerebral palsy, and musculoskeletal disorders. Reported outcomes highlighted biomechanically relevant features such as stride length and joint angles as key discriminators of pathological gait. Overall, the findings demonstrate that XAI can bridge the gap between predictive performance and interpretability, but significant challenges remain in standardization, validation, and balancing accuracy with transparency. Future research should refine XAI frameworks and assess their real-world clinical applicability across diverse gait disorders.
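To illustrate the kind of local, model-agnostic attribution the abstract refers to: for a linear model f(x) = w·x + b, the exact SHAP value of feature i reduces to φᵢ = wᵢ(xᵢ − μᵢ), where μᵢ is the background (reference-population) mean. The sketch below applies this closed form to a hypothetical linear gait classifier; the feature names, weights, and values are synthetic placeholders, not data from any study in the review.

```python
# Minimal sketch of exact SHAP attribution for a linear model:
# phi_i = w_i * (x_i - mu_i), with mu_i the background mean of feature i.
# All names and numbers below are hypothetical, for illustration only.

FEATURES = ["stride_length_m", "cadence_steps_min", "knee_flexion_deg"]
WEIGHTS = [2.0, 0.05, -0.1]   # hypothetical linear classifier weights
BIAS = -3.0

def predict(x):
    """Linear decision score f(x) = w.x + b."""
    return sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS

def linear_shap(x, background_mean):
    """Exact SHAP values for a linear model: phi_i = w_i * (x_i - mu_i)."""
    return [w * (xi - mu) for w, xi, mu in zip(WEIGHTS, x, background_mean)]

mu = [1.3, 110.0, 60.0]   # background means (e.g., a healthy-control cohort)
x = [0.9, 95.0, 45.0]     # one pathological-gait sample

phi = linear_shap(x, mu)

# Local-accuracy property: prediction = baseline + sum of attributions.
assert abs(predict(x) - (predict(mu) + sum(phi))) < 1e-9

for name, p in zip(FEATURES, phi):
    print(f"{name}: {p:+.3f}")
```

For nonlinear models (the common case in the reviewed studies), the `shap` package estimates these attributions by sampling feature coalitions, but the additivity check shown above holds in the same way.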
Keywords: gait analysis, machine learning, explainable artificial intelligence (XAI), biomechanics, black-box models
Received: 22 Jul 2025; Accepted: 17 Oct 2025.
Copyright: © 2025 Xiang, Gao, Fernandez, Fernandez, Gu, Wang and Gutierrez-Farewik. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Liangliang Xiang, liaxi@kth.se
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.