
ORIGINAL RESEARCH article

Front. Bioeng. Biotechnol.

Sec. Biosensors and Biomolecular Electronics

Volume 13 - 2025 | doi: 10.3389/fbioe.2025.1631910

This article is part of the Research Topic: Biomechanics, Sensing and Bio-inspired Control in Rehabilitation and Assistive Robotics, Volume II

Deep Multimodal Biomechanical Analysis for Lower Back Pain Rehabilitation to Improve Patients' Stability

Provisionally accepted
Abrar Ashraf1, Yanfeng Wu2, Shaheryar Najam3, Mohammed Alshehri4, Yahya Alqahtani4, Hanan Aljuaid5, Ahmad Jalal6*, Hui Liu7*
  • 1Riphah International University, Rawalpindi, Punjab, Pakistan
  • 2Guodian Nanjing Automation Co., LTD, Nanjing, China
  • 3Bahria University, Islamabad, Islamabad, Pakistan
  • 4King Khalid University, Abha, Saudi Arabia
  • 5Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
  • 6Air University, Islamabad, Pakistan
  • 7University of Bremen, Bremen, Bremen, Germany

The final, formatted version of the article will be published soon.

Advancements in artificial intelligence are transforming rehabilitation by enabling scalable, patient-centric solutions. This study presents 3D-PoseFormer, a deep multimodal framework designed for the telerehabilitation of individuals with lower back pain (LBP). The system leverages synchronized RGB and depth video streams to perform real-time, markerless, and sensor-free analysis of physiotherapy exercises. From the depth data, it extracts 3D body-joint positions and generates SMPL-based mesh vertices to capture detailed biomechanical and postural information. In parallel, RGB frames are processed with keypoint detection techniques, including Shi-Tomasi, AKAZE, BRISK, SIFT, and Harris corner detection. These features are then enhanced via semantic contour analysis of segmented body parts to extract localized, appearance-based features relevant to LBP therapy. The multimodal features are fused and passed to a Transformer network that models temporal motion patterns for accurate exercise classification and performance assessment. The system removes the need for wearable sensors and complements clinician supervision by enabling autonomous, continuous monitoring in home settings. Validation on the KIMORE dataset (the baseline, which includes rehabilitation exercises performed by patients with lower back pain), the mRI dataset (rehabilitation exercises), and the UTKinect-Action3D dataset (diverse subjects and activity scenarios) yielded state-of-the-art accuracies of 94.73%, 91%, and 94.2%, respectively, demonstrating the framework's robustness, generalizability, and clinical potential for AI-assisted rehabilitation of musculoskeletal disorders.
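To make the described pipeline concrete, the sketch below illustrates two of the abstract's components: extracting the named RGB keypoint types with OpenCV and classifying a sequence of fused per-frame features with a small Transformer encoder in PyTorch. This is a minimal illustration under assumed libraries, not the authors' implementation; names such as extract_rgb_keypoints and TemporalTransformer, and all parameter values, are illustrative.

```python
# Minimal sketch (not the paper's code): OpenCV keypoint extraction for one RGB
# frame, plus a toy Transformer encoder over fused per-frame feature vectors.
import cv2
import numpy as np
import torch
import torch.nn as nn


def extract_rgb_keypoints(frame_bgr, max_corners=100):
    """Collect keypoint locations from one RGB frame using the detectors
    mentioned in the abstract (Shi-Tomasi, AKAZE, BRISK, SIFT, Harris)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    points = []

    # Shi-Tomasi "good features to track"
    corners = cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01, minDistance=7)
    if corners is not None:
        points.extend(corners.reshape(-1, 2).tolist())

    # AKAZE, BRISK and SIFT keypoint detectors
    for detector in (cv2.AKAZE_create(), cv2.BRISK_create(), cv2.SIFT_create()):
        points.extend([kp.pt for kp in detector.detect(gray, None)])

    # Harris corner response, thresholded to pixel coordinates
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(harris > 0.01 * harris.max())
    points.extend(zip(xs.tolist(), ys.tolist()))

    return np.asarray(points, dtype=np.float32)  # (N, 2) pixel coordinates


class TemporalTransformer(nn.Module):
    """Toy Transformer encoder classifying a sequence of fused per-frame
    features (e.g. 3D joints, mesh statistics, RGB keypoint descriptors)."""

    def __init__(self, feat_dim, num_classes, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        h = self.encoder(self.proj(x))    # temporal self-attention over frames
        return self.head(h.mean(dim=1))   # pool over time, then classify


# Example: a batch of 4 clips, 60 frames each, 256-D fused features per frame.
model = TemporalTransformer(feat_dim=256, num_classes=5)
logits = model(torch.randn(4, 60, 256))
print(logits.shape)  # torch.Size([4, 5])
```

In practice, the per-frame keypoint sets would be converted to fixed-length descriptors and concatenated with the depth-derived joint and mesh features before being fed to the temporal model; the exact fusion strategy used by 3D-PoseFormer is described in the full article.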

Keywords: Rehabilitation, data acquisition, depth sensing, biomechanical analysis, machine learning, intention recognition, healthcare system

Received: 20 May 2025; Accepted: 10 Oct 2025.

Copyright: © 2025 Ashraf, Wu, Najam, Alshehri, Alqahtani, Aljuaid, Jalal and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Ahmad Jalal, ahmadjalal@mail.au.edu.pk
Hui Liu, hui.liu@uni-bremen.de

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.