ORIGINAL RESEARCH article

Front. Physiol.

Sec. Medical Physics and Imaging

Volume 16 - 2025 | doi: 10.3389/fphys.2025.1605426

Multimodal Learning for Enhanced SPECT/CT Imaging in Sports Injury Diagnosis

Provisionally accepted
  • Yawen Shen, Zhongyuan University of Technology, Zhengzhou, China

The final, formatted version of the article will be published soon.

Single-photon emission computed tomography/computed tomography (SPECT/CT) imaging plays a critical role in sports injury diagnosis by offering both anatomical and functional insights. However, traditional SPECT/CT techniques often suffer from poor image quality, low spatial resolution, and limited capacity for integrating multiple data sources, which can hinder accurate diagnosis and timely intervention. To address these limitations, this study proposes a novel multimodal learning framework that enhances SPECT/CT imaging through biomechanical data integration and deep learning. Our method introduces a hybrid model that combines convolutional neural networks for spatial feature extraction with transformer-based temporal attention for sequential pattern recognition. We further incorporate a biomechanics-aware injury detection module (BID-Net), which leverages kinematic signals, motion data, and physiological context to refine lesion detection accuracy. Experimental results on a curated sports injury dataset demonstrate that our framework significantly improves image clarity, diagnostic precision, and interpretability over traditional approaches. The integration of biomechanical constraints and adaptive attention mechanisms not only enhances SPECT/CT imaging quality but also bridges the gap between AI-driven analytics and clinical practice in sports medicine. Our study presents a promising direction for intelligent, real-time diagnostic tools capable of supporting injury prevention, early detection, and rehabilitation planning in athletic care.
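The abstract describes the architecture only at a high level. As a purely illustrative aid (not the authors' implementation, which is not included on this page), the sketch below shows one way such a hybrid could be wired in PyTorch: a per-frame CNN spatial encoder, a transformer encoder providing temporal attention across the frame sequence, and a fusion head that concatenates a biomechanical feature vector before classification, loosely mirroring the BID-Net idea. All names and dimensions (HybridSPECTCTModel, feat_dim, biomech_dim) are assumptions introduced for this example.

```python
# Hypothetical sketch (not the authors' code): CNN spatial encoder over
# SPECT/CT slices, transformer temporal attention over the frame sequence,
# and a fusion head injecting biomechanical features, per the abstract.
import torch
import torch.nn as nn

class HybridSPECTCTModel(nn.Module):
    def __init__(self, in_channels=2, feat_dim=128, biomech_dim=16, n_classes=2):
        super().__init__()
        # CNN spatial feature extractor applied per frame (SPECT + CT channels)
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)
        # Transformer encoder supplies temporal attention across frames
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        # Fusion head combines imaging features with biomechanical signals
        self.head = nn.Sequential(
            nn.Linear(feat_dim + biomech_dim, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, frames, biomech):
        # frames: (batch, time, channels, H, W); biomech: (batch, biomech_dim)
        b, t, c, h, w = frames.shape
        x = self.cnn(frames.view(b * t, c, h, w)).flatten(1)  # (b*t, 64)
        x = self.proj(x).view(b, t, -1)                       # (b, t, feat_dim)
        x = self.temporal(x).mean(dim=1)                      # pool over time
        return self.head(torch.cat([x, biomech], dim=1))

# Example forward pass on random tensors
model = HybridSPECTCTModel()
logits = model(torch.randn(2, 8, 2, 64, 64), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 2])
```

Late fusion by concatenation is only one plausible reading of "biomechanics-aware" integration; the paper's actual module may instead condition the attention itself on the kinematic signals.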

Keywords: SPECT/CT imaging, multimodal learning, sports injury diagnosis, deep learning, biomechanics-aware analysis

Received: 03 Apr 2025; Accepted: 04 Jul 2025.

Copyright: © 2025 Shen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Yawen Shen, Zhongyuan University of Technology, Zhengzhou, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.