ORIGINAL RESEARCH article
Front. Physiol.
Sec. Computational Physiology and Medicine
Volume 16 - 2025 | doi: 10.3389/fphys.2025.1605406
This article is part of the Research Topic "Computational Intelligence for Multimodal Biomedical Data Fusion".
Fusion-Driven Multimodal Learning for Biomedical Time Series in Surgical Care
Provisionally accepted
Xinxiang Central Hospital, Xinxiang, China
The integration of multimodal data has become a crucial aspect of biomedical time series prediction, offering improved accuracy and robustness in clinical decision-making. Traditional approaches often rely on unimodal learning paradigms, which fail to fully exploit the complementary information across heterogeneous data sources such as physiological signals, imaging, and electronic health records. These methods suffer from modality misalignment, suboptimal feature fusion, and a lack of adaptive learning mechanisms, leading to performance degradation in complex biomedical scenarios. To address these challenges, we propose a novel multimodal deep learning framework that dynamically captures inter-modal dependencies and optimizes cross-modal interactions for time series prediction. Our approach introduces an Adaptive Multimodal Fusion Network (AMFN), which leverages attention-based alignment, graph-based representation learning, and a modality-adaptive fusion mechanism to enhance information integration. Furthermore, we develop a Dynamic Cross-Modal Learning Strategy (DCMLS) that selects relevant features, mitigates modality-specific noise, and incorporates uncertainty-aware learning to improve model generalization. Experimental evaluations on biomedical datasets demonstrate that our method outperforms state-of-the-art techniques in predictive accuracy, robustness, and interpretability. By effectively bridging the gap between heterogeneous biomedical data sources, our framework offers a promising direction for AI-driven disease diagnosis and treatment planning.
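The abstract does not provide implementation details for AMFN's attention-based alignment and modality-adaptive fusion. As an illustration only, a minimal sketch of cross-modal attention fusion of the kind described could look as follows; all function names, shapes, and the mean-pooling step are our assumptions, not the authors' code:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(modality_feats, W_q, W_k, W_v):
    """Attention-based fusion over M modality embeddings.

    modality_feats: (M, d) array, one embedding per modality
    (e.g. physiological signals, imaging, EHR). Each modality
    attends to every other, and the attended representations
    are pooled into a single fused (d,) vector.
    """
    Q = modality_feats @ W_q            # (M, d) queries
    K = modality_feats @ W_k            # (M, d) keys
    V = modality_feats @ W_v            # (M, d) values
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # (M, M) cross-modal affinities
    A = softmax(scores, axis=-1)        # rows are modality-adaptive weights
    attended = A @ V                    # (M, d) aligned representations
    return attended.mean(axis=0)        # pooled fused embedding

# Toy usage: three modalities with 8-dimensional embeddings.
rng = np.random.default_rng(0)
d = 8
feats = rng.normal(size=(3, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
fused = attention_fuse(feats, W_q, W_k, W_v)
```

In a full model the projection matrices would be learned jointly with the downstream time series predictor, and the simple mean pooling shown here would be replaced by the paper's modality-adaptive gating.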
Keywords: multimodal learning, deep learning, biomedical time series, adaptive fusion, uncertainty-aware learning
Received: 24 Apr 2025; Accepted: 14 Aug 2025.
Copyright: © 2025 Che, Sun and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Jinshan Che, Xinxiang Central Hospital, Xinxiang, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.