ORIGINAL RESEARCH article
Front. Med.
Sec. Pulmonary Medicine
Volume 12 - 2025 | doi: 10.3389/fmed.2025.1601736
This article is part of the Research Topic: Application of Multimodal Data and Artificial Intelligence in Pulmonary Diseases.
COPD-MMDDxNet: A Multimodal Deep Learning Framework for Accurate COPD Diagnosis Using Electronic Medical Records
Provisionally accepted
- 1 Department of Respiratory Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- 2 Department of Emergency Medicine, First Affiliated Hospital, Dalian Medical University, Dalian, Liaoning Province, China
- 3 Dalian Medical University, Dalian, Liaoning, China
COPD affects approximately 391 million people globally. Although spirometry is recognized as the gold standard for diagnosing COPD according to the GOLD guidelines, its availability is limited in primary healthcare settings, particularly in low- and middle-income countries. Furthermore, spirometry requires patient cooperation, which may be challenging for individuals with physical limitations or comorbidities, potentially compromising its accuracy. Alternative diagnostic methods are therefore needed, particularly those suited to resource-constrained environments.

This study proposes a novel multimodal deep learning framework, COPD-MMDDxNet, which integrates structured pulmonary CT reports, blood gas analysis, and hematological analysis from electronic medical records (EMRs) to overcome the limitations of existing diagnostic methods. The framework constitutes the first multimodal diagnostic tool for COPD that does not rely on spirometry. It fuses cross-modal data through four key components: parametric numerical embedding, hierarchical interaction mechanisms, contrastive regularization strategies, and dynamic fusion coefficients. These components enhance the model's ability to capture complex cross-modal relationships, thereby improving diagnostic accuracy.

The dataset used in this study comprises 800 COPD patients with a balanced age and sex distribution, collected over a 24-month period. Experimental results demonstrate that COPD-MMDDxNet outperforms traditional single-modality models and other state-of-the-art multimodal models in terms of accuracy (81.76%), precision (78.87%), recall (77.78%), and F1 score (78.32%).
Ablation studies confirm the critical importance of each model component, particularly the contrastive learning module and the cross-modal attention mechanism, in enhancing model performance. The framework thus offers a robust solution for more accurate and accessible COPD diagnosis, particularly in resource-constrained environments, without the need for spirometry.
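The dynamic fusion coefficients mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation: the embedding dimension, the fixed gate logits (which in the actual framework would be produced dynamically by a learned gating network), and the pure-Python formulation are all assumptions made for illustration. The idea shown is softmax-normalized weights combining the three EMR modality embeddings into one fused representation.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

random.seed(0)
DIM = 8  # embedding size is an arbitrary placeholder

# Hypothetical per-modality embeddings from the three EMR sources.
ct_report  = [random.gauss(0, 1) for _ in range(DIM)]  # structured pulmonary CT report
blood_gas  = [random.gauss(0, 1) for _ in range(DIM)]  # blood gas analysis
hematology = [random.gauss(0, 1) for _ in range(DIM)]  # hematological analysis
modalities = [ct_report, blood_gas, hematology]

# In the framework these coefficients are learned dynamically per input;
# here the gate logits are fixed placeholders for a learned gating network.
gate_logits = [0.5, 0.2, -0.1]
coeffs = softmax(gate_logits)  # dynamic fusion coefficients, sum to 1

# Fused representation: a convex combination of the modality embeddings.
fused = [sum(c * vec[i] for c, vec in zip(coeffs, modalities))
         for i in range(DIM)]
print(len(fused), round(sum(coeffs), 6))
```

Because the coefficients form a convex combination, the fused vector stays on the same scale as the modality embeddings while letting the gate up- or down-weight each modality per patient.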
Keywords: COPD, Multimodal deep learning, diagnosis, cross-modal interaction, Electronic Medical Records
Received: 28 Mar 2025; Accepted: 09 May 2025.
Copyright: © 2025 Yi, Shi, Liu, WANG, Feng and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Min Feng, Department of Respiratory Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, China
Yanxia Li, Department of Respiratory Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.