EDITORIAL article
Front. Radiol.
Sec. Cardiothoracic Imaging
Volume 4 - 2024
doi: 10.3389/fradi.2024.1412404
This article is part of the Research Topic Artificial Intelligence and Multimodal Medical Imaging Data Fusion for Improving Cardiovascular Disease Care.
Editorial: Artificial Intelligence and Multimodal Medical Imaging Data Fusion for Improving Cardiovascular Disease Care
Provisionally accepted
- 1 Bioengineering Dept., College of Engineering, The Roux Institute, Northeastern University, Portland, United States
- 2 Maine Medical Center, Maine Health, Portland, Maine, United States
- 3 Dana–Farber Cancer Institute, Boston, Massachusetts, United States
Today's digital health aims to improve the efficiency of healthcare delivery and to provide personalized, timely disease care. Cardiovascular disease (CVD) is a leading cause of death worldwide. In the United States, 1 in 3 adults has some form of CVD, and it is projected that nearly half of the US population will have at least one type of CVD by 2035, with total direct and indirect costs potentially surpassing $1 trillion (1-3).

Medical imaging data encompass multiple modalities that are primarily utilized in silos. These include computed tomography (CT), magnetic resonance imaging (MRI), CT-derived fractional flow reserve (CT-FFR), cardiac MRI, whole-heart dynamic 3D cardiac MRI perfusion, 3D cardiac MRI late gadolinium enhancement, cardiac positron emission tomography (PET), echocardiography, and coronary angiography. However, only a few modalities are utilized in hybrid configurations, such as positron emission tomography combined with computed tomography (PET/CT), single-photon emission computed tomography combined with CT (SPECT/CT), and echocardiography with invasive angiography. Integrating these different imaging modalities becomes a burden on clinicians, as it can lead to added complexity, potential inaccuracies, and increased healthcare costs.

This research topic focused on fusion techniques that enable the integration and modeling of these multiple modalities to offer complementary information that can help improve CVD care, leveraging machine learning (ML), deep learning (DL), and other state-of-the-art techniques. Following are some insights and findings from this research topic.

Milosevic et al. conducted a systematic and comprehensive review of the state of the art in multimodal medical data fusion in the context of CVD (4). Their review indicated that the open multimodal datasets available are limited and constrained in both size and modality scope.
This scarcity of open datasets with labeled pathologies contributes to the comparatively small number of published papers on the diagnosis or prediction of cardiovascular diseases and conditions. The review also indicated that, over the last five years, there has been considerable work in artificial intelligence employing fusion techniques for multimodal imaging, chiefly involving various MRI and CT scans. However, the integration of modalities such as X-ray, echocardiography, and non-imaging data remains relatively scarce.
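To make the fusion concept concrete, the sketch below illustrates one common pattern from this literature: intermediate (feature-level) fusion, in which feature vectors extracted separately from each modality are concatenated into a single patient-level representation before classification. The array shapes, feature counts, and the toy linear scorer are illustrative assumptions, not drawn from any specific study in this research topic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient embeddings extracted independently from two
# modalities (e.g., a CT-derived embedding and an echo-derived embedding).
ct_features = rng.normal(size=(8, 4))    # 8 patients, 4 CT-derived features
echo_features = rng.normal(size=(8, 3))  # 8 patients, 3 echo-derived features

# Intermediate (feature-level) fusion: concatenate the modality embeddings
# into one fused representation per patient.
fused = np.concatenate([ct_features, echo_features], axis=1)
print(fused.shape)  # (8, 7)

# A trained downstream model would operate on the fused representation;
# a random linear score passed through a sigmoid stands in for it here.
weights = rng.normal(size=fused.shape[1])
risk_score = 1.0 / (1.0 + np.exp(-fused @ weights))
print(risk_score.shape)  # (8,)
```

In practice the per-modality embeddings would come from trained encoders (e.g., CNNs over CT volumes or echo loops), and decision-level (late) fusion, which averages per-modality predictions instead of concatenating features, is the usual alternative when modalities are frequently missing.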
Keywords: artificial intelligence, multimodal medical data fusion, cardiovascular disease, multimodal machine learning, multimodal deep learning, machine learning (ML), machine learning and AI
Received: 04 Apr 2024; Accepted: 09 Oct 2024.
Copyright: © 2024 Amal, Sawyer and Könik. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Saeed Amal, Bioengineering Dept., College of Engineering, Northeastern University, The Roux Institute, Northeastern University, Portland, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.