ORIGINAL RESEARCH article
Front. Oncol.
Sec. Breast Cancer
Volume 15 - 2025 | doi: 10.3389/fonc.2025.1558880
This article is part of the Research Topic: Advancing Breast Cancer Care Through Transparent AI and Federated Learning: Integrating Radiological, Histopathological, and Clinical Data for Diagnosis, Recurrence Prediction, and Survivorship
Integrating Multimodal Ultrasound Imaging and Machine Learning for Predicting Luminal and Non-Luminal Breast Cancer Subtypes
Provisionally accepted
- 1 Department of Medical Ultrasound, Affiliated Hospital of Nantong University, Nantong, Jiangsu Province, China
- 2 Department of Medical Informatics, School of Medicine, Nantong University, Nantong, Jiangsu Province, China
- 3 Department of Ultrasound, First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- 4 Department of General Surgery, Affiliated Hospital of Nantong University, Nantong, Jiangsu Province, China
- 5 Department of Ultrasound, Affiliated Hospital of Nantong University, Nantong, China
Rationale and Objectives
Breast cancer molecular subtypes strongly influence treatment outcomes and prognosis, making precise preoperative differentiation essential for tailoring individualized therapy. This study combines multimodal ultrasound imaging with machine learning to classify luminal and non-luminal subtypes preoperatively, aiming to improve diagnostic accuracy and clinical decision-making.

Methods
This retrospective study screened 247 patients with breast cancer, of whom 192 met the inclusion criteria. Patients were randomly divided in a 7:3 ratio into a training set (134 cases) and a validation set (58 cases). Image segmentation was performed with 3D Slicer software, and radiomics features were extracted in accordance with IBSI standards. Through optimized feature selection, we constructed four model configurations: a monomodal dataset of 2D ultrasound (US) images; a dual-modal dataset integrating 2D US with color Doppler flow imaging (US+CDFI); a trimodal dataset adding strain elastography (US+CDFI+SE); and a four-modal dataset combining all modalities, including ABVS coronal imaging (US+CDFI+SE+ABVS). Machine learning classifiers included logistic regression (LR), support vector machines (SVM), adaptive boosting (AdaBoost), random forests (RF), linear discriminant analysis (LDA), and ridge regression.

Results
The four-modal model achieved the highest performance (AUC 0.947, 95% CI 0.884-0.986), significantly outperforming the monomodal model (AUC 0.758; ΔAUC +0.189). Performance improved progressively with multimodal integration: the trimodal model surpassed the dual-modal and monomodal models (AUC 0.865 vs 0.741 and 0.758), and the four-modal framework showed marked gains in sensitivity (88.4% vs 71.1% for the monomodal model), specificity (92.7% vs 70.1%), and F1 score (0.905).

Conclusion
This study establishes a multimodal machine learning model that integrates advanced ultrasound imaging techniques to distinguish luminal from non-luminal breast cancers preoperatively. The model shows substantial potential to improve diagnostic accuracy and generalization, representing a notable advance in non-invasive breast cancer diagnostics.
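To make the modeling step concrete, the following is a minimal sketch, not the authors' actual pipeline, of training and comparing the six listed classifiers on a radiomics feature table and reporting validation AUC with scikit-learn. The file name radiomics_features.csv and its column layout (one row per lesion, a binary "label" column with 1 = luminal, 0 = non-luminal) are assumptions for illustration only.

```python
# Sketch only: compares the six classifier families named in the Methods
# on a hypothetical radiomics feature table. Column names and the CSV
# file are illustrative assumptions, not from the original article.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

df = pd.read_csv("radiomics_features.csv")    # hypothetical feature table
X = df.drop(columns=["label"]).values         # one row of features per lesion
y = df["label"].values                        # 1 = luminal, 0 = non-luminal

# 7:3 stratified split, mirroring the 134/58 training/validation division
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

models = {
    "LR":       LogisticRegression(max_iter=1000),
    "SVM":      SVC(),                        # scored via decision_function below
    "AdaBoost": AdaBoostClassifier(),
    "RF":       RandomForestClassifier(),
    "LDA":      LinearDiscriminantAnalysis(),
    "Ridge":    RidgeClassifier(),
}

for name, clf in models.items():
    # Standardize features first; radiomics features vary widely in scale.
    pipe = make_pipeline(StandardScaler(), clf)
    pipe.fit(X_tr, y_tr)
    # ROC AUC needs a continuous score: use class probabilities when the
    # estimator provides them, otherwise the signed decision-boundary distance.
    if hasattr(pipe, "predict_proba"):
        score = pipe.predict_proba(X_va)[:, 1]
    else:
        score = pipe.decision_function(X_va)
    print(f"{name}: validation AUC = {roc_auc_score(y_va, score):.3f}")
```

The decision_function fallback matters because SVC (without probability calibration) and RidgeClassifier do not expose predict_proba; any monotone score suffices for ROC AUC, so the decision boundary distance is used in those cases.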
Keywords: breast cancer, machine learning, ultrasound diagnostics, radiomics, molecular subtypes, multimodal imaging, automated breast volume scanner (ABVS)
Received: 13 Apr 2025; Accepted: 10 Sep 2025.
Copyright: © 2025 Fu, Chen, Zhang, Liu, Chen, Qiu, Lu, Bai, Li, Li, Shen, Gu, Zhang and Ni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
ChangJiang Gu, Department of General Surgery, Affiliated Hospital of Nantong University, Nantong, Jiangsu Province, China
Yuanpeng Zhang, Department of Medical Informatics, School of Medicine, Nantong University, Nantong, Jiangsu Province, China
XueJun Ni, Department of Ultrasound, Affiliated Hospital of Nantong University, Nantong, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.