ORIGINAL RESEARCH article

Front. Oncol.

Sec. Breast Cancer

Volume 15 - 2025 | doi: 10.3389/fonc.2025.1600057

This article is part of the Research Topic: Advancing Breast Cancer Care Through Transparent AI and Federated Learning: Integrating Radiological, Histopathological, and Clinical Data for Diagnosis, Recurrence Prediction, and Survivorship

CLGB-Net: Fusion Network for Identifying Local and Global Information of Lesions in Digital Mammography Images

Provisionally accepted
Ningxuan Hu1, Zhizhen Gao1,2*, Zongyu Xie2*, Lei Li1
  • 1Bengbu Medical College, Bengbu, China
  • 2Department of Radiology, The First Affiliated Hospital of Bengbu Medical College, Bengbu, Anhui, China

The final, formatted version of the article will be published soon.

Breast cancer is among the cancers with the highest incidence worldwide, and early diagnosis is crucial for improving patient survival. Digital Mammography (DM) is widely used for breast cancer diagnosis, but its interpretation relies heavily on the radiologist's experience, which can lead to missed diagnoses and misdiagnoses. To address these shortcomings of traditional methods, we propose CLGB-Net, a deep learning model that integrates local and global information for early breast cancer screening. CLGB-Net combines four components: ResNet50, Swin Transformer, Feature Pyramid Network (FPN), and Class Activation Mapping (CAM). ResNet50 extracts local features, while the Swin Transformer captures global contextual information. The FPN efficiently fuses multi-scale features, and CAM generates a class activation weight matrix that re-weights the feature maps, enhancing the model's sensitivity to key regions and its classification performance. In early breast cancer screening, CLGB-Net achieves a precision of 0.900, a recall of 0.935, an F1-score of 0.900, and an accuracy of 0.904. These results are supported by experiments on 3,598 samples comprising normal, benign, and malignant cases. Compared with ResNet50, ResNet101, ViT, and Swin Transformer, the model's accuracy improved by 0.182, 0.038, 0.023, and 0.021, respectively. By capturing both local and global information, and in particular by remaining sensitive to subtle details, CLGB-Net significantly improves the accuracy and robustness of lesion identification in mammography images and reduces the risk of missed diagnosis and misdiagnosis.
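As a rough illustration of the local-global fusion described in the abstract, the PyTorch sketch below pairs a ResNet50 branch (local features) with a transformer-based global branch, fuses multi-scale features through FPN-style lateral connections, and applies a CAM-style weighting before classification. The paper does not specify channel sizes, fusion rules, or the exact Swin Transformer configuration, so the global-branch stand-in, feature dimensions, and weighting step are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the local/global fusion idea (not CLGB-Net itself).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class CLGBSketch(nn.Module):
    def __init__(self, num_classes: int = 3):  # normal / benign / malignant
        super().__init__()
        # Local branch: ResNet50 trunk; keep the intermediate feature maps c3-c5.
        backbone = resnet50(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        # Global branch: a plain transformer encoder over tokens from the deepest map.
        # (The paper uses a Swin Transformer; this is a simplified placeholder.)
        self.token_proj = nn.Conv2d(2048, 256, kernel_size=1)
        encoder_layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
        self.global_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # FPN-style lateral connections for multi-scale fusion (assumed 256 channels).
        self.lat3 = nn.Conv2d(512, 256, kernel_size=1)
        self.lat4 = nn.Conv2d(1024, 256, kernel_size=1)
        self.lat5 = nn.Conv2d(2048, 256, kernel_size=1)
        # CAM-style head: 1x1 conv produces per-class activation maps that also
        # act as spatial weights before global pooling.
        self.cam_head = nn.Conv2d(256, num_classes, kernel_size=1)

    def forward(self, x):
        # Grayscale mammograms are assumed to be replicated to three channels.
        x = self.stem(x)
        c2 = self.layer1(x)
        c3 = self.layer2(c2)
        c4 = self.layer3(c3)
        c5 = self.layer4(c4)
        # Global context: flatten the deepest map into tokens and run the encoder.
        g = self.token_proj(c5)                       # (B, 256, H, W)
        b, d, h, w = g.shape
        tokens = g.flatten(2).transpose(1, 2)         # (B, H*W, 256)
        g = self.global_encoder(tokens).transpose(1, 2).reshape(b, d, h, w)
        # FPN top-down fusion of local multi-scale features plus global context.
        p5 = self.lat5(c5) + g
        p4 = self.lat4(c4) + F.interpolate(p5, size=c4.shape[-2:], mode="nearest")
        p3 = self.lat3(c3) + F.interpolate(p4, size=c3.shape[-2:], mode="nearest")
        # CAM weighting: class activation maps re-weight the fused features,
        # then global average pooling yields the classification logits.
        cam = self.cam_head(p3)                       # (B, num_classes, H3, W3)
        weights = torch.sigmoid(cam).mean(dim=1, keepdim=True)
        logits = F.adaptive_avg_pool2d(self.cam_head(p3 * weights), 1).flatten(1)
        return logits, cam

# Example usage on a dummy batch of two 224x224 images.
if __name__ == "__main__":
    model = CLGBSketch(num_classes=3)
    logits, cam = model(torch.randn(2, 3, 224, 224))
    print(logits.shape, cam.shape)  # torch.Size([2, 3]) torch.Size([2, 3, 28, 28])
```

Returning the class activation maps alongside the logits mirrors the role CAM plays in the abstract: the same maps that weight the features can be visualized to highlight the lesion regions driving each prediction.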

Keywords: breast cancer, early screening, CAD, deep learning, CLGB-Net

Received: 25 Mar 2025; Accepted: 17 Jun 2025.

Copyright: © 2025 Hu, Gao, Xie and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Zhizhen Gao, Bengbu Medical College, Bengbu, China
Zongyu Xie, Department of Radiology, The First Affiliated Hospital of Bengbu Medical College, Bengbu, 233004, Anhui, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.