ORIGINAL RESEARCH article

Front. Bioeng. Biotechnol.

Sec. Biomechanics

Volume 13 - 2025 | doi: 10.3389/fbioe.2025.1526260

This article is part of the Research Topic: Use of Digital Human Modeling for Promoting Health, Care and Well-Being.

Accurate Classification of Benign and Malignant Breast Tumors in Ultrasound Imaging with an Enhanced Deep Learning Model

Provisionally accepted
BaoQin Liu1*, Shouyao Liu1, Zijian Cao2, Junning Zhang3, Xiaoqi Pu1, Junjie Yu1
  • 1China-Japan Friendship Hospital, Beijing, China
  • 2Tsinghua University, Beijing, China
  • 3School of Traditional Chinese Medicine, Beijing University of Chinese Medicine, Beijing, China

The final, formatted version of the article will be published soon.

Background: Breast cancer is the most common malignant tumor in women worldwide, and early detection is crucial to improving patient prognosis. However, traditional ultrasound examination relies heavily on physician judgment, and diagnostic results are easily influenced by individual experience, leading to frequent misdiagnoses or missed diagnoses. There is therefore a pressing need for an automated, highly accurate diagnostic method to support the detection and classification of breast cancer. This study aims to build a reliable deep learning model for classifying breast ultrasound images as benign or malignant, improving the accuracy and consistency of diagnosis.

Methods: This study proposes an innovative deep learning model, RcdNet. RcdNet combines depthwise separable convolutions with Convolutional Block Attention Module (CBAM) attention to enhance the identification of key lesion areas in ultrasound images. The model was internally validated and externally tested on an independent set, and compared with commonly used models such as ResNet, MobileNet, RegNet, ViT and ResNeXt to verify its performance advantage in the benign-malignant classification task. In addition, the model's attention regions were analyzed by heat map visualization to evaluate its clinical interpretability.

Results: The experimental results show that RcdNet outperforms other mainstream deep learning models, including ResNet, MobileNet, and ResNeXt, across all key evaluation metrics. On the external test set, RcdNet achieved an accuracy of 0.9351, a precision of 0.9168, a recall of 0.9495, and an F1-score of 0.9290, demonstrating superior classification performance and strong generalization ability. Furthermore, heat map visualizations confirm that RcdNet accurately attends to clinically relevant features such as tumor edges and irregular structures, aligning well with radiologists' diagnostic focus and enhancing the interpretability and credibility of the model in clinical applications.

Conclusion: The proposed RcdNet model performs well in classifying benign and malignant breast ultrasound images, with high classification accuracy, strong generalization ability and good interpretability. RcdNet can serve as an auxiliary diagnostic tool that helps physicians screen for breast cancer quickly and accurately, improves the consistency and reliability of diagnosis, and provides strong support for early detection and precise diagnosis and treatment of breast cancer. Future work will focus on integrating RcdNet
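The two components named in the Methods, depthwise separable convolution and CBAM attention, can be illustrated with a minimal PyTorch sketch. This is not the authors' released RcdNet code; layer sizes, kernel sizes, and the way the two modules are chained are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' implementation): a block pairing a
# depthwise separable convolution with a CBAM attention module.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one 3x3 filter per input channel, groups=in_ch)
    followed by a 1x1 pointwise conv that mixes channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Shared MLP for channel attention over pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # 7x7 conv over the 2-channel [avg, max] spatial map.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention: shared MLP on global average- and max-pooled features.
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: conv over channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

block = nn.Sequential(DepthwiseSeparableConv(32, 64), CBAM(64))
out = block(torch.randn(1, 32, 56, 56))
```

Because CBAM only reweights features, it preserves the tensor's shape, so such a block can be dropped into an existing backbone; the spatial-attention map is also what heat map visualizations of this kind of model typically display.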

Keywords: breast ultrasound, deep learning, depthwise separable convolution, attention mechanism, benign and malignant diagnosis

Received: 11 Nov 2024; Accepted: 12 Jun 2025.

Copyright: © 2025 Liu, Liu, Cao, Zhang, Pu and Yu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: BaoQin Liu, China-Japan Friendship Hospital, Beijing, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.