ORIGINAL RESEARCH article
Front. Oncol.
Sec. Cancer Imaging and Image-directed Interventions
Volume 15 - 2025 | doi: 10.3389/fonc.2025.1582035
This article is part of the Research Topic: Advances in Oncological Imaging Techniques
Prostate Cancer Classification Using 3D Deep Learning and Ultrasound Video Clips: A Multicenter Study
Provisionally accepted
- 1 Affiliated Dongyang Hospital of Wenzhou Medical University, Dongyang, China
- 2 College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang Province, China
- 3 School of Medicine, Zhejiang University, Hangzhou, China
- 4 Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou, China
- 5 Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou, China
- 6 Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, China
- 7 Dongyang Hospital of Wenzhou Medical University, Dongyang, China
- 8 Yiwu Tianxiang Medical Oriental Hospital, Yiwu, China
- 9 Taizhou Cancer Hospital, Taizhou, Zhejiang Province, China
- 10 Department of Diagnostic Key Laboratory of Head & Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
- 11 Zhejiang Provincial Research Center for Cancer Intelligent Diagnosis and Molecular Technology, Hangzhou, China
- 12 Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou, China
Objective: This study aimed to evaluate the effectiveness of deep-learning models using transrectal ultrasound (TRUS) video clips for predicting prostate cancer.
Methods: We manually segmented TRUS video clips from consecutive men who underwent examination with Esaote MyLab™ Class C ultrasound systems between January 2021 and October 2022. The deep-learning inflated 3D ConvNet (I3D) model was internally validated on the development set using split-sample cross-validation, and its final performance was evaluated on two external test sets by geographic validation. We compared its results with those of a ResNet-50 model, four machine learning (ML) models, and the diagnoses of five senior sonologists.
Results: A total of 815 men (median age: 71 years; IQR: 67-77 years) were included. The development set comprised 552 men (median age: 71 years; IQR: 67-77 years), the internal test set included 93 men (median age: 71 years; IQR: 67-77 years), external test set 1 consisted of 96 men (median age: 70 years; IQR: 65-77 years), and external test set 2 consisted of 74 men (median age: 72 years; IQR: 68-78 years). The I3D model achieved diagnostic classification AUCs greater than 0.86 in the internal test set and in both independent external test sets. Its sensitivity, specificity, and accuracy also showed substantial agreement with the pathological diagnosis (kappa > 0.62, p < 0.05), and its ability to classify and predict prostate cancer was statistically superior to that of the other AI models and the sonologists' diagnoses (p < 0.05).
Conclusion: The I3D model, using TRUS prostate video clips, proved valuable for classifying and predicting prostate cancer.
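The abstract names the architecture but does not detail it; the defining idea of an inflated 3D ConvNet is to bootstrap spatiotemporal video filters from pretrained 2D image filters by repeating each 2D kernel along the time axis and rescaling so activation magnitudes are preserved. Below is a minimal sketch of that inflation step in PyTorch, using an ImageNet-pretrained torchvision ResNet stem as the 2D source; this is an illustrative assumption, not the authors' actual backbone, weights, or training pipeline.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

def inflate_conv2d(conv2d: nn.Conv2d, time_dim: int = 3) -> nn.Conv3d:
    """Inflate a 2D conv into a 3D conv by repeating its kernel along the
    time axis and dividing by time_dim, preserving the activation scale."""
    conv3d = nn.Conv3d(
        conv2d.in_channels,
        conv2d.out_channels,
        kernel_size=(time_dim, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(time_dim // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        w2d = conv2d.weight  # shape: (out_ch, in_ch, kH, kW)
        conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) / time_dim)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

# Demo: inflate the stem conv of a pretrained 2D ResNet (a hypothetical
# stand-in for the study's backbone) and run a dummy 16-frame clip through it.
stem3d = inflate_conv2d(resnet18(weights=ResNet18_Weights.DEFAULT).conv1, time_dim=5)
clip = torch.randn(1, 3, 16, 224, 224)  # (batch, channels, frames, height, width)
print(stem3d(clip).shape)               # torch.Size([1, 64, 16, 112, 112])
```

Likewise, the reported evaluation metrics (AUC for discrimination, Cohen's kappa for agreement with pathology) can be computed with scikit-learn; the labels and scores below are invented for illustration and are not the study's data:

```python
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Hypothetical example values, not study data.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]                            # pathology labels
y_score = [0.91, 0.22, 0.78, 0.66, 0.35, 0.12, 0.84, 0.41]   # model P(cancer)
y_pred = [int(s >= 0.5) for s in y_score]                    # thresholded calls

print("AUC:  ", roc_auc_score(y_true, y_score))   # discrimination
print("kappa:", cohen_kappa_score(y_true, y_pred))  # agreement with pathology
```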
Keywords: prostate cancer, ultrasound, deep learning, I3D model, multicenter study
Abbreviations: I3D, inflated 3D ConvNet; TRUS, transrectal ultrasound; PCa, prostate cancer; PSA, prostate-specific antigen; MRI, magnetic resonance imaging; DCNNs, deep convolutional neural networks; ML, machine learning; SVM, support vector machine
Received: 23 Feb 2025; Accepted: 11 Jun 2025.
Copyright: © 2025 Lou, Chen, Wu, Liu, Zhou, Zhang, Tu, Hu, Lv, Yang, Qi, Sun, Du, Liu, Zhou, Yuanzhen, Chen, Wang, Yao and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Kai Wang, Affiliated Dongyang Hospital of Wenzhou Medical University, Dongyang, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.