ORIGINAL RESEARCH article
Front. Oncol.
Sec. Cancer Imaging and Image-directed Interventions
Volume 15 - 2025 | doi: 10.3389/fonc.2025.1608963
Risk Assessment of Thyroid Nodules with a Multi-Instance Convolutional Neural Network
Provisionally accepted
- 1Department of Ultrasound, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China
- 2Department of Radiology, The Fourth People’s Hospital of Harbin, Harbin, China
- 3Department of Computer Science, School of Informatics, Xiamen University, Xiamen, China
- 4Department of Radiology, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China
Ultrasonography is the primary imaging modality for evaluating thyroid nodules, and artificial intelligence (AI) has advanced the automated diagnosis of thyroid cancer. However, existing AI-assisted methods often suffer from limited diagnostic performance. In this study, we propose a novel multi-instance learning (MIL) convolutional neural network (CNN) model tailored for ultrasound-based thyroid cancer diagnosis. Specifically, the model extracts nodule-level ultrasound features from instance-level images using CNNs and employs an attention mechanism to assign importance scores and aggregate features across instances. This enables effective feature extraction and localization of key instance features, facilitating risk assessment of thyroid nodules. The dataset consists of ultrasound images from 2000 patients at the Affiliated Hospital of Hangzhou Normal University, collected between 2018 and 2024. The images were divided into training (75%, 1500 patients) and testing (25%, 500 patients) sets. The model's performance was evaluated using accuracy, precision, recall, F1-Score, and AUC. To assess the statistical significance of the model's performance relative to other methods, a paired t-test was conducted on the prediction results. The proposed model was compared with two popular ultrasound image classification models for thyroid nodules and outperformed them (accuracy 0.8386 ± 0.0334, 0.7999 ± 0.0188, 0.7839 ± 0.0267; precision 0.8512 ± 0.0301, 0.9039 ± 0.0154, 0.9267 ± 0.0235; recall 0.8427 ± 0.0313, 0.7497 ± 0.0163, 0.6987 ± 0.0249; F1-Score 0.8380 ± 0.0344, 0.8196 ± 0.0178, 0.7967 ± 0.0251; AUC 0.8900 ± 0.0309, 0.8851 ± 0.0124, 0.6340 ± 0.0200), where the proposed model is listed first in each triplet and values are reported with 95% confidence intervals. Statistical analysis showed that the performance differences were statistically significant (p < 0.0001). These results demonstrate the effectiveness and clinical utility of the proposed MIL-CNN framework in non-invasively stratifying thyroid nodule risk, supporting more informed clinical decisions and potentially reducing unnecessary biopsies and surgeries. Codes will be available at https://github.com/rrrr-ops/Thyroid-AI.
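To make the attention-based instance aggregation concrete, the sketch below shows a minimal PyTorch implementation of attention MIL pooling over the instance-level images of one nodule. It is illustrative only: the encoder layers, the feature and attention dimensions (feat_dim, attn_dim), and the class name AttentionMILClassifier are assumptions for demonstration, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class AttentionMILClassifier(nn.Module):
    """Minimal attention-based multi-instance learning (MIL) classifier.

    A bag is the set of instance-level ultrasound images of one nodule.
    A shared CNN encodes each instance, an attention network scores the
    instances, and the attention-weighted sum forms the nodule-level
    representation used for risk classification. Layer sizes are
    illustrative, not the published architecture.
    """

    def __init__(self, feat_dim=512, attn_dim=128, num_classes=2):
        super().__init__()
        # Shared instance-level CNN encoder (small example backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Attention network producing one importance score per instance.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        # Bag-level (nodule-level) risk classifier.
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, bag):
        # bag: (num_instances, 1, H, W), all images of one nodule.
        feats = self.encoder(bag)                 # (N, feat_dim)
        scores = self.attention(feats)            # (N, 1)
        weights = torch.softmax(scores, dim=0)    # normalize over instances
        bag_feat = (weights * feats).sum(dim=0)   # (feat_dim,)
        logits = self.classifier(bag_feat)        # (num_classes,)
        # The attention weights localize the key instances in the bag.
        return logits, weights.squeeze(-1)


if __name__ == "__main__":
    model = AttentionMILClassifier()
    dummy_bag = torch.randn(6, 1, 224, 224)       # 6 ultrasound frames of one nodule
    logits, attn = model(dummy_bag)
    print(logits.shape, attn.shape)               # torch.Size([2]) torch.Size([6])
```

In this setup, the softmax over instance scores yields the importance weights referred to in the abstract, so the same forward pass produces both the nodule-level risk prediction and a per-instance relevance map.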
Keywords: Multi-instance learning, Convolutional Neural Network, Thyroid Nodule, ultrasound image feature, thyroid cancer diagnosis
Received: 09 Apr 2025; Accepted: 01 Jul 2025.
Copyright: © 2025 Yu, Song, Yu, Zhang, Gao, Wang and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Zirong Wang, Department of Radiology, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.