
EDITORIAL article

Front. Imaging

Sec. Imaging Applications

This article is part of the Research Topic "Deep Learning for Medical Imaging Applications".

Editorial: Deep Learning for Medical Imaging Applications

Provisionally accepted
Simone Bonechi1*, Monica Bianchini1, Paolo Andreini1, Sandeep Kumar Mishra2*
  • 1Università degli Studi di Siena, Siena, Italy
  • 2Yale University, New Haven, United States

The final, formatted version of the article will be published soon.

Annual publications on AI in radiology have surged sevenfold, with MRI and CT dominating data acquisition techniques and neuroradiology leading in contributions, followed by the musculoskeletal, cardiovascular, breast, urogenital, thoracic, and abdominal subspecialties. [2] AI has evolved into numerous practical tools with significant clinical impact. Modern systems largely depend on artificial neural networks (ANNs) inspired by brain circuitry, including convolutional neural networks (CNNs), recurrent models, and newer transformer architectures. These approaches achieve high performance across MRI, CT, PET, and ultrasound data, uncovering subtle diagnostic features beyond human perception and supporting earlier disease detection and more efficient clinical workflows. [3] As datasets grow and computational frameworks mature, DL continues to reshape the future of precision medicine. Ongoing challenges include model interpretability, generalizability, and unbiased clinical deployment, but the field is progressing rapidly toward robust, trustworthy, and clinically integrated AI systems. [4] Despite AI's strong research potential, its real-world clinical deployment remains limited, as effective integration into healthcare requires coordinated efforts among stakeholders and careful resolution of ethical challenges. [4,5]

Gabriel et al. explored the critical challenge of integrating AI into patient monitoring to support continuous, real-time clinical assessment. Developed by LookDeep Health, their system achieved high accuracy in both object detection and patient-role classification, demonstrating the feasibility of computer vision as a core technology for passive, uninterrupted patient monitoring within operational hospital environments.
Using this platform, the investigators compiled a substantial dataset of computer-vision-derived predictions from more than 300 high-risk fall patients, totaling over 1,000 monitored patient-days.

Abulajiang et al. explored the association between age at menopause and the risk of major gynecologic malignancies, including cervical, ovarian, and uterine cancers. Using restricted cubic spline (RCS) regression models, the study rigorously characterized nonlinear relationships between menopausal age and subsequent cancer risk. The findings suggest that menopausal age may serve as a meaningful clinical indicator, with potential value in refining individualized cancer risk assessment and informing personalized screening strategies.

Chen et al. conducted a systematic review and meta-analysis evaluating the prognostic significance of growth-pattern-based grading in mucinous ovarian carcinoma (MOC). The analysis indicates that expansile MOC is associated with more favorable outcomes, whereas infiltrative MOC correlates with advanced disease and poorer prognosis. The findings further underscore the importance of complete surgical staging for infiltrative MOC, while suggesting that comprehensive staging may be optional in patients with early-stage expansile MOC.

Another contribution benchmarked self-supervised pretraining strategies for lung ultrasound. Its findings showed that constructing positive pairs from nearby frames within the same video improves performance compared with pairs derived from the same image, although optimal intra-video positive pair (IVPP) hyperparameters vary across downstream tasks. Notably, SimCLR consistently achieved top performance for key B-mode and M-mode lung ultrasound tasks, suggesting that contrastive learning may be better suited than non-contrastive methods for ultrasound imaging applications.
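To make the RCS approach mentioned above concrete, the following is a minimal numpy sketch of a restricted cubic spline basis (Harrell's parameterization, linear beyond the outer knots) fitted by ordinary least squares. The data, knot placement, and the U-shaped "risk" curve are entirely illustrative assumptions, not the study's data or results.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's parameterization).

    Returns a matrix with len(knots) - 1 columns: the linear term plus
    len(knots) - 2 nonlinear terms constrained so the fitted curve is
    linear beyond the outermost knots.
    """
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    pos3 = lambda u: np.maximum(u, 0.0) ** 3  # truncated cubic (x - t)+^3
    cols = [x]
    for j in range(k - 2):
        term = (pos3(x - t[j])
                - pos3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                + pos3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        cols.append(term)
    return np.column_stack(cols)

# Toy fit: recover a hypothetical U-shaped risk curve over menopausal age.
rng = np.random.default_rng(0)
age = rng.uniform(40, 60, 500)
risk = (age - 50) ** 2 / 50 + rng.normal(0, 0.2, 500)   # synthetic, illustrative
knots = np.percentile(age, [5, 35, 65, 95])             # 4 knots, a common default
X = np.column_stack([np.ones_like(age), rcs_basis(age, knots)])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
fitted = X @ beta
```

With 4 knots the design has an intercept, one linear term, and two nonlinear terms; the fitted curve tracks the nonlinear trend while remaining linear in the tails, which is what makes RCS attractive for dose-response-style risk curves.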
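The intra-video positive-pair idea described above can be sketched very simply: instead of pairing two augmentations of the same image, each frame is paired with another frame from the same video within a small temporal window. The function name, the `max_gap` window parameter, and the fallback behavior below are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def sample_video_positive_pairs(frame_video_ids, frame_times, max_gap, rng):
    """For each frame, sample a positive partner from the same video within
    `max_gap` time steps; falls back to the frame itself when no neighboring
    frame qualifies (as with a standard same-image positive pair)."""
    partners = np.empty(len(frame_video_ids), dtype=int)
    for i, (vid, t) in enumerate(zip(frame_video_ids, frame_times)):
        candidates = [j for j, (v2, t2) in enumerate(zip(frame_video_ids,
                                                         frame_times))
                      if v2 == vid and j != i and abs(t2 - t) <= max_gap]
        partners[i] = rng.choice(candidates) if candidates else i
    return partners

# Toy example: two videos; frames 0 and 1 are neighbors, frame 2 is distant.
vids  = [0, 0, 0, 1, 1]
times = [0, 1, 9, 0, 1]
pairs = sample_video_positive_pairs(vids, times, max_gap=2,
                                    rng=np.random.default_rng(0))
```

In a contrastive setup such as SimCLR, each (frame, partner) pair would be pulled together by the loss while all other frames in the batch act as negatives; `max_gap` is the kind of IVPP hyperparameter the study found to vary in its optimal value across downstream tasks.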
Overall, this compilation demonstrates how the contributing researchers collectively push forward the development of advanced deep-learning models, reflecting a strong commitment to improving accuracy, reliability, and impact in medical imaging applications.

Keywords: artificial intelligence, cancer diagnosis, computed tomography (CT), deep learning, machine learning, magnetic resonance imaging (MRI), medical imaging, ultrasound (US)

Received: 05 Dec 2025; Accepted: 09 Dec 2025.

Copyright: © 2025 Bonechi, Bianchini, Andreini and Mishra. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Simone Bonechi
Sandeep Kumar Mishra

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.