ORIGINAL RESEARCH article
Front. Med.
Sec. Pathology
Volume 12 - 2025 | doi: 10.3389/fmed.2025.1574514
This article is part of the Research Topic: Artificial Intelligence-Assisted Medical Imaging Solutions for Integrating Pathology and Radiology Automated Systems - Volume II
Deep Learning-Based Image Classification for AI-Assisted Integration of Pathology and Radiology in Medical Imaging
Provisionally accepted
GuiZhou University of Commerce, Guiyang, China
The integration of pathology and radiology through artificial intelligence (AI) represents a groundbreaking advancement in medical imaging, providing a powerful tool for accurate diagnostics and the optimization of clinical workflows. Traditional image classification methods encounter substantial challenges due to the inherent complexity and heterogeneity of medical imaging datasets, which include multi-modal data sources, imbalanced class distributions, and the critical need for interpretability in clinical decision-making. To address these limitations, this study introduces a deep learning-based framework tailored for AI-assisted medical imaging tasks. It incorporates two novel components: the Adaptive Multi-Resolution Imaging Network (AMRI-Net) and the Explainable Domain-Adaptive Learning (EDAL) strategy. AMRI-Net enhances diagnostic accuracy by leveraging multi-resolution feature extraction, attention-guided fusion mechanisms, and task-specific decoders, allowing the model to accurately identify both fine-grained and global patterns across imaging modalities such as X-ray, CT, and MRI. Complementing this architecture, EDAL improves domain generalizability through advanced domain alignment techniques while integrating uncertainty-aware learning to prioritize high-confidence predictions. It also employs attention-based interpretability tools to highlight critical image regions, improving transparency and clinical trust in AI-driven diagnoses. Experimental results on multi-modal medical imaging datasets underscore the framework's superior performance, with classification accuracies reaching up to 94.95% and F1-scores up to 94.85%.
This research bridges the gap between pathology and radiology, offering a comprehensive AI-driven solution that aligns with the evolving demands of modern healthcare by ensuring precision, reliability, and interpretability in medical imaging.
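The abstract names two mechanisms without detailing them: attention-guided fusion of multi-resolution features (AMRI-Net) and uncertainty-aware prioritization of high-confidence predictions (EDAL). The sketch below is a minimal, hypothetical illustration of both ideas, not the authors' implementation: attention weights here are derived from a stand-in scoring rule (vector norms), whereas a trained network would learn them, and confidence is scored as one minus normalized predictive entropy.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(features):
    """Fuse resolution-specific feature vectors with attention weights.

    `features` is a list of equal-length vectors, one per resolution
    level. The scoring rule (vector norm) is a placeholder; in a trained
    network the scores would come from a small learned sub-network.
    """
    scores = [math.sqrt(sum(x * x for x in f)) for f in features]
    weights = softmax(scores)
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features))
            for i in range(dim)]

def predictive_confidence(probs):
    """Uncertainty-aware score: 1 minus normalized predictive entropy.

    Peaked class distributions score near 1.0 (high confidence);
    uniform distributions score near 0.0.
    """
    eps = 1e-12
    entropy = -sum(p * math.log(max(p, eps)) for p in probs)
    return 1.0 - entropy / math.log(len(probs))
```

A fusion step like this lets detail from a high-resolution branch dominate when its features are strong, while the confidence score gives a simple criterion for deferring low-certainty cases to a human reader.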
Keywords: medical imaging, deep learning, multi-modal integration, Domain adaptation, Interpretability
Received: 11 Feb 2025; Accepted: 05 May 2025.
Copyright: © 2025 Luan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Lan Luan, GuiZhou University of Commerce, Guiyang, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.