ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. Pattern Recognition

Volume 8 - 2025 | doi: 10.3389/frai.2025.1527980

This article is part of the Research Topic: AI-Enabled Breakthroughs in Computational Imaging and Computer Vision.

MedAlmighty: Enhancing disease diagnosis with Large Vision Model distillation

Provisionally accepted
Yajing Ren*, Zheng Gu, Wen Liu
Artificial Intelligence and Smart Mine Engineering Technology Center, Xinjiang Institute of Engineering, Urumqi, China

The final, formatted version of the article will be published soon.

Accurate disease diagnosis is crucial in the medical domain, but it is hindered by the limited, heterogeneous, and complex nature of medical data. These challenges are amplified in multimodal tasks, which require integrating diverse information. Lightweight models, though computationally efficient, often fail to capture the comprehensive medical knowledge needed for reliable predictions. To address this, we explore leveraging pre-trained large vision models, which offer robust features owing to their extensive parameters and general-domain training, but which lack specialized medical knowledge and rely on large-scale datasets often unavailable in the medical field. To overcome these limitations, we aim to combine the generalization strengths of large models with the domain-specific expertise of lightweight models trained on medical data, effectively bridging the gap between general and specialized performance. To this end, we propose MedAlmighty, a distillation-based framework that fuses the generalizable representations of large vision models with the domain-specific knowledge of compact medical models. Specifically, we employ DINOv2 as a frozen teacher model to guide a lightweight CNN student model via knowledge distillation. The student learns from both hard labels and soft targets provided by the teacher, balancing classification accuracy and generalization through a weighted combination of cross-entropy and KL divergence losses. This design enables the student to capture rich semantic features from DINOv2 while remaining efficient and tailored to the medical domain. Experimental results demonstrate that MedAlmighty adeptly leverages the general features of large vision models and the specialized knowledge of small models, thereby enhancing the accuracy and robustness of disease diagnosis, even in scenarios characterized by limited and diverse medical data.
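The weighted combination of hard-label cross-entropy and teacher-student KL divergence described in the abstract can be sketched as follows. This is a minimal, generic Hinton-style distillation loss written with NumPy for illustration; the weighting factor `alpha` and temperature `T` are assumed hyperparameters, not values reported by the authors, and the actual MedAlmighty implementation (teacher logits from frozen DINOv2, student logits from a lightweight CNN) may differ in detail.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, label, alpha=0.5, T=2.0):
    """Weighted sum of hard-label CE and soft-target KL divergence.

    alpha weights the cross-entropy term against the KL term; the KL term
    is scaled by T^2, the conventional correction for temperature scaling.
    """
    # Hard-label cross-entropy at T = 1
    p_student = softmax(student_logits)
    ce = -np.log(p_student[label])

    # Soft targets: KL(teacher || student) at temperature T
    p_teacher = softmax(teacher_logits, T)
    log_ratio = np.log(p_teacher) - np.log(softmax(student_logits, T))
    kl = (p_teacher * log_ratio).sum() * T * T

    return alpha * ce + (1.0 - alpha) * kl
```

When student and teacher logits coincide, the KL term vanishes and only the weighted cross-entropy remains, which is a quick sanity check on any implementation of this loss.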

Keywords: disease diagnosis, large vision model, knowledge distillation, model capacity, domain generalization

Received: 14 Nov 2024; Accepted: 18 Jul 2025.

Copyright: © 2025 Ren, Gu and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Yajing Ren, Artificial Intelligence and Smart Mine Engineering Technology Center, Xinjiang Institute of Engineering, Urumqi, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.