
ORIGINAL RESEARCH article

Front. Med.

Sec. Hematology

Volume 12 - 2025 | doi: 10.3389/fmed.2025.1624683

This article is part of the Research Topic: Applications and Advances of Artificial Intelligence in Medical Image Analysis: PET, SPECT/CT, MRI, and Pathology Imaging.

VFM-SSL-BMADCC-Framework: Vision Foundation Model and Self-supervised Learning Based Automated Framework for Differential Cell Counts on Whole-Slide Bone Marrow Aspirate Smears

Provisionally accepted
Shirong Zhou1, Longrong Ran2, Yuanyou Yao3, Xing Wu1*, Zailin Yang2*, Chengliang Wang1, Zhongshi He1, Yao Liu2*
  • 1Chongqing University, Chongqing, China
  • 2Cancer Hospital, Chongqing University, Chongqing, China
  • 3Southwest Hospital, Army Medical University, Chongqing, China

The final, formatted version of the article will be published soon.

Background: Differential cell counts (DCCs) on bone marrow aspirate (BMA) smears are a critical step in the diagnosis and treatment of blood and bone marrow diseases. However, manual counting relies on the experience of pathologists and is very time-consuming. In recent years, deep learning-based cell detection models have achieved high detection accuracy on datasets from specific diseases and medical centers, but these models depend on large amounts of annotated data and generalize poorly. When the detection task changes or the model is applied at a different medical center, large amounts of data must be re-annotated and the model retrained to maintain detection accuracy.

Methods: To address these issues, we designed an automated framework for whole-slide bone marrow aspirate smear differential cell counting (BMADCC), called VFM-SSL-BMADCC-Framework. This framework requires only whole-slide images (WSIs) as input to generate DCCs. The vision foundation model SAM, known for its strong generalization ability, precisely segments cells within the countable regions of the BMA smear. The MAE, pre-trained on a large unlabeled cell dataset, excels at generalized feature extraction, enabling accurate classification of cells for counting. Additionally, TextureUnet and TCNet, with their powerful texture feature extraction capabilities, effectively segment the body-tail junction areas from WSIs and select tiles suitable for DCCs. The framework was trained and validated on 40 WSIs from Chongqing Cancer Hospital. To assess its generalization across medical centers and diseases, correlation tests were conducted on 13 WSIs from Chongqing Cancer Hospital and 5 WSIs from Southwest Hospital.
Results: The framework demonstrated high accuracy across all stages: the IoU for region-of-interest (ROI) segmentation was 46.19%; the accuracy for tile-of-interest (TOI) classification was 90.45%; the Recall75 for cell segmentation was 99.01%; and the accuracy for cell classification was 77.92%. Experimental results indicated that the automated framework had excellent cell classification and counting performance, making it suitable for BMADCC across different medical centers and diseases. The differential cell count results from all centers were highly consistent with manual analysis.

Conclusion: The proposed VFM-SSL-BMADCC-Framework effectively automates differential cell counting on bone marrow aspirate smears, reducing reliance on extensive annotations and improving generalization across medical centers.
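The staged pipeline described in the Methods section can be sketched as a chain of components: TextureUnet segments body-tail junction regions from the WSI, TCNet selects tiles suitable for counting, SAM segments individual cells, and an MAE-based classifier assigns each cell a type, from which percentages are computed. The following is a minimal illustrative sketch of that control flow only; the stage functions are hypothetical stand-ins passed as callables, not the authors' released code or the actual model interfaces.

```python
# Illustrative sketch of the BMADCC pipeline's stage structure.
# The callables below stand in for TextureUnet (segment_roi), TCNet
# (classify_tiles), SAM (segment_cells), and the MAE-based classifier
# (classify_cell); their names and signatures are assumptions.
from collections import Counter
from typing import Callable, Dict


def run_bmadcc_pipeline(
    wsi,
    segment_roi: Callable,     # WSI -> iterable of countable regions
    classify_tiles: Callable,  # region -> iterable of tiles kept for counting
    segment_cells: Callable,   # tile -> iterable of cell instances
    classify_cell: Callable,   # cell -> cell-type label
) -> Dict[str, float]:
    """Return differential cell counts as percentages of all counted cells."""
    counts = Counter()
    for region in segment_roi(wsi):
        for tile in classify_tiles(region):
            for cell in segment_cells(tile):
                counts[classify_cell(cell)] += 1
    total = sum(counts.values())
    return {label: 100.0 * n / total for label, n in counts.items()} if total else {}


if __name__ == "__main__":
    # Toy demo with dummy stages: one region, two tiles, two cells per tile.
    dcc = run_bmadcc_pipeline(
        wsi="demo_slide",
        segment_roi=lambda wsi: ["region"],
        classify_tiles=lambda region: ["tile1", "tile2"],
        segment_cells=lambda tile: ["cell_a", "cell_b"],
        classify_cell=lambda cell: "myeloblast" if cell == "cell_a" else "lymphocyte",
    )
    print(dcc)  # {'myeloblast': 50.0, 'lymphocyte': 50.0}
```

In the actual framework each callable would wrap a trained model; the sketch only shows how the four stages compose into a single WSI-in, DCC-out procedure.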

Keywords: Whole-Slide Bone Marrow Aspirate Smears, Differential Cell Counts, Vision Foundation Model, Self-supervised Learning, Texture

Received: 07 May 2025; Accepted: 28 Aug 2025.

Copyright: © 2025 Zhou, Ran, Yao, Wu, Yang, Wang, He and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Xing Wu, Chongqing University, Chongqing, China
Zailin Yang, Cancer Hospital, Chongqing University, Chongqing, China
Yao Liu, Cancer Hospital, Chongqing University, Chongqing, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.