AUTHOR=Fu Fanglei, Zhang Xeimei, Wang Zhaoxuan, Xie Luxi, Fu Mingxi, Peng Jing, Wu Jianfeng, Wang Zhe, Guan Tian, He Yonghong, Lin Jin-Shun, Zhu Lianghui, Dai Wenbin
TITLE=A pathology-attention multi-instance learning framework for multimodal classification of colorectal lesions
JOURNAL=Frontiers in Pharmacology
VOLUME=16
YEAR=2025
URL=https://www.frontiersin.org/journals/pharmacology/articles/10.3389/fphar.2025.1592950
DOI=10.3389/fphar.2025.1592950
ISSN=1663-9812
ABSTRACT=Introduction: Colorectal cancer is the third most common cancer worldwide, and accurate pathological diagnosis is crucial for clinical intervention and prognosis assessment. Although deep learning has shown promise in classifying whole slide images (WSIs) in digital pathology, existing weakly supervised methods struggle to fully model the multimodal diagnostic process, which involves both visual feature analysis and pathological knowledge. In addition, staining variability and tissue heterogeneity hinder model generalization. Methods: We propose a multimodal weakly supervised learning framework named PAT-MIL (Pathology-Attention-MIL), which performs five-class WSI-level classification. The model integrates dynamic attention mechanisms with expert-defined text prototypes. It comprises: (1) pathology knowledge-driven text prototypes for semantic guidance, (2) a refinement strategy that gradually adjusts category centers to adaptively improve the prototype distribution, and (3) a loss-balancing method that dynamically adjusts training weights based on gradient feedback to optimize both visual clustering and semantic alignment. Results: PAT-MIL achieves an accuracy of 86.45% (AUC = 0.9624) on an internal five-class dataset, outperforming ABMIL and DSMIL by 2.96% and 2.19%, respectively. On the external datasets CRS-2024 and UniToPatho, the model reaches 95.78% and 84.09% accuracy, exceeding the best baselines by 2.22% and 5.68%, respectively. Discussion: These results demonstrate that PAT-MIL mitigates staining variability and improves cross-center generalization through collaborative modeling of the visual and textual modalities. It provides a robust solution for colorectal lesion classification without relying on pixel-level annotations, advancing multimodal pathological image analysis.
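
For readers who want a concrete picture of the mechanism the abstract names, below is a minimal sketch (not the authors' released PAT-MIL code) of an attention-MIL head whose pooled slide embedding is classified by cosine similarity against per-class text prototypes. The feature dimensions, the gated-attention form, the logit temperature, and the random prototype fallback are all illustrative assumptions; in the paper, the prototypes are built from expert-defined pathology descriptions and refined during training.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionMILWithTextPrototypes(nn.Module):
        # Illustrative stand-in, NOT the authors' implementation: gated
        # attention pooling over patch embeddings (ABMIL-style), then
        # classification by cosine similarity to learnable class prototypes.
        def __init__(self, feat_dim=512, hidden_dim=256, num_classes=5, text_protos=None):
            super().__init__()
            self.attn_v = nn.Linear(feat_dim, hidden_dim)   # gated attention, tanh branch
            self.attn_u = nn.Linear(feat_dim, hidden_dim)   # gated attention, sigmoid branch
            self.attn_w = nn.Linear(hidden_dim, 1)          # per-patch attention score
            # In PAT-MIL the prototypes come from expert-defined pathology text;
            # here a random init stands in when no text embeddings are supplied.
            protos = text_protos if text_protos is not None else torch.randn(num_classes, feat_dim)
            self.prototypes = nn.Parameter(F.normalize(protos, dim=-1))
            self.logit_scale = nn.Parameter(torch.tensor(10.0))  # similarity temperature (assumed)

        def forward(self, patches):  # patches: (N, feat_dim) tile embeddings of one WSI
            gates = torch.tanh(self.attn_v(patches)) * torch.sigmoid(self.attn_u(patches))
            attn = torch.softmax(self.attn_w(gates), dim=0)            # (N, 1) patch weights
            slide = F.normalize((attn * patches).sum(dim=0), dim=-1)   # pooled slide embedding
            logits = self.logit_scale * slide @ F.normalize(self.prototypes, dim=-1).t()
            return logits, attn.squeeze(-1)

    # Hypothetical usage: one bag of 1,000 patch embeddings, ground-truth class 2.
    model = AttentionMILWithTextPrototypes()
    bag = torch.randn(1000, 512)
    logits, attn = model(bag)
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([2]))

The abstract's third component, gradient-feedback loss balancing between the visual-clustering and semantic-alignment objectives, would sit on top of this head, for example by reweighting the two loss terms according to their recent gradient magnitudes; that part is omitted from the sketch.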