AUTHOR=Rasool Novsheena, Wani Niyaz Ahmad, Bhat Javaid Iqbal, Saharan Sandeep, Sharma Vishal Kumar, Alsulami Bassma Saleh, Alsharif Hind, Lytras Miltiadis D. TITLE=CNN-TumorNet: leveraging explainability in deep learning for precise brain tumor diagnosis on MRI images JOURNAL=Frontiers in Oncology VOLUME=15 YEAR=2025 URL=https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2025.1554559 DOI=10.3389/fonc.2025.1554559 ISSN=2234-943X ABSTRACT=Introduction: The early identification of brain tumors is essential for optimal treatment and patient prognosis. Advancements in MRI technology have markedly enhanced tumor detection yet necessitate accurate classification to guide therapy. This underscores the need for diagnostic tools that are both precise and comprehensible to healthcare practitioners. Methods: Our research presents CNN-TumorNet, a convolutional neural network for categorizing MRI images into tumor and non-tumor classes. Although deep learning models achieve high accuracy, their complexity often limits clinical application due to poor interpretability. To address this, we employed the LIME technique, improving model transparency and offering explicit insight into its decision-making process. Results: CNN-TumorNet attained 99% accuracy in distinguishing tumor from non-tumor MRI scans, underscoring its reliability and efficacy as a diagnostic tool. Incorporating LIME ensures that the model's judgments are comprehensible, supporting its clinical adoption. Discussion: Despite CNN-TumorNet's efficacy, the broader challenge of deep learning interpretability persists. These models can function as "black boxes," making it difficult for clinicians to trust and accept them without understanding their rationale.
By integrating LIME, CNN-TumorNet achieves elevated accuracy alongside enhanced transparency, facilitating its application in clinical environments and improving patient care in neuro-oncology.
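The LIME approach described in the abstract can be illustrated with a minimal, self-contained sketch: perturb an image by toggling segments on and off, query the classifier on each perturbed copy, and fit a linear surrogate whose coefficients rank segment importance. Everything below is hypothetical scaffolding, not the paper's implementation: `toy_model` stands in for CNN-TumorNet, a coarse grid replaces a real superpixel segmentation, and plain least squares replaces LIME's locality-weighted regression.

```python
import numpy as np

def segment_grid(image, n=2):
    """Assign each pixel to one of n*n grid segments (stand-in for superpixels)."""
    h, w = image.shape
    rows = np.minimum(np.arange(h) * n // h, n - 1)
    cols = np.minimum(np.arange(w) * n // w, n - 1)
    return rows[:, None] * n + cols[None, :]

def toy_model(images):
    """Hypothetical classifier: 'tumor' score = mean brightness of the top-left patch."""
    return np.array([img[:4, :4].mean() for img in images])

def lime_explain(image, model, segments, num_samples=500, seed=0):
    """Fit a local linear surrogate over random on/off segment masks."""
    rng = np.random.default_rng(seed)
    n_seg = segments.max() + 1
    masks = rng.integers(0, 2, size=(num_samples, n_seg))
    # Zero out the 'off' segments in each perturbed copy of the image
    perturbed = [image * m[segments] for m in masks]
    preds = model(perturbed)
    # Ordinary least squares (real LIME weights samples by proximity)
    X = np.column_stack([np.ones(num_samples), masks])
    coef, *_ = np.linalg.lstsq(X, preds, rcond=None)
    return coef[1:]  # per-segment importance scores

image = np.zeros((8, 8))
image[:4, :4] = 1.0  # bright top-left region playing the role of a lesion
weights = lime_explain(image, toy_model, segment_grid(image))
print(weights.argmax())  # the top-left segment (index 0) dominates
```

In this toy setup the surrogate assigns nearly all importance to the segment the classifier actually depends on, which is the core idea behind using LIME to highlight the MRI regions driving a tumor/non-tumor decision.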