AUTHOR=Gaur Loveleen, Bhandari Mohan, Razdan Tanvi, Mallik Saurav, Zhao Zhongming
TITLE=Explanation-Driven Deep Learning Model for Prediction of Brain Tumour Status Using MRI Image Data
JOURNAL=Frontiers in Genetics
VOLUME=13
YEAR=2022
URL=https://www.frontiersin.org/journals/genetics/articles/10.3389/fgene.2022.822666
DOI=10.3389/fgene.2022.822666
ISSN=1664-8021
ABSTRACT=Cancer research has seen explosive development in the use of deep learning (DL) techniques for analysing magnetic resonance imaging (MRI) images to predict brain tumours. We have observed a substantial gap in explanation, interpretability, and high accuracy for DL models. Consequently, we propose an explanation-driven DL model that combines a convolutional neural network (CNN) with two explanation techniques, local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP), for the prediction of discrete subtypes of brain tumours (meningioma, glioma, and pituitary) from an MRI image dataset. Unlike previous models, our model uses a dual-input CNN approach to overcome the challenge of classifying images of inferior quality, in terms of noise and metal artifacts, by adding gradient noise. Our CNN training results reveal 94.64% accuracy, which compares favourably with other state-of-the-art models. We used SHAP to ensure properties such as consistency and local accuracy in the interpretation, as Shapley values examine all prospective predictions by applying all possible combinations of inputs. In contrast, LIME constructs sparse linear models around each prediction to illustrate how the model operates in the immediate neighbourhood of that prediction. Our emphasis in this study is on interpretability and high accuracy, which are critical for recognising disparities in predictive performance, helpful in developing trust, and essential for integration into clinical practice. The proposed method has broad clinical applicability and could potentially be used for mass screening in resource-constrained countries.
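
The post-hoc explanation pipeline the abstract describes (a trained CNN classifier interrogated with LIME's local sparse linear surrogates and with SHAP attributions) can be illustrated with a short sketch. This is a minimal illustration, not the authors' code: the model file, the (128, 128, 3) input shape, and the arrays x_train/x_test are assumptions, and the paper's dual-input architecture and gradient-noise augmentation are not reproduced here.

    # Minimal sketch: explaining a CNN brain-tumour classifier with LIME and SHAP.
    # Assumptions (not from the paper): a trained Keras model saved at a
    # hypothetical path, taking (128, 128, 3) MRI slices scaled to [0, 1],
    # plus preprocessed arrays x_train and x_test of such slices.
    import numpy as np
    import shap
    from lime import lime_image
    from tensorflow import keras

    model = keras.models.load_model("brain_tumour_cnn.h5")  # hypothetical path

    # --- LIME: fit a sparse linear surrogate around one prediction ---
    lime_explainer = lime_image.LimeImageExplainer()
    explanation = lime_explainer.explain_instance(
        x_test[0].astype("double"),   # one MRI slice to explain
        model.predict,                # classifier_fn: batch of images -> class probabilities
        top_labels=3,                 # meningioma, glioma, pituitary
        hide_color=0,
        num_samples=1000,             # perturbed samples used to fit the local model
    )
    # Superpixels that most support the top predicted class
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
    )

    # --- SHAP: approximate Shapley values over input pixels ---
    # A small random background set stands in for "absent" features.
    background = x_train[np.random.choice(len(x_train), 50, replace=False)]
    shap_explainer = shap.GradientExplainer(model, background)
    shap_values = shap_explainer.shap_values(x_test[:4])
    shap.image_plot(shap_values, x_test[:4])  # overlay attributions on the slices

Design note: LIME answers "which superpixels drove this one prediction" via a local surrogate, while the Shapley-value framing underlying SHAP distributes the prediction across inputs in a way that satisfies consistency and local accuracy, matching the complementary roles the abstract assigns to the two methods.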