<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Med.</journal-id>
<journal-title>Frontiers in Medicine</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Med.</abbrev-journal-title>
<issn pub-type="epub">2296-858X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fmed.2024.1349373</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Medicine</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Explainable AI-driven model for gastrointestinal cancer classification</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Binzagr</surname> <given-names>Faisal</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2595600/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/funding-acquisition/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/resources/"/>
<role content-type="https://credit.niso.org/contributor-roles/software/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff><institution>Department of Computer Science, King Abdulaziz University</institution>, <addr-line>Rabigh</addr-line>, <country>Saudi Arabia</country></aff>
<author-notes>
<fn fn-type="edited-by" id="fn0001"><p>Edited by: Vinayakumar Ravi, Prince Mohammad bin Fahd University, Saudi Arabia</p></fn>
<fn fn-type="edited-by" id="fn0002"><p>Reviewed by: Prabhishek Singh, Bennett University, India</p><p>Jani Anbarasi L., Vellore Institute of Technology, India</p></fn>
<corresp id="c001">&#x002A;Correspondence: Faisal Binzagr, <email>fbinzagr@kau.edu.sa</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>15</day>
<month>04</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>11</volume>
<elocation-id>1349373</elocation-id>
<history>
<date date-type="received">
<day>04</day>
<month>12</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>04</day>
<month>04</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2024 Binzagr.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Binzagr</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Although AI-assisted detection procedures have been shown to be highly effective, several obstacles remain to the use of AI-assisted cancer cell detection in clinical settings. These issues stem mostly from the inability to identify the underlying decision processes. Because AI-assisted diagnosis does not offer a clear decision-making process, doctors are dubious about it. Here, the advent of Explainable Artificial Intelligence (XAI), which offers explanations for prediction models, addresses the AI black-box issue. The SHapley Additive exPlanations (SHAP) approach, which yields interpretations of model predictions, is the main emphasis of this work. The classifier in this study was an ensemble model made up of three Convolutional Neural Networks (CNNs) (InceptionV3, InceptionResNetV2, and VGG16) whose predictions were combined. The KvasirV2 dataset, which comprises pathological findings associated with cancer, was used to train the model. Our combined model yielded an accuracy of 93.17% and an F1 score of 97%. After training the combined model, we use SHAP to analyze images from the three classes to explain the factors that drive the model&#x2019;s predictions.</p>
</abstract>
<kwd-group>
<kwd>gastrointestinal cancer</kwd>
<kwd>explainable AI</kwd>
<kwd>SHAP</kwd>
<kwd>transfer learning</kwd>
<kwd>ensemble learning</kwd>
</kwd-group>
<counts>
<fig-count count="7"/>
<table-count count="3"/>
<equation-count count="5"/>
<ref-count count="65"/>
<page-count count="11"/>
<word-count count="7923"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Pathology</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>The digestive system consists of the organs responsible for processing food. Cellular mutations in any of these organs can lead to cancer, including gastrointestinal cancer. More importantly, gastrointestinal cancer has a huge global impact, accounting for approximately 26.3% of all cancer cases (4.8&#x2009;million cases) and 35.4% of cancer deaths (3.4&#x2009;million deaths) (<xref ref-type="bibr" rid="ref1">1</xref>). As shown in <xref ref-type="fig" rid="fig1">Figure 1</xref>, the digestive tract is approximately 25&#x2009;feet long, extending from the mouth to the anus. Many studies, including [Hospital] and (<xref ref-type="bibr" rid="ref2">2</xref>), identified the most common types of gastrointestinal cancer, including stomach, colon, and liver cancers.</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Proposed model.</p>
</caption>
<graphic xlink:href="fmed-11-1349373-g001.tif"/>
</fig>
<p>Recent studies have reported that a substantial proportion (over 50%) of gastrointestinal cancers can be attributed to risk factors that can be altered by adopting a healthier lifestyle: alcohol intake, cigarette smoking, infection, unhealthy diet, and obesity (<xref ref-type="bibr" rid="ref3">3</xref>, <xref ref-type="bibr" rid="ref4">4</xref>). Moreover, it has been observed that males have a higher susceptibility to gastrointestinal cancers than females, with the risk increasing with age, as indicated by (<xref ref-type="bibr" rid="ref5">5</xref>). Unfortunately, because late-stage diagnoses predominate, the prognosis for such cancers is typically unfavorable (<xref ref-type="bibr" rid="ref6">6</xref>), resulting in site-specific death rates that align with the incidence trends. However, if gastrointestinal cancers are detected in their early stages, the five-year survival rate is considerably higher (<xref ref-type="bibr" rid="ref7">7</xref>). Nonetheless, a study conducted by (<xref ref-type="bibr" rid="ref8">8</xref>) put forward that cognitive and technological issues contribute to significant diagnostic errors, despite the effectiveness of traditional screening procedures.</p>
<p>The Global Cancer Observatory (<xref ref-type="bibr" rid="ref9">9</xref>) predicts a substantial increase in the global mortality and incidence rates of gastrointestinal (GI) cancers (<xref ref-type="bibr" rid="ref10">10</xref>) by the year 2040. The mortality rate is projected to rise by 73%, reaching approximately 5.6&#x2009;million cases, while the incidence rate is expected to increase by 58%, with an estimated 7.5&#x2009;million new cases. These alarming statistics highlight the urgent need for the development of dependable systems to support medical facilities in obtaining accurate GI cancer diagnoses. Addressing this priority through innovative research endeavors becomes crucial to effectively combat the rising burden of GI cancers on a global scale.</p>
<p>Recent research has highlighted the potential of Artificial Intelligence (AI) in reducing misdiagnosis rates associated with conventional screening techniques, thereby enhancing overall diagnostic accuracy (<xref ref-type="bibr" rid="ref11 ref12 ref13 ref14 ref15 ref16 ref17 ref18 ref19 ref20 ref21 ref22 ref23">11&#x2013;23</xref>). The main reason for this accomplishment is the application of machine learning as well as deep learning techniques. However, a significant hurdle faced by AI-supported systems is their perceived nature as computational &#x201C;black boxes.&#x201D; The lack of transparency in the decision-making processes of these AI models has resulted in hesitancy among healthcare institutions when it comes to adopting them for diagnostic purposes, despite their effectiveness (<xref ref-type="bibr" rid="ref24 ref25 ref26 ref27 ref28 ref29 ref30 ref31 ref32 ref33 ref34 ref35">24&#x2013;35</xref>). It is therefore important for AI researchers to integrate digestible explanations throughout the development of AI-aided medical applications, thus assuring healthcare practitioners while also clearing any doubts they might have.</p>
<p>In this context, XAI has emerged as a promising field that aims to address the computational difficulties posed by AI systems, warranting the provision of explanations for model predictions (<xref ref-type="bibr" rid="ref36">36</xref>). By employing XAI techniques, AI researchers can enhance the interpretability and transparency of AI-driven diagnostic systems, thereby fostering trust and facilitating their integration into clinical practice. To address the challenges in AI driven diagnostic systems, this research work focuses on the investigation of SHAP. SHAP is an explanation approach for model predictions that was introduced by (<xref ref-type="bibr" rid="ref37">37</xref>). In our study, we have utilized an ensemble model that we developed and trained on the pathology results obtained from the publicly accessible Kvasir dataset. By employing SHAP, we aim to provide interpretable explanations for the predictions made by our ensemble model, thereby enhancing the transparency and understandability of the AI-assisted diagnostic system.</p>
<p>To pinpoint the critical elements influencing the decision-making process, this study presents a unique approach for the categorization of gastrointestinal lesions. Transfer learning is applied using the InceptionV3, InceptionResNetV2, and VGG16 (Visual Geometry Group) architectures. The CNN models are optimized for identifying gastrointestinal lesions such as esophagitis, polyps, and ulcerative colitis through enhancement and fine-tuning procedures. These improvements increase the precision and robustness of the models compared with the latest techniques such as (<xref ref-type="bibr" rid="ref38 ref39 ref40 ref41 ref42">38&#x2013;42</xref>). The study also proposes and creates an ensemble model by combining the predictions of each CNN model. By drawing on the variety and complementary traits of its component models, the ensemble model seeks to enhance classification performance. The ensemble model is developed to classify gastrointestinal lesions in the dataset, and its performance is assessed. The study also thoroughly examines the characteristics that impact the classification procedure, illuminating the critical elements influencing precise lesion classification. Explainability (<xref ref-type="bibr" rid="ref43">43</xref>, <xref ref-type="bibr" rid="ref44">44</xref>) features made it possible to visualize the variables that contributed to each prediction in a comprehensible way, highlighting significant differences in performance that would not otherwise have been apparent.</p>
<p>The rest of the paper is organized as follows. Section 2 briefly reviews the related literature; the framework of our novel technique is presented in Section 3. Section 4 presents experimental results, discussion, and comparisons with existing techniques. Finally, the article is concluded in Section 5.</p>
</sec>
<sec id="sec2">
<label>2</label>
<title>Related work</title>
<p>Numerous research investigations have been carried out to develop automated models for detecting gastrointestinal cancer. According to (<xref ref-type="bibr" rid="ref45">45</xref>), the detection of esophageal cancer using deep learning (i.e., CNNs) and machine learning is becoming progressively prevalent. Preliminary screening of esophageal cancer has been made possible through the development of a computer-assisted application by (<xref ref-type="bibr" rid="ref46">46</xref>). The researchers achieved esophageal image classification by implementing random forests as an ensemble classifier. Nonetheless, deep learning models continue to be investigated for this task.</p>
<p>In a study conducted in 2019, (<xref ref-type="bibr" rid="ref47">47</xref>) developed VGG16, InceptionV3, and ResNet50 models based on the transfer learning approach to classify endoscopic images into three classes: normal, benign ulcer, and cancer, using a custom dataset of 787 images (367 cancer, 200 normal, and 220 ulcer samples) collected from a hospital. The images were first resized to 224&#x00D7;224 and then preprocessed using Adaptive Histogram Equalization (AHE) to eliminate variations in image brightness and contrast, thereby improving local contrast and enhancing edge definition within each image region. Three binary classification tasks, namely normal vs. cancer, normal vs. ulcer, and cancer vs. ulcer, were performed in this study, and the accuracy, standard deviation, and Area Under the Curve (AUC) values were compared across the different CNN models. ResNet50 demonstrated the highest performance on all three performance metrics. The model achieved an accuracy above 92% for the classification tasks involving normal images. However, for the cancer vs. ulcer task, a lower accuracy of 77.1% was noted. The authors conclude that this decrease is probably attributable to the smaller visual differences between cancer and ulcer instances.</p>
<p>ResNet50 also achieved the lowest standard deviation, indicating greater stability than the other models. In terms of AUC, ResNet50 reported values of 0.97, 0.95, and 0.85 for the normal vs. ulcer, normal vs. cancer, and cancer vs. ulcer tasks, respectively. The authors concluded that this deep learning approach can be a valuable tool to complement traditional screening practices by medical practitioners, reducing the risk of missing positive cases due to repetitive endoscopic frames or diminishing concentration.</p>
<p>The authors of (<xref ref-type="bibr" rid="ref48">48</xref>) developed a deep CNN based on the UNet++ and ResNet50 architectures to distinguish cases of atrophic gastritis (AG) from non-atrophic gastritis (non-AG) using white light endoscopy images. A total of 6,122 images (4,022 AG and 2,100 non-AG) were collected from 456 patients and randomly partitioned into training (89%) and test (11%) sets. For the binary classification task, the model achieved an accuracy of 83.70%, a sensitivity of 83.77%, and a specificity of 83.75%, while for the region segmentation task it achieved an IoU score of 0.648 for the AG regions and 0.777 for the incisura region. The results suggest that the developed model based on the UNet++ and ResNet50 architectures can effectively distinguish between AG and non-AG cases, and it can also be used to delineate specific regions of interest within the endoscopic images.</p>
<p>In research carried out by (<xref ref-type="bibr" rid="ref49">49</xref>), images of non-cancerous lesions and Early Gastric Cancers (EGC) were used to evaluate the diagnostic potential of CNNs. A dataset comprising 386 non-cancerous lesion images and 1,702 EGC images was used to train the CNN model. The analysis showed a sensitivity of 91.18%, reflecting the model&#x2019;s ability to correctly identify EGC cases, and a specificity of 90.64%, indicating its ability to properly identify non-cancerous lesions. Overall, the CNN model reached an accuracy of 90.91% in diagnosing both types of cases. Upon comparison, no remarkable differences were found between the specificity and accuracy of the AI-aided system and those of endoscopy specialists. However, the specificity and accuracy of the non-experts were below those of both the endoscopists and the AI-aided system. According to the study findings, the CNN model exhibited exceptional diagnostic performance for EGC and non-cancerous lesions. Consequently, this research demonstrates the potential of AI-aided systems in assisting medical practitioners.</p>
<p>In the study presented by (<xref ref-type="bibr" rid="ref50">50</xref>), an automated detection approach utilizing Convolutional Neural Networks (CNNs) was proposed to assist in the identification of Early Gastric Cancers (EGC) in endoscopic images. The method employed transfer learning on two distinct classes of image datasets: cancerous and normal. These datasets provided detailed information regarding the texture characteristics of the lesions and were obtained from a relatively limited data set. The CNN-based network was trained using transfer learning techniques to leverage the knowledge acquired from pre-trained models. By utilizing this approach, the network achieved a notable accuracy of 87.6%. Subsequently, an external dataset was used to evaluate the model&#x2019;s performance, and an accuracy of 82.8% was attained.</p>
<p>The MSSADL-GITDC technique uses a median filtering (MF) approach to smooth images. For feature extraction, the approach modifies an enhanced capsule network (CapsNet) model with a class attention layer (CAL), and a Deep Belief Network with Extreme Learning Machine (DBN-ELM) is utilized for GIT categorization. The proposed approach achieved an accuracy of 98.03% (<xref ref-type="bibr" rid="ref51">51</xref>). A unique approach for the automated identification and localization of gastrointestinal (GI) abnormalities in endoscopic video frame sequences is presented in (<xref ref-type="bibr" rid="ref52">52</xref>). The images used for training carry only weak annotations. Both localization and anomaly detection performance exceeded 80% in terms of the area under the receiver operating characteristic curve (AUC).</p>
<p>These results suggest that the proposed automated detection method based on CNN, trained on the cancerous and normal image datasets, effectively aids in the identification of EGC in endoscopic images. The achieved accuracy of 87.6% on the training dataset demonstrates the model&#x2019;s ability to discern between cancerous and normal instances. Furthermore, the comparable accuracy of 82.8% on the external dataset indicates the model&#x2019;s generalizability and potential for practical application in clinical settings.</p>
<p>The idea of an interpretable real-time deep neural network based on SHapley Additive exPlanations (SHAP) was first proposed by (<xref ref-type="bibr" rid="ref53">53</xref>). The proposed technique showcased improved real-time performance compared to existing methods, and experimental results highlighted its superiority over current deep learning techniques. Moreover, the author successfully addressed the needs of colorectal surgeons by providing satisfactory operational effectiveness and interpretable feedback. By incorporating Shapley additive explanations, the technique not only offers enhanced performance but also ensures interpretability, aligning with the requirements of medical professionals in the field of colorectal surgery.</p>
<p>Upon investigation of prior research on AI-assisted detection of gastrointestinal cancer, it became evident that this field would highly benefit from further exploration. While several AI models have been utilized to discover abnormalities in medical images, there remains a notable gap in the development of human-comprehensible models that can provide explanations for model predictions. Although there has been a recent surge of interest among researchers, only a limited number of studies have focused on creating AI models that offer interpretability, allowing healthcare professionals and stakeholders to understand and trust the predictions made by these models. In the context of gastrointestinal disease classification, the varied shapes and sizes of a single lesion pose a major challenge. Moreover, a single model extracts only a single type of features, which reduces classification accuracy. Therefore, there is a clear need for more research efforts to develop AI models for gastrointestinal cancer detection that not only achieve high accuracy but also provide comprehensible explanations for their predictions.</p>
</sec>
<sec sec-type="methods" id="sec3">
<label>3</label>
<title>Methodology</title>
<p>The proposed scheme builds on explainable artificial intelligence (XAI) and presents an XAI-based model for gastrointestinal (GI) diagnosis. <xref ref-type="fig" rid="fig1">Figure 1</xref> shows the proposed structure of XAI-based gastrointestinal cancer screening. The system was trained and evaluated using pathology findings from the KvasirV2 dataset. An ensemble design was developed to improve the performance and accuracy of the system; this model incorporates predictions from multiple models and has the potential to increase robustness and improve overall classification. Additionally, the XAI process was used to uncover the determining factors associated with each category, identifying and describing the key characteristics that influence the decision to assign a class. By integrating XAI into the ensemble model and analyzing its decisions, this approach aims to illuminate the decision-making process of gastrointestinal cancer screening, improving its transparency and interpretability.</p>
<sec id="sec4">
<label>3.1</label>
<title>Dataset</title>
<p>Datasets play a crucial role in the advancement of various computing domains, particularly in the field of deep learning applications. The availability and quality of datasets are critical, since they must include enough examples, be sufficiently labeled, and show variety in the images. Several investigators and institutions have expanded the datasets available for medical imaging, making it easier to train and evaluate proposed models. This study made use of the Kvasir dataset, initially introduced by (<xref ref-type="bibr" rid="ref54">54</xref>) in 2017, which is composed of images that have been meticulously validated and annotated by medical experts. Each class contains a thousand images showcasing pathological findings, endoscopic procedures, and anatomical landmarks within the gastrointestinal tract.</p>
<p>However, for the purpose of this research, our focus was solely on the pathological findings class, which encompasses three distinct categories: Esophagitis, Polyps, and Ulcerative-Colitis, a chronic condition causing inflammation of the colon and rectum. To enhance the diversity and variety within the dataset, data augmentation techniques were applied to the original dataset. Specifically, rotation and zoom techniques were utilized to create variations of the existing images. This process involved rotating the images at different angles and applying zooming operations to produce new perspectives and scales.</p>
<p>By applying these data augmentation techniques, an augmented dataset with 2,000 images per class was generated. This increased dataset size provided a broader range of image variations and ensured a more comprehensive representation of the pathological findings within the gastrointestinal (GI) tract. The augmented dataset, with its increased variety and enlarged sample size, is crucial for training and evaluating the proposed models effectively. It enables the models to learn from a more diverse set of examples and improves their ability to generalize and make accurate predictions on unseen data. <xref ref-type="fig" rid="fig2">Figure 2</xref> shows sample images from the dataset. The dataset was divided into training and testing sets, with 70% of the data used for training and the remaining 30% for testing.</p>
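<p>The rotation and zoom augmentation described above can be sketched in plain NumPy. This is an illustrative sketch rather than the paper&#x2019;s code: the helper names, the right-angle rotations, and the nearest-neighbor zoom are assumptions, and a framework utility such as Keras&#x2019; ImageDataGenerator would typically be used in practice.</p>

```python
import numpy as np

def rotate_90(img, k):
    """Rotate an H x W x C image by k * 90 degrees in the image plane."""
    return np.rot90(img, k=k, axes=(0, 1))

def center_zoom(img, factor):
    """Zoom into the image center by `factor` (> 1), then resize back
    to the original shape with nearest-neighbor sampling."""
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    return crop[rows][:, cols]

def augment(images, rng):
    """Double a class by adding one randomly rotated or zoomed copy per image."""
    out = list(images)
    for img in images:
        if rng.random() < 0.5:
            out.append(rotate_90(img, k=int(rng.integers(1, 4))))
        else:
            out.append(center_zoom(img, factor=1.0 + rng.random()))
    return out

def split_70_30(images, rng):
    """Shuffle and split into a 70% training set and a 30% test set."""
    idx = rng.permutation(len(images))
    cut = int(0.7 * len(images))
    return [images[i] for i in idx[:cut]], [images[i] for i in idx[cut:]]
```

<p>Each augmented copy keeps the original image shape, so the enlarged class can be fed to the same CNN input layer as the source images.</p>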
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Sample images of dataset.</p>
</caption>
<graphic xlink:href="fmed-11-1349373-g002.tif"/>
</fig>
</sec>
<sec id="sec5">
<label>3.2</label>
<title>Convolutional neural network models</title>
<p>Three primary deep CNNs were implemented in the development of the ensemble model. InceptionV3, created by (<xref ref-type="bibr" rid="ref55">55</xref>) in 2015, is an upgraded version of GoogleNet (InceptionV1) and comprises 42 layers. VGG16, created by (<xref ref-type="bibr" rid="ref56">56</xref>) in 2014, has 16 layers and uses a Softmax classifier. Finally, InceptionResNetV2, introduced in 2016 by (<xref ref-type="bibr" rid="ref57">57</xref>), is a deep CNN that uses the Inception architecture as its foundation but employs residual connections in place of the filter concatenation stage; it comprises 164 layers.</p>
</sec>
<sec id="sec6">
<label>3.3</label>
<title>Ensemble learning</title>
<p>Ensemble models are a valuable technique in machine learning that combines multiple individual models to enhance the overall performance of a system. The fundamental concept behind ensemble modeling is to leverage the strengths of different models to compensate for their respective weaknesses, resulting in improved accuracy, robustness, and generalization capabilities. Ensemble models come in several varieties, such as bagging, boosting, and stacking.</p>
<p>In bagging, several models are trained separately on different subsets of the training data. The final prediction is usually determined by combining the models&#x2019; predictions using methods such as majority voting or averaging. This approach helps lower overfitting and boosts prediction stability. Boosting, on the other hand, entails training models sequentially. Each new model focuses on the examples that were misclassified by the previous models, thereby progressively improving overall performance. Boosting algorithms assign higher weights to difficult examples, allowing subsequent models to prioritize those instances during training. Stacking takes a different approach by utilizing the predictions of multiple models as input features for a meta-model. The meta-model is trained to learn how to combine these predictions effectively and make the final prediction. This approach can capture complex relationships between the base models&#x2019; outputs and potentially improve overall performance.</p>
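<p>As a concrete illustration of the bagging mechanism, the following self-contained sketch trains several one-dimensional threshold classifiers (&#x201C;stumps&#x201D;) on bootstrap resamples of a toy dataset and combines them by majority vote. The toy data, the stump learner, and the fallback threshold are illustrative assumptions, not part of the paper&#x2019;s method.</p>

```python
import random

def train_stump(sample):
    """Fit a 1-D threshold classifier on (x, label) pairs:
    the threshold is the midpoint between the two class means."""
    pos = [x for x, y in sample if y == 1]
    neg = [x for x, y in sample if y == 0]
    if not pos or not neg:  # degenerate bootstrap sample: fall back to mid-range
        xs = [x for x, _ in sample]
        return (min(xs) + max(xs)) / 2
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def bagging(data, n_models, rng):
    """Train n_models stumps, each on a bootstrap resample of the data."""
    return [train_stump([rng.choice(data) for _ in range(len(data))])
            for _ in range(n_models)]

def bagged_predict(x, thresholds):
    """Majority vote over the individual stump predictions."""
    votes = sum(1 for t in thresholds if x >= t)
    return 1 if 2 * votes > len(thresholds) else 0

# Toy data: class 0 clusters at low values, class 1 at high values.
data = [(float(x), 0) for x in range(1, 6)] + [(float(x), 1) for x in range(6, 11)]
stumps = bagging(data, n_models=11, rng=random.Random(0))
```

<p>Because each stump sees a different resample, the individual thresholds vary, but the vote is more stable than any single stump, which is the point of bagging.</p>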
<p>Ensemble models find applications in various domains of AI, including computer vision, natural language processing, and speech recognition. For instance, in image classification tasks, an ensemble of CNNs can be employed to enhance accuracy and robustness. Each CNN within the ensemble may specialize in different aspects of feature extraction or classification, leading to improved classification performance. Ensemble modeling is a powerful technique in machine learning that leverages the collective wisdom of multiple models. By combining diverse models, ensemble methods can mitigate individual model limitations and yield superior performance across a range of AI applications (<xref ref-type="bibr" rid="ref58 ref59 ref60">58&#x2013;60</xref>). The mathematical equations behind ensemble modeling are as follows:</p>
<disp-formula id="E1"><mml:math id="M1"><mml:mover accent="true"><mml:mi>f</mml:mi><mml:mo stretchy="true">&#x00AF;</mml:mo></mml:mover><mml:mspace width="0.25em"/><mml:mfenced open="(" close=")"><mml:mrow><mml:mi>y</mml:mi><mml:mo stretchy="true">|</mml:mo><mml:mi>x</mml:mi></mml:mrow></mml:mfenced><mml:mo>=</mml:mo><mml:munderover><mml:mstyle displaystyle="true"><mml:mo stretchy="true">&#x2211;</mml:mo></mml:mstyle><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>T</mml:mi></mml:munderover><mml:msub><mml:mi>w</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mspace width="0.25em"/><mml:msub><mml:mi>f</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mfenced open="(" close=")"><mml:mrow><mml:mi>y</mml:mi><mml:mo stretchy="true">|</mml:mo><mml:mi>x</mml:mi></mml:mrow></mml:mfenced></mml:math></disp-formula>
<disp-formula id="E2"><mml:math id="M2"><mml:mover accent="true"><mml:mi>f</mml:mi><mml:mo stretchy="true">&#x00AF;</mml:mo></mml:mover><mml:mspace width="0.25em"/><mml:mfenced open="(" close=")"><mml:mrow><mml:mi>y</mml:mi><mml:mo stretchy="true">|</mml:mo><mml:mi>x</mml:mi></mml:mrow></mml:mfenced><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>Z</mml:mi></mml:mfrac><mml:mspace width="0.25em"/><mml:munderover><mml:mstyle displaystyle="true"><mml:mo stretchy="true">&#x220F;</mml:mo></mml:mstyle><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>T</mml:mi></mml:munderover><mml:msub><mml:mi>f</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:msup><mml:mfenced open="(" close=")"><mml:mrow><mml:mi>y</mml:mi><mml:mo stretchy="true">|</mml:mo><mml:mi>x</mml:mi></mml:mrow></mml:mfenced><mml:msub><mml:mi>w</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:msup></mml:math></disp-formula>
<disp-formula id="E3"><mml:math id="M3"><mml:mi>H</mml:mi><mml:mspace width="0.25em"/><mml:mfenced open="(" close=")"><mml:mi>x</mml:mi></mml:mfenced><mml:mo>=</mml:mo><mml:mi mathvariant="italic">sign</mml:mi><mml:mspace width="0.25em"/><mml:mfenced open="(" close=")"><mml:mrow><mml:munderover><mml:mstyle displaystyle="true"><mml:mo stretchy="true">&#x2211;</mml:mo></mml:mstyle><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>T</mml:mi></mml:munderover><mml:msub><mml:mi>w</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mspace width="0.25em"/><mml:msub><mml:mi>h</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mfenced open="(" close=")"><mml:mi>x</mml:mi></mml:mfenced></mml:mrow></mml:mfenced></mml:math></disp-formula>
<p>Where f<sub>t</sub>(y | x) is the prediction of the t-th model, w<sub>t</sub> is its weight, y is the estimated class probability, Z is a normalization constant, and h<sub>t</sub>(x) is the output of the t-th model.</p>
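<p>A minimal NumPy sketch of the three combination rules formalized above: the weighted arithmetic mean, the normalized weighted geometric mean, and weighted majority voting over &#x00B1;1 outputs. The toy probabilities and uniform weights are illustrative.</p>

```python
import numpy as np

def weighted_average(probs, w):
    """Weighted arithmetic mean: f_bar(y|x) = sum_t w_t * f_t(y|x)."""
    return np.average(probs, axis=0, weights=w)

def weighted_geometric_mean(probs, w):
    """Normalized product: f_bar(y|x) = (1/Z) * prod_t f_t(y|x)^{w_t},
    with Z chosen so the combined distribution sums to one."""
    g = np.prod(probs ** np.asarray(w)[:, None], axis=0)
    return g / g.sum()

def weighted_vote(outputs, w):
    """Weighted vote: H(x) = sign(sum_t w_t * h_t(x)), h_t(x) in {-1, +1}."""
    return np.sign(np.dot(w, outputs))

# Per-model class probabilities for one input (rows: models, cols: classes).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.5, 0.3, 0.2]])
w = np.full(3, 1 / 3)  # uniform weights, as in simple averaging
combined = weighted_average(probs, w)
```

<p>With uniform weights, the arithmetic rule reduces to the plain averaging used by the ensemble in this study, while the geometric rule penalizes classes on which any single model assigns low probability.</p>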
<p>This study focuses on the development of an ensemble model based on the bagging technique. This is executed by combining the predictions of three pretrained CNNs: InceptionV3, InceptionResNetV2, and VGG16. The ensemble model&#x2019;s architecture is depicted in <xref ref-type="fig" rid="fig3">Figure 3</xref>. Each of the three CNN models was applied to our augmented KvasirV2 dataset, which was separated into two parts: 75% for training and 25% for validation. After the models were trained individually, the averaging approach was used to build the ensemble model by combining each model&#x2019;s predictions. The averaging technique computes the mean of the predictions obtained from the three trained models, yielding the final prediction.</p>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Ensemble model architecture.</p>
</caption>
<graphic xlink:href="fmed-11-1349373-g003.tif"/>
</fig>
</sec>
<sec id="sec7">
<label>3.4</label>
<title>Explainable AI</title>
<p>The field of Explainable AI (XAI) is experiencing rapid growth, focusing on enhancing transparency and interpretability in machine learning algorithms. This advancement is of particular significance in the realm of medical imaging, as the outputs of machine learning models greatly influence patient care. XAI methods play a crucial role in enabling clinicians and radiologists to comprehend the rationale behind the model&#x2019;s predictions, thereby instilling confidence in the accuracy of the model&#x2019;s assessments. Moreover, XAI techniques aid in the identification of potential biases within the model, facilitating the prevention of misdiagnosis and promoting equitable healthcare outcomes (<xref ref-type="bibr" rid="ref24">24</xref>, <xref ref-type="bibr" rid="ref25">25</xref>, <xref ref-type="bibr" rid="ref61">61</xref>).</p>
<p>XAI techniques in medicine and healthcare have been classified into five categories by (<xref ref-type="bibr" rid="ref36">36</xref>). Our goal of improving the explainability of medical imaging models motivated us to investigate XAI explanations based on the feature relevance approach. One such technique is SHAP. The SHAP method, developed by (<xref ref-type="bibr" rid="ref37">37</xref>), is a model-agnostic technique derived from cooperative game theory that interprets machine learning model outputs by quantifying the contribution of each feature. It provides a comprehensive framework that considers both global and local feature importance, accounts for feature interactions, and ensures fairness in assigning importance. SHAP values satisfy desirable axioms of feature attribution methods, including local accuracy, consistency, and missingness. Local accuracy ensures that the SHAP values sum to the difference between the model&#x2019;s prediction for a specific input and the expected output. Consistency guarantees that if a model changes so that a feature&#x2019;s contribution increases or stays the same, its SHAP value does not decrease. Missingness implies that irrelevant features receive SHAP values close to zero. Considering all the ways in which features can combine, the Shapley value of a feature indicates how much that feature contributed to the outcome (such as a reward or payout): it apportions the final result among the features, accounting for all the coalitions in which they may have cooperated. This helps us understand which elements were crucial in reaching the final decision. The mathematical model for XAI is:</p>
<disp-formula id="E4"><mml:math id="M4"><mml:mi>g</mml:mi><mml:mspace width="0.25em"/><mml:mfenced open="(" close=")"><mml:mover accent="true"><mml:mi>s</mml:mi><mml:mi>&#x00B4;</mml:mi></mml:mover></mml:mfenced><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03C6;</mml:mi><mml:mi>o</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:munderover><mml:mstyle displaystyle="true"><mml:mo stretchy="true">&#x2211;</mml:mo></mml:mstyle><mml:mrow><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>M</mml:mi></mml:munderover><mml:msub><mml:mi>&#x03C6;</mml:mi><mml:mi>N</mml:mi></mml:msub><mml:msub><mml:mover accent="true"><mml:mi>s</mml:mi><mml:mi>&#x00B4;</mml:mi></mml:mover><mml:mi>N</mml:mi></mml:msub></mml:math></disp-formula>
<p>Here, <inline-formula><mml:math id="M5"><mml:mi>g</mml:mi><mml:mspace width="0.25em"/><mml:mfenced open="(" close=")"><mml:mover accent="true"><mml:mi>s</mml:mi><mml:mi>&#x00B4;</mml:mi></mml:mover></mml:mfenced></mml:math></inline-formula> is the explanation model and <inline-formula><mml:math id="M6"><mml:mover accent="true"><mml:mi>s</mml:mi><mml:mi>&#x00B4;</mml:mi></mml:mover></mml:math></inline-formula> is the simplified input, such that <inline-formula><mml:math id="M7"><mml:mfenced open="(" close=")"><mml:mover accent="true"><mml:mi>s</mml:mi><mml:mi>&#x00B4;</mml:mi></mml:mover></mml:mfenced></mml:math></inline-formula>&#x2009;&#x2248;&#x2009;(&#x1D465;&#x2032;), <inline-formula><mml:math id="M8"><mml:mover accent="true"><mml:mi>s</mml:mi><mml:mi>&#x00B4;</mml:mi></mml:mover></mml:math></inline-formula>&#x2009;&#x2208;&#x2009;{0, 1}<sup>&#x1D440;</sup>, and &#x1D719;<sub>&#x1D441;</sub>&#x2009;&#x2208;&#x2009;&#x211D;. Moreover, the function in the following equation determines the impact of each feature on the model prediction.</p>
<disp-formula id="E5"><mml:math id="M9"><mml:msub><mml:mi>&#x03C6;</mml:mi><mml:mi>N</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:munder><mml:mstyle displaystyle="true"><mml:mo stretchy="true">&#x2211;</mml:mo></mml:mstyle><mml:mrow><mml:mi>&#x03C1;</mml:mi><mml:mo>&#x2286;</mml:mo><mml:mo>&#x2201;</mml:mo><mml:mo>&#x2216;</mml:mo><mml:mfenced open="{" close="}"><mml:mi>N</mml:mi></mml:mfenced></mml:mrow></mml:munder><mml:mfrac><mml:mrow><mml:mfenced open="|" close="|"><mml:mi>&#x03C1;</mml:mi></mml:mfenced><mml:mo>!</mml:mo><mml:mspace width="0.25em"/><mml:mfenced open="(" close=")"><mml:mrow><mml:mfenced open="|" close="|"><mml:mo>&#x2201;</mml:mo></mml:mfenced><mml:mo>&#x2212;</mml:mo><mml:mfenced open="|" close="|"><mml:mi>&#x03C1;</mml:mi></mml:mfenced><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mfenced><mml:mo>!</mml:mo></mml:mrow><mml:mrow><mml:mfenced open="|" close="|"><mml:mo>&#x2201;</mml:mo></mml:mfenced><mml:mo>!</mml:mo></mml:mrow></mml:mfrac><mml:mspace width="0.25em"/><mml:mfenced open="[" close="]"><mml:mrow><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>&#x03C1;</mml:mi><mml:mo>&#x222A;</mml:mo><mml:mfenced open="{" close="}"><mml:mi>N</mml:mi></mml:mfenced></mml:mrow></mml:msub><mml:mfenced open="(" close=")"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>&#x03C1;</mml:mi><mml:mo>&#x222A;</mml:mo><mml:mfenced open="{" close="}"><mml:mi>N</mml:mi></mml:mfenced></mml:mrow></mml:msub></mml:mfenced><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>&#x03C1;</mml:mi></mml:msub><mml:mfenced open="(" close=")"><mml:msub><mml:mi>x</mml:mi><mml:mi>&#x03C1;</mml:mi></mml:msub></mml:mfenced></mml:mrow></mml:mfenced></mml:math></disp-formula>
<p>Here, <inline-formula><mml:math id="M10"><mml:mo>&#x2201;</mml:mo></mml:math></inline-formula> denotes the full feature set, and <inline-formula><mml:math id="M11"><mml:mi>&#x03C1;</mml:mi></mml:math></inline-formula> is a subset of <inline-formula><mml:math id="M12"><mml:mo>&#x2201;</mml:mo></mml:math></inline-formula> that excludes the N<sup>th</sup> feature. <inline-formula><mml:math id="M13"><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>&#x03C1;</mml:mi><mml:mo>&#x222A;</mml:mo><mml:mfenced open="{" close="}"><mml:mi>N</mml:mi></mml:mfenced></mml:mrow></mml:msub></mml:math></inline-formula> is the model trained with the N<sup>th</sup> feature included, and <inline-formula><mml:math id="M15"><mml:msub><mml:mi>f</mml:mi><mml:mi>&#x03C1;</mml:mi></mml:msub></mml:math></inline-formula> is the model trained without it. Moreover, <inline-formula><mml:math id="M16"><mml:msub><mml:mi>x</mml:mi><mml:mi>&#x03C1;</mml:mi></mml:msub></mml:math></inline-formula> represents the values of the features in <inline-formula><mml:math id="M17"><mml:mi>&#x03C1;</mml:mi></mml:math></inline-formula>.</p>
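To make the Shapley formula concrete, the following sketch computes exact Shapley values by enumerating every subset of a small feature set. The `value_fn` here is a toy additive value function introduced purely for illustration, not the paper's ensemble model:

```python
from itertools import combinations
from math import factorial

def shapley_value(value_fn, features, target):
    """Exact Shapley value of `target`: weighted average of its marginal
    contribution over every subset of the remaining features."""
    others = [f for f in features if f != target]
    n = len(features)
    phi = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            # |rho|! * (|C| - |rho| - 1)! / |C|!
            weight = factorial(len(subset)) * factorial(n - len(subset) - 1) / factorial(n)
            phi += weight * (value_fn(set(subset) | {target}) - value_fn(set(subset)))
    return phi

# Toy additive value function: each feature contributes a fixed amount
contrib = {"a": 1.0, "b": 2.0, "c": 3.0}
value_fn = lambda s: sum(contrib[f] for f in s)

phis = {f: shapley_value(value_fn, list(contrib), f) for f in contrib}
```

For an additive value function each feature's Shapley value equals its fixed contribution, and the values sum to the total payout, mirroring the local accuracy axiom described above.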
<p>Various applications have benefited from SHAP values, encompassing domains such as image recognition, natural language processing, and healthcare. For instance, in a study focusing on breast cancer detection, SHAP values were utilized to identify the most relevant regions in the images (<xref ref-type="bibr" rid="ref63">63</xref>). Similarly, in another study concerning the detection of relevant regions in retinal images for predicting disease severity (<xref ref-type="bibr" rid="ref64">64</xref>), SHAP values were employed to interpret the features of a deep neural network model.</p>
</sec>
</sec>
<sec id="sec8">
<label>4</label>
<title>Experimental results</title>
<p>In the initial phase of the experiments, an ensemble model was developed for the classification of gastrointestinal (GI) lesions. This involved training the InceptionV3, InceptionResNetV2, and VGG16 models individually on the KvasirV2 dataset. Subsequently, these models were combined to create the ensemble meta-model. To adapt the original architectures of the three pretrained convolutional neural networks (CNNs), a global average pooling layer was added. This layer summarizes the spatial information from the preceding layers and reduces the dimensionality of the feature maps. Following the pooling layer, a dropout layer with a dropout rate of 0.3 was applied to mitigate overfitting and enhance the model&#x2019;s generalization. The Adam optimization algorithm was employed for model optimization, with sparse categorical cross-entropy as the loss function; this combination allowed effective training of the ensemble model by updating the model weights based on the calculated gradients. Each of the selected deep CNNs was trained for 5 epochs with a batch size of 32 to fine-tune the model parameters and improve performance. After training, the softmax activation function was used for classification, assigning probabilities to each class and enabling the ensemble model to make predictions on the GI lesion classes.</p>
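As a rough sketch of the classification head described above (using the standard definitions of global average pooling and softmax in NumPy, not the exact Keras layers used by the authors; the shapes and weight matrix are hypothetical):

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each (H, W) feature map to its mean: (N, H, W, C) -> (N, C)."""
    return feature_maps.mean(axis=(1, 2))

def softmax(logits):
    """Row-wise softmax producing class probabilities."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
maps = rng.normal(size=(2, 7, 7, 512))    # hypothetical backbone output
pooled = global_average_pool(maps)        # (2, 512) pooled features
W = rng.normal(size=(512, 3))             # hypothetical dense-layer weights
probs = softmax(pooled @ W)               # (2, 3) class probabilities
```

Dropout is omitted here since it only acts during training; at inference time the head reduces to pooling, a dense projection, and softmax.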
<p>By leveraging the strengths of multiple pretrained CNN models through ensemble learning, the developed model aims to enhance the accuracy and robustness of GI lesion classification. <xref ref-type="table" rid="tab1">Table 1</xref> shows the classification results obtained with the individual models as well as the ensemble model, and a comparison among them is plotted in <xref ref-type="fig" rid="fig4">Figure 4</xref>.</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Classification performance of the individual models and the ensemble model.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Model</th>
<th align="center" valign="top">Training accuracy (%)</th>
<th align="center" valign="top">Validation accuracy (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Inception V3</td>
<td align="char" valign="top" char=".">92.56%</td>
<td align="char" valign="top" char=".">86.67%</td>
</tr>
<tr>
<td align="left" valign="top">InceptionResnetV2</td>
<td align="char" valign="top" char=".">90.08%</td>
<td align="char" valign="top" char=".">83.58%</td>
</tr>
<tr>
<td align="left" valign="top">VGG16</td>
<td align="char" valign="top" char=".">89.03%</td>
<td align="char" valign="top" char=".">78.56%</td>
</tr>
<tr>
<td align="left" valign="top">Ensemble model</td>
<td align="char" valign="top" char=".">97.15%</td>
<td align="char" valign="top" char=".">93.17%</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Classification comparison among models.</p>
</caption>
<graphic xlink:href="fmed-11-1349373-g004.tif"/>
</fig>
<p>The confusion matrix derived from the classification results is shown in <xref ref-type="fig" rid="fig5">Figure 5</xref>. It displays the proportion of correctly and incorrectly identified samples for each class, offering a summary of the model&#x2019;s performance. The per-class F1-score, recall, and precision for esophagitis, polyps, and ulcerative colitis are given in the classification report.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Confusion matrix of proposed model.</p>
</caption>
<graphic xlink:href="fmed-11-1349373-g005.tif"/>
</fig>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>SHAP XAI explanation.</p>
</caption>
<graphic xlink:href="fmed-11-1349373-g006.tif"/>
</fig>
<p>The classification report is shown in <xref ref-type="table" rid="tab2">Table 2</xref> and contains the precision, recall, and F1-score for the ulcerative colitis, polyps, and esophagitis classes. The F1-score unifies recall and precision into a single number. Recall reflects the model&#x2019;s capacity to identify all positive samples, while precision reflects its ability to make correct positive predictions. These metrics shed light on how well the model performs for each class. Together, the classification report and the confusion matrix provide useful data for assessing the precision and effectiveness of the developed ensemble model in categorizing gastrointestinal lesions.</p>
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption>
<p>Classification performance of ensemble model using various metrics on individual class.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Class/Label</th>
<th align="center" valign="top">Precision</th>
<th align="center" valign="top">Recall</th>
<th align="center" valign="top">F1-Score</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Esophagitis</td>
<td align="char" valign="top" char=".">0.977</td>
<td align="char" valign="top" char=".">0.982</td>
<td align="char" valign="top" char=".">0.979</td>
</tr>
<tr>
<td align="left" valign="top">Polyps</td>
<td align="char" valign="top" char=".">0.968</td>
<td align="char" valign="top" char=".">0.971</td>
<td align="char" valign="top" char=".">0.969</td>
</tr>
<tr>
<td align="left" valign="top">Ulcerative-Colitis</td>
<td align="char" valign="top" char=".">0.970</td>
<td align="char" valign="top" char=".">0.963</td>
<td align="char" valign="top" char=".">0.967</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>When compared to the individual models, the ensemble model&#x2019;s results show a notable improvement in overall accuracy. The three classes&#x2014;ulcerative colitis, polyps, and esophagitis&#x2014;show excellent precision, recall, and F1-score in the classification report. These metrics indicate that the model effectively minimizes false positives and false negatives while correctly detecting positive cases. An overall accuracy of 93.17% and an F1-score of about 97% for every class show that the ensemble model performs well in classifying gastrointestinal lesions. The high F1-scores suggest that the model properly detects both positive and negative examples and strikes a balance between precision and recall. The encouraging outcomes point to the model&#x2019;s potential for further refinement and implementation in clinical settings, which is appropriate considering the significance of precise prediction in the context of gastrointestinal malignancies. With its high F1-scores and overall accuracy, the model may help diagnose GI cancer and thus serve as a useful tool in healthcare practice.</p>
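The relation among the reported metrics can be checked directly. Using the per-class precision and recall from Table 2, and assuming the usual harmonic-mean definition of the F1-score:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall pairs reported in Table 2
f1_esophagitis = f1_score(0.977, 0.982)   # ~0.979, as reported
f1_polyps      = f1_score(0.968, 0.971)   # ~0.969, as reported
f1_uc          = f1_score(0.970, 0.963)   # ~0.9665, vs. reported 0.967 (rounding)
```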
<p>We employed a blurring-based masker in conjunction with the SHAP partition explainer to understand the deterministic elements underlying the predictions of our ensemble model. This method let us visualize the precise regions of each image that were important to the model&#x2019;s predictions and thereby explain the correct classifications. Four images from each class&#x2014;ulcerative colitis, polyps, and esophagitis&#x2014;that our model correctly predicted were included in this analysis. Using the SHAP partition explainer, we generated a visual depiction of each class&#x2019;s contributing attributes. The deterministic characteristics and their significance for the ulcerative colitis, polyps, and esophagitis classes are shown in <xref ref-type="fig" rid="fig6">Figure 6</xref>.</p>
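The blurring-based masking idea can be illustrated with plain NumPy. This is a conceptual sketch of what such a masker does, not the actual `shap` partition-explainer call: masked-out regions of the input are replaced with a blurred copy of the image rather than zeros, so the model still sees plausible pixel statistics while the local detail is removed.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k-by-k box blur with edge padding; enough to build a masking baseline."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def apply_mask(img, mask, k=3):
    """Keep pixels where mask is True; elsewhere substitute the blurred image."""
    blurred = box_blur(img, k)
    return np.where(mask, img, blurred)

img = np.arange(16, dtype=float).reshape(4, 4)  # tiny stand-in for an endoscopic image
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True                              # "keep" the top half of the image
masked = apply_mask(img, mask)
```

The explainer then scores image regions by how the model's output changes when they are masked out in this way.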
<p>These visualizations improve the interpretability and explainability of our model&#x2019;s predictions by offering insight into the areas or patterns within the images that had a major impact on the ensemble model&#x2019;s decision-making process. <xref ref-type="fig" rid="fig6">Figure 6</xref> shows each real image with blue and red highlights in particular areas. Red indicates elements that contributed positively to the prediction of a particular category, while blue represents parts that contributed adversely. Taking the fourth image in <xref ref-type="fig" rid="fig6">Figure 6</xref> as an example, the red shades are predominantly concentrated around the region corresponding to the esophagitis pathology in the esophagitis class.</p>
<p>This suggests that these highlighted regions played a significant role in the model&#x2019;s prediction for this category. However, when examining the subsequent two classes predicted by the model, both images exhibit mostly blue shades in the area associated with the esophagitis pathology, implying that these regions negatively influenced the model&#x2019;s prediction for those classes. Overall, the model outputs the deterministic features of each tested image, highlighting the regions that contribute positively or adversely to the predicted categories. This provides valuable insight into the specific image characteristics the model considers when making its predictions. Moreover, a feature visualization using t-SNE is shown in <xref ref-type="fig" rid="fig7">Figure 7</xref>.</p>
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption>
<p>Feature visualization using t-SNE.</p>
</caption>
<graphic xlink:href="fmed-11-1349373-g007.tif"/>
</fig>
<p>The limited number of studies conducted on gastrointestinal cancer detection highlights the need for further research in this area. Existing studies have reported moderate to high accuracies using deep learning models such as InceptionResNetV2 and InceptionV3. For instance, one study (<xref ref-type="bibr" rid="ref65">65</xref>) achieved an accuracy of 84.5% using InceptionResNetV2 with a dataset of 854 images, while another study (<xref ref-type="bibr" rid="ref49">49</xref>) reported an accuracy of 90.1% using InceptionV3 with a test set of 341 endoscopic images.</p>
<p>In comparison, our optimized ensemble model, along with the individual models, demonstrates superior performance to these existing studies. The ensemble model reaches an accuracy of 93.17% with an F1-score of about 97% for each class, indicating the effectiveness of our approach in accurately classifying gastrointestinal lesions. However, it is important to acknowledge the challenges faced in developing and evaluating deep learning models for gastrointestinal cancer due to the limited availability of publicly accessible datasets in this domain; this scarcity hinders the progress and thorough evaluation of such models. Moreover, the lack of explainability in deep learning models has contributed to hesitation among healthcare professionals in adopting them in clinical practice. To address this limitation, our proposed model incorporates the SHAP technique, which identifies the deterministic features within the images associated with gastrointestinal pathologies. By providing explanations for the model&#x2019;s decision-making process, our model enhances the interpretability and trustworthiness of the results. We observed that most misclassifications occur in the ulcerative colitis and polyp classes, as the two share similar shape and size; this problem could be addressed by applying contours and highlighting the affected region, which will be done in future work. A limitation of the model is the reproducibility of its results, which is a general deep learning issue; moreover, the technique has not been evaluated on a real-time system and should therefore be trialed clinically before implementation.</p>
<p>The comparison of the proposed model with the latest techniques is shown in <xref ref-type="table" rid="tab3">Table 3</xref>.</p>
<table-wrap position="float" id="tab3">
<label>Table 3</label>
<caption>
<p>Performance comparison.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Techniques</th>
<th align="center" valign="top">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">AHE is applied to the images, after which ResNet50 is fine-tuned using transfer learning (<xref ref-type="bibr" rid="ref47">47</xref>)</td>
<td align="char" valign="top" char=".">92.00%</td>
</tr>
<tr>
<td align="left" valign="top">Deep CNN based on the UNet++ and ResNet50 architectures for classification (<xref ref-type="bibr" rid="ref48">48</xref>)</td>
<td align="char" valign="top" char=".">83.70%</td>
</tr>
<tr>
<td align="left" valign="top">Images of non-cancerous lesions and EGC were used to evaluate the diagnostic potential of a CNN (<xref ref-type="bibr" rid="ref49">49</xref>)</td>
<td align="char" valign="top" char=".">90.91%</td>
</tr>
<tr>
<td align="left" valign="top">Proposed approach</td>
<td align="char" valign="top" char=".">93.17%</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec sec-type="conclusions" id="sec9">
<label>5</label>
<title>Conclusion</title>
<p>The use of technology in healthcare is often challenged by a lack of explanation. This research addresses this issue by examining SHAP (SHapley Additive exPlanations) in depth. SHAP allows preliminary characteristics to be extracted from colon cancer pathology results, and its use in our research aims to enhance the comprehension and interpretation of the prediction model. We began our study by creating and improving an augmented ensemble model: three pretrained CNN models&#x2014;InceptionV3, InceptionResNetV2, and VGG16&#x2014;were merged using the averaging approach and applied to the KvasirV2 dataset, a helpful resource for diagnosing gastrointestinal disorders. Ensemble learning improves the model&#x2019;s accuracy and efficiency, and because the combined model incorporates the unique strengths and capacities of each constituent model, cancer detection with it can be more robust and trustworthy. Furthermore, each disease&#x2019;s characteristic traits were highlighted using the SHAP explainer method. This technique lets us decipher the specific details and regions of medical images that drive the prediction model. By extracting and visualizing these elements, we gain a better grasp of the decision-making mechanism and the underlying concepts of the predictions. Our results demonstrate the accuracy, quality, and usefulness of explainable AI (XAI) models for cancer detection, particularly colon cancer. In future work, we will investigate other AI models and explainability methodologies, and their applicability to other forms of cancer or disorders.</p>
</sec>
<sec sec-type="data-availability" id="sec10">
<title>Data availability statement</title>
<p>The data that support the findings of this study are available from the first and corresponding authors upon reasonable request.</p>
</sec>
<sec sec-type="author-contributions" id="sec11">
<title>Author contributions</title>
<p>FB: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="sec12">
<title>Funding</title>
<p>The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 860&#x2013;830-1443). The authors gratefully acknowledge technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.</p>
</sec>
<sec sec-type="COI-statement" id="sec13">
<title>Conflict of interest</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec100" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><label>1.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lopes</surname> <given-names>J</given-names></name> <name><surname>Rodrigues</surname> <given-names>CM</given-names></name> <name><surname>Gaspar</surname> <given-names>MM</given-names></name> <name><surname>Reis</surname> <given-names>CP</given-names></name></person-group>. <article-title>Melanoma management: from epidemiology to treatment and latest advances</article-title>. <source>Cancers</source>. (<year>2022</year>) <volume>14</volume>:<fpage>4652</fpage>. doi: <pub-id pub-id-type="doi">10.3390/cancers14194652</pub-id>, PMID: <pub-id pub-id-type="pmid">36230575</pub-id></citation></ref>
<ref id="ref2"><label>2.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arnold</surname> <given-names>M</given-names></name> <name><surname>Abnet</surname> <given-names>CC</given-names></name> <name><surname>Neale</surname> <given-names>RE</given-names></name> <name><surname>Vignat</surname> <given-names>J</given-names></name> <name><surname>Giovannucci</surname> <given-names>EL</given-names></name> <name><surname>McGlynn</surname> <given-names>KA</given-names></name> <etal/></person-group>. <article-title>Global burden of 5 major types of gastrointestinal cancer</article-title>. <source>Gastroenterology</source>. (<year>2020</year>) <volume>159</volume>:<fpage>335</fpage>&#x2013;<lpage>349.e15</lpage>. <comment>e15</comment>. doi: <pub-id pub-id-type="doi">10.1053/j.gastro.2020.02.068</pub-id>, PMID: <pub-id pub-id-type="pmid">32247694</pub-id></citation></ref>
<ref id="ref3"><label>3.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Islami</surname> <given-names>F</given-names></name> <name><surname>Goding Sauer</surname> <given-names>A</given-names></name> <name><surname>Miller</surname> <given-names>KD</given-names></name> <name><surname>Siegel</surname> <given-names>RL</given-names></name> <name><surname>Fedewa</surname> <given-names>SA</given-names></name> <name><surname>Jacobs</surname> <given-names>EJ</given-names></name> <etal/></person-group>. <article-title>Proportion and number of cancer cases and deaths attributable to potentially modifiable risk factors in the United States</article-title>. <source>CA Cancer J Clin</source>. (<year>2018</year>) <volume>68</volume>:<fpage>31</fpage>&#x2013;<lpage>54</lpage>. doi: <pub-id pub-id-type="doi">10.3322/caac.21440</pub-id>, PMID: <pub-id pub-id-type="pmid">29160902</pub-id></citation></ref>
<ref id="ref4"><label>4.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>van den Brandt</surname> <given-names>PA</given-names></name> <name><surname>Goldbohm</surname> <given-names>RA</given-names></name></person-group>. <article-title>Nutrition in the prevention of gastrointestinal cancer</article-title>. <source>Best Pract Res Clin Gastroenterol</source>. (<year>2006</year>) <volume>20</volume>:<fpage>589</fpage>&#x2013;<lpage>603</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.bpg.2006.04.001</pub-id></citation></ref>
<ref id="ref5"><label>5.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matsuoka</surname> <given-names>T</given-names></name> <name><surname>Yashiro</surname> <given-names>M</given-names></name></person-group>. <article-title>Precision medicine for gastrointestinal cancer: recent progress and future perspective</article-title>. <source>World J Gastrointest Oncol</source>. (<year>2020</year>) <volume>12</volume>:<fpage>1</fpage>&#x2013;<lpage>20</lpage>. doi: <pub-id pub-id-type="doi">10.4251/wjgo.v12.i1.1</pub-id></citation></ref>
<ref id="ref6"><label>6.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allemani</surname> <given-names>C</given-names></name> <name><surname>Matsuda</surname> <given-names>T</given-names></name> <name><surname>Di Carlo</surname> <given-names>V</given-names></name> <name><surname>Harewood</surname> <given-names>R</given-names></name> <name><surname>Matz</surname> <given-names>M</given-names></name> <name><surname>Nik&#x0161;i&#x0107;</surname> <given-names>M</given-names></name> <etal/></person-group>. <article-title>Global surveillance of trends in cancer survival 2000&#x2013;14 (CONCORD-3): analysis of individual records for 37 513 025 patients diagnosed with one of 18 cancers from 322 population-based registries in 71 countries</article-title>. <source>Lancet</source>. (<year>2018</year>) <volume>391</volume>:<fpage>1023</fpage>&#x2013;<lpage>75</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0140-6736(17)33326-3</pub-id>, PMID: <pub-id pub-id-type="pmid">29395269</pub-id></citation></ref>
<ref id="ref7"><label>7.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moghimi-Dehkordi</surname> <given-names>B</given-names></name> <name><surname>Safaee</surname> <given-names>A</given-names></name></person-group>. <article-title>An overview of colorectal cancer survival rates and prognosis in Asia</article-title>. <source>World J Gastrointest Oncol</source>. (<year>2012</year>) <volume>4</volume>:<fpage>71</fpage>&#x2013;<lpage>5</lpage>. doi: <pub-id pub-id-type="doi">10.4251/wjgo.v4.i4.71</pub-id>, PMID: <pub-id pub-id-type="pmid">22532879</pub-id></citation></ref>
<ref id="ref8"><label>8.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Frenette</surname> <given-names>CT</given-names></name> <name><surname>Strum</surname> <given-names>WB</given-names></name></person-group>. <article-title>Relative rates of missed diagnosis for colonoscopy, barium enema, and flexible sigmoidoscopy in 379 patients with colorectal cancer</article-title>. <source>J Gastrointest Cancer</source>. (<year>2007</year>) <volume>38</volume>:<fpage>148</fpage>&#x2013;<lpage>53</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s12029-008-9027-x</pub-id>, PMID: <pub-id pub-id-type="pmid">19089670</pub-id></citation></ref>
<ref id="ref9"><label>9.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grasgruber</surname> <given-names>P</given-names></name> <name><surname>Hrazdira</surname> <given-names>E</given-names></name> <name><surname>Sebera</surname> <given-names>M</given-names></name> <name><surname>Kalina</surname> <given-names>T</given-names></name></person-group>. <article-title>Cancer incidence in Europe: an ecological analysis of nutritional and other environmental factors</article-title>. <source>Front Oncol</source>. (<year>2018</year>) <volume>8</volume>:<fpage>151</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fonc.2018.00151</pub-id>, PMID: <pub-id pub-id-type="pmid">29951370</pub-id></citation></ref>
<ref id="ref10"><label>10.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Gupta</surname> <given-names>J</given-names></name> <name><surname>Agrawal</surname> <given-names>T</given-names></name> <name><surname>Singh</surname> <given-names>P</given-names></name> <name><surname>Diwakar</surname> <given-names>M</given-names></name></person-group>. <italic>Optical biosensor for early diagnosis of Cancer</italic>. 2023 International Conference on Computer, Electronics &#x0026; Electrical Engineering &#x0026; their Applications (IC2E3); IEEE. (<year>2023</year>).</citation></ref>
<ref id="ref11"><label>11.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmad</surname> <given-names>Z</given-names></name> <name><surname>Rahim</surname> <given-names>S</given-names></name> <name><surname>Zubair</surname> <given-names>M</given-names></name> <name><surname>Abdul-Ghafar</surname> <given-names>J</given-names></name></person-group>. <article-title>Artificial intelligence (AI) in medicine, current applications and future role with special emphasis on its potential and promise in pathology: present and future impact, obstacles including costs and acceptance among pathologists, practical and philosophical considerations. A comprehensive review</article-title>. <source>Diagn Pathol</source>. (<year>2021</year>) <volume>16</volume>:<fpage>1</fpage>&#x2013;<lpage>16</lpage>. doi: <pub-id pub-id-type="doi">10.1186/s13000-021-01085-4</pub-id></citation></ref>
<ref id="ref12"><label>12.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Raza</surname> <given-names>M</given-names></name> <name><surname>Shah</surname> <given-names>JH</given-names></name> <name><surname>Khan</surname> <given-names>MA</given-names></name> <name><surname>Nam</surname> <given-names>Y-C</given-names></name> <name><surname>Nam</surname> <given-names>Y</given-names></name></person-group>. <article-title>Improved shark smell optimization algorithm for human action recognition</article-title>. <source>Comput Mater Contin</source>. (<year>2023</year>) <volume>76</volume>:<fpage>2667</fpage>&#x2013;<lpage>84</lpage>. doi: <pub-id pub-id-type="doi">10.32604/cmc.2023.035214</pub-id></citation></ref>
<ref id="ref13"><label>13.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Raza</surname> <given-names>M</given-names></name> <name><surname>Ulyah</surname> <given-names>SM</given-names></name> <name><surname>Shah</surname> <given-names>JH</given-names></name> <name><surname>Fitriyani</surname> <given-names>NL</given-names></name> <name><surname>Syafrudin</surname> <given-names>M</given-names></name></person-group>. <article-title>ENGA: elastic net-based genetic algorithm for human action recognition</article-title>. <source>Expert Syst Appl</source>. (<year>2023</year>) <volume>227</volume>:<fpage>120311</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.eswa.2023.120311</pub-id></citation></ref>
<ref id="ref14"><label>14.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Raza</surname> <given-names>M</given-names></name> <name><surname>Shah</surname> <given-names>JH</given-names></name> <name><surname>Wang</surname> <given-names>S-H</given-names></name> <name><surname>Tariq</surname> <given-names>U</given-names></name> <name><surname>Khan</surname> <given-names>MA</given-names></name></person-group>. <article-title>HAREDNet: a deep learning based architecture for autonomous video surveillance by recognizing human actions</article-title>. <source>Comput Electr Eng</source>. (<year>2022</year>) <volume>99</volume>:<fpage>107805</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compeleceng.2022.107805</pub-id></citation></ref>
<ref id="ref15"><label>15.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tariq</surname> <given-names>J</given-names></name> <name><surname>Alfalou</surname> <given-names>A</given-names></name> <name><surname>Ijaz</surname> <given-names>A</given-names></name> <name><surname>Ali</surname> <given-names>H</given-names></name> <name><surname>Ashraf</surname> <given-names>I</given-names></name> <name><surname>Rahman</surname> <given-names>H</given-names></name> <etal/></person-group>. <article-title>Fast intra mode selection in HEVC using statistical model</article-title>. <source>Comput Mater Contin</source>. (<year>2022</year>) <volume>70</volume>:<fpage>3903</fpage>&#x2013;<lpage>18</lpage>. doi: <pub-id pub-id-type="doi">10.32604/cmc.2022.019541</pub-id></citation></ref>
<ref id="ref16"><label>16.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Rashid</surname> <given-names>M</given-names></name> <name><surname>Shah</surname> <given-names>JH</given-names></name> <name><surname>Sharif</surname> <given-names>M</given-names></name> <name><surname>Awan</surname> <given-names>MY</given-names></name> <name><surname>Alkinani</surname> <given-names>MH</given-names></name></person-group>. <article-title>An optimized approach for breast cancer classification for histopathological images based on hybrid feature set</article-title>. <source>Curr Med Imaging</source>. (<year>2021</year>) <volume>17</volume>:<fpage>136</fpage>&#x2013;<lpage>47</lpage>. doi: <pub-id pub-id-type="doi">10.2174/1573405616666200423085826</pub-id>, PMID: <pub-id pub-id-type="pmid">32324518</pub-id></citation></ref>
<ref id="ref17"><label>17.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mushtaq</surname> <given-names>I</given-names></name> <name><surname>Umer</surname> <given-names>M</given-names></name> <name><surname>Imran</surname> <given-names>M</given-names></name> <name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Muhammad</surname> <given-names>G</given-names></name> <name><surname>Shorfuzzaman</surname> <given-names>M</given-names></name></person-group>. <article-title>Customer prioritization for medical supply chain during COVID-19 pandemic</article-title>. <source>Comput Mater Contin</source>. (<year>2022</year>) <volume>70</volume>:<fpage>59</fpage>&#x2013;<lpage>72</lpage>. doi: <pub-id pub-id-type="doi">10.32604/cmc.2022.019337</pub-id></citation></ref>
<ref id="ref18"><label>18.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Raza</surname> <given-names>M</given-names></name> <name><surname>Shah</surname> <given-names>JH</given-names></name> <name><surname>Khan</surname> <given-names>MA</given-names></name> <name><surname>Rehman</surname> <given-names>A</given-names></name></person-group>. <italic>Human action recognition using machine learning in uncontrolled environment</italic>. 2021 1st International Conference on Artificial Intelligence and Data Analytics (CAIDA); IEEE. (<year>2021</year>).</citation></ref>
<ref id="ref19"><label>19.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Khan</surname> <given-names>MA</given-names></name> <name><surname>Yasmin</surname> <given-names>M</given-names></name> <name><surname>Shah</surname> <given-names>JH</given-names></name> <name><surname>Gabryel</surname> <given-names>M</given-names></name> <name><surname>Scherer</surname> <given-names>R</given-names></name> <etal/></person-group>. <article-title>Pearson correlation-based feature selection for document classification using balanced training</article-title>. <source>Sensors</source>. (<year>2020</year>) <volume>20</volume>:<fpage>6793</fpage>. doi: <pub-id pub-id-type="doi">10.3390/s20236793</pub-id></citation></ref>
<ref id="ref20"><label>20.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Bibi</surname> <given-names>A</given-names></name> <name><surname>Shah</surname> <given-names>JH</given-names></name> <name><surname>Khan</surname> <given-names>MA</given-names></name> <name><surname>Sharif</surname> <given-names>M</given-names></name> <name><surname>Iqbal</surname> <given-names>K</given-names></name> <etal/></person-group>. <article-title>Deep learning-based classification of fruit diseases: an application for precision agriculture</article-title>. <source>Comput Mater Contin</source>. (<year>2021</year>) <volume>66</volume>:<fpage>1949</fpage>&#x2013;<lpage>62</lpage>. doi: <pub-id pub-id-type="doi">10.32604/cmc.2020.012945</pub-id></citation></ref>
<ref id="ref21"><label>21.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Khan</surname> <given-names>MA</given-names></name> <name><surname>Armghan</surname> <given-names>A</given-names></name> <name><surname>Javed</surname> <given-names>MY</given-names></name></person-group>. <italic>SCNN: a secure convolutional neural network using blockchain</italic>. 2020 2nd International Conference on Computer and Information Sciences (ICCIS); IEEE. (<year>2020</year>).</citation></ref>
<ref id="ref22"><label>22.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khan</surname> <given-names>MA</given-names></name> <name><surname>Nasir</surname> <given-names>IM</given-names></name> <name><surname>Sharif</surname> <given-names>M</given-names></name> <name><surname>Alhaisoni</surname> <given-names>M</given-names></name> <name><surname>Kadry</surname> <given-names>S</given-names></name> <name><surname>Bukhari</surname> <given-names>SAC</given-names></name> <etal/></person-group>. <article-title>A blockchain based framework for stomach abnormalities recognition</article-title>. <source>Comput Mater Contin</source>. (<year>2021</year>) <volume>67</volume>:<fpage>141</fpage>&#x2013;<lpage>58</lpage>. doi: <pub-id pub-id-type="doi">10.32604/cmc.2021.013217</pub-id></citation></ref>
<ref id="ref23"><label>23.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mashood Nasir</surname> <given-names>I</given-names></name> <name><surname>Attique Khan</surname> <given-names>M</given-names></name> <name><surname>Alhaisoni</surname> <given-names>M</given-names></name> <name><surname>Saba</surname> <given-names>T</given-names></name> <name><surname>Rehman</surname> <given-names>A</given-names></name> <name><surname>Iqbal</surname> <given-names>T</given-names></name></person-group>. <article-title>A hybrid deep learning architecture for the classification of superhero fashion products: an application for medical-tech classification</article-title>. <source>Comput Model Eng Sci</source>. (<year>2020</year>) <volume>124</volume>:<fpage>1017</fpage>&#x2013;<lpage>33</lpage>. doi: <pub-id pub-id-type="doi">10.32604/cmes.2020.010943</pub-id></citation></ref>
<ref id="ref24"><label>24.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amann</surname> <given-names>J</given-names></name> <name><surname>Blasimme</surname> <given-names>A</given-names></name> <name><surname>Vayena</surname> <given-names>E</given-names></name> <name><surname>Frey</surname> <given-names>D</given-names></name> <name><surname>Madai</surname> <given-names>VI</given-names></name> <collab>Precise4Q Consortium</collab></person-group>. <article-title>Explainability for artificial intelligence in healthcare: a multidisciplinary perspective</article-title>. <source>BMC Med Inform Decis Mak</source>. (<year>2020</year>) <volume>20</volume>:<fpage>1</fpage>&#x2013;<lpage>9</lpage>. doi: <pub-id pub-id-type="doi">10.1186/s12911-020-01332-6</pub-id></citation></ref>
<ref id="ref25"><label>25.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y</given-names></name> <name><surname>Weng</surname> <given-names>Y</given-names></name> <name><surname>Lund</surname> <given-names>J</given-names></name></person-group>. <article-title>Applications of explainable artificial intelligence in diagnosis and surgery</article-title>. <source>Diagnostics</source>. (<year>2022</year>) <volume>12</volume>:<fpage>237</fpage>. doi: <pub-id pub-id-type="doi">10.3390/diagnostics12020237</pub-id>, PMID: <pub-id pub-id-type="pmid">35204328</pub-id></citation></ref>
<ref id="ref26"><label>26.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dutta</surname> <given-names>S</given-names></name></person-group>. <article-title>An overview on the evolution and adoption of deep learning applications used in the industry</article-title>. <source>Wiley Interdiscip Rev Data Min Knowl Discov</source>. (<year>2018</year>) <volume>8</volume>:<fpage>e1257</fpage>. doi: <pub-id pub-id-type="doi">10.1002/widm.1257</pub-id></citation></ref>
<ref id="ref27"><label>27.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tehsin</surname> <given-names>S</given-names></name> <name><surname>Rehman</surname> <given-names>S</given-names></name> <name><surname>Saeed</surname> <given-names>MOB</given-names></name> <name><surname>Riaz</surname> <given-names>F</given-names></name> <name><surname>Hassan</surname> <given-names>A</given-names></name> <name><surname>Abbas</surname> <given-names>M</given-names></name> <etal/></person-group>. <article-title>Self-organizing hierarchical particle swarm optimization of correlation filters for object recognition</article-title>. <source>IEEE Access</source>. (<year>2017</year>) <volume>5</volume>:<fpage>24495</fpage>&#x2013;<lpage>502</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2017.2762354</pub-id></citation></ref>
<ref id="ref28"><label>28.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Tehsin</surname> <given-names>S</given-names></name> <name><surname>Rehman</surname> <given-names>S</given-names></name> <name><surname>Awan</surname> <given-names>AB</given-names></name> <name><surname>Chaudry</surname> <given-names>Q</given-names></name> <name><surname>Abbas</surname> <given-names>M</given-names></name> <name><surname>Young</surname> <given-names>R</given-names></name> <etal/></person-group>. <italic>Improved maximum average correlation height filter with adaptive log base selection for object recognition</italic>. Optical Pattern Recognition XXVII; SPIE. (<year>2016</year>).</citation></ref>
<ref id="ref29"><label>29.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Tehsin</surname> <given-names>S</given-names></name> <name><surname>Rehman</surname> <given-names>S</given-names></name> <name><surname>Bilal</surname> <given-names>A</given-names></name> <name><surname>Chaudry</surname> <given-names>Q</given-names></name> <name><surname>Saeed</surname> <given-names>O</given-names></name> <name><surname>Abbas</surname> <given-names>M</given-names></name> <etal/></person-group>. <italic>Comparative analysis of zero aliasing logarithmic mapped optimal trade-off correlation filter</italic>. Pattern Recognition and Tracking XXVIII; SPIE. (<year>2017</year>).</citation></ref>
<ref id="ref30"><label>30.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Tehsin</surname> <given-names>S</given-names></name> <name><surname>Rehman</surname> <given-names>S</given-names></name> <name><surname>Riaz</surname> <given-names>F</given-names></name> <name><surname>Saeed</surname> <given-names>O</given-names></name> <name><surname>Hassan</surname> <given-names>A</given-names></name> <name><surname>Khan</surname> <given-names>M</given-names></name> <etal/></person-group>. <italic>Fully invariant wavelet enhanced minimum average correlation energy filter for object recognition in cluttered and occluded environments</italic>. Pattern Recognition and Tracking XXVIII; SPIE. (<year>2017</year>).</citation></ref>
<ref id="ref31"><label>31.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Akbar</surname> <given-names>N</given-names></name> <name><surname>Tehsin</surname> <given-names>S</given-names></name> <name><surname>Bilal</surname> <given-names>A</given-names></name> <name><surname>Rubab</surname> <given-names>S</given-names></name> <name><surname>Rehman</surname> <given-names>S</given-names></name> <name><surname>Young</surname> <given-names>R</given-names></name></person-group>. <italic>Detection of moving human using optimized correlation filters in homogeneous environments</italic>. Pattern Recognition and Tracking XXXI; SPIE. (<year>2020</year>).</citation></ref>
<ref id="ref32"><label>32.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Tehsin</surname> <given-names>S</given-names></name> <name><surname>Asfia</surname> <given-names>Y</given-names></name> <name><surname>Akbar</surname> <given-names>N</given-names></name> <name><surname>Riaz</surname> <given-names>F</given-names></name> <name><surname>Rehman</surname> <given-names>S</given-names></name> <name><surname>Young</surname> <given-names>R</given-names></name></person-group>. <italic>Selection of CPU scheduling dynamically through machine learning</italic>. Pattern Recognition and Tracking XXXI; SPIE. (<year>2020</year>).</citation></ref>
<ref id="ref33"><label>33.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Akbar</surname> <given-names>N</given-names></name> <name><surname>Tehsin</surname> <given-names>S</given-names></name> <name><surname>Ur Rehman</surname> <given-names>H</given-names></name> <name><surname>Rehman</surname> <given-names>S</given-names></name> <name><surname>Young</surname> <given-names>R</given-names></name></person-group>. <italic>Hardware design of correlation filters for target detection</italic>. Pattern Recognition and Tracking XXX; SPIE. (<year>2019</year>).</citation></ref>
<ref id="ref34"><label>34.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Asfia</surname> <given-names>Y</given-names></name> <name><surname>Tehsin</surname> <given-names>S</given-names></name> <name><surname>Shahzeen</surname> <given-names>A</given-names></name> <name><surname>Khan</surname> <given-names>US</given-names></name></person-group>. <italic>Visual person identification device using Raspberry Pi</italic>. The 25th Conference of FRUCT Association; (<year>2019</year>).</citation></ref>
<ref id="ref35"><label>35.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Saad</surname> <given-names>SM</given-names></name> <name><surname>Bilal</surname> <given-names>A</given-names></name> <name><surname>Tehsin</surname> <given-names>S</given-names></name> <name><surname>Rehman</surname> <given-names>S</given-names></name></person-group>. <italic>Spoof detection for fake biometric images using feature-based techniques</italic>. SPIE Future Sensing Technologies; SPIE. (<year>2020</year>).</citation></ref>
<ref id="ref36"><label>36.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>G</given-names></name> <name><surname>Ye</surname> <given-names>Q</given-names></name> <name><surname>Xia</surname> <given-names>J</given-names></name></person-group>. <article-title>Unbox the black-box for the medical explainable AI via multi-modal and multi-Centre data fusion: a mini-review, two showcases and beyond</article-title>. <source>Inf Fusion</source>. (<year>2022</year>) <volume>77</volume>:<fpage>29</fpage>&#x2013;<lpage>52</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.inffus.2021.07.016</pub-id>, PMID: <pub-id pub-id-type="pmid">34980946</pub-id></citation></ref>
<ref id="ref37"><label>37.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lundberg</surname> <given-names>SM</given-names></name> <name><surname>Lee</surname> <given-names>S-I</given-names></name></person-group>. <article-title>A unified approach to interpreting model predictions</article-title>. <source>Adv Neural Inf Process Syst</source>. (<year>2017</year>) <volume>30</volume>:<fpage>4768</fpage>&#x2013;<lpage>77</lpage>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.1705.07874</pub-id></citation></ref>
<ref id="ref38"><label>38.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nouman Noor</surname> <given-names>M</given-names></name> <name><surname>Nazir</surname> <given-names>M</given-names></name> <name><surname>Khan</surname> <given-names>SA</given-names></name> <name><surname>Song</surname> <given-names>O-Y</given-names></name> <name><surname>Ashraf</surname> <given-names>I</given-names></name></person-group>. <article-title>Efficient gastrointestinal disease classification using pretrained deep convolutional neural network</article-title>. <source>Electronics</source>. (<year>2023</year>) <volume>12</volume>:<fpage>1557</fpage>. doi: <pub-id pub-id-type="doi">10.3390/electronics12071557</pub-id></citation></ref>
<ref id="ref39"><label>39.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nouman Noor</surname> <given-names>M</given-names></name> <name><surname>Nazir</surname> <given-names>M</given-names></name> <name><surname>Khan</surname> <given-names>SA</given-names></name> <name><surname>Ashraf</surname> <given-names>I</given-names></name> <name><surname>Song</surname> <given-names>O-Y</given-names></name></person-group>. <article-title>Localization and classification of gastrointestinal tract disorders using explainable AI from endoscopic images</article-title>. <source>Appl Sci</source>. (<year>2023</year>) <volume>13</volume>:<fpage>9031</fpage>. doi: <pub-id pub-id-type="doi">10.3390/app13159031</pub-id></citation></ref>
<ref id="ref40"><label>40.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Noor</surname> <given-names>MN</given-names></name> <name><surname>Ashraf</surname> <given-names>I</given-names></name> <name><surname>Nazir</surname> <given-names>M</given-names></name></person-group>. <article-title>Analysis of GAN-based data augmentation for GI-tract disease classification</article-title> In: <person-group person-group-type="editor"><name><surname>Ali</surname> <given-names>H</given-names></name> <name><surname>Rehmani</surname> <given-names>MH</given-names></name> <name><surname>Shah</surname> <given-names>Z</given-names></name></person-group>, editors. <source>Advances in deep generative models for medical artificial intelligence</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2023</year>). <fpage>43</fpage>&#x2013;<lpage>64</lpage>.</citation></ref>
<ref id="ref41"><label>41.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Noor</surname> <given-names>MN</given-names></name> <name><surname>Nazir</surname> <given-names>M</given-names></name> <name><surname>Ashraf</surname> <given-names>I</given-names></name></person-group>. <article-title>Emerging trends and advances in the diagnosis of gastrointestinal diseases</article-title>. <source>BioScientific Rev</source>. (<year>2023</year>) <volume>5</volume>:<fpage>118</fpage>&#x2013;<lpage>43</lpage>. doi: <pub-id pub-id-type="doi">10.32350/BSR.52.11</pub-id></citation></ref>
<ref id="ref42"><label>42.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Noor</surname> <given-names>MN</given-names></name> <name><surname>Nazir</surname> <given-names>M</given-names></name> <name><surname>Ashraf</surname> <given-names>I</given-names></name> <name><surname>Almujally</surname> <given-names>NA</given-names></name> <name><surname>Aslam</surname> <given-names>M</given-names></name> <name><surname>Fizzah</surname> <given-names>JS</given-names></name></person-group>. <article-title>GastroNet: a robust attention-based deep learning and cosine similarity feature selection framework for gastrointestinal disease classification from endoscopic images</article-title>. <source>CAAI Trans Intell Technol</source>. (<year>2023</year>) <volume>2023</volume>:<fpage>12231</fpage>. doi: <pub-id pub-id-type="doi">10.1049/cit2.12231</pub-id></citation></ref>
<ref id="ref43"><label>43.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bertsimas</surname> <given-names>D</given-names></name> <name><surname>Margonis</surname> <given-names>GA</given-names></name> <name><surname>Tang</surname> <given-names>S</given-names></name> <name><surname>Koulouras</surname> <given-names>A</given-names></name> <name><surname>Antonescu</surname> <given-names>CR</given-names></name> <name><surname>Brennan</surname> <given-names>MF</given-names></name> <etal/></person-group>. <article-title>An interpretable AI model for recurrence prediction after surgery in gastrointestinal stromal tumour: an observational cohort study</article-title>. <source>EClinicalMedicine</source>. (<year>2023</year>) <volume>64</volume>:<fpage>102200</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.eclinm.2023.102200</pub-id></citation></ref>
<ref id="ref44"><label>44.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Auzine</surname> <given-names>MM</given-names></name> <name><surname>Khan</surname> <given-names>MH-M</given-names></name> <name><surname>Baichoo</surname> <given-names>S</given-names></name> <name><surname>Sahib</surname> <given-names>NG</given-names></name> <name><surname>Gao</surname> <given-names>X</given-names></name> <name><surname>Bissoonauth-Daiboo</surname> <given-names>P</given-names></name></person-group>. <italic>Classification of gastrointestinal cancer through explainable AI and ensemble learning</italic>. 2023 Sixth International Conference of Women in Data Science at Prince Sultan University (WiDS PSU); IEEE. (<year>2023</year>).</citation></ref>
<ref id="ref45"><label>45.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bang</surname> <given-names>CS</given-names></name> <name><surname>Lee</surname> <given-names>JJ</given-names></name> <name><surname>Baik</surname> <given-names>GH</given-names></name></person-group>. <article-title>Computer-aided diagnosis of esophageal cancer and neoplasms in endoscopic images: a systematic review and meta-analysis of diagnostic test accuracy</article-title>. <source>Gastrointest Endosc</source>. (<year>2021</year>) <volume>93</volume>:<fpage>1006</fpage>&#x2013;<lpage>1015.e13</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.gie.2020.11.025</pub-id></citation></ref>
<ref id="ref46"><label>46.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Janse</surname> <given-names>MH</given-names></name> <name><surname>Van der Sommen</surname> <given-names>F</given-names></name> <name><surname>Zinger</surname> <given-names>S</given-names></name> <name><surname>Schoon</surname> <given-names>EJ</given-names></name></person-group>. <italic>Early esophageal cancer detection using RF classifiers</italic>. Medical Imaging 2016: Computer-Aided Diagnosis; SPIE. (<year>2016</year>).</citation></ref>
<ref id="ref47"><label>47.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>JH</given-names></name> <name><surname>Kim</surname> <given-names>YJ</given-names></name> <name><surname>Kim</surname> <given-names>YW</given-names></name> <name><surname>Park</surname> <given-names>S</given-names></name> <name><surname>Choi</surname> <given-names>Y-i</given-names></name> <name><surname>Kim</surname> <given-names>YJ</given-names></name> <etal/></person-group>. <article-title>Spotting malignancies from gastric endoscopic images using deep learning</article-title>. <source>Surg Endosc</source>. (<year>2019</year>) <volume>33</volume>:<fpage>3790</fpage>&#x2013;<lpage>7</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00464-019-06677-2</pub-id>, PMID: <pub-id pub-id-type="pmid">30719560</pub-id></citation></ref>
<ref id="ref48"><label>48.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xiao</surname> <given-names>T</given-names></name> <name><surname>Renduo</surname> <given-names>S</given-names></name> <name><surname>Lianlian</surname> <given-names>W</given-names></name> <name><surname>Honggang</surname> <given-names>Y</given-names></name></person-group>. <article-title>An automatic diagnosis system for chronic atrophic gastritis under white light endoscopy based on deep learning</article-title>. <source>Endoscopy</source>. (<year>2022</year>) <volume>54</volume>:<fpage>S80</fpage>. doi: <pub-id pub-id-type="doi">10.1055/s-0042-1744749</pub-id></citation></ref>
<ref id="ref49"><label>49.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>L</given-names></name> <name><surname>Chen</surname> <given-names>Y</given-names></name> <name><surname>Shen</surname> <given-names>Z</given-names></name> <name><surname>Zhang</surname> <given-names>X</given-names></name> <name><surname>Sang</surname> <given-names>J</given-names></name> <name><surname>Ding</surname> <given-names>Y</given-names></name> <etal/></person-group>. <article-title>Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging</article-title>. <source>Gastric Cancer</source>. (<year>2020</year>) <volume>23</volume>:<fpage>126</fpage>&#x2013;<lpage>32</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10120-019-00992-2</pub-id>, PMID: <pub-id pub-id-type="pmid">31332619</pub-id></citation></ref>
<ref id="ref50"><label>50.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Sakai</surname> <given-names>Y</given-names></name> <name><surname>Takemoto</surname> <given-names>S</given-names></name> <name><surname>Hori</surname> <given-names>K</given-names></name> <name><surname>Nishimura</surname> <given-names>M</given-names></name> <name><surname>Ikematsu</surname> <given-names>H</given-names></name> <name><surname>Yano</surname> <given-names>T</given-names></name> <etal/></person-group>. <italic>Automatic detection of early gastric cancer in endoscopic images using a transferring convolutional neural network</italic>. 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); IEEE. (<year>2018</year>).</citation></ref>
<ref id="ref51"><label>51.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Obayya</surname> <given-names>M</given-names></name> <name><surname>Al-Wesabi</surname> <given-names>FN</given-names></name> <name><surname>Maashi</surname> <given-names>M</given-names></name> <name><surname>Mohamed</surname> <given-names>A</given-names></name> <name><surname>Hamza</surname> <given-names>MA</given-names></name> <name><surname>Drar</surname> <given-names>S</given-names></name> <etal/></person-group>. <article-title>Modified salp swarm algorithm with deep learning based gastrointestinal tract disease classification on endoscopic images</article-title>. <source>IEEE Access</source>. (<year>2023</year>) <volume>11</volume>:<fpage>25959</fpage>&#x2013;<lpage>67</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2023.3256084</pub-id></citation></ref>
<ref id="ref52"><label>52.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iakovidis</surname> <given-names>DK</given-names></name> <name><surname>Georgakopoulos</surname> <given-names>SV</given-names></name> <name><surname>Vasilakakis</surname> <given-names>M</given-names></name> <name><surname>Koulaouzidis</surname> <given-names>A</given-names></name> <name><surname>Plagianakos</surname> <given-names>VP</given-names></name></person-group>. <article-title>Detecting and locating gastrointestinal anomalies using deep learning and iterative cluster unification</article-title>. <source>IEEE Trans Med Imaging</source>. (<year>2018</year>) <volume>37</volume>:<fpage>2196</fpage>&#x2013;<lpage>210</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TMI.2018.2837002</pub-id>, PMID: <pub-id pub-id-type="pmid">29994763</pub-id></citation></ref>
<ref id="ref53"><label>53.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>S</given-names></name> <name><surname>Yin</surname> <given-names>Y</given-names></name> <name><surname>Wang</surname> <given-names>D</given-names></name> <name><surname>Lv</surname> <given-names>Z</given-names></name> <name><surname>Wang</surname> <given-names>Y</given-names></name> <name><surname>Jin</surname> <given-names>Y</given-names></name></person-group>. <article-title>An interpretable deep neural network for colorectal polyp diagnosis under colonoscopy</article-title>. <source>Knowl-Based Syst</source>. (<year>2021</year>) <volume>234</volume>:<fpage>107568</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.knosys.2021.107568</pub-id></citation></ref>
<ref id="ref54"><label>54.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Pogorelov</surname> <given-names>K</given-names></name> <name><surname>Randel</surname> <given-names>KR</given-names></name> <name><surname>Griwodz</surname> <given-names>C</given-names></name> <name><surname>Eskeland</surname> <given-names>SL</given-names></name> <name><surname>de Lange</surname> <given-names>T</given-names></name> <name><surname>Johansen</surname> <given-names>D</given-names></name> <etal/></person-group>. <italic>Kvasir: a multi-class image dataset for computer aided gastrointestinal disease detection</italic>. Proceedings of the 8th ACM on Multimedia Systems Conference (<year>2017</year>).</citation></ref>
<ref id="ref55"><label>55.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Szegedy</surname> <given-names>C</given-names></name> <name><surname>Vanhoucke</surname> <given-names>V</given-names></name> <name><surname>Ioffe</surname> <given-names>S</given-names></name> <name><surname>Shlens</surname> <given-names>J</given-names></name> <name><surname>Wojna</surname> <given-names>Z</given-names></name></person-group>. <italic>Rethinking the inception architecture for computer vision</italic>. Proceedings of the IEEE conference on computer vision and pattern recognition (<year>2016</year>).</citation></ref>
<ref id="ref56"><label>56.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Simonyan</surname> <given-names>K</given-names></name> <name><surname>Zisserman</surname> <given-names>A</given-names></name></person-group>. <article-title>Very deep convolutional networks for large-scale image recognition</article-title>. <source>arXiv</source>. (<year>2014</year>) <volume>2014</volume>:<fpage>14091556</fpage>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.1409.1556</pub-id></citation></ref>
<ref id="ref57"><label>57.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Szegedy</surname> <given-names>C</given-names></name> <name><surname>Ioffe</surname> <given-names>S</given-names></name> <name><surname>Vanhoucke</surname> <given-names>V</given-names></name> <name><surname>Alemi</surname> <given-names>A</given-names></name></person-group>. <italic>Inception-v4, inception-resnet and the impact of residual connections on learning</italic>. Proceedings of the AAAI conference on artificial intelligence (<year>2017</year>).</citation></ref>
<ref id="ref58"><label>58.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Nisbet</surname> <given-names>R</given-names></name> <name><surname>Elder</surname> <given-names>J</given-names></name> <name><surname>Miner</surname> <given-names>GD</given-names></name></person-group>. <source>Handbook of statistical analysis and data mining applications</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Academic Press</publisher-name> (<year>2009</year>).</citation></ref>
<ref id="ref59"><label>59.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Kotu</surname> <given-names>V</given-names></name> <name><surname>Deshpande</surname> <given-names>B</given-names></name></person-group>. <source>Predictive analytics and data mining: Concepts and practice with rapidminer</source>. <publisher-loc>Burlington, MA</publisher-loc>: <publisher-name>Morgan Kaufmann</publisher-name> (<year>2014</year>).</citation></ref>
<ref id="ref60"><label>60.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ganaie</surname> <given-names>MA</given-names></name> <name><surname>Hu</surname> <given-names>M</given-names></name> <name><surname>Malik</surname> <given-names>A</given-names></name> <name><surname>Tanveer</surname> <given-names>M</given-names></name> <name><surname>Suganthan</surname> <given-names>P</given-names></name></person-group>. <article-title>Ensemble deep learning: a review</article-title>. <source>Eng Appl Artif Intell</source>. (<year>2022</year>) <volume>115</volume>:<fpage>105151</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.engappai.2022.105151</pub-id></citation></ref>
<ref id="ref61"><label>61.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hulsen</surname> <given-names>T</given-names></name></person-group>. <article-title>Explainable artificial intelligence (XAI): concepts and challenges in healthcare</article-title>. <source>AI</source>. (<year>2023</year>) <volume>4</volume>:<fpage>652</fpage>&#x2013;<lpage>66</lpage>. doi: <pub-id pub-id-type="doi">10.3390/ai4030034</pub-id></citation></ref>
<ref id="ref62"><label>62.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abou Jaoude</surname> <given-names>M</given-names></name> <name><surname>Sun</surname> <given-names>H</given-names></name> <name><surname>Pellerin</surname> <given-names>KR</given-names></name> <name><surname>Pavlova</surname> <given-names>M</given-names></name> <name><surname>Sarkis</surname> <given-names>RA</given-names></name> <name><surname>Cash</surname> <given-names>SS</given-names></name> <etal/></person-group>. <article-title>Expert-level automated sleep staging of long-term scalp electroencephalography recordings using deep learning</article-title>. <source>Sleep</source>. (<year>2020</year>) <volume>43</volume>:<fpage>112</fpage>. doi: <pub-id pub-id-type="doi">10.1093/sleep/zsaa112</pub-id></citation></ref>
<ref id="ref63"><label>63.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jansen</surname> <given-names>T</given-names></name> <name><surname>Geleijnse</surname> <given-names>G</given-names></name> <name><surname>Van Maaren</surname> <given-names>M</given-names></name> <name><surname>Hendriks</surname> <given-names>MP</given-names></name> <name><surname>Ten Teije</surname> <given-names>A</given-names></name> <name><surname>Moncada-Torres</surname> <given-names>A</given-names></name></person-group>. <article-title>Machine learning explainability in breast cancer survival</article-title>. <source>Stud Health Technol Inform</source>. (<year>2020</year>) <volume>270</volume>:<fpage>307</fpage>&#x2013;<lpage>11</lpage>. doi: <pub-id pub-id-type="doi">10.3233/SHTI200172</pub-id></citation></ref>
<ref id="ref64"><label>64.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tymchenko</surname> <given-names>B</given-names></name> <name><surname>Marchenko</surname> <given-names>P</given-names></name> <name><surname>Spodarets</surname> <given-names>D</given-names></name></person-group>. <article-title>Deep learning approach to diabetic retinopathy detection</article-title>. <source>arXiv</source>. (<year>2020</year>) <volume>2020</volume>:<fpage>200302261</fpage>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.2003.02261</pub-id></citation></ref>
<ref id="ref65"><label>65.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>JY</given-names></name> <name><surname>Lee</surname> <given-names>SW</given-names></name> <name><surname>Kang</surname> <given-names>MC</given-names></name> <name><surname>Kim</surname> <given-names>SW</given-names></name> <name><surname>Kim</surname> <given-names>SY</given-names></name> <name><surname>Ko</surname> <given-names>SJ</given-names></name></person-group>. <italic>A novel gastric ulcer differentiation system using convolutional neural networks</italic>. 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS); IEEE. (<year>2018</year>).</citation></ref>
</ref-list>
</back>
</article>