OPINION article

Front. Med., 30 September 2022
Sec. Gastroenterology
Volume 9 - 2022 | https://doi.org/10.3389/fmed.2022.1025382

Machines with vision for intraoperative guidance during gastrointestinal cancer surgery

Muhammad Uzair Khalid1* Simon Laplante2,3 Amin Madani2,3
  • 1Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
  • 2Department of Surgery, University of Toronto, Toronto, ON, Canada
  • 3Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada

Introduction

Gastrointestinal (GI) malignancies represent over 26% of all cancers worldwide and a disproportionate 35% of all cancer deaths (1). The most common sites of GI cancer are colorectal (10.0% of all diagnosed cancers), gastric (5.6%), liver (4.7%), esophageal (3.1%), and pancreatic (2.6%), respectively representing the second (9.4% of all cancer-related deaths), fourth (7.7%), third (8.3%), sixth (5.5%), and seventh (4.7%) most common causes of cancer-related death (2). Although the 5-year survival for each of these cancers has been steadily improving (albeit marginally in the case of pancreatic and esophageal cancers), clinical uncertainty means that their surgical management remains prone to complications (3–7). Indeed, with intraoperative complication rates reaching 40% in some types of gastric cancer resections, patient morbidity and mortality can be significant, especially in oncologic surgery (8).

Technology such as artificial intelligence (AI) can potentially play a strong role in improving the intraoperative outcomes of gastrointestinal cancer surgery. AI is a field of computer science that uses algorithms to enable machines to mimic higher-order human behaviors such as problem-solving and object classification. Machine learning (ML) is a subset of AI in which, rather than being explicitly programmed, a system identifies patterns in training datasets so that it can make predictions when presented with novel data. Deep learning (DL), in turn, is a subset of ML that uses architectures such as convolutional neural networks (CNNs), multilayered processing algorithms that loosely imitate complex human brain pathways. CNNs are often black-box (i.e., unexplainable) models with which machines can learn from data and subsequently make decisions in supervised, semi-supervised, and unsupervised settings (9).
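
To make these concepts concrete, the minimal sketch below (in Python, using the PyTorch library) shows how a small CNN stacks convolutional layers to turn an image into a class prediction. It is purely illustrative: the layer sizes, class count, and input resolution are our own assumptions and do not correspond to any model discussed in this article.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two convolution + pooling stages extract increasingly abstract visual features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A pooling and fully connected head maps the extracted features to class scores.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Classify a single 224x224 RGB frame (a random tensor stands in for a real image).
model = TinyCNN(num_classes=2)
frame = torch.rand(1, 3, 224, 224)
predicted_class = model(frame).argmax(dim=1)

Models used on surgical video are far deeper and are trained on thousands of annotated frames, but the basic structure, convolutional feature extraction followed by a prediction head, is the same.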

At the intersection of ML/AI and image/signal processing is computer vision (CV), a rapidly advancing domain that gives machines the ability to understand and interpret visual data. Using CV, algorithms can classify and process pixel data (i.e., images and videos) via point operations, stabilization, and 3D reconstruction; detect and track objects within those images; and perform semantic segmentation (i.e., delineate objects along their boundaries) (10). With much progress in this field over recent years, several applications of CV have emerged in diagnostic medicine, including the detection of diabetic retinopathy from retinal images, lung cancer from computed tomography (CT) scans, and skin cancer from images of skin lesions (11–13). Similar progress has been made in prognostic medicine, where examples include models that use radiomic analysis of CT imaging studies, backprojection from magnetic resonance imaging series, or digital histopathology slides to predict long-term cardiovascular risk, cancer survival, adverse histopathological status (e.g., advanced tumor-node-metastasis (TNM) staging), or metastasis of malignancy (14–18).
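
As an illustration of semantic segmentation, the sketch below runs a generic, publicly available segmentation model from the torchvision library over a single image and produces a per-pixel label map. It is only a stand-in: the pretrained weights come from everyday photographs rather than medical or surgical images, and the preprocessing constants are the standard ImageNet values.

import numpy as np
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Generic pretrained model (weights download on first use); not trained on surgical data.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

# Standard ImageNet normalization constants, assumed to match the pretrained weights.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# A random array stands in for a real frame (normally read from an image or video file).
image = (np.random.rand(480, 640, 3) * 255).astype("uint8")

with torch.no_grad():
    batch = preprocess(image).unsqueeze(0)          # shape (1, 3, height, width)
    scores = model(batch)["out"]                    # per-pixel class scores
    mask = scores.argmax(dim=1).squeeze(0).numpy()  # per-pixel class labels (height, width)

In surgical applications, the same pipeline would be retrained on annotated operative frames so that the label map corresponds to anatomical structures or dissection zones rather than generic object categories.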

Despite this, very few surgical applications of CV in the form of intraoperative guidance have made it to the patient's bedside. This is because obtaining datasets and then annotating, training, testing, validating, and implementing models is an extremely complex and resource-intensive process. Indeed, a recent systematic review of machine learning in upper gastrointestinal cancer surgery found no studies examining CV or intraoperative guidance (19). In this opinion, therefore, we discuss current applications of ML/CV in surgery and how they can be used in the intraoperative surgical management of gastrointestinal cancers, drawing on examples from the literature.

Intraoperative applications of computer vision in surgery

There are several ways in which computer vision can be used in surgical decision-making, especially given the rise of laparoscopic, endoscopic, and robotic surgery over the last few decades. This growth has allowed CV researchers to use recorded operative videos for purposes such as identification of landmark anatomy, operative phase recognition, identification of safe and unsafe areas of dissection, coaching, and safety initiatives.

Firstly, CV can be used to identify anatomical landmarks during surgery to aid the surgeon. At our own institution, for instance, we have developed a model (GoNoGoNet) that replicates the mental model of expert surgeons by recognizing complex anatomical structures that lack clear boundaries and are obscured by overlying fatty tissue. The model, validated by an external panel of experts, takes laparoscopic cholecystectomy video as input and overlays Go (specificity of 0.97) and No-Go (sensitivity of 0.80) zones onto the surgical field (20, 21). Bile duct injuries constitute a major source of avoidable morbidity and mortality, occurring in up to 0.7% of laparoscopic cases, and models such as GoNoGoNet have the potential to guide surgeons by acting akin to an intraoperative GPS (22). The same principle can be applied to oncologic resections. For example, two independent groups have used DL-based CNN segmentation models to identify the total mesorectal excision (TME) plane of dissection during rectal cancer resections (23, 24). This is particularly important given the difficulty of staying in the correct plane of dissection during rectal surgery; correct identification of this plane is key to reducing recurrence, increasing overall survival, and avoiding complications such as presacral bleeding and nerve injury. Despite the limited performance of these prototypes, identification of similar “Go and No-Go zones of dissection” in oncologic rectal surgery shows incredible promise, not only for improving patient outcomes but also for coaching, setting benchmarks, and education.
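
As a hedged sketch of how such guidance might be rendered, the code below blends predicted per-pixel Go and No-Go probabilities onto a video frame using the OpenCV library. This is our own illustrative code rather than GoNoGoNet or the cited TME-plane models, and the placeholder masks simply stand in for a segmentation model's output.

import cv2
import numpy as np

def overlay_zones(frame_bgr: np.ndarray,
                  go_prob: np.ndarray,
                  no_go_prob: np.ndarray,
                  threshold: float = 0.5,
                  alpha: float = 0.4) -> np.ndarray:
    """Blend green (Go) and red (No-Go) zones onto the frame where probability exceeds the threshold."""
    overlay = frame_bgr.copy()
    overlay[go_prob > threshold] = (0, 255, 0)      # green = predicted safe dissection zone
    overlay[no_go_prob > threshold] = (0, 0, 255)   # red = predicted unsafe zone
    return cv2.addWeighted(overlay, alpha, frame_bgr, 1 - alpha, 0)

# Placeholder frame and probability maps stand in for real video and model output.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
go_prob = np.random.rand(480, 640).astype(np.float32)
no_go_prob = np.random.rand(480, 640).astype(np.float32)
guided_frame = overlay_zones(frame, go_prob, no_go_prob)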

Some studies have taken such anatomical and tumor landmarking a step further by combining intraoperative imaging with preoperative assessments; this is particularly important when trying to identify resection margins and limit the extent of resection during hepatectomy or non-anatomical resections, with direct implications for patient outcomes. Examples include surgical navigation systems such as the novel laparoscopic hepatectomy navigation system (LHNS), which fuses preoperative 3D models with intraoperative indocyanine green (ICG) fluorescence imaging to achieve real-time surgical navigation (25). Systems like the LHNS can also better recognize liver anatomy and anticipate the anatomical changes that occur with retraction as the operation progresses (26).
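
The sketch below illustrates, in a deliberately simplified 2D form, the kind of image fusion such navigation systems perform: a preoperative plan image is registered to an intraoperative frame with feature matching and then blended on top of it. Real systems such as the LHNS fuse full 3D models with fluorescence imaging; the function name and this feature-based registration are our own assumptions for illustration only.

import cv2
import numpy as np

def register_and_blend(preop_plan: np.ndarray, frame: np.ndarray, alpha: float = 0.35) -> np.ndarray:
    """Estimate a homography from ORB feature matches and warp the plan onto the frame."""
    gray_plan = cv2.cvtColor(preop_plan, cv2.COLOR_BGR2GRAY)
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(gray_plan, None)
    kp2, des2 = orb.detectAndCompute(gray_frame, None)
    if des1 is None or des2 is None:
        return frame  # not enough texture to register; show the unmodified frame
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 4:
        return frame
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if homography is None:
        return frame
    warped_plan = cv2.warpPerspective(preop_plan, homography, (frame.shape[1], frame.shape[0]))
    return cv2.addWeighted(warped_plan, alpha, frame, 1 - alpha, 0)

# Placeholder images stand in for a rendered preoperative plan and a laparoscopic frame.
plan = np.zeros((480, 640, 3), dtype=np.uint8)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
fused_view = register_and_blend(plan, frame)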

Secondly, CV can also be used for task classification and quality-control checks during surgery. One such example comes from a group in Strasbourg, which created an ML model based on deep neural networks and segmentation to identify whether the critical view of safety had been achieved during laparoscopic cholecystectomy, with 71.9% accuracy (27). Another example of CV for intraoperative quality control is assessing the risk of anastomotic leak following cancer resection, most often secondary to inadequate perfusion of the anastomosis. Such leaks can lead to increased recurrence rates, extended hospital stays, and poorer quality of life, and can ultimately cause mortality of up to 20% (28). One way to prevent them is perfusion angiography using ICG. A research group based in South Korea has analyzed such angiography images using real-time microperfusion analysis and CV to predict anastomotic complications with 87% accuracy (29).
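
To give a sense of what such microperfusion analysis involves, the sketch below extracts two generic perfusion parameters, time to peak and maximum inflow slope, from an ICG fluorescence intensity curve averaged over a region of interest near the planned anastomosis. The parameter names, frame rate, and synthetic curve are our own assumptions; this is not the published system from the cited study.

import numpy as np

def perfusion_features(intensity: np.ndarray, fps: float = 30.0) -> dict:
    """Compute time to peak and maximum inflow slope from a fluorescence time series."""
    t = np.arange(len(intensity)) / fps                    # time axis in seconds
    time_to_peak = t[int(np.argmax(intensity))]            # when fluorescence peaks
    max_slope = float(np.max(np.gradient(intensity, t)))   # steepest rise, a proxy for inflow rate
    return {"time_to_peak_s": float(time_to_peak), "max_slope": max_slope}

# Synthetic curve standing in for mean fluorescence measured over a bowel region of interest.
time_s = np.linspace(0, 60, 1800)
curve = 100 * (1 - np.exp(-time_s / 15))
print(perfusion_features(curve))

Features like these could then feed a downstream classifier that flags anastomoses at risk of inadequate perfusion, which is conceptually how CV-based microcirculation analysis can support intraoperative decision-making.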

Lastly, CV has been shown to help identify and classify cancerous lesions at endoscopy. These methods have been trialed for polyp identification during colonoscopy, showing an enhanced ability to detect smaller adenomas (30). Similarly, work has been done using AI to aid the diagnosis of Barrett's esophagus and T1 esophageal cancers, with 90 and 85% sensitivity, respectively (31, 32). Translated to surgical applications, CV could potentially have a role in the identification of tumor invasion, resection margins, or suspicious peritoneal deposits reflective of malignancy at the time of diagnostic laparoscopy.

Challenges going forward

Many of the examples provided here come from non-oncological surgery; nevertheless, they serve as early proof of concept of the great potential of CV in oncologic surgical care.

Yet, despite these early successes, there are several challenges in developing such ML models for surgery. Firstly, DL approaches are known to be incredibly data hungry, requiring hundreds, if not thousands, of data points to develop a model with any useful level of accuracy or validity in its predictions (33). Bringing together such amounts of data is challenging, not only because of the international collaboration required across centers to amalgamate heterogeneous data, but also because of the time commitment needed from surgeons to curate and annotate operative datasets. As a result, organizations like the Global Surgical AI Collaborative (https://www.surgicalai.org/) are particularly well poised to organize and implement DL projects (34). Secondly, the AI algorithms that are developed should not only be computationally sound, but also designed to address a real unmet clinical need. Doing so requires coordinated work with subject-matter experts and other stakeholders, such as cognitive task analyses combined with Delphi consensus, to understand the way surgeons think and the milestones they look for while operating (35–39).

In conclusion, there are many potential opportunities to apply the principles of CV and ML to improve gastrointestinal cancer surgical care. We should aim to make gastrointestinal cancer surgery safer, more effective, and of higher quality by using ML to our advantage in every aspect of care. This will require increased international collaboration and policy development around data storage, sharing, and utilization.

Author contributions

MK, SL, and AM: idea conceptualization, literature search, manuscript writing (revisions), and final draft. MK: manuscript writing (first draft). All authors contributed to the article and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Arnold M, Abnet CC, Neale RE, Vignat J, Giovannucci EL, McGlynn KA, et al. Global burden of 5 major types of gastrointestinal cancer. Gastroenterology. (2020) 159:335–49.e15. doi: 10.1053/j.gastro.2020.02.068

2. Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, Jemal A, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. (2021) 71:209–49. doi: 10.3322/caac.21660

3. Li Y, Feng A, Zheng S, Chen C, Lyu J. Recent estimates and predictions of 5-year survival in patients with gastric cancer: a model-based period analysis. Cancer Control. (2022) 1–9. doi: 10.1177/10732748221099227

4. Jiang Y, Yuan H, Li Z, Ji X, Shen Q, Tuo J, et al. Global pattern and trends of colorectal cancer survival: a systematic review of population-based registration data. Cancer Biol Med. (2022) 19:175–86. doi: 10.20892/j.issn.2095-3941.2020.0634

5. Otterstatter MC, Brierley JD, De P, Ellison LF, MacIntyre M, Marrett LD, et al. Esophageal cancer in Canada: trends according to morphology and anatomical location. Can J Gastroenterol. (2012) 26:723–7. doi: 10.1155/2012/649108

6. Rawla P, Sunkara T, Gaduputi V. Epidemiology of pancreatic cancer: global trends, etiology and risk factors. World J Oncol. (2019) 10:10–27. doi: 10.14740/wjon1166

7. Bannon F, Di Carlo V, Harewood R, Engholm G, Ferretti S, Johnson CJ, et al. Survival trends for primary liver cancer, 1995–2009: analysis of individual data for 578,740 patients from 187 population-based registries in 36 countries (CONCORD-2). Available online at: https://dugi-doc.udg.edu/handle/10256/18019 (accessed August 17, 2022).

8. Voeten DM, Elfrink AKE, Gisbertz SS, Ruurda JP, van Hillegersberg R, van Berge Henegouwen MI. The impact of performing gastric cancer surgery during holiday periods. A population-based study using Dutch upper gastrointestinal cancer audit (DUCA) data. Curr Probl Cancer. (2022) 46:100850. doi: 10.1016/j.currproblcancer.2022.100850

9. Castelvecchi D. Can we open the black box of AI? Nat News. (2016) 538:20. doi: 10.1038/538020a

10. Hashimoto DA, Madani A, Navarrete-Welton A, Rosman G. Chapter 6 - computer vision in surgery: fundamental principles and applications. In: Artificial Intelligence in Surgery: Understanding the Role of AI in Surgical Practice. 1st ed. McGraw Hill (2021). p. 115–40.

11. Grzybowski A, Brona P, Lim G, Ruamviboonsuk P, Tan GSW, Abramoff M, et al. Artificial intelligence for diabetic retinopathy screening: a review. Eye. (2020) 34:451–60. doi: 10.1038/s41433-019-0566-0

12. Wang S, Yang DM, Rong R, Zhan X, Fujimoto J, Liu H, et al. Artificial intelligence in lung cancer pathology image analysis. Cancers. (2019) 11:1673. doi: 10.3390/cancers11111673

13. Jones OT, Matin RN, van der Schaar M, Bhayankaram KP, Ranmuthu CKI, Islam MS, et al. Artificial intelligence and machine learning algorithms for early detection of skin cancer in community and primary care settings: a systematic review. Lancet Digit Health. (2022) 4:e466–76. doi: 10.1016/S2589-7500(22)00023-1

14. Oikonomou EK, Williams MC, Kotanidis CP, Desai MY, Marwan M, Antonopoulos AS, et al. A novel machine learning-derived radiotranscriptomic signature of perivascular fat improves cardiac risk prediction using coronary CT angiography. Eur Heart J. (2019) 40:3529–43. doi: 10.1093/eurheartj/ehz592

15. Vial A, Stirling D, Field M, Ros M, Ritz C, Carolan M, et al. The role of deep learning and radiomic feature extraction in cancer-specific predictive modelling: a review. Transl Cancer Res. (2018) 7:803–81. doi: 10.21037/tcr.2018.05.02

16. Dai H, Bian Y, Wang L, Yang J. Support vector machine-based backprojection algorithm for detection of gastric cancer lesions with abdominal endoscope using magnetic resonance imaging images. Sci Program. (2021) 2021:e9964203. doi: 10.1155/2021/9964203

17. Li Q, Qi L, Feng QX, Liu C, Sun SW, Zhang J, et al. Machine learning–based computational models derived from large-scale radiographic-radiomic images can help predict adverse histopathological status of gastric cancer. Clin Transl Gastroenterol. (2019) 10:e00079. doi: 10.14309/ctg.0000000000000079

18. Wang J, Wu LL, Zhang Y, Ma G, Lu Y. Establishing a survival prediction model for esophageal squamous cell carcinoma based on CT and histopathological images. Phys Med Biol. (2021) 66:145015. doi: 10.1088/1361-6560/ac1020

19. Bektaş M, Burchell GL, Bonjer HJ, van der Peet DL. Machine learning applications in upper gastrointestinal cancer surgery: a systematic review. Surg Endosc. (2022) 1–15. doi: 10.1007/s00464-022-09516-z

20. Laplante S, Namazi B, Kiani P, Hashimoto DA, Alseidi A, Pasten M, et al. Validation of an artificial intelligence platform for the guidance of safe laparoscopic cholecystectomy. Surg Endosc. (2022) 1–9. doi: 10.1007/s00464-022-09439-9

21. Madani A, Namazi B, Altieri MS, Hashimoto DA, Rivera AM, Pucher PH, et al. Artificial intelligence for intraoperative guidance: using semantic segmentation to identify surgical anatomy during laparoscopic cholecystectomy. Ann Surg. (2020) 276:363–9. doi: 10.1097/SLA.0000000000004594

22. Moghul F, Kashyap S. Bile duct injury. In: StatPearls. Treasure Island, FL: StatPearls Publishing (2022). Available online at: http://www.ncbi.nlm.nih.gov/books/NBK546703/ (accessed August 19, 2022).

23. Igaki T, Kitaguchi D, Kojima S, Hasegawa H, Takeshita N, Mori K, et al. Artificial intelligence-based total mesorectal excision plane navigation in laparoscopic colorectal surgery. Dis Colon Rectum. (2022) 65:e329–33. doi: 10.1097/DCR.0000000000002393

24. Kolbinger FR, Leger S, Carstens M, Rinner FM, Krell S, Chernykh A, et al. Artificial Intelligence for context-aware surgical guidance in complex robot-assisted oncological procedures: an exploratory feasibility study. medRxiv. (2022) 1–25. doi: 10.1101/2022.05.02.22274561

25. Zhang P, Luo H, Zhu W, Yang J, Zeng N, Fan Y, et al. Real-time navigation for laparoscopic hepatectomy using image fusion of preoperative 3D surgical plan and intraoperative indocyanine green fluorescence imaging. Surg Endosc. (2020) 34:3449–59. doi: 10.1007/s00464-019-07121-1

26. Veerankutty FH, Jayan G, Yadav MK, Manoj KS, Yadav A, Nair SRS, et al. Artificial Intelligence in hepatology, liver surgery and transplantation: emerging applications and frontiers of research. World J Hepatol. (2021) 13:1977–90. doi: 10.4254/wjh.v13.i12.1977

27. Mascagni P, Vardazaryan A, Alapatt D, Urade T, Emre T, Fiorillo C, et al. Artificial intelligence for surgical safety: automatic assessment of the critical view of safety in laparoscopic cholecystectomy using deep learning. Ann Surg. (2022) 275:955–61. doi: 10.1097/SLA.0000000000004351

28. Ramamoorthy SL, Matson JS. ICG image-guided surgery with the assessment for anastomotic safety. In: Horgan S, Fuchs KH, editors. Innovative Endoscopic and Surgical Technology in the GI Tract. Cham: Springer International Publishing (2021). p. 391–407.

29. Park SH, Park HM, Baek KR, Ahn HM, Lee IY, Son GM. Artificial intelligence based real-time microcirculation analysis system for laparoscopic colorectal surgery. World J Gastroenterol. (2020) 26:6945–62. doi: 10.3748/wjg.v26.i44.6945

30. Barua I, Vinsard DG, Jodal HC, Løberg M, Kalager M, Holme Ø, et al. Artificial intelligence for polyp detection during colonoscopy: a systematic review and meta-analysis. Endoscopy. (2021) 53:277–84. doi: 10.1055/a-1201-7165

31. Shiroma S, Yoshio T, Kato Y, Horie Y, Namikawa K, Tokai Y, et al. Ability of artificial intelligence to detect T1 esophageal squamous cell carcinoma from endoscopic videos and the effects of real-time assistance. Sci Rep. (2021) 11:7759. doi: 10.1038/s41598-021-87405-6

32. Bang CS, Lee JJ, Baik GH. Computer-aided diagnosis of esophageal cancer and neoplasms in endoscopic images: a systematic review and meta-analysis of diagnostic test accuracy. Gastrointest Endosc. (2021) 93:1006–15.e13. doi: 10.1016/j.gie.2020.11.025

33. van der Ploeg T, Austin PC, Steyerberg EW. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints. BMC Med Res Methodol. (2014) 14:137. doi: 10.1186/1471-2288-14-137

34. Madani A, Hashimoto DA, Mascagni P, Alseidi A, Watanabe Y, Dingemans F, et al. Global Surgical Artificial Intelligence Collaborative. Available online at: https://www.surgicalai.org (accessed August 17, 2022).

35. Madani A, Watanabe Y, Vassiliou M, Feldman LS, Duh QY, Singer MC, et al. Defining competencies for safe thyroidectomy: an international Delphi consensus. Surgery. (2016) 86–94, 96–101. doi: 10.1016/j.surg.2015.07.039

36. Madani A, Grover K, Kuo JH, Mitmaker EJ, Shen W, Beninato T, et al. Defining the competencies for laparoscopic transabdominal adrenalectomy: an investigation of intraoperative behaviors and decisions of experts. Surgery. (2020) 167:241–9. doi: 10.1016/j.surg.2019.03.035

37. Pugh CM, DaRosa DA. Use of cognitive task analysis to guide the development of performance-based assessments for intraoperative decision making. Mil Med. (2013) 178:22–7. doi: 10.7205/MILMED-D-13-00207

38. Madani A, Watanabe Y, Feldman LS, Vassiliou MC, Barkun JS, Fried GM, et al. Expert intraoperative judgment and decision-making: defining the cognitive competencies for safe laparoscopic cholecystectomy. J Am Coll Surg. (2015) 221:931–40.e8. doi: 10.1016/j.jamcollsurg.2015.07.450

39. Bihorac A, Ozrazgat-Baslanti T, Ebadi A, Motaei A, Madkour M, Pardalos PM, et al. MySurgeryRisk: development and validation of a machine-learning risk algorithm for major complications and death after surgery. Ann Surg. (2019) 269:652–62. doi: 10.1097/SLA.0000000000002706

Keywords: machine learning, computer vision, intraoperative guidance, anatomical landmarking, quality control, task classification

Citation: Khalid MU, Laplante S and Madani A (2022) Machines with vision for intraoperative guidance during gastrointestinal cancer surgery. Front. Med. 9:1025382. doi: 10.3389/fmed.2022.1025382

Received: 22 August 2022; Accepted: 15 September 2022;
Published: 30 September 2022.

Edited by:

Claudio Fiorillo, Agostino Gemelli University Polyclinic (IRCCS), Italy

Reviewed by:

Davide De Sio, Catholic University of the Sacred Heart, Italy

Copyright © 2022 Khalid, Laplante and Madani. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Muhammad Uzair Khalid, uzair.khalid@mail.utoronto.ca
