
ORIGINAL RESEARCH article

Front. Comput. Neurosci.
Volume 18 - 2024 | doi: 10.3389/fncom.2024.1365727

Deep Learning for Automatic Segmentation of Vestibular Schwannoma: A Retrospective Study from Multi-Centre Routine MRI (Provisionally Accepted)

 Aaron Kujawa1*, Reuben Dorent1, Steve Connor1,2,3, Suki Thomson2, Marina Ivory1, Ali Vahedi2, Emily Guilhem2, Navodini Wijethilake1, Robert Bradford4,5, Neil Kitchen4,5, Sotirios Bisdas5, Sebastien Ourselin1, Tom Vercauteren1, Jonathan Shapey1,6
  • 1King's College London, United Kingdom
  • 2Department of Neuroradiology, King's College Hospital, United Kingdom
  • 3Service of Radiology, Guy's and St Thomas' NHS Foundation Trust, United Kingdom
  • 4Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, United Kingdom
  • 5Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, United Kingdom
  • 6Department of Neurosurgery, King's College Hospital NHS Foundation Trust, United Kingdom

The final, formatted version of the article will be published soon.


Automatic segmentation of vestibular schwannoma (VS) from routine clinical MRI has the potential to improve clinical workflow, facilitate treatment decisions, and assist patient management. Previous work demonstrated reliable automatic segmentation performance on datasets of standardised MRI images acquired for stereotactic surgery planning. However, diagnostic clinical datasets are generally more diverse and pose a greater challenge to automatic segmentation algorithms, especially when post-operative images are included. In this work, we show for the first time that automatic segmentation of VS on routine MRI datasets is also possible with high accuracy.

We acquired and publicly release a curated multi-centre routine clinical (MC-RC) dataset of 160 patients with a single sporadic VS. For each patient, up to three longitudinal MRI exams with contrast-enhanced T1-weighted (ceT1w, n=124) and T2-weighted (T2w, n=363) images were included, and the VS was manually annotated. Segmentations were produced and verified in an iterative process: 1) initial segmentations by a specialized company; 2) review by one of three trained radiologists; and 3) validation by an expert team. Inter- and intra-observer reliability experiments were performed on a subset of the dataset. A state-of-the-art deep learning framework was used to train segmentation models for VS. Model performance was evaluated on an MC-RC hold-out testing set, another public VS dataset, and a partially public dataset.

The generalizability and robustness of the VS deep learning segmentation models increased significantly when trained on the MC-RC dataset. Dice similarity coefficients (DSC) achieved by our model are comparable to those achieved by trained radiologists in the inter-observer experiment. On the MC-RC testing set, median DSCs were 86.2(9.5) for ceT1w, 89.4(7.0) for T2w, and 86.4(8.6) for combined ceT1w+T2w input images. On another public dataset acquired for Gamma Knife stereotactic radiosurgery, our model achieved median DSCs of 95.3(2.9), 92.8(3.8), and 95.5(3.3), respectively. In contrast, models trained on the Gamma Knife dataset did not generalise well, as illustrated by significant underperformance on the MC-RC routine MRI dataset, highlighting the importance of data variability in the development of robust VS segmentation models.

The MC-RC dataset and all trained deep learning models have been made available online.
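As an illustrative sketch only (not the authors' evaluation code), the Dice similarity coefficient reported above can be computed for a pair of binary segmentation masks as follows; the function and variable names are hypothetical and NumPy arrays of identical shape are assumed.

import numpy as np

def dice_similarity_coefficient(pred, ref, eps=1e-8):
    # pred, ref: boolean or 0/1 arrays of the same shape, e.g. a model's
    # predicted VS mask and a radiologist's manual mask on the same MRI grid.
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    # DSC = 2 * |pred AND ref| / (|pred| + |ref|); eps avoids division by zero
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Example usage (hypothetical arrays), expressed as a percentage as in the
# results above:
# dsc_percent = 100 * dice_similarity_coefficient(model_mask, manual_mask)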

Keywords: vestibular schwannoma, segmentation, deep learning, Convolutional Neural Network, Volumetry, Surveillance MRI

Received: 04 Jan 2024; Accepted: 17 Apr 2024.

Copyright: © 2024 Kujawa, Dorent, Connor, Thomson, Ivory, Vahedi, Guilhem, Wijethilake, Bradford, Kitchen, Bisdas, Ourselin, Vercauteren and Shapey. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mx. Aaron Kujawa, King's College London, London, United Kingdom