
ORIGINAL RESEARCH article

Front. Bioeng. Biotechnol.

Sec. Biomechanics

This article is part of the Research Topic: Biomechanical Evaluation of Bone Structural Integrity: Experimental, Computational, and Clinical Perspectives

CervSpineNet: A hybrid deep learning-based approach for the segmentation of cervical spinous processes

Provisionally accepted
Jay Sunil Sawant1, Lama Moukheiber1, Anupama Nair1, Anubha Mahajan1, Jaehui Byun1, Ishwarya Pichaimani1, Sangwook T. Yoon2, Christopher T. Martin3, Cassie S. Mitchell1*
  • 1Georgia Institute of Technology, Atlanta, United States
  • 2Emory University, Atlanta, United States
  • 3University of Minnesota Medical School, Minneapolis, United States

The final, formatted version of the article will be published soon.

Accurate segmentation of cervical spinous processes on lateral X-rays is essential for reliable anatomical landmarking, surgical planning, and longitudinal assessment of spinal deformity. Yet no publicly available dataset provides pixel-level annotations of these structures, and manual delineation remains time-consuming and operator dependent. To address this gap, we curated an expert-labeled dataset of 500 cervical spine radiographs and developed CervSpineNet, a hybrid deep learning framework for automated spinous process segmentation. CervSpineNet integrates a transformer-based encoder that captures global anatomical context with a lightweight convolutional decoder that refines local boundaries. A compound loss function—combining Dice, Focal Tversky, Hausdorff distance transform, and Structural Similarity (SSIM) terms—jointly optimizes region overlap, class balance, structural fidelity, and boundary accuracy. The model was trained and evaluated on original, contrast-enhanced (CLAHE), and augmented dataset variants and benchmarked against four baselines: U-Net, DeepLabV3+, the Segment Anything Model (SAM), and a text-guided SegFormer. Across all experiments, CervSpineNet consistently outperformed competing methods, achieving mean Dice coefficients above 0.93, IoU values above 0.87, and SSIM above 0.98, with substantially lower HD95 distances. The model demonstrated strong agreement with ground truth (global MAE ≈ 0.005) and efficient inference times of 5–10 seconds per image. With a compact footprint (~345 MB), CervSpineNet runs on standard clinical hardware and reduces manual annotation time by approximately 96%, enabling rapid and reproducible segmentation for surgical evaluation, deformity monitoring, and large-scale retrospective studies. Together, the dataset and CervSpineNet provide a scalable, reproducible foundation for advancing AI-assisted cervical spine analysis in both research and clinical practice.
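The abstract describes a compound loss combining Dice, Focal Tversky, Hausdorff distance transform, and SSIM terms. As a rough, non-authoritative illustration of how the first two terms are typically formulated (the paper's actual term weights, Tversky parameters alpha/beta, focal exponent gamma, and the Hausdorff and SSIM terms are not specified here; all hyperparameter values below are assumptions), a minimal NumPy sketch:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Dice term: penalizes low region overlap between prediction and ground truth
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    # Tversky index weights false negatives (alpha) vs. false positives (beta),
    # addressing class imbalance; the focal exponent gamma emphasizes hard examples.
    # alpha/beta/gamma values here are common defaults, not the paper's settings.
    tp = np.sum(pred * target)
    fn = np.sum((1.0 - pred) * target)
    fp = np.sum(pred * (1.0 - target))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma

def compound_loss(pred, target, w_dice=1.0, w_ft=1.0):
    # Weighted sum of the region-overlap and class-balance terms.
    # The full CervSpineNet loss also adds Hausdorff-distance-transform and
    # SSIM terms (boundary accuracy and structural fidelity), omitted here.
    return w_dice * dice_loss(pred, target) + w_ft * focal_tversky_loss(pred, target)

# Toy binary masks: a perfect prediction yields near-zero loss,
# a completely wrong one yields a high loss.
target = np.zeros((8, 8)); target[2:6, 2:6] = 1.0
print(compound_loss(target, target))            # near 0
print(compound_loss(1.0 - target, target))      # near 2 (both terms saturate)
```

In a real training loop these terms would be written with differentiable tensor ops (e.g. PyTorch) over soft probability maps rather than NumPy arrays; the arithmetic is the same.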

Keywords: artificial intelligence, automated musculoskeletal landmark detection, cervical spine segmentation, cervical spinous process dataset, deep learning in radiology, hybrid transformer–CNN architecture, machine learning, radiology workflow automation

Received: 27 Oct 2025; Accepted: 18 Dec 2025.

Copyright: © 2025 Sawant, Moukheiber, Nair, Mahajan, Byun, Pichaimani, Yoon, Martin and Mitchell. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Cassie S Mitchell

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.