
TECHNOLOGY AND CODE article

Front. Bioinform.

Sec. Computational BioImaging

This article is part of the Research Topic "AI in Computational Bioimaging".

Deep learning software and revised 2D model to segment bone in micro-CT scans

Provisionally accepted
Andrew H. Lee1,2,3,4,5*, Ganesh Talluri6, Manan Damani1,4, Brandon Vera Covarrubias1,5, Helena Hanna1,2, Julian M. Moore1,2, Jacob Baradarian1,2, Jeremy Chavez1,2, Michael Molgaard1,2, Beau Nielson1,2, Kalah Walden1,2, Thomas L. Broderick1,2,5, Layla Al-Nakkash1,2,5
  • 1Midwestern University, Glendale, United States
  • 2Midwestern University Arizona College of Osteopathic Medicine, Glendale, United States
  • 3Midwestern University College of Veterinary Medicine, Glendale, United States
  • 4Midwestern University Core Facilities, Glendale, United States
  • 5Midwestern University College of Graduate Studies, Glendale, United States
  • 6Basis Peoria, Peoria, United States

The final, formatted version of the article will be published soon.

Deep learning (DL) enables automated bone segmentation in micro-CT datasets but can struggle to generalize across developmental stages, anatomical regions, and imaging conditions. We present BP-2D-03, a revised 2D Bone-Pores segmentation model fitted to a dataset of 20 micro-CT scans spanning five mammalian species and comprising 142,960 image patches. To manage this substantially larger and more varied dataset, we developed a DL software interface with modules for training ("BONe DLFit"), prediction ("BONe DLPred"), and evaluation ("BONe IoU"). These tools resolve prior issues such as slice-level data leakage, high memory usage, and limited multi-GPU support. Model performance was evaluated through three analyses. First, 5-fold cross-validation with three seeds per fold assessed baseline robustness and stability. The model showed generally high mean Intersection-over-Union (IoU) with minimal variation across seeds, but performance varied more across folds, reflecting differences in scan composition. These findings indicate that the baseline model is stable overall but that predictive performance can decline for atypical scans. Second, 30 benchmarking experiments tested how model architecture, encoder backbone, and patch size influence segmentation IoU and computational efficiency. U-Net and UNet++ architectures with simple convolutional backbones (e.g., ResNet-18) achieved the highest IoU values, approaching 0.97. Third, cross-platform experiments confirmed that results are consistent across hardware configurations, operating systems, and implementations (Avizo 3D and standalone). Together, these analyses demonstrate that the BONe DL software delivers robust baseline performance and reproducible results across platforms.
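For readers unfamiliar with the setup described in the abstract, the sketch below illustrates, under stated assumptions rather than the authors' actual BONe DLFit/BONe IoU code, how a U-Net with a ResNet-18 encoder and a per-image binary IoU metric could be assembled with the segmentation_models_pytorch library. The patch size (256), single input channel, ImageNet-pretrained weights, and 0.5 threshold are illustrative choices, not values reported in the article.

```python
# Minimal sketch (not the authors' implementation): U-Net + ResNet-18 encoder
# for binary bone segmentation, and an Intersection-over-Union metric.
import torch
import segmentation_models_pytorch as smp

# Binary segmentation model: 1-channel micro-CT patches in, 1 logit map out.
model = smp.Unet(
    encoder_name="resnet18",      # simple convolutional backbone, as benchmarked in the article
    encoder_weights="imagenet",   # assumption: pretrained encoder weights
    in_channels=1,
    classes=1,
)

def iou(pred_logits: torch.Tensor, target: torch.Tensor, thr: float = 0.5) -> torch.Tensor:
    """Per-image IoU for a batch of binary masks (threshold is an assumption)."""
    pred = (torch.sigmoid(pred_logits) > thr).float()
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = (pred + target).clamp(max=1).sum(dim=(1, 2, 3))
    return inter / union.clamp(min=1)  # guard against empty masks

# Dummy forward pass on hypothetical 256x256 patches.
x = torch.rand(2, 1, 256, 256)
y = (torch.rand(2, 1, 256, 256) > 0.5).float()
print(iou(model(x), y))
```

Swapping `smp.Unet` for `smp.UnetPlusPlus` would give the UNet++ variant compared in the benchmarking experiments; the rest of the sketch is unchanged.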

Keywords: artificial intelligence, Avizo, Bone, Bone Marrow, Mammal, Semantic segmentation

Received: 01 Aug 2025; Accepted: 15 Dec 2025.

Copyright: © 2025 Lee, Talluri, Damani, Covarrubias, Hanna, Moore, Baradarian, Chavez, Molgaard, Nielson, Walden, Broderick and Al-Nakkash. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Andrew H. Lee

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.