
ORIGINAL RESEARCH article

Front. Med., 15 April 2025

Sec. Precision Medicine

Volume 12 - 2025 | https://doi.org/10.3389/fmed.2025.1511487

This article is part of the Research Topic: Advances in Precision Medicine for Minimally Invasive Treatment of Pelvis/Hip Fractures: Integration of Digital and Intelligent Technologies.

Automatic pelvic fracture segmentation: a deep learning approach and benchmark dataset


Yanzhen Liu1, Sutuke Yibulayimu1, Gang Zhu2, Chao Shi1, Chendi Liang1, Chunpeng Zhao3, Xinbao Wu3, Yudi Sang2* and Yu Wang1,2*
  • 1Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
  • 2Beijing Rossum Robot Technology Co., Ltd., Beijing, China
  • 3Department of Orthopaedics and Traumatology, Beijing Jishuitan Hospital, Beijing, China

Introduction: Accurate segmentation of pelvic fractures from computed tomography (CT) is crucial for trauma diagnosis and image-guided reduction surgery. The traditional manual slice-by-slice segmentation by surgeons is time-consuming, experience-dependent, and error-prone. The complex anatomy of the pelvic bone, the diversity of fracture types, and the variability in fracture surface appearances pose significant challenges to automated solutions.

Methods: We propose an automatic pelvic fracture segmentation method based on deep learning, which effectively isolates hipbone and sacrum fragments from fractured pelvic CT. The method employs two sequential networks: an anatomical segmentation network for extracting hipbones and sacrum from CT images, followed by a fracture segmentation network that isolates the main and minor fragments within each bone region. We propose a distance-weighted loss to guide the fracture segmentation network's attention on the fracture surface. Additionally, multi-scale deep supervision and smooth transition strategies are incorporated to enhance overall performance.

Results: Tested on a curated dataset of 150 CTs, which we have made publicly available, our method achieves an average Dice coefficient of 0.986 and an average symmetric surface distance of 0.234 mm.

Discussion: The method outperformed a traditional max-flow method and a transformer-based method, demonstrating its effectiveness in handling complex fractures.

1 Introduction

Pelvic fractures are among the most severe forms of orthopedic injury, typically resulting from high-energy trauma. A study involving 11,149 patients demonstrated that pelvic fracture leads to a mortality rate of 14.2%, significantly higher than that of other types of injuries (1). The anatomical complexity of the pelvic ring, which involves numerous muscle groups, ligaments, neurovascular bundles, and other soft tissues, makes surgical intervention exceptionally challenging and poses significant treatment obstacles (2).

The goal of surgical management for pelvic fractures is to restore the bone's original anatomy and regain lost functional mobility. Treatment is categorized into open reduction and closed reduction surgeries. Open reduction surgery often necessitates extensive dissection, leading to considerable tissue damage and an elevated risk of complications. In contrast, closed reduction surgery is preferred for its minimally invasive nature and hence the reduced recovery time (3). In recent years, the exploration and clinical implementation of robotic-assisted closed fracture reduction surgery have significantly enhanced the accuracy of fracture reduction while minimizing radiation exposure for both patients and surgeons (4). Regardless of whether manual or robotic-assisted reduction is employed, segmentation of the fractures from preoperative computed tomography (CT) is crucial. This step is fundamental for trauma diagnosis and reduction planning, aiming to identify and determine the optimal anatomical reduction position to restore the natural state of the pelvic bone.

Conventionally, semi-automated approaches are employed to delineate the anatomy of pelvic fractures. The initial steps involve thresholding and region-growing techniques to extract bone regions by adjusting the threshold and precisely locating seed points (5). Subsequently, the fracture surface is manually outlined, either by refining segments in a 3D view or by editing the segmentation masks slice by slice. This labor-intensive process often takes more than 30 minutes, especially when fracture fragments are intertwined or partially attached (6). Furthermore, the complexity and variability of pelvic fractures mean that manual segmentation relies heavily on the clinician's experience, highlighting a pressing need for an automated solution for segmenting pelvic fracture fragments from CT images.

Deep learning has been successfully applied to various bone segmentation tasks, demonstrating its effectiveness (7). Nevertheless, learning-based methods specifically addressing pelvic fracture segmentation remain under-explored. Several factors in image characteristics contribute to this challenge:

  • The intricate anatomy of the fractured pelvis, combined with surrounding bones such as a sacralized lumbar vertebra, fractured vertebrae or femurs, and the potential presence of the patient's hands during CT scanning, complicates the differentiation of pelvic bones.

• Unlike the more prevalent organ segmentation tasks, where models can often intuitively grasp the typical shape of an object, discerning the shape of bone fragments is more complex due to the significant variation in fracture types and morphologies (8).

• The actual fracture surface is diverse in its presentation. It can manifest as a vast space when fragments are isolated and displaced, a minor gap when fragments are isolated but stable, a crease when fragments are not fully separated, compression when fragments collide, or a blend of these scenarios. This diversity results in quite different image intensity profiles around the fracture site.

• The inconsistency in the number of bone fragments present in pelvic fractures poses a challenge in establishing a uniform labeling approach suitable for every fracture type and case.

In this study, we propose a deep learning-based method to segment pelvic fracture fragments from preoperative CT images. Our major contributions are threefold:

  • We proposed a fully automated pipeline for pelvic fracture segmentation, which, to the best of our knowledge, is the first attempt to apply deep learning to this task.

• We designed a novel multi-scale distance-weighted loss to boost segmentation accuracy near fracture sites, incorporating deep supervision and a smooth transition strategy during training to elevate local accuracy without compromising the overall performance.

• We curated a benchmark dataset of pelvic fracture CT images, encompassing 150 fractured cases with well-annotated ground-truth anatomical and fracture labels.

Our dataset and source code have been made publicly available at https://github.com/YzzLiu/FracSegNet.

2 Related work

2.1 Medical image segmentation

The encoder-decoder architecture introduced by U-Net has established a strong foundation for both 2D and 3D medical image segmentation tasks (9, 10). Subsequent models such as U-Net++ and V-Net have refined this approach with improvements like nested skip connections and volumetric convolutions (11, 12). More recently, researchers have enhanced U-Net by integrating new architectural concepts. Transformer-based hybrids, including Swin-UNETR and TransUNet, incorporate global context through self-attention mechanisms, while CNN-focused enhancements such as MedNeXt and STU-Net improve feature extraction using advanced convolutional techniques (13–16). Additionally, innovative models like U-Mamba employ state-space models to better capture long-range dependencies (17). Furthermore, systematic benchmarking, as demonstrated by nn-UNet, reveals that careful attention to implementation details, such as the choice of loss function and data augmentation strategies, can yield performance gains that rival or even surpass those achieved by novel architectural designs (18).

2.2 Bone segmentation

Bone segmentation methods can be broadly categorized into traditional approaches based on intensity, template-based methods, and deep learning-based methods (19–21). Traditional intensity-based approaches often struggle with the distinct intensity discrepancies between cortical and trabecular bones, compounded by the overlapping intensity ranges between trabecular bones and soft tissues. This often results in the formation of hollows within the segmentation masks. Such inaccuracies are particularly problematic in tasks like screw fixation planning, where a precise understanding of the pelvic bone topology is critical (19). Template-based methods involve registering a CT scan with a healthy template and employing graph partitioning techniques to propagate labels. However, this strategy heavily relies on the accuracy of registration and can yield unreliable results in the presence of fractures (20). Deep learning-based bone segmentation has demonstrated significant success across various anatomies, such as the pelvis, ribs, spine, and skull (22–25). Liu et al. applied a cascade 3D UNet for the anatomical segmentation of the hipbone, sacrum, and lumbar vertebrae in CT, demonstrating the effectiveness and robustness of deep learning methods in pelvic bone segmentation (22).

2.3 Fracture detection

The application of deep learning to fractured images was initially explored in fracture detection tasks aimed at facilitating diagnosis. It has been applied across various anatomical sites, including the hand, ribs, pelvis, and spine (26–29). Notably, Jin et al. formulated rib fracture detection as a segmentation task. While this provided a rough outline of the fractured region, it is not capable of delineating the fracture surface and the fragment accurately (27). In the context of pelvic fractures, Ukai et al. integrated parallel 2D YOLOv3 models to detect pelvic fractures and subsequently combined 2D fracture candidate points to delineate the 2D fracture region (28). Additionally, Zeng et al. proposed a two-stage structure-focused contrastive learning strategy that effectively exploits the symmetry of pelvic structures for fracture detection (30). While these methods can provide substantial aid in trauma diagnosis and clinical decision-making, they fall short in applications requiring precise delineation of fragments for image-guided surgery.

2.4 Fracture segmentation

Various methods have been explored to isolate fractured bone fragments from CT scans, including fixed or adaptive thresholding, watershed algorithms, non-rigid registration, sheetness-based approaches, and region growing (6, 31–34). These techniques generally rely on the intensity similarity and continuity of boundary gradients to segment fractures. For instance, Yuan et al. proposed a semi-automatic graph cut method based on continuous max-flow to segment pelvic fractures, which involves manual selection of seed points and a trial-and-error process (5, 35). Similarly, Wang et al. developed an automatic max-flow segmentation approach using graph cuts and boundary-enhancing filters. While effective in separating fragments, this method often struggles with fragments in collision or compression (36). Despite these advancements, a fully automatic and robust solution for fracture segmentation remains elusive.

Deep learning-based fracture segmentation remains a relatively under-explored area, yet several studies have demonstrated its significant potential. For instance, Yang et al. applied a two-stage Mask R-CNN model to locate and segment intertrochanteric fractures in 2D images (37). Kim et al. leveraged a DeepLab model to automatically segment bone fragments of the tibia and fibula from CT scans (38). Furthermore, Wang et al. employed the V-Net architecture for segmenting intertrochanteric femoral fractures (39). Limited data size has been identified as a common challenge in these studies, constraining segmentation accuracy, especially for small bone fragments. This limitation underscores the need for innovative solutions in both the development of robust datasets and the more efficient use of available data.

2.5 Differences from the conference version

This study expands upon our initial conference paper presented at the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023), advancing the original work in its performance, depth, and practical utility (40). Firstly, we have enhanced the fractured CT dataset with a larger patient cohort, a more comprehensive range of fracture types, and refined annotations of pelvic bone fragments. Secondly, we have optimized the design of the anatomical segmentation network and experimented on its training setups, substantially boosting its performance on fractured data. Thirdly, we have incorporated more thorough experiments on comparing methods and parameter searching for the fracture segmentation network, demonstrating the model's effectiveness and robustness with detailed analysis.

3 Methods

3.1 Overview

Our study is dedicated to the automated segmentation of target bone fragments (specifically, the left and right hipbones and the sacrum) from CT scans. As shown in Figure 1, our methodology unfolds in three steps. Initially, an anatomical segmentation network, leveraging a cascaded 3D nn-UNet architecture, is deployed to isolate the pelvic bones from the CT scans. This network, pre-trained on a comprehensive dataset of pelvic CT images (22), undergoes further refinement on our dataset of fractured cases. Following this, a fracture segmentation network is used to segment the bone fragments within each extracted hipbone and sacral region. To establish a uniform labeling protocol across all fracture types, we assign three labels per bone: the background, the main fragment, and the minor fragments. The main fragment, typically the most substantial piece located centrally within each bone, contrasts with the minor fragments, which represent the remainder. The post-processing step further separates and labels isolated components to yield the final segmentation result.

Figure 1. Overview of the proposed pelvic fracture segmentation method. In Step 1, a cascaded UNet is employed to predict anatomical labels from pelvic CT images, which are then utilized to extract the pelvic bones from the CT. In Step 2, a distance-weighted UNet is used to segment main and minor fragments from the extracted bone regions. In Step 3, connected component analysis is performed to obtain the final segmentation results.

3.2 Pelvic bone extraction

In the initial step, we develop an anatomical segmentation network to extract pelvic bones from CT images. We employed a cascaded 3D UNet framework to predict anatomical labels from pre-processed CT images. Two five-layer UNet models are trained sequentially: the first UNet is trained on low-resolution images to enhance its contextual understanding through a larger receptive field, producing coarse segmentations of the hipbones and sacrum. Then, the second UNet is trained on full-resolution images to refine local details, taking concatenated coarse segmentation labels and CT volumes as inputs to produce precise segmentations.

The networks are pre-trained on the CTPelvic1K dataset, which contains over 1,000 high-resolution pelvic scans, and are then refined on the curated Pelvic Bone Fragments with Injuries (PENGWIN) dataset, which contains a broader range of fracture cases (detailed in Section 3.5).
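To make the cascade concrete, the inference flow can be sketched as below. This is a minimal illustration assuming hypothetical `coarse_unet` and `fine_unet` modules; the actual models follow the nn-UNet cascade configuration rather than this hand-rolled code.

```python
import torch
import torch.nn.functional as F

def cascaded_segmentation(ct_full, coarse_unet, fine_unet, low_res_factor=0.5):
    """ct_full: (1, 1, D, H, W) pre-processed CT volume."""
    # Stage 1: coarse anatomical labels from a downsampled volume (larger receptive field)
    ct_low = F.interpolate(ct_full, scale_factor=low_res_factor, mode="trilinear")
    coarse = coarse_unet(ct_low)                                    # (1, C, d, h, w) logits
    coarse = F.interpolate(coarse, size=ct_full.shape[2:], mode="trilinear")
    onehot = F.one_hot(coarse.argmax(1), coarse.shape[1]).permute(0, 4, 1, 2, 3).float()
    # Stage 2: refine at full resolution from the concatenated CT and coarse labels
    fine_logits = fine_unet(torch.cat([ct_full, onehot], dim=1))
    return fine_logits.argmax(1)                                    # anatomical label map
```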

3.3 Bone fragment segmentation

We develop a fracture segmentation network to further isolate main and minor fragments from each extracted region. As shown in Figure 2, a 3D UNet, based on nn-UNet, is selected as the backbone model (18). The model learns a non-linear mapping $M: X \to Y$, where $X$ and $Y$ are the masked CT volume and the ground-truth fragment label, respectively.

Figure 2. Training process of the fracture segmentation network. The contact fracture surfaces (CFS) are computed from the manually annotated ground truth to generate the fracture distance map (FDM), which is then mapped to the FDM weight and incorporated into the loss function. Deep supervision and the smooth transition strategy are employed to prevent excessive focus on local features.

3.3.1 Fracture distance map

The contact fracture surface (CFS) is the part where the bones collide and overlap due to compression, and is the most challenging part for both human operators and network models to delineate. We are particularly concerned about the segmentation performance in this region. To this end, we introduce guidance into the network training using fracture distance map (FDM).

The FDM is computed on the ground-truth segmentation of each data sample before training. This representation provides information about the boundary, shape, and position of the object to be segmented. First, CFS regions are identified by comparing the labels within each voxel's neighborhood. Then, the distance of each foreground voxel to the nearest CFS is calculated as its distance value $D_v$ and normalized:

$$D_v = \mathbb{I}(Y_v \geq 1)\, \min_{u \in \mathrm{CFS}} \lVert v - u \rVert_2, \tag{1}$$
$$\hat{D}_v = \frac{D_v}{\max_{v \in V} D_v}, \tag{2}$$

where $V$ is the set of all foreground voxels, $v = (h_v, w_v, d_v)$ is the index of a voxel in $V$, and $u = (h_u, w_u, d_u)$ is the index of a CFS voxel. $Y$ is the ground-truth segmentation, $\mathbb{I}(Y_v \geq 1)$ is the indicator function for the foreground, and $\hat{D}_v$ is the normalized distance. The distance is then used to calculate the FDM weight $\hat{W}$:

$$W_v = \lambda_{\mathrm{back}} + \mathbb{I}(Y_v \geq 1)\, \frac{1 - \lambda_{\mathrm{back}}}{1 + e^{\lambda_{\mathrm{FDM}} \hat{D}_v - 5}}, \tag{3}$$
$$\hat{W}_v = W_v \cdot \frac{|V|}{\sum_{v \in V} W_v}, \tag{4}$$

where $\lambda_{\mathrm{back}}$ is the weight for background voxels and $\lambda_{\mathrm{FDM}}$ is the slope parameter in the activation function. To ensure the equivalence of the loss among different samples, the weights are normalized by their sum.
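As a minimal sketch, the FDM weight of Equations 1–4 can be computed with SciPy's Euclidean distance transform. The 3×3×3 neighborhood test for the CFS is our assumption of the label-comparison rule described above, not the released implementation.

```python
import numpy as np
from scipy import ndimage

def fdm_weight(Y, lam_back=0.2, lam_fdm=16.0):
    """Y: 3D integer label volume (0 = background, >=1 = fragment labels)."""
    fg = Y >= 1
    # CFS: foreground voxels whose 3x3x3 neighborhood holds two different fragment labels
    lab_max = ndimage.maximum_filter(Y, size=3)
    lab_min = ndimage.minimum_filter(np.where(fg, Y, Y.max() + 1), size=3)
    cfs = fg & (lab_max != lab_min)
    # Eq. 1: distance of each foreground voxel to the nearest CFS voxel
    D = np.where(fg, ndimage.distance_transform_edt(~cfs), 0.0)
    # Eq. 2: normalize by the maximum distance over the foreground set V
    D_hat = D / max(D.max(), 1e-8)
    # Eq. 3: sigmoid-shaped weight, close to 1 on the CFS, lam_back on the background
    W = lam_back + fg * (1.0 - lam_back) / (1.0 + np.exp(lam_fdm * D_hat - 5.0))
    # Eq. 4: rescale so the weights over the foreground set V average to one
    return W * (fg.sum() / max(W[fg].sum(), 1e-8))
```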

3.3.2 Distance-weighted loss

The FDM weight $\hat{W}$ is then used to calculate the weighted Dice loss $\mathcal{L}_{\mathrm{dice}}$ and cross-entropy loss $\mathcal{L}_{\mathrm{ce}}$, so that the CFS gains more importance in training:

$$\mathcal{L}_{\mathrm{dice}} = 1 - \frac{2}{|L|} \sum_{l \in L} \frac{\sum_{v \in V} \hat{W}_v P_v^l Y_v^l}{\sum_{v \in V} \hat{W}_v P_v^l + \sum_{v \in V} \hat{W}_v Y_v^l}, \tag{5}$$
$$\mathcal{L}_{\mathrm{ce}} = -\frac{1}{|V||L|} \sum_{v \in V} \sum_{l \in L} \hat{W}_v Y_v^l \log(P_v^l), \tag{6}$$

where $L$ is the set of classes, and $P_v^l$ and $Y_v^l$ are the output prediction and the ground truth for the $v$th voxel of the $l$th label. The overall loss is their weighted sum:

$$\mathcal{L}_{\mathrm{total}} = \lambda_{\mathrm{dice}} \mathcal{L}_{\mathrm{dice}} + \lambda_{\mathrm{ce}} \mathcal{L}_{\mathrm{ce}}, \tag{7}$$

where $\lambda_{\mathrm{dice}}$ and $\lambda_{\mathrm{ce}}$ are balancing weights.
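A minimal PyTorch sketch of Equations 5–7 is given below; the tensor layout and the softmax placement are our assumptions rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def distance_weighted_loss(logits, target, w_hat, lam_dice=1.0, lam_ce=1.0, eps=1e-6):
    """logits: (B, L, D, H, W); target: (B, D, H, W) integer labels; w_hat: (B, D, H, W)."""
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target.long(), num_classes).permute(0, 4, 1, 2, 3).float()
    w = w_hat.unsqueeze(1)                                   # broadcast weights over classes
    # Eq. 5: weighted soft Dice, averaged over the |L| classes
    inter = (w * probs * onehot).sum(dim=(0, 2, 3, 4))
    denom = (w * probs).sum(dim=(0, 2, 3, 4)) + (w * onehot).sum(dim=(0, 2, 3, 4))
    dice = 1.0 - (2.0 * inter / (denom + eps)).mean()
    # Eq. 6: weighted cross-entropy, normalized by |V||L|
    ce = -(w * onehot * torch.log(probs + eps)).sum() / onehot.numel()
    # Eq. 7: weighted sum of the two terms
    return lam_dice * dice + lam_ce * ce
```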

3.3.3 Multi-scale deep supervision

We use a multi-scale deep supervision strategy in model training to learn different features more effectively (41). The deep layers mainly capture global features with shape and structural information, whereas the shallow layers focus more on local features that help delineate fracture surfaces. Auxiliary losses are integrated into the decoder at every resolution level except the lowest. The loss for the $n$th level $\mathcal{L}_n$ is calculated using the correspondingly down-sampled FDM weight $\hat{W}^n$ and down-sampled ground truth $Y^n$, with a level-specific $\lambda_{\mathrm{FDM}}$ in Equation 3. The $\lambda_{\mathrm{FDM}}$ of each layer decreases by a factor of 2 as the depth increases, i.e., $\lambda_{n+1} = \lambda_n / 2$. In this way, local CFS information is assigned more attention in the shallow layers, while the weights become more uniform in the deep layers.
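The level-wise weighting can be sketched as follows, reusing the `fdm_weight` helper from the previous snippet; the number of supervised levels and the trilinear downsampling are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multiscale_fdm_weights(Y, num_levels=4, lam0=16.0):
    """FDM weight maps for decoder levels 0..num_levels-1 (level n at 1/2^n resolution)."""
    weights = []
    for n in range(num_levels):
        lam_n = lam0 / (2 ** n)                       # lambda halves as depth increases
        w = torch.from_numpy(fdm_weight(Y, lam_fdm=lam_n)).float()[None, None]
        if n > 0:                                     # match the level's feature resolution
            w = F.interpolate(w, scale_factor=0.5 ** n, mode="trilinear")
        weights.append(w[0, 0])                       # ground truth Y^n is downsampled alike
    return weights
```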

3.3.4 Smooth transition

To stabilize network training, we use a smooth transition strategy to maintain the model's attention on global features at the early stage of training and gradually shift the attention toward the fracture site as the model evolves (42). The smooth transition dynamically adjusts the proportion of the FDM in the overall weight matrix based on the number of training iterations. The dynamic weight is calculated using the following formula:

$$W_{\mathrm{st}} = \begin{cases} J, & \text{if } t < \tau_{\mathrm{begin}}, \\[2pt] \dfrac{1}{1+\delta} J + \dfrac{\delta}{1+\delta} \hat{W}, & \text{if } \tau_{\mathrm{begin}} \leq t \leq \tau_{\mathrm{begin}} + \tau_{\mathrm{smooth}}, \\[2pt] \hat{W}, & \text{if } t > \tau_{\mathrm{begin}} + \tau_{\mathrm{smooth}}, \end{cases} \tag{8}$$
$$\delta = -\ln\!\left(1 - \frac{t - \tau_{\mathrm{begin}}}{\tau_{\mathrm{smooth}}} + \epsilon\right), \tag{9}$$

where $J$ is an all-ones matrix with the same size as the input volume, $t$ is the current iteration number, $\tau_{\mathrm{begin}}$ is the iteration where the transition begins, $\tau_{\mathrm{smooth}}$ is the duration of the smooth transition phase, and $\epsilon$ is a small positive constant. The dynamic weight $W_{\mathrm{st}}$ is adjusted by controlling the relative proportions of $J$ and $\hat{W}$.
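A sketch of this schedule, applied per iteration to the FDM weight map `w_hat`; the default τ values follow the grid search reported in Section 4.4.3.

```python
import numpy as np

def transition_weight(w_hat, t, tau_begin=0, tau_smooth=1000, eps=1e-6):
    """Blend a uniform weight J with the FDM weight w_hat as training proceeds (Eqs. 8-9)."""
    if t < tau_begin:
        return np.ones_like(w_hat)                    # pure uniform weights J
    if t > tau_begin + tau_smooth:
        return w_hat                                  # pure FDM weights
    # Eq. 9: delta grows from ~0 to a large value across the transition window
    delta = -np.log(1.0 - (t - tau_begin) / tau_smooth + eps)
    # Eq. 8: convex blend of J and w_hat with coefficients 1/(1+delta) and delta/(1+delta)
    return (np.ones_like(w_hat) + delta * w_hat) / (1.0 + delta)
```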

3.4 Post-processing

Connected component analysis (CCA) has been widely used in segmentation (43). However, its direct application to fracture segmentation is often complicated due to the collision between fragments. Nevertheless, after the removal of the main central fragment, the minor fragments become naturally isolated. Therefore, in the post-processing step, we further isolate the remaining minor fragments by CCA. The isolated components are then assigned different labels. Additionally, we exclude any fragments smaller than 1 cm³, as they typically do not significantly impact the outcomes in robotic surgery contexts.
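A minimal sketch of this post-processing step, assuming a binary mask of the predicted minor fragments and the dataset's average voxel spacing:

```python
import numpy as np
from scipy import ndimage

def split_minor_fragments(minor_mask, spacing=(0.83, 0.83, 0.89), min_cm3=1.0):
    """minor_mask: binary mask of predicted minor fragments; spacing in mm."""
    structure = np.ones((3, 3, 3))                   # 26-connectivity
    labeled, n = ndimage.label(minor_mask, structure=structure)
    voxel_cm3 = np.prod(spacing) / 1000.0            # voxel volume, mm^3 -> cm^3
    out = np.zeros_like(labeled)
    next_label = 1
    for comp in range(1, n + 1):
        comp_mask = labeled == comp
        if comp_mask.sum() * voxel_cm3 >= min_cm3:   # drop fragments under 1 cm^3
            out[comp_mask] = next_label
            next_label += 1
    return out
```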

3.5 Dataset

3.5.1 Data collection and distribution

We curated PENGWIN, a dataset of 150 CT scans representing a wide range of common pelvic fractures. These scans were obtained from patients who underwent pelvic reduction surgery between 2017 and 2023 at six medical centers: Beijing Jishuitan Hospital (JST), Foshan Hospital of TCM (FSHTCM), the First Bethune Hospital of Jilin University (JLUFH), the Third Bethune Hospital of Jilin University (JLUTH), Nanfang Hospital (NFH), and Tianjin Hospital (TJH). Imaging was performed using seven CT scanners, including the Toshiba Aquilion Prime, United Imaging uCT 550, Philips Brilliance 64, Siemens Sensation 64, Toshiba Aquilion One, Siemens Somatom Force, and GE Optima CT660. The retrospective use of these scans was approved by the respective institutional ethics committees.

The dataset includes patients aged 16 to 94 years, comprising 63 females and 87 males. The average voxel spacing is (0.83, 0.83, 0.89) mm, with typical image dimensions of approximately (488, 426, 323). To ensure a representative distribution, we incorporated five primary fracture types: pelvic ring dislocation (5 cases), unilateral hip fracture (54 cases), bilateral hip fractures (31 cases), sacral fracture (5 cases), and combined sacral and hip fractures (55 cases). For model development, stratified sampling allocated 120 cases for training and 30 for testing.

3.5.2 Annotation

The dataset was annotated by two experienced annotators and a senior expert. Inter-annotator variability was characterized by an Intersection over Union (IoU) of 0.984 and an Adjusted Rand Index (ARI) of 0.993. The data annotation was structured into a four-step workflow:

Initial automatic segmentation: we employed a pre-trained segmentation network based on the nn-UNet framework to produce preliminary anatomical segmentations (22). This network was trained on CTPelvic1K, most of whose scans do not present fractures.

Manual refinement of anatomical labels: the initial anatomical labels were meticulously refined by the annotators using the 3D Slicer platform.

Identification of fractured fragments: leveraging the refined anatomical labels, the annotators identified and labeled fractured bone fragments. This operation was also carried out on the 3D Slicer platform.

Expert validation: as a final checkpoint, a senior expert rigorously reviewed and modified the annotated fracture labels, ensuring their precision and consistency.

3.5.3 Labeling rule

The primary objective of our investigation is to streamline the process for automated fracture reduction planning in robotic surgeries. Within this framework, the main fragment is maneuvered to a predefined location using a robotic arm, while the minor fragments are either manually adjusted by surgeons or simply ignored. Based on our findings, separating the minor fragments is often not necessary. Hence, for consistency in annotations across our dataset, we limit the fragment count for each bone to three. This rule has been uniformly applied across all 150 cases within our dataset. In addition, to enhance the utility of our research for future studies, we have compiled a separate dataset version that includes detailed separation of minor fragments.

4 Experiments and results

4.1 Implementation

The method was implemented with PyTorch and SimpleITK. Experiments were performed with an Intel Xeon 40-core CPU, a Quadro RTX 5000 GPU, and 256 GB of memory.

4.1.1 Anatomical segmentation network training

All images underwent B-spline interpolation to resample voxel spacing to (0.83, 0.83, 0.89) mm, followed by z-score normalization. To enhance the variability of our dataset, four augmented samples were generated for each training sample. These images were created by applying random elastic distortions within a range of 80%–120%, along with random translations and rotations within the ranges of -20 to 20 mm and -30 to 30 degrees for each axis, respectively. To further strengthen resilience against noise and intensity variation, random intensity perturbations were applied with a probability of 15%, including Gaussian blur with sigma from 0.5 to 1.0, brightness scaling from 75% to 125%, contrast adjustments from 75% to 125%, and gamma transformations from 0.7 to 1.5.
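An illustrative, simplified SciPy-based sketch of these augmentations (the original work builds on the nn-UNet augmentation pipeline; elastic distortion, per-axis rotations, and gamma transforms are omitted here for brevity):

```python
import numpy as np
from scipy import ndimage

def augment(ct, labels, spacing=(0.83, 0.83, 0.89), rng=None):
    """ct, labels: 3D volumes; applies one random geometric + intensity augmentation."""
    rng = rng or np.random.default_rng()
    angle = rng.uniform(-30, 30)                      # rotation in [-30, 30] degrees (one axis)
    shift_vox = rng.uniform(-20, 20, size=3) / np.asarray(spacing)  # [-20, 20] mm in voxels
    ct = ndimage.rotate(ct, angle, axes=(1, 2), reshape=False, order=3)
    labels = ndimage.rotate(labels, angle, axes=(1, 2), reshape=False, order=0)
    ct = ndimage.shift(ct, shift_vox, order=3)
    labels = ndimage.shift(labels, shift_vox, order=0)
    if rng.random() < 0.15:                           # intensity perturbation, p = 0.15
        ct = ndimage.gaussian_filter(ct, sigma=rng.uniform(0.5, 1.0))
        ct = ct * rng.uniform(0.75, 1.25)             # brightness scaling 75%-125%
    return ct, labels
```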

The Adam optimizer with an initial learning rate of 0.0001 and a batch size of 2 was used. The learning rate was subject to exponential decay. We performed five-fold cross-validation on the training set. Each model was trained for 2,000 epochs.

4.1.2 Fracture segmentation network training

We cropped the resampled bone volumes by computing bounding boxes and normalized them with z-scores. For each training sample, eight augmented images were generated. This process involved mirror flipping along three axes, combined with random distortions, translations, rotations, and noise simulation similar to those employed for the anatomical network.

The Adam optimizer with a learning rate of 0.0001 and a batch size of 2 was used. λback was set to 0.2. Both λdice and λce were set to 1. The initial λFDM was set to 16. We conducted five-fold cross-validation on the training set, where each model was trained for 2,000 epochs.

4.2 Evaluation

We assessed the performance of various methods in anatomical and fracture segmentation using the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and the 95th percentile of the Hausdorff distance (HD95). To account for labels that were entirely missing in the prediction, their HD95 and ASSD were assigned the diameter and radius, respectively, of the ground truth's circumscribed sphere. Furthermore, we also report the median HD95 for a more comprehensive evaluation, which mitigates the impact of failure cases. For fracture segmentation, we evaluated the local Dice similarity coefficient (LDSC) within a 10 mm range around the CFS to measure performance in critical areas. Two-tailed t-tests were used to examine statistical significance.
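For illustration, the LDSC can be computed by restricting the Dice overlap to a 10 mm band around the CFS; the binary per-fragment masks and the precomputed CFS mask are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def local_dice(pred, gt, cfs_mask, spacing=(0.83, 0.83, 0.89), radius_mm=10.0):
    """pred, gt: binary masks of one fragment; cfs_mask: binary CFS voxels."""
    dist = ndimage.distance_transform_edt(~cfs_mask, sampling=spacing)
    band = dist <= radius_mm                          # voxels within 10 mm of the CFS
    p, g = pred[band].astype(bool), gt[band].astype(bool)
    return 2.0 * np.logical_and(p, g).sum() / max(p.sum() + g.sum(), 1)
```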

4.3 Experiments on anatomical segmentation

We conducted a comparative analysis of anatomical segmentation models trained on three distinct datasets: CTPelvic1K alone, PENGWIN alone, and a combination where training began on CTPelvic1K followed by fine-tuning on PENGWIN.

Figure 3 illustrates the results across five typical fracture types. The model trained solely on CTPelvic1K displayed suboptimal performance, particularly in cases involving lumbar sacralization and fractures with fragments distanced from the main fragment. This limitation is largely attributed to the dataset's predominance of intact pelvis scans and its limited diversity in fracture types. In contrast, the model trained on the combined dataset exhibited superior overall performance, benefiting significantly from the enhanced variety in fracture characteristics. Table 1 provides a quantitative comparison of the anatomical segmentation results on the test set, with paired t-tests indicating that the model trained on the combined dataset achieved better or at least comparable results to the other methods across all evaluated metrics.

Figure 3. Example anatomical segmentation results from different models. (a) Pelvic ring dislocation. (b) Unilateral hip fracture. (c) Bilateral hip fractures. (d) Sacral fracture. (e) Combined sacral and hip fractures.

Table 1. Quantitative comparison of anatomical segmentation.

4.4 Experiments on fracture segmentation

4.4.1 Ablation study and benchmark comparison

We conducted an ablation study for the fracture segmentation network, comparing the proposed method (FDMSS-UNet) against the model without smooth transition and deep supervision (FDM-UNet) and the model without distance weighting (UNet). In addition, we also compared the methods against the traditional max-flow segmentation approach and a Swin-UNETR model (5, 13).

Figures 4 and 5 provide qualitative comparisons in 3D and 2D slice views, respectively. While max-flow segmentation yields reasonable results in cases where the CFS is clear or the fragments are non-contacting, it underperforms in more complex scenarios. Both Swin-UNETR and UNet effectively identify fracture fragments but struggle with accurate delineation in complex CFS areas, leading to errors in fracture surface identification. FDM-UNet improves upon max-flow, Swin-UNETR, and UNet near the CFS areas but occasionally misidentifies non-fractured areas far from the CFS as fractured. The inclusion of FDM weighting and deep supervision with a smooth transition in FDMSS-UNet significantly enhances its performance, particularly near the CFS, making it the most effective model among those tested. In addition, Figure 6 presents the segmentation performance on both an osteoporotic fracture case and a highly complex fracture case, demonstrating that our method is effective for these two types of fracture cases.

Figure 4. Example fracture segmentation results using different methods. (a) Anterior ring fracture. (b) Posterior ring fracture. (c) Combined fracture of the anterior and posterior ring. (d) Left sacral fracture. (e) Right sacral fracture. Fractured regions are marked with red boxes.

Figure 5. Example fracture segmentation results shown on 2D axial slices. (a) Isolated and displaced fragments. (b) Isolated but stable fragments. (c) Partially separated fragments. (d) Compressed and colliding fragments. Fractured regions are marked with red boxes.

Figure 6. Fracture segmentation results on (a) an osteoporotic case and (b) a highly complex case.

Table 2 presents the quantitative results. The main fragments, which occupy a larger proportion and are always present, generally show better metric outcomes compared to the minor fragments. Deep learning methods significantly outperform traditional max-flow in the success rate of identifying fragments, particularly small ones, with statistically significant improvements. The introduction of FDM notably increases prediction accuracy in the CFS area. The strategies of deep supervision and smooth transition stabilize training, balance local and global performance, and yield the best overall results.

Table 2. Quantitative comparisons of fracture segmentation.

4.4.2 Comparison of training setups

We compared the performance of three different training setups for the FDMSS-UNet: (a) a three-class setup that differentiates the main fragment, anterior iliac (or left sacral) fragments, and posterior iliac (or right sacral) fragments; (b) a two-class setup where models were trained separately for the hipbone and sacrum data, each distinguishing between the main and minor fragments; and (c) a two-class setup with mixed hipbone and sacrum data used for training. The results are also shown in Table 2. Overall, the two-class models demonstrated superior performance across most metrics compared to the three-class model, with exceptions on HD95 and ASSD for the hipbone main fragment. Moreover, training a mixed model with both hipbone and sacrum data generally yielded better results than training separate models, likely due to a more diverse representation of fracture surface characteristics in the mixed dataset. While the separate sacrum model performed slightly better than the mixed model, the difference was not statistically significant.

4.4.3 Hyper-parameters for smooth transition

To assess the behavior of the proposed smooth transition scheme, we conducted a grid search experiment to optimize τbegin and τsmooth in Equation 8, exploring values of 0, 500, and 1,000 for each. The results indicate that the FDMSS-UNet achieved the best overall performance across most metrics when τbegin is set to 0 and τsmooth is set to 1,000.

4.4.4 Influence of fragment size

We evaluated the impact of fragment size on segmentation accuracy using the FDMSS-UNet. The results, shown in Figure 7, reveal no significant correlation between the size of the fragments and the overall segmentation accuracy. Specifically, DSC exhibited a weak positive correlation with fragment size, while HD95 and ASSD exhibited a weak negative correlation with fragment size.

Figure 7. Influence of fragment size on segmentation accuracy. (a–c) The relationship between fragment size and DSC, HD95, and ASSD, respectively. (d) The correlation matrix between fragment size and the metrics.

5 Discussion

5.1 Effectiveness of the distance-weighted loss

Our network is trained using an FDM-based loss function together with multi-scale deep supervision and smooth transition strategies. Compared with other methods, our approach uses the FDM derived from the CFS to guide network training, helping the model focus on features near the CFS. Multi-scale deep supervision and the smooth transition ensure local accuracy improvements without affecting overall performance. Experimental results show that our method achieves the best results among the compared approaches.

In practical applications, especially in semi-automatic pipelines where human operators can modify and refine network predictions, accurate initial segmentation near the fracture site is highly desirable. The fracture surface itself is often complex and intertwined, making it difficult for manual operations. Our method can accurately predict main and minor fragments, greatly simplifying the workflow. Even when network predictions are inaccurate, manual operations on a 3D view can suffice for quick modifications in most cases, eliminating the need for inefficient slice-by-slice handcrafting.

5.2 Potential impacts on subsequent tasks

In addition to improving efficiency, our method significantly enhances the delineation of fracture surfaces and ensures a consistently filled bone region without gaps in the marrow areas, compared to traditional max-flow segmentation techniques used in commercial software. These improvements are crucial for various subsequent tasks in image-guided reduction surgery (44).

First, precise segmentation of bone fragments and fracture surfaces facilitates accurate alignment, enabling accurate target pose planning and navigation. It also minimizes interference in collision detection, reducing the risk of unexpected tool-bone contact during intraoperative navigation. Errors in segmentation can lead to misjudgments, increasing surgical complications. By providing a complete and reliable bone model, our approach enhances surgical safety. Furthermore, an intact bone mask is essential for precise screw placement planning. Our method ensures structural continuity, allowing for accurate and safe trajectory design, thereby reducing the risks of implant failure and neurovascular injury (45). Additionally, our approach improves intraoperative image registration by eliminating undesired inner surface points (46). Conventional methods struggle to differentiate trabecular and cortical bone boundaries, leading to registration errors. By enhancing segmentation accuracy, our method improves point cloud registration, ensuring precise alignment between preoperative CT and intraoperative CBCT models. This, in turn, enhances the reliability of surgical guidance.

5.3 Limitations and future work

The variability in the number of bone fragments across different cases presents a challenge for deep learning-based segmentation approaches. As mentioned in Section 3.5, our study addresses this by implementing a consistent labeling strategy that simplifies the annotation process and ensures uniformity across the dataset. We limit the number of fragments for each bone to three, which aligns with the requirements of automatic fracture reduction planning for robotic surgery (47). While this approach streamlines the labeling process and reduces the complexity of the segmentation task, the CCA may not fully isolate the smaller fragments. Figure 8 presents examples of segmentation failures in cases where fractures, though not completely separated, have undergone significant distortion. These situations complicate fracture delineation, occasionally resulting in imprecise segmentation. However, with minimal manual adjustments, the resulting segmentations remain suitable for subsequent tasks. Furthermore, the current study was conducted on a limited benchmark and did not incorporate validation on additional external datasets. Because large pelvic fracture datasets are difficult to source owing to the injury's low incidence rate, we resorted to further validating the robustness of the proposed dataset and method on a few external special cases (Figure 6).

Figure 8. Examples of segmentation failures on fractures that are not completely separated but have undergone significant distortion.

In future work, we plan to investigate an instance segmentation setup that accommodates an arbitrary number of fragment labels, potentially offering a more detailed representation of fracture cases. In addition, we plan to simplify the current framework by replacing the initial anatomical segmentation network with a more lightweight bounding-box detection network, which could accelerate the process and prevent error accumulation across segmentation modules. Furthermore, we aim to extend our evaluations to a broader range of benchmarks and datasets, thereby enhancing the generalizability of our findings. We also plan to apply the proposed method to downstream tasks, including automatic target pose planning and CT-CBCT image registration, to validate its clinical feasibility in the context of robot-assisted reduction surgery (46, 48, 49). We plan to assess performance through retrospective case studies, cadaver experiments, and clinical studies.

6 Conclusion

We have proposed an automatic segmentation approach for pelvic fractures utilizing deep convolutional networks, which accurately isolates bone fragments in CT scans. Our approach incorporates a multi-scale distance-weighted loss and deep supervision with a smooth transition strategy, significantly enhancing segmentation precision at fracture sites while maintaining robust overall performance. We have evaluated our method on a well-annotated benchmark dataset of 150 pelvic fracture CT scans, which has been made publicly available to foster further research in this field. The experimental results demonstrate a significant improvement over the traditional max-flow method and a state-of-the-art network model. The proposed method holds promise for improving image-guided surgeries through enhanced surgical planning, registration, and navigation, ultimately contributing to better clinical outcomes.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://github.com/YzzLiu/FracSegNet.

Ethics statement

The studies involving humans were approved by the Jishuitan Hospital Institutional Review Board (approval 202009-04). The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent for participation in this study was provided by the participants' legal guardians/next of kin. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

YL: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing. SY: Conceptualization, Data curation, Formal analysis, Investigation, Software, Validation, Writing – review & editing. GZ: Conceptualization, Funding acquisition, Investigation, Project administration, Resources, Writing – review & editing. CS: Conceptualization, Data curation, Formal analysis, Investigation, Validation, Writing – review & editing. CL: Conceptualization, Data curation, Formal analysis, Software, Validation, Writing – review & editing. CZ: Data curation, Formal analysis, Funding acquisition, Project administration, Resources, Writing – review & editing. XW: Data curation, Formal analysis, Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing. YS: Conceptualization, Data curation, Formal analysis, Project administration, Resources, Supervision, Validation, Writing – review & editing, Writing – original draft. YW: Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing, Writing – original draft, Data curation, Formal analysis, Validation.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This work was supported by the National Key Research and Development Program of China (Grant: 2022YFC2504304), the Beijing Science and Technology Project (Grant: Z221100003522007), and the Natural Science Foundation of Beijing (Grant: L222136).

Acknowledgments

We thank Pengbo Liu et al. for their substantial contributions to the development of the CTPelvic1K dataset.

Conflict of interest

GZ, YS, and YW were employed by Beijing Rossum Robot Technology Co., Ltd.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Giannoudis PV, Grotz MR, Tzioupis C, Dinopoulos H, Wells GE, Bouamra O, et al. Prevalence of pelvic fractures, associated injuries, and mortality: the United Kingdom perspective. J Trauma Acute Care Surg. (2007) 63:875–83. doi: 10.1097/01.ta.0000242259.67486.15

2. Sathy AK, Starr AJ, Smith WR, Elliott A, Agudelo J, Reinert CM, et al. The effect of pelvic fracture on mortality after trauma: an analysis of 63,000 trauma patients. JBJS. (2009) 91:2803–10. doi: 10.2106/JBJS.H.00598

3. Boudissa M, Roudet A, Fumat V, Ruatti S, Kerschbaumer G, Milaire M, et al. Part 1: outcome of posterior pelvic ring injuries and associated prognostic factors-a five-year retrospective study of one hundred and sixty five operated cases with closed reduction and percutaneous fixation. Int Orthop. (2020) 44:1209–15. doi: 10.1007/s00264-020-04574-1

4. Ge Y, Zhao C, Wang Y, Wu X. Robot-assisted autonomous reduction of a displaced pelvic fracture: a case report and brief literature review. J Clin Med. (2022) 11:1598. doi: 10.3390/jcm11061598

5. Yuan J, Bae E, Tai XC, Boykov Y. A spatially continuous max-flow and min-cut framework for binary labeling problems. Numerische Mathematik. (2014) 126:559–87. doi: 10.1007/s00211-013-0569-x

6. Fornaro J, Székely G, Harders M. Semi-automatic segmentation of fractured pelvic bones for surgical planning. In: Biomedical Simulation: 5th International Symposium, ISBMS 2010. Phoenix, AZ: Springer (2010). p. 82–89.

7. Moolenaar JZ, Tümer N, Checa S. Computer-assisted preoperative planning of bone fracture fixation surgery: a state-of-the-art review. Front Bioeng Biotechnol. (2022) 10:1037048. doi: 10.3389/fbioe.2022.1037048

8. Kuiper RJA, Sakkers RJB, van Stralen M, Arbabi V, Viergever MA, Weinans H, et al. Efficient cascaded V-net optimization for lower extremity CT segmentation validated using bone morphology assessment. J Orthopaedic Res. (2022) 40:2894–907. doi: 10.1002/jor.25314

9. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference. Munich: Springer (2015). p. 234–241.

10. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2016: 19th International Conference. Athens: Springer (2016). p. 424–432.

11. Zhou Z, Rahman Siddiquee MM, Tajbakhsh N, Liang J. UNet++: a nested U-Net architecture for medical image segmentation. In: Deep learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018. Granada: Springer (2018). p. 3–11.

12. Milletari F, Navab N, Ahmadi SA. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV). Stanford, CA: IEEE (2016). p. 565–571.

13. Hatamizadeh A, Nath V, Tang Y, Yang D, Roth HR, Xu D. Swin UNETR: swin transformers for semantic segmentation of brain tumors in MRI images. In: Crimi A, Bakas S, editors. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. Cham: Springer International Publishing (2022). p. 272–84.

14. Chen J, Lu Y, Yu Q, Luo X, Adeli E, Wang Y, et al. Transunet: Transformers make strong encoders for medical image segmentation. arXiv [preprint] arXiv:210204306. (2021). doi: 10.48550/arXiv.2102.04306

15. Roy S, Koehler G, Ulrich C, Baumgartner M, Petersen J, Isensee F, et al. MedNeXt: transformer-driven scaling of ConvNets for medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer (2023). p. 405–415.

16. Huang Z, Wang H, Deng Z, Ye J, Su Y, Sun H, et al. STU-Net: Scalable and transferable medical image segmentation models empowered by large-scale supervised pre-training. arXiv [preprint] arXiv:230406716. (2023). doi: 10.48550/arXiv.2304.06716

17. Ma J, Li F, Wang B. U-mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv [preprint] arXiv:240104722. (2024). doi: 10.48550/arXiv.2401.04722

18. Isensee F, Jaeger PF, Kohl SA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. (2021) 18:203–11. doi: 10.1038/s41592-020-01008-z

19. Paulano F, Jiménez JJ, Pulido R. 3D segmentation and labeling of fractured bone from CT images. Visual Comp. (2014) 30:939–48. doi: 10.1007/s00371-014-0963-0

20. Seim H, Kainmueller D, Heller M, Lamecker H, Zachow S, Hege HC. Automatic segmentation of the pelvic bones from CT data based on a statistical shape model. VCBM. (2008) 8:93–100. doi: 10.5555/2384008.2384023

21. Liu J, Xing F, Shaikh A, French B, Linguraru MG, Porras AR. Joint cranial bone labeling and landmark detection in pediatric CT images using context encoding. IEEE Trans Med Imag. (2023) 42:3117–26. doi: 10.1109/TMI.2023.3278493

22. Liu P, Han H, Du Y, Zhu H, Li Y, Gu F, et al. Deep learning to segment pelvic bones: large-scale CT datasets and baseline models. Int J Comput Assist Radiol Surg. (2021) 16:749–56. doi: 10.1007/s11548-021-02363-8

23. Yang J, Gu S, Wei D, Pfister H, Ni B. RibSeg dataset and strong point cloud baselines for rib segmentation from CT scans. In: Medical Image Computing and Computer Assisted Intervention-MICCAI 2021: 24th International Conference. Strasbourg: Springer (2021). p. 611–621.

24. Cheng P, Yang Y, Yu H, He Y. Automatic vertebrae localization and segmentation in CT with a two-stage Dense-U-Net. Sci Rep. (2021) 11:22156. doi: 10.1038/s41598-021-01296-1

25. Liu J, Xing F, Shaikh A, Linguraru MG, Porras AR. Learning with context encoding for single-stage cranial bone labeling and landmark localization. In: Medical Image Computing and Computer Assisted Intervention-MICCAI 2022: 25th International Conference. Singapore: Springer (2022). p. 286–296.

26. Wang W, Huang W, Lu Q, Chen J, Zhang M, Qiao J, et al. Attention mechanism-based deep learning method for hairline fracture detection in hand X-rays. Neural Comp Appl. (2022) 34:18773–85. doi: 10.1007/s00521-022-07412-0

27. Jin L, Yang J, Kuang K, Ni B, Gao Y, Sun Y, et al. Deep-learning-assisted detection and segmentation of rib fractures from CT scans: development and validation of FracNet. EBioMedicine. (2020) 62:103106. doi: 10.1016/j.ebiom.2020.103106

28. Ukai K, Rahman R, Yagi N, Hayashi K, Maruo A, Muratsu H, et al. Detecting pelvic fracture on 3D-CT using deep convolutional neural networks with multi-orientated slab images. Sci Rep. (2021) 11:11716. doi: 10.1038/s41598-021-91144-z

29. Tomita N, Cheung YY, Hassanpour S. Deep neural networks for automatic detection of osteoporotic vertebral fractures on CT scans. Comput Biol Med. (2018) 98:8–15. doi: 10.1016/j.compbiomed.2018.05.011

30. Zeng B, Wang H, Xu J, Tu P, Joskowicz L, Chen X. Two-stage structure-focused contrastive learning for automatic identification and localization of complex pelvic fractures. IEEE Trans Med Imaging. (2023) 42:2751–62. doi: 10.1109/TMI.2023.3264298

31. Tomazevic M, Kreuh D, Kristan A, Puketa V, Cimerman M. Preoperative planning program tool in treatment of articular fractures: process of segmentation procedure. In: XII Mediterranean Conference on Medical and Biological Engineering and Computing 2010. Chalkidiki: Springer (2010). p. 430–433.

32. Neubauer A, Bühler K, Wegenkittl R, Rauchberger A, Rieger M. Advanced virtual corrective osteotomy. In: International Congress Series. London: Elsevier (2005). p. 684–689.

33. Pettersson J, Knutsson H, Borga M. Non-rigid registration for automatic fracture segmentation. In: 2006 International Conference on Image Processing. Atlanta, GA: IEEE (2006). p. 1185–1188.

34. Bittner-Frank M, Strassl A, Unger E, Hirtler L, Eckhart B, Koenigshofer M, et al. Accuracy analysis of 3D bone fracture models: effects of computed tomography (CT) imaging and image segmentation. J Imag Inform Med. (2024) 37:1889–901. doi: 10.1007/s10278-024-00998-y

35. Han R, Uneri A, Vijayan RC, Wu P, Vagdargi P, Sheth N, et al. Fracture reduction planning and guidance in orthopaedic trauma surgery via multi-body image registration. Med Image Anal. (2021) 68:101917. doi: 10.1016/j.media.2020.101917

36. Wang D, Yu K, Feng C, Zhao D, Min X, Li W. Graph cuts and shape constraint based automatic femoral head segmentation in CT images. In: Proceedings of the Third International Symposium on Image Computing and Digital Medicine. New York, NY: Association for Computing Machinery (2019). p. 1–6.

37. Yang L, Gao S, Li P, Shi J, Zhou F. Recognition and segmentation of individual bone fragments with a deep learning approach in CT scans of complex intertrochanteric fractures: a retrospective study. J Digit Imaging. (2022) 35:1681–9. doi: 10.1007/s10278-022-00669-w

38. Kim H, Jeon YD, Park KB, Cha H, Kim MS, You J, et al. Automatic segmentation of inconstant fractured fragments for tibia/fibula from CT images using deep learning. Sci Rep. (2023) 13:20431. doi: 10.1038/s41598-023-47706-4

39. Wang D, Wu Z, Fan G, Liu H, Liao X, Chen Y, et al. Accuracy and reliability analysis of a machine learning based segmentation tool for intertrochanteric femoral fracture CT. Front Surg. (2022) 9:913385. doi: 10.3389/fsurg.2022.913385

40. Liu Y, Yibulayimu S, Sang Y, Zhu G, Wang Y, Zhao C, et al. Pelvic fracture segmentation using a multi-scale distance-weighted neural network. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer (2023). p. 312–321.

41. Wang J, Zhang X, Guo L, Shi C, Tamura S. Multi-scale attention and deep supervision-based 3D UNet for automatic liver segmentation from CT. Mathem Biosci Eng: MBE. (2023) 20:1297–316. doi: 10.3934/mbe.2023059

42. Qamar S, Jin H, Zheng R, Ahmad P, Usama M. A variant form of 3D-UNet for infant brain segmentation. Future Generat Comp Syst. (2020) 108:613–23. doi: 10.1016/j.future.2019.11.021

43. Ghimire K, Chen Q, Feng X. Head and neck tumor segmentation with deeply-supervised 3D UNet and progression-free survival prediction with linear model. In: Head and Neck Tumor Segmentation and Outcome Prediction: Second Challenge, HECKTOR 2021, Held in Conjunction with MICCAI 2021. Strasbourg: Springer (2022). p. 141–149.

44. Yibulayimu S, Sang Y, Liu Y, Zhu G, Wang Y, Zhao C, et al. Automatic pelvic structure restoration: a sim-to-real approach via recursive pose estimation network. In: 2024 IEEE International Symposium on Biomedical Imaging (ISBI). Athens: IEEE (2024). p. 1–5.

45. Yang Q, Weng X, Xia C, Shi C, Liu J, Liang C, et al. Comparison between guide plate navigation and virtual fixtures in robot-assisted osteotomy. Comput Methods Biomech Biomed Eng. (2024) 27:1387–97. doi: 10.1080/10255842.2023.2243359

46. Liu Y, Sang Y, Yibulayimu S, Zhu G, Shi C, Liang C, et al. Automatic intraoperative CT-CBCT registration for image-guided pelvic fracture reduction. In: 2024 IEEE International Symposium on Biomedical Imaging (ISBI). Athens: IEEE (2024). p. 1–5.

47. Liu Y, Wu X, Sang Y, Zhao C, Wang Y, Shi B, et al. Evolution of surgical robot systems enhanced by artificial intelligence: a review. Adv Intellig Syst. (2024) 6:2300268. doi: 10.1002/aisy.202300268

48. Yibulayimu S, Liu Y, Sang Y, Zhu G, Wang Y, Liu J, et al. Pelvic fracture reduction planning based on morphable models and structural constraints. In: Greenspan H, Madabhushi A, Mousavi P, Salcudean S, Duncan J, Syeda-Mahmood T, et al., editors. Medical Image Computing and Computer Assisted Intervention-MICCAI 2023. Cham: Springer Nature Switzerland (2023). p. 322–32.

49. Liu Y, Yibulayimu S, Sang Y, Zhu G, Shi C, Liang C, et al. Preoperative fracture reduction planning for image-guided pelvic trauma surgery: a comprehensive pipeline with learning. Med Image Anal. (2025) 102:103506. doi: 10.1016/j.media.2025.103506

Keywords: CT segmentation, deep learning, pelvic fracture, reduction planning, image-guided surgery

Citation: Liu Y, Yibulayimu S, Zhu G, Shi C, Liang C, Zhao C, Wu X, Sang Y and Wang Y (2025) Automatic pelvic fracture segmentation: a deep learning approach and benchmark dataset. Front. Med. 12:1511487. doi: 10.3389/fmed.2025.1511487

Received: 15 October 2024; Accepted: 28 March 2025;
Published: 15 April 2025.

Edited by:

Björn Krüger, University of Bonn, Germany

Reviewed by:

Ruchi Mittal, Chitkara University, India
Muhammad Usman Saeed, Central South University, China

Copyright © 2025 Liu, Yibulayimu, Zhu, Shi, Liang, Zhao, Wu, Sang and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yudi Sang, sangyudi@rossumrobot.cn; Yu Wang, wangyu@buaa.edu.cn
