ORIGINAL RESEARCH article

Front. Radiol.

Sec. Artificial Intelligence in Radiology

Volume 5 - 2025 | doi: 10.3389/fradi.2025.1680803

Synthetic CT Generation from CBCT Using Deep Learning for Adaptive Radiotherapy in Prostate Cancer

Provisionally accepted
Mustafa Çağlar1*, Kerime Selin Ertaş2, Mehmet Sıddık Cebe2, Navid Kheradmand2, Evrim Metcalfe2
  • 1Istanbul Medipol University, Istanbul, Türkiye
  • 2Istanbul Medipol University, Fatih, Türkiye

The final, formatted version of the article will be published soon.

Objective: This study evaluated the accuracy of deep learning models developed to generate synthetic CT (sCT) from conventional Cone Beam Computed Tomography (CBCT) images of prostate cancer patients. The clinical applicability of these sCTs in treatment planning and their potential to support adaptive radiotherapy decision-making were also investigated.

Methods: A total of 50 CBCT–CT pairs were obtained from 10 retrospectively selected prostate cancer patients; each patient contributed one planning CT (pCT) and five CBCT scans acquired on different days during the course of treatment. All images were preprocessed, anatomically matched, z-score normalised, and used as input to U-Net and ResU-Net models trained with PyTorch. The resulting sCT outputs were quantitatively compared with the pCT using metrics such as SSIM, PSNR, MAE, and the HU difference distribution.

Results: Both models produced sCT images with higher similarity to pCT than the original CBCT images. The mean SSIM was 0.763±0.040 for CBCT–CT pairs, 0.840±0.026 with U-Net, and 0.851±0.026 with ResU-Net, a significant increase for both models (p<0.05). PSNR values were 21.55±1.38 dB for CBCT, 24.74±1.83 dB for U-Net, and 25.24±1.61 dB for ResU-Net; ResU-Net achieved a statistically significantly higher PSNR than U-Net (p<0.05). In terms of MAE, the mean error of 75.2±18.7 HU for CBCT–CT pairs was reduced to 65.3±14.8 HU by U-Net and to 61.8±13.7 HU by ResU-Net (p<0.05).

Conclusion: Deep learning models with relatively simple architectures such as U-Net and ResU-Net provide effective and feasible solutions for generating clinically relevant sCT from CBCT images, supporting accurate dose calculation and facilitating adaptive radiotherapy workflows in prostate cancer management.
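The preprocessing and evaluation steps described in the abstract (z-score normalisation, MAE in HU, and PSNR) can be sketched as below. This is a minimal illustration, not the authors' implementation: the `data_range` used for PSNR is an assumed HU window width, since the paper's exact value is not given here, and SSIM (which typically requires a library such as scikit-image) is omitted.

```python
import numpy as np

def zscore_normalise(img):
    # Z-score normalisation: zero mean, unit standard deviation per volume,
    # as used to prepare CBCT/CT inputs for the networks
    return (img - img.mean()) / img.std()

def mae_hu(pred, ref):
    # Mean absolute error between sCT and pCT, in Hounsfield units
    return float(np.mean(np.abs(pred - ref)))

def psnr_db(pred, ref, data_range=2000.0):
    # Peak signal-to-noise ratio in dB.
    # data_range is an assumed HU span (e.g. -1000..1000 HU); the
    # paper's actual normalisation range may differ.
    mse = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

For example, two images differing by a constant 10 HU over a 1000 HU range give MAE = 10 HU and PSNR = 40 dB, which helps sanity-check the metric implementations.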

Keywords: synthetic CT, CBCT, deep learning, image-guided radiotherapy (IGRT), adaptive radiotherapy

Received: 06 Aug 2025; Accepted: 22 Oct 2025.

Copyright: © 2025 Çağlar, Ertaş, Cebe, Kheradmand and Metcalfe. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mustafa Çağlar, mcaglar@medipol.edu.tr

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.