AUTHOR=Yang Mingyu, Ma Jianli, Zhang Chengcheng, Zhang Liming, Xu Jianyu, Liu Shilong, Li Jian, Han Jiabin, Hu Songliu
TITLE=Multimodal data deep learning method for predicting symptomatic pneumonitis caused by lung cancer radiotherapy combined with immunotherapy
JOURNAL=Frontiers in Immunology
VOLUME=15 (2024)
YEAR=2025
URL=https://www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2024.1492399
DOI=10.3389/fimmu.2024.1492399
ISSN=1664-3224
ABSTRACT=

Objectives: The combination of immunotherapy and radiotherapy for locally advanced non-small cell lung cancer (NSCLC) has shown promise, but the synergy between the two modalities not only bolsters antitumor efficacy but also exacerbates lung injury. Consequently, a model capable of accurately predicting radiotherapy- and immunotherapy-related pneumonitis in lung cancer patients is a pressing need. In this study, deep image features extracted by a deep learning network were combined with radiomic and clinical characteristics to build a model forecasting symptomatic pneumonitis (SP, Grade ≥2) in lung cancer patients undergoing thoracic radiotherapy combined with immunotherapy.

Methods: Predictions were based on CT scans acquired before the start of thoracic radiotherapy. Clinical data were retrospectively collected for 261 lung cancer patients treated with thoracic radiotherapy combined with immunotherapy between January 2018 and May 2023, and pre-radiotherapy CT images were obtained for all included patients. The region of interest (ROI) in the lung parenchyma was delineated separately from the tumor volume, and standard radiomic features were extracted with 3D Slicer. The images were cropped to a uniform size of 224×224 pixels, data augmentation (including random horizontal flipping) was applied, and the normalized images were fed into a pre-trained deep residual network, ResNet34, whose convolutional layers and global average pooling layer performed the deep feature extraction; a minimal sketch of this step follows below. The model was built with five-fold cross-validation, which automatically split the dataset into training and validation sets at an 8:2 ratio; this process was repeated five times, and the results were aggregated into average performance metrics to assess the overall performance and stability of the model.
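The following is a minimal, hedged sketch of the preprocessing and deep-feature extraction step described in Methods, using PyTorch/torchvision. The ImageNet normalization statistics, the RGB conversion of CT slices, and the helper name extract_deep_features are illustrative assumptions; the abstract does not specify these details.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Training-time augmentation named in the abstract: random horizontal flip.
augment = transforms.RandomHorizontalFlip(p=0.5)

# Deterministic preprocessing: resize to 224x224, tensor conversion, and
# normalization (ImageNet statistics are an assumption, plausible because
# the backbone is pre-trained on ImageNet).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pre-trained ResNet34 truncated before its classification head: the
# convolutional stages plus global average pooling yield one 512-d feature
# vector per image, matching the extraction pipeline in the abstract.
backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
feature_extractor.eval()

@torch.no_grad()
def extract_deep_features(image):
    """image: a PIL image of one CT slice (converted to 3-channel RGB here,
    an assumption, since ResNet34 expects 3 input channels)."""
    x = preprocess(image.convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
    feats = feature_extractor(x)                       # (1, 512, 1, 1)
    return feats.flatten(1)                            # (1, 512)
```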
Results: The multimodal fusion model developed in this study, which integrated deep image features, radiomic features, and clinical data, achieved an AUC of 0.922 (95% CI: 0.902-0.945, P < 0.001). This fused model surpassed the radiomic feature model (AUC 0.811, 95% CI: 0.786-0.832, P < 0.001), the clinical information model (AUC 0.711, 95% CI: 0.682-0.753, P < 0.001), and the model combining radiomic features with clinical data (AUC 0.872, 95% CI: 0.845-0.896, P < 0.001), all built on deep neural networks (DNNs). By comparison, the random forest (RF) model based on radiomic features yielded an AUC of 0.576 (95% CI: 0.523-0.628), the RF model based on clinical information an AUC of 0.525 (95% CI: 0.479-0.572), and the RF model combining radiomic features and clinical information a slightly improved AUC of 0.611 (95% CI: 0.566-0.652).

Conclusions: In this study, a deep neural network-based multimodal fusion model improved prediction performance over traditional radiomics and accurately predicted Grade 2 or higher SP in lung cancer patients undergoing radiotherapy combined with immunotherapy.
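To make the fusion and evaluation scheme concrete, here is a minimal sketch of how such a multimodal DNN and its five-fold cross-validation might be assembled, assuming simple concatenation of the three feature blocks. The hidden-layer widths, dropout rate, training schedule, radiomic feature count (107), and clinical feature count (10) are all assumptions for illustration; the abstract does not report the actual architecture or hyperparameters.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

class FusionDNN(nn.Module):
    """Late-fusion MLP over concatenated deep, radiomic, and clinical features.
    Layer sizes are illustrative assumptions, not the study's architecture."""
    def __init__(self, deep_dim=512, radiomics_dim=107, clinical_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(deep_dim + radiomics_dim + clinical_dim, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # logit for symptomatic pneumonitis (Grade >= 2)
        )

    def forward(self, deep_f, radiomics_f, clinical_f):
        x = torch.cat([deep_f, radiomics_f, clinical_f], dim=1)
        return self.net(x).squeeze(1)

def cross_validate(deep_f, radiomics_f, clinical_f, y, epochs=100):
    """Five-fold CV with ~8:2 train/validation splits, averaging AUC across
    folds, as the abstract describes. Inputs are NumPy arrays; y holds 0/1
    labels for SP."""
    aucs = []
    for tr, va in StratifiedKFold(5, shuffle=True, random_state=0).split(deep_f, y):
        model = FusionDNN(deep_f.shape[1], radiomics_f.shape[1], clinical_f.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()
        xt = [torch.tensor(a[tr], dtype=torch.float32)
              for a in (deep_f, radiomics_f, clinical_f)]
        yt = torch.tensor(y[tr], dtype=torch.float32)
        for _ in range(epochs):  # full-batch training, a simplification
            opt.zero_grad()
            loss_fn(model(*xt), yt).backward()
            opt.step()
        with torch.no_grad():
            xv = [torch.tensor(a[va], dtype=torch.float32)
                  for a in (deep_f, radiomics_f, clinical_f)]
            probs = torch.sigmoid(model(*xv)).numpy()
        aucs.append(roc_auc_score(y[va], probs))
    return float(np.mean(aucs))
```

Concatenating the three feature blocks before a shared MLP is the simplest late-fusion design; averaging the AUC over the five folds mirrors how the abstract reports its performance metrics, though the exact fusion mechanism used in the study may differ.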