In the rapidly evolving field of radiation oncology, artificial intelligence (AI) has the potential to transform patient care. Within this context, enhancing the reliability and explainability of AI models is critical, particularly through approaches such as uncertainty quantification and interpretable model design. Addressing these elements not only ensures robust AI-driven insights but also builds trust among clinical practitioners, improving real-world decision-making.
This collection seeks contributions that explore advanced methodologies for AI model reliability within radiation oncology, including but not limited to uncertainty quantification and explainability. Innovative approaches that leverage multi-modal AI integration, employ foundation models, or implement novel frameworks for enhancing AI interpretability are particularly encouraged. Work that addresses both technical and translational aspects can further the field's impact on actionable patient outcomes.
To capture comprehensive insights into AI reliability and explainability and their applications in radiation oncology, we welcome articles covering, but not limited to, the following themes:
• Advanced uncertainty estimation techniques in contemporary AI models (a minimal sketch follows this list).
• Design and deployment of explainable and reliable AI systems in healthcare.
• Multi-modal AI applications and their integration into radiation oncology workflows.
• Automatic treatment planning and outcome prediction using AI in radiation oncology.
• Reliability of foundation models (LLMs, vision-language models, among others).
• Practical assessments of AI applications in clinical settings, including adaptive therapy and quality assurance.
• Techniques for improving AI model robustness against dataset-induced biases.
• Real-world case studies showcasing AI's role in enhancing radiotherapy outcomes.
• Evaluation and detection of out-of-distribution (OOD) data and its implications for AI effectiveness (a baseline sketch appears after the next paragraph).
• Collaborative approaches merging AI-driven insights with clinician expertise to boost treatment personalization.
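To make the first theme concrete, the following is a minimal sketch of Monte Carlo dropout, one common uncertainty estimation technique, applied to a toy segmentation network of the kind used for organ-at-risk contouring. It assumes PyTorch; SegNet, N_SAMPLES, and the dropout rate are illustrative placeholders, not a reference implementation of any specific published method.

import torch
import torch.nn as nn

N_SAMPLES = 20  # number of stochastic forward passes (illustrative choice)

class SegNet(nn.Module):
    """Toy stand-in for an organ-at-risk segmentation network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.2),             # kept active at inference time
            nn.Conv2d(16, 2, 3, padding=1),  # 2 classes: background / organ
        )

    def forward(self, x):
        return self.body(x)

@torch.no_grad()
def mc_dropout_predict(model, image):
    """Return mean softmax probabilities and per-voxel predictive entropy."""
    model.eval()
    # Re-enable only the dropout layers, leaving e.g. batch norm in eval mode.
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    probs = torch.stack([
        torch.softmax(model(image), dim=1) for _ in range(N_SAMPLES)
    ])                                   # (N_SAMPLES, B, C, H, W)
    mean = probs.mean(dim=0)             # averaged prediction
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(dim=1)  # (B, H, W)
    return mean, entropy

model = SegNet()
ct_slice = torch.randn(1, 1, 64, 64)     # fake single-channel CT slice
mean_probs, uncertainty = mc_dropout_predict(model, ct_slice)
print(uncertainty.shape)                 # torch.Size([1, 64, 64])

High-entropy voxels mark regions where the stochastic passes disagree, which is one way such maps can be surfaced to clinicians for contour review.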
We welcome submissions that integrate these methodologies effectively into broader AI and oncology contexts. This multidisciplinary focus aims to bridge the gap between emerging AI technologies and their tangible benefits in patient care.
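As a second illustration of the themes above, the following is a minimal sketch of OOD flagging using the maximum softmax probability baseline (in the style of Hendrycks and Gimpel). The threshold is an illustrative placeholder that would in practice be calibrated on held-out in-distribution data, and the routing logic is an assumption about one possible clinical workflow.

import torch

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Higher score = more in-distribution-looking; returns shape (B,)."""
    return torch.softmax(logits, dim=1).max(dim=1).values

THRESHOLD = 0.7  # placeholder; calibrate on in-distribution validation data

logits = torch.randn(4, 5)               # fake logits for 4 cases, 5 classes
scores = msp_score(logits)
flag_for_review = scores < THRESHOLD     # route low-confidence cases to a clinician
print(list(zip(scores.tolist(), flag_for_review.tolist())))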
Please note: manuscripts consisting solely of bioinformatics or computational analyses of public databases, or predictions derived from them, are not suitable for publication in this journal unless they are accompanied by validation in an independent clinical or patient cohort, or by biological validation in vitro or in vivo that does not itself rely on public databases.
Article types and fees
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to Authors, institutions, or funders.
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Case Report
Clinical Trial
Editorial
FAIR² Data
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Review
Systematic Review
Technology and Code
Keywords: AI Reliability, Explainability, Radiation Oncology, Uncertainty Quantification, Multi-modal AI, Radiotherapy, Clinical Decision-making, Model Robustness, Treatment Personalization, AI Interpretability
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.