Machine-learning (ML) methodologies have shown immense promise in clinical prediction, diagnosis, and decision support. Recent studies of clinical ML models have reported strong performance in retrospective and in silico validation settings, often by leveraging complex models and expansive multimodal datasets. Despite these advances, significant barriers continue to hinder ML models from transitioning beyond internal validation to real-world clinical deployment. This disconnect underscores persistent challenges in model generalizability, transparency, and accessibility within clinical settings, all of which must be addressed before ML's potential in healthcare can be fully harnessed.
This Research Topic aims to explore the journey of translating ML models from theoretical development to practical clinical implementations. While performance metrics such as AUC-ROC and accuracy are frequently emphasized in current ML studies, they offer limited insights into the true clinical value of these models. To move ML from "computer-to-bedside," studies must consider crucial facets such as model interpretability, external validation, real-world testing, regulatory requirements, and the seamless integration of these models into clinical workflows.
To gather further insights into the translational journey of ML in healthcare, we welcome articles addressing, but not limited to, the following themes:
• External validation of ML models in clinical environments or using multicentre datasets
• Shadow testing of ML models in real-world clinical environments
• Prospective trials and cohort studies of ML-powered clinical decision support tools
• Novel frameworks for clinical deployment of ML models, such as local explainability methods or clinical calculator platforms
• Regulatory, ethical, and equity considerations in ML implementation
• Usability testing and human-AI interaction, including clinician trust and alert fatigue in healthcare settings
• Failure modes of ML deployments and lessons learned from unsuccessful implementations
Submissions should emphasize translational relevance and provide extensive methodological details to support reproducibility. Rather than focusing solely on the technical aspects of model development, we encourage work that engages with clinical context and applicability. Studies involving general-purpose large language models (LLMs) or AI chatbots are welcome when these models are fine-tuned for clinical purposes and evaluated in real-world settings. Interdisciplinary perspectives from data science, medicine, implementation science, and policy are encouraged to foster comprehensive discussions on advancing ML integration in clinical practice.
Topic Editors Jiawen Deng and Fangwen Zhou have received a research grant from OpenAI. The other Topic Editors declare no conflicts of interest.
Article types and fees
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Case Report
Clinical Trial
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Review
Study Protocol
Systematic Review
Technology and Code
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
Keywords: machine learning, neural network, deep learning, knowledge translation, clinical application
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.