Generative AI and Large Language Models (LLMs) stand at the forefront of technological innovation, reshaping our interactions with data across numerous fields, particularly healthcare. In this sector, LLMs have demonstrated immense potential for automating clinical documentation, bolstering diagnostic reasoning, and enriching patient communication. Furthermore, their ability to process extensive structured and unstructured data offers unprecedented avenues for enhancing the quality, efficiency, and accessibility of patient care. Despite these promising capabilities, deploying LLMs within healthcare environments presents notable challenges. The issues of clinical accuracy, ethical and regulatory compliance, patient privacy, and bias in model outputs are prominent concerns. Consequently, a comprehensive understanding of integrating LLMs responsibly into medical workflows is urgently required.
This Research Topic aims to compile pioneering research, case studies, and expert viewpoints on the application of Generative AI and LLMs in medicine and healthcare. We endeavor to explore not only the potential applications but also the performance, limitations, and necessary safeguards for their responsible use. By fostering multidisciplinary dialogue, this Research Topic seeks to illuminate both the opportunities and critical issues encountered in adopting LLMs across domains such as clinical practice, medical education, healthcare policy, and research.
To gather further insights into the responsible integration of LLMs into healthcare, we welcome articles addressing, but not limited to, the following themes:
- Clinical applications of LLMs in disease diagnosis, triage, and differential diagnosis
- Integration of LLMs with multimodal data, including electronic health records, imaging, and pathology
- Use of LLMs for biomedical literature mining, knowledge graph generation, and clinical guideline development
- Enhancing doctor-patient communication, health literacy, and telehealth services through LLMs
- Techniques for improving the transparency, interpretability, and reliability of LLM outputs
- Real-world evaluations of LLMs' impact on clinical decision-making, patient safety, and healthcare costs
- Ethical challenges in deploying LLMs in healthcare, including bias, fairness, and accountability
- Regulatory, legal, and data governance considerations in medical AI deployment
- Educational uses of LLMs for medical students and professionals, including tutoring and simulation
- Comparative analyses of open-source versus proprietary LLMs in healthcare environments
This Research Topic invites contributions from researchers, clinicians, engineers, ethicists, and policymakers to collectively pave a responsible and impactful path for the use of Generative AI in advancing human health.
Article types and fees
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to Authors, institutions, or funders.
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Clinical Trial
Conceptual Analysis
Curriculum, Instruction, and Pedagogy
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Policy and Practice Reviews
Policy Brief
Review
Study Protocol
Systematic Review
Technology and Code
Keywords: Generative AI, Large Language Models (LLMs), Medical Artificial Intelligence, Healthcare Applications, Clinical Decision Support, Ethical AI in Healthcare, Digital Health Innovation, AI in Clinical Workflows
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.