About this Research Topic

Submission closed.

The potential of AI to increase the access, quality, and efficiency of healthcare is high, with a wide variety of applications being developed at a rapid pace. Some of the most significant and promising AI health applications include AI for diagnostics, decision support systems, patient care monitoring, robotics, personalized medicine, drug discovery, clinical trials, surveillance, and organizational workflow. Nevertheless, despite this high potential, the actual adoption of AI-based tools in healthcare remains limited. This is partly due to new challenges that emerge with AI, including the limited availability of large, high-quality data sets, organizational challenges, and the limited ability, or inability, to 'explain' the AI decision-making process, making the technologies challenging to access and trust. To achieve safe, scalable, and sustainable AI adoption in health, it is important to advance trustworthy AI, including multifaceted aspects such as the transparency, interpretability, and explainability of AI-supported technologies.
Achieving trustworthy AI requires collaboration between academia and industry, where bridging these worlds is paramount to fostering progress. Industry and academia bring different expertise and perspectives to the table. Industry often has practical, real-world experience with implementing and deploying AI systems, while academia often focuses on research and theory. By collaborating, these two groups can combine their strengths and insights to create trustworthy AI systems and mitigate concerns about the potential negative impacts of AI.

AI systems will continue to impact and revolutionize the healthcare system in ways we cannot yet imagine. In this context, it is crucial that the AI systems we develop and implement are worthy of our trust, and that risks and possible adverse effects are considered. However, despite a broad consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI in health. To trust AI technologies, we need to know that they are fair, reliable, not harmful, and accountable. The High-Level Expert Group on AI (AI HLEG) has identified three components of trustworthy AI, which should be met throughout the AI system's entire life cycle:

• it should be lawful, complying with all applicable laws and regulations;
• it should be ethical, ensuring adherence to ethical principles and values; and
• it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

In this Research Topic, we seek to explore different facets of trustworthy AI in health. We welcome the submission of Original Research, Reviews, Methods, and Perspective articles related to the Research Topic on the Trustworthy Adoption of AI in Healthcare, and we encourage collaboration between academia and industry in addressing these topics. Following the requirements for Trustworthy Artificial Intelligence, as specified by the AI HLEG, submissions must address at least one of the following dimensions:
1. Human Agency and Oversight: Empowerment of human beings, human autonomy, and decision-making, oversight mechanisms for AI
2. Technical Robustness and Safety: Security, safety, accuracy, reliability, fall-back plans, and reproducibility
3. Privacy and Data Governance: The AI system's impact on privacy and data protection
4. Transparency: Traceability, explainability, and open communication about possible limitations of the AI system
5. Diversity, Non-discrimination, and Fairness: Unfair bias, equality, and justice in AI systems
6. Societal and Environmental Well-being: Sustainability, social and environmental impact
7. Accountability: Auditability, responsibility, and accountability for AI systems and their outcomes


---
Dr. Oleg Agafonov, Dr. Aleksandar Babic and Dr. Sharmini Alagaratnam are employed at DNV. All other Topic Editors declare no conflicts of interest.

Keywords: AI, machine learning, clinical decision support, digital health, electronic health records


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

