In recent years, the increasing use of AI-based technologies to support decisions in diverse settings has impacted healthcare, industry, and society. Although clearly effective at handling complex problems, these systems are typically opaque and may pose ethical and legal risks. In particular, they rarely provide a means to investigate and understand their behavior, the reasons underlying their decisions, or how to build trust and reduce bias. These limitations are raising concerns among users and the wider public and will certainly restrain the adoption of AI systems in several application domains. For example, in drug design and discovery, the interpretability of AI models could help to elucidate complex structure-activity relationships and to ensure that drugs designed by AI are not obtained by chance. Regarding ethics, society must confront the possibility that AI could also be used for harmful purposes, and this discussion must take place now to avoid future problems.
The main goal of this Research Topic is to attract recent studies and innovative work on explainable and trustworthy Artificial Intelligence systems for Drug Discovery and Development, thereby advancing research in this field.
The (non-exclusive) list of topics includes:
- Design of explainable AI systems for Drug Discovery and Development
- Trust and interpretability of AI systems for Drug Discovery and Development
- AI-based systems using human-interpretable features
- Evaluation of transparency and interpretability of AI systems
- Interpretable deep learning
- Design of new explanation styles
- Strategies to explain black-box decision systems
- Ethics in explainable AI
Keywords:
Drug Discovery, Drug Development, Machine Learning, Explainable AI, Transparency and Ethics in AI, Artificial Intelligence
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.