About this Research Topic
This Research Topic focuses on how to conceptualize, design, and implement human-AI systems that can explain their decisions and actions to different types of consumers and personas. Current approaches in Machine Learning are tailored towards interpretations and explanations suited to modelers rather than to technically inexperienced users. In other human-AI interactions (for instance, Google Assistant, Alexa, social robots, web search, and recommendation systems), explanations for recommendations, search results, or actions are not even considered an integral part of the human-AI interaction mechanism. As a result, there is a need to revisit the conceptualization, design, and implementation of human-AI systems so that they provide more transparency in their reasoning and communicate it via explanation techniques adapted to different types of users. This can be better achieved by taking a cross-disciplinary approach to the concept of “explanation” and to views of “what is a good explanation”. For instance, disciplines such as the philosophy and psychology of science (e.g., theory of explanation, causal reasoning), the social sciences (e.g., social expectations), psychology (e.g., cognitive bias), and communication and media science offer an intellectual basis for what an ‘explanation’ is and for how people select, evaluate, and communicate explanations.
This Research Topic invites researchers and practitioners from academic institutions and private companies to submit articles on the conceptualization, design, and implementation of explainable human-AI systems from a theoretical/systemic as well as a practical standpoint.
From a theoretical and systemic point of view, articles may address the following aspects:
• Theory of explanation, meaning, and semantics
• Explainable and Interpretable Algorithms (e.g., the ethical algorithm) and Data
• Languages (spoken, written, sign) for explanations
• Self-explainable and intelligible AI systems
• Explanations in Cognitive Sciences and Computing
• Brain informatics and biocomputing
• Evaluation of explanations
• Historical perspectives of explanations in Artificial Intelligence and Machine Learning as well as in other disciplines
• Explanation research in social sciences
• Psychology of explanations (e.g., cognitive and confirmation bias)
• Legal, ethical, and social consequences of automated decision-making (ADM)
From an application point of view, articles may address the following domains:
• Autonomous systems (e.g., vehicles, industrial robots)
• Social robots
• Web search and recommendation systems (e.g., Google, Netflix, Amazon)
• Machine Learning and Data Science
• Big Data and Analytics
Keywords: Artificial Intelligence, Machine Learning, Autonomous Systems, Theory of Explanation, Philosophy of Science
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.