
About this Research Topic

Submission closed.

Intelligent systems and applications, mainly Machine Learning (ML)-based Artificial Intelligence (AI), have been employed at almost all levels and in all domains of society: from AI systems in agriculture (e.g., greenhouse optimization) to algorithm-based trading in finance, as well as in personal companions such as social robots and voice assistants (e.g., Siri, Alexa). Concerns, however, have been raised about their transparency, safety and liability, algorithmic bias and fairness, and trustworthiness. In response to these concerns, regulatory frameworks governed by AI principles have emerged at both institutional and governmental levels. In addition, the Artificial Intelligence and Machine Learning (AI/ML) communities have responded with interpretable models and Explainable AI (XAI) tools and approaches. However, these have limitations in explaining the behavior of complex AI/ML systems to technically inexperienced users.

This Research Topic focuses on how to conceptualize, design, and implement human-AI systems that can explain their decisions and actions to different types of consumers and personas. Current approaches in Machine Learning are tailored towards interpretations and explanations suited to modelers rather than to technically inexperienced users. In other human-AI interactions, for instance, Google Assistant, Alexa, social robots, web search, and recommendation systems, explanations for recommendations, search results, or actions are not even considered an integral part of the human-AI interaction mechanism. As a result, there is a need to revisit the conceptualization, design, and implementation of human-AI systems so that they provide more transparency in their reasoning and communicate it via explanation techniques adapted to different types of users. This can be better achieved by taking a cross-disciplinary approach to the concept of “explanation” and to views of “what is a good explanation”. For instance, disciplines such as the philosophy and psychology of science (e.g., theory of explanation, causal reasoning), the social sciences (e.g., social expectations), psychology (e.g., cognitive bias), and communication and media science offer an intellectual basis for what ‘explanation’ is and how people select, evaluate, and communicate explanations.

This Research Topic invites researchers and practitioners from academic institutions and private companies to submit articles on the conceptualization, design, and implementation of explainable human-AI systems from theoretical/systemic and practical standpoints.

From a theoretical and systemic point of view, these may address the following aspects:
• Theory of explanation, meaning, and semantics
• Explainable and Interpretable Algorithms (e.g., the ethical algorithm) and Data
• Languages (spoken, written, sign) for explanations
• Self-explainable and intelligible AI systems
• Explanations in Cognitive Sciences and Computing
• Brain informatics and biocomputing
• Evaluation of explanations
• Historical perspectives of explanations in Artificial Intelligence and Machine Learning as well as in other disciplines
• Explanation research in social sciences
• Psychology of explanations (e.g., cognitive and confirmation bias)
• Legal, ethical, and social consequences of automated decision-making (ADM)

From an application point of view, articles may address the following domains:
• Autonomous systems (e.g., vehicles, industrial robots)
• Social robots
• Web search and recommendation systems (e.g., Google, Netflix, Amazon)
• Machine Learning and Data Science
• Big Data and Analytics

Keywords: Artificial Intelligence, Machine Learning, Autonomous Systems, Theory of Explanation, Philosophy of Science


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

