About this Research Topic
In this Research Topic, we bring together neuroscientists and AI researchers to consider the problem of explainable AI. Although deep learning is the main pillar of current AI techniques and is ubiquitous in basic science and real-world applications, AI researchers have flagged its black-box problem: deep networks are easy to fool, and they cannot explain how they arrive at a prediction or decision. The importance of creating transparent, explainable AI should therefore not be underestimated. In particular, we are interested in leveraging insights from neurobiology that might inform novel approaches in AI, and in techniques for analyzing artificial neural networks that might be applied, in turn, to neuroscience.
As Artificial Intelligence (AI) becomes more pervasive, its failures will become more salient. This is not a paradox. For biological brains and AI alike, intelligence involves making decisions from data that are noisy and often ambiguously labeled. Input data can also be corrupted by faulty sensors. Moreover, failure is an inherent part of skill acquisition: learning requires making mistakes.
When human intelligence fails with significant consequences, we look for root causes. When AI fails, we will increasingly demand an explanation for what went wrong. European Union law already requires an explanation of any AI decision that produces adverse consequences for its citizens. This is not just a matter of finding answers; the practical stakes lie in assigning responsibility, in both the legal and economic senses. AI is unlikely to be broadly deployable in real-world contexts unless the assignment of responsibility is clearly defined. As with biological intelligence, explaining how an AI decision came about is challenging: in both cases, decision-making is distributed across a large number of nodes (or neurons), each of which contributes to the final outcome to a varying degree.
Neuroscientists have developed new techniques for examining brain networks that could be applied to artificial neural networks. Among these is the ability to label the neurons actively involved in a particular memory (a cell assembly) and then to use molecular biology to control those neurons mechanistically and thereby manipulate that specific memory. Since it is often easier to instrument and record from AI nodes than from biological neurons, such an approach might prove fruitful in AI. A number of AI researchers are likewise analyzing the layers of neural networks to compare artificial receptive fields with biological receptive fields. In this way, AI may have explanatory power for neurobiology.
We believe there are likely to be many other instances where such insights from neuroscience may help explain AI decisions, and vice versa, and that a Research Topic focused on this nexus, through original research and reviews, has the potential to catalyze progress in these areas.
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.