Research Topic

Explainable Artificial Intelligence (XAI) in Systems Neuroscience

About this Research Topic

In recent years, many studies have developed original and reliable systems, based on complex-network and machine-learning modeling approaches, to support the early diagnosis of brain diseases. Biomarkers are essential for identifying and studying the mechanisms underlying the development, organization, and evolution of brain diseases. It is difficult, however, to compare and merge these approaches, as different data sets and methodologies were used for their evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data and, thus, how they would perform in clinical practice, where there is no opportunity to adapt the algorithm to the great heterogeneity characterizing current neuroimaging data.

Machine learning (ML) and deep learning offer predictive models that automatically extract complex patterns among multiple elements of a large dataset in a data-driven manner, with relatively few assumptions, and employ these patterns for classification or prediction; ML techniques could therefore address some issues related to diagnostic and clinical purposes. Although these algorithms are effective at classifying pathologies, they are often highly complex, making them difficult to use in clinical practice. Many machine learning models, such as boosted trees, support vector machines, neural networks, and deep learning methods, have been extensively applied to analyze high-dimensional data for neuroimaging biomarker characterization. Usually, however, higher complexity brings higher accuracy at the expense of interpretability. In practice, model interpretability is as important as accuracy in many critical applications, such as clinical decision-making, where understanding how the model makes its predictions is key to physicians trusting the model and using its results.
Moreover, explainable models could improve the integration between computational modeling of the brain and basic neuroscience. Indeed, reliable and understandable features capturing the essential mechanisms of the biological system at multiple spatiotemporal scales could guide research towards a better understanding of the principles that govern the development, structure, physiology, and cognitive abilities of the nervous system.

Explainable artificial intelligence (XAI) aims to create a suite of techniques that produce more explainable models while maintaining high accuracy, in order to increase confidence in predictive models, allow experts to identify bias and errors so that models can be corrected, and extract the associations between predictors and predictions. Extensive research is being done towards better understanding existing models, with the interpretability of deep networks in the limelight. White-box approaches extract model-specific explanations by leveraging the model's internal structure, while model-agnostic methods explain predictions by probing the model as a black box.
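As a concrete illustration of a model-agnostic technique of the kind mentioned above, the sketch below computes permutation feature importance: how much a classifier's accuracy drops when each input feature is shuffled. The toy data, the threshold "model", and all function names here are illustrative assumptions, not methods prescribed by this Research Topic.

```python
import random

random.seed(0)

# Toy data: the label depends only on feature 0; feature 1 is noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [1 if row[0] > 0 else 0 for row in X]

def model_predict(rows):
    """A fixed 'black-box' classifier (a simple threshold rule
    standing in for any trained model)."""
    return [1 if row[0] > 0 else 0 for row in rows]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def permutation_importance(predict, X, y, n_repeats=10):
    """Mean accuracy drop when each feature column is shuffled."""
    base = accuracy(predict(X), y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            random.shuffle(col)
            Xp = [row[:] for row in X]        # copy, then corrupt column j
            for i, v in enumerate(col):
                Xp[i][j] = v
            drops.append(base - accuracy(predict(Xp), y))
        importances.append(sum(drops) / n_repeats)
    return importances

imp = permutation_importance(model_predict, X, y)
# Shuffling feature 0 destroys the signal, so its importance is large;
# feature 1 never affects predictions, so its importance is exactly 0.
```

Because the method only queries `predict`, it applies unchanged to any classifier, which is precisely what makes it model-agnostic.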

The purpose of this Research Topic is to bring together the latest discoveries in XAI with a focused application to the neuroscience field.

The proposed theme can be approached from two perspectives:

(a) the use of artificial intelligence (AI) systems to promote and enhance further knowledge in neuroscientific research
(b) neuroscience-inspired AI models

We would like to encourage multidisciplinary researchers to provide state-of-the-art techniques for the extraction of reliable markers from neuroimaging for the classification of cognitive states and brain pathologies. We encourage submissions that provide novel research results that clearly outline the potential effects of XAI techniques in different diagnostic scenarios or offer an overview of original empirical studies, drawing together the current state of knowledge and future directions.


Keywords: Explainable Machine Learning, Deep Learning, Brain Connectivity, Neuroimaging, CAD Systems, Interpretable Biomarkers


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

About Frontiers Research Topics

With their unique mix of contributions, from Original Research to Review Articles, Research Topics unite the most influential researchers, the latest key findings, and historical advances in a hot research area. Find out more on how to host your own Frontiers Research Topic or contribute to one as an author.

Topic Editors


Submission Deadlines

Manuscript submission: 11 September 2020

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:


