Advances in Explainable Analysis Methods for Cognitive and Computational Neuroscience


About this Research Topic

Submission deadlines

Manuscript submission deadline: 13 February 2026

This Research Topic is currently accepting articles.

Background

In cognitive and computational neuroscience, the past decade has seen a significant surge in the application of Artificial Intelligence (AI) techniques to decode neural activity recorded with neuroimaging and neurophysiology. This has produced remarkable performance gains on neuroscience tasks, including Brain-Computer Interfaces (BCIs) (e.g., motor-imagery and affective BCIs), the study of complex cognitive processes (e.g., music and speech perception, visual cognition), and clinical applications (e.g., seizure detection, sleep staging). Nonetheless, the complexity and opacity of these computational models pose a challenge: they often obscure the very neural processes they aim to explore. This lack of transparency undermines their potential for clinical translation and hampers progress in theoretical neuroscience.

This Research Topic aims to illuminate and enhance the interpretability of analysis methods in cognitive and computational neuroscience. It addresses critical questions about which underlying neural activities AI models actually decode. The objective is to examine methodologies that improve model transparency and reliability, thereby strengthening confidence in AI-driven diagnostics and neurotechnologies. This exploration also probes the nuances of individual neural variability and seeks to reveal latent processes that traditional analyses might overlook.

To gather further insights within the boundaries of explainable analysis methods in cognitive and computational neuroscience, we welcome articles addressing, but not limited to, the following themes:

- Implementation of advanced eXplainable AI (XAI) frameworks in neuroscience tasks, including ante-hoc and post-hoc XAI methods.

- Development of parameter sensitivity and feature importance analyses in neuroscience.

- Advances in statistical attribution methods and their applications in neuroscience.

- Exploration of causal inference modeling and its integration with neural systems.

- Use of innovative, neurobiologically plausible models.

In addition, studies incorporating EEG, MRI, eye-tracking, or multimodal neuroimaging paradigms, with applications to neuroscience tasks, are particularly encouraged. We invite submissions that showcase these techniques and, through more interpretable methods, deepen our understanding of neural mechanisms and enable more trustworthy technology applications.
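For concreteness, the brief sketch below illustrates one post-hoc analysis of the kind named in the themes above: permutation feature importance for a classifier trained on simulated EEG band-power features. It is a minimal sketch under illustrative assumptions; the data are synthetic, and the feature layout and classifier choice (scikit-learn's LogisticRegression with permutation_importance) stand in for whatever pipeline a given study would use.

```python
# Minimal sketch of a post-hoc feature-importance analysis.
# All shapes, labels, and the classifier are illustrative assumptions,
# not a prescribed pipeline for this Research Topic.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for trial-wise EEG band-power features:
# 200 trials x 8 features (e.g., alpha/beta power at four channels).
n_trials, n_features = 200, 8
X = rng.normal(size=(n_trials, n_features))
# Labels depend on features 0 and 3, so the attribution has a known target.
y = (X[:, 0] - 0.8 * X[:, 3] + 0.5 * rng.normal(size=n_trials) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# Post-hoc attribution: mean drop in test accuracy when each feature is
# independently shuffled, averaged over repeats.
result = permutation_importance(clf, X_test, y_test,
                                n_repeats=50, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:+.3f} "
          f"(+/- {result.importances_std[i]:.3f})")
```

Permutation importance is model-agnostic, which is part of what makes post-hoc methods attractive here: the same procedure applies unchanged to a deep neural decoder.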

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Case Report
  • Clinical Trial
  • Community Case Study
  • Conceptual Analysis
  • Curriculum, Instruction, and Pedagogy
  • Data Report
  • Editorial
  • FAIR² Data

Articles accepted for publication by our external editors following rigorous peer review incur a publishing fee, charged to authors, institutions, or funders.

Keywords: Advanced eXplainable AI, Causal inference modeling, Cognitive neuroscience

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Participating journals

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.

Impact

  • 10k Topic views
  • 8,282 Article views
  • 326 Article downloads