Interpretable Pretrained and Multimodal Models for EEG and Physiological Time Series


About this Research Topic

Submission deadlines

  • Manuscript submission deadline: 29 March 2026
  • This Research Topic is currently accepting articles.

Background

In computational neuroscience and affective computing, electroencephalography (EEG) and related physiological signals such as ECG, EDA, and EMG offer substantial insight into cognitive and emotional states, human behavior, and brain-computer interaction. These signals are typically high-dimensional, noisy, and subject-specific, posing substantial challenges for modeling and generalization. Meanwhile, the rise of large pretrained models in natural language processing (NLP) has inspired growing interest in applying self-supervised and foundation-model approaches to neurophysiological time series. The ability to derive transferable, stable representations from such complex data is opening new doors in neuroscience, health monitoring, and affective computing. Beyond predictive performance, we place a strong emphasis on visualization and interpretability, relating model behavior and learned features to underlying neural and physiological mechanisms.

This Research Topic explores advances in pretrained and multimodal models tailored to EEG and physiological time series. Our goal is to move beyond the limitations of task-specific architectures by adopting methods that learn generalizable representations through pretraining, contrastive learning, and generative modeling. Recent strides in model design, including Transformer-based models, diffusion models, and temporal contrastive learning, hold promise for efficiently capturing long-range dependencies and extracting high-level semantic features from biosignals. Concurrently, there is growing interest in cross-modal integration of EEG with other modalities such as language, text, and speech. Examples include modeling EEG responses to natural language stimuli, enhancing emotion recognition through semantic context, and anchoring neural activity in language representations derived from large language models (LLMs). By foregrounding interpretability while addressing the heterogeneity and complexity of these data sources, we aim to develop reusable neural models and multimodal alignment strategies that generalize across tasks and subjects and yield neuroscientific insight connecting model decisions to brain and physiological mechanisms.

To advance pretrained, self-supervised, and cross-modal representation learning for EEG and related physiological signals, we welcome articles addressing, but not limited to, the following themes:

- Interpretability and neuroscientific insight
- Visualization of learned representations and dynamics
- Pretrained and self-supervised learning models for EEG and physiological time series
- Transformer-based architectures for temporal and multimodal representation learning
- Diffusion models for time series generation, reconstruction, or imputation
- Contrastive learning and cross-modal representation alignment (e.g., EEG–ECG, EEG–text, EEG–audio/video)
- EEG and language-related time series modeling, including semantic alignment and language-conditioned decoding
- Integration of large language models (LLMs) with EEG-centric cognition or emotion analysis
- Context-aware EEG decoding using naturalistic language, speech, or audiovisual stimuli
- Multimodal fusion and cross-attention mechanisms for affective computing and cognitive state understanding
- Transfer learning and domain adaptation across subjects, tasks, devices, and modalities
- Benchmarks, datasets, interpretability, and robustness of large models in neurophysiological and language domains

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Case Report
  • Clinical Trial
  • Community Case Study
  • Curriculum, Instruction, and Pedagogy
  • Data Report
  • Editorial
  • FAIR² Data
  • General Commentary

Articles accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.

Keywords: EEG, self-supervised learning, multimodal representation, time series modeling, language integration

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.
