
BRIEF RESEARCH REPORT article

Front. Neurosci.

Sec. Brain Imaging Methods

This article is part of the Research Topic: AI-enabled processing, integrating, and understanding neuroimages and behaviors

RS-STGCN: Regional-Synergy Spatio-Temporal Graph Convolutional Network for Emotion Recognition

Provisionally accepted
Yunqi Han1, Yifan Chen2*, Hang Ruan2, Deqing Song3, Haoxuan Xu3, Haiqi Zhu3
  • 1Universiti Putra Malaysia, Serdang, Malaysia
  • 2University of Nottingham Malaysia, Semenyih, Malaysia
  • 3Harbin Institute of Technology, Harbin, China

The final, formatted version of the article will be published soon.

ABSTRACT Decoding emotional states from electroencephalography (EEG) signals is a fundamental goal in affective neuroscience, and it requires accurately modeling the complex spatio-temporal dynamics of brain activity. However, prevailing approaches for defining brain connectivity often fail to reconcile predefined neurophysiological priors with task-specific functional dynamics. This paper presents the Regional-Synergy Spatio-Temporal Graph Convolutional Network (RS-STGCN), a novel framework designed to bridge this gap. Its core innovation is the Regional Synergy Graph Learner (RSGL), which integrates known physiological brain-region priors with a task-driven optimization process, constructing a sparse, adaptive graph that models connectivity at two distinct levels. At the intra-regional level, it establishes core information backbones within functional areas, ensuring efficient and stable local information processing. At the inter-regional level, it adaptively identifies the critical, sparse long-range connections that are essential for global emotional integration. This dual-level, dynamically learned graph then serves as the foundation for the spatio-temporal network, which captures evolving emotional features. The proposed framework achieves state-of-the-art recognition accuracies of 88.00% and 85.43% on the public SEED and SEED-IV datasets, respectively, under a strict subject-independent protocol. It also produces a neuroscientifically interpretable map of functional brain connectivity, identifying key frontal-parietal pathways consistent with established attentional networks. This work offers a powerful computational approach to investigating the dynamic network mechanisms underlying human emotion, providing new data-driven insights into functional brain organization.
The code and datasets are available at https://github.com/YUNQI1014/RS-STGCN.
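As a rough illustration of the dual-level graph construction described in the abstract, the sketch below builds an adjacency matrix with dense intra-regional backbone edges and sparse, score-driven inter-regional edges. The function name, the region partition, and the top-k edge selection are assumptions for illustration only, not the authors' implementation (see the repository above for the actual RSGL code):

```python
import numpy as np

def build_synergy_adjacency(scores, regions, k_inter=2):
    """Illustrative dual-level adjacency (NOT the authors' RSGL).

    scores  : (n, n) array of learned channel-to-channel connectivity strengths
    regions : dict mapping region name -> list of channel indices
    k_inter : number of cross-region edges kept per channel (sparsity control)
    """
    n = scores.shape[0]
    adj = np.zeros((n, n))

    # Intra-regional level: connect all channel pairs within each region,
    # forming a stable local "information backbone".
    for members in regions.values():
        for i in members:
            for j in members:
                if i != j:
                    adj[i, j] = 1.0

    # Inter-regional level: for each channel, keep only the k strongest
    # cross-region edges, yielding sparse long-range connections.
    region_of = {ch: r for r, ms in regions.items() for ch in ms}
    for i in range(n):
        cross = [(scores[i, j], j) for j in range(n)
                 if region_of[j] != region_of[i]]
        for s, j in sorted(cross, reverse=True)[:k_inter]:
            adj[i, j] = s

    return adj
```

In a real model the `scores` matrix would be learned end-to-end with the recognition objective, so the surviving inter-regional edges are the ones most useful for the emotion task.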

Keywords: emotion recognition, Electroencephalography, Spatio-temporal graph convolutional network, Dynamic Graph Construction, functional connectivity

Received: 13 Sep 2025; Accepted: 20 Nov 2025.

Copyright: © 2025 Han, Chen, Ruan, Song, Xu and Zhu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Yifan Chen, hcxyc1@nottingham.edu.my

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.