In the published article, there was an error in Figure 1, “Multimodal emotion recognition framework. (A) Dual branch feature extraction module. (B) Multi-scale feature fusion module.,” as published. Several labels in the diagram were misspelled: every instance of “ConvBlovk” should read “ConvBlock,” and the four instances of “EEGConvBlovk” should read “EYEConvBlock.” The correct legend appears below.
Figure 1

Multimodal emotion recognition framework. (A) Dual branch feature extraction module. (B) Multi-scale feature fusion module.
The authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated.
Statements
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Keywords
multimodal emotion recognition, electroencephalogram (EEG), eye movement, feature fusion, multi-scale, Convolutional Neural Networks (CNN)
Citation
Fu B, Gu C, Fu M, Xia Y and Liu Y (2023) Corrigendum: A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals. Front. Neurosci. 17:1287377. doi: 10.3389/fnins.2023.1287377
Received
01 September 2023
Accepted
13 September 2023
Published
25 September 2023
Volume
17 - 2023
Edited and reviewed by
Jiahui Pan, South China Normal University, China
Copyright
© 2023 Fu, Gu, Fu, Xia and Liu.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Yinhua Liu, liuyinhua@qdu.edu.cn