AUTHOR=Zhang Yaqing , Chen Jinling , Tan Jen Hong , Chen Yuxuan , Chen Yunyi , Li Dihan , Yang Lei , Su Jian , Huang Xin , Che Wenliang TITLE=An Investigation of Deep Learning Models for EEG-Based Emotion Recognition JOURNAL=Frontiers in Neuroscience VOLUME=Volume 14 - 2020 YEAR=2020 URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2020.622759 DOI=10.3389/fnins.2020.622759 ISSN=1662-453X ABSTRACT=Emotion is a reaction of the human brain to objective things. In real life, human emotions are complex and changeable, so research on emotion recognition is of great significance for real-life applications. Recently, many deep learning and machine learning methods have been widely applied to emotion recognition based on EEG signals. However, traditional machine learning methods have a major disadvantage: the feature extraction process is usually cumbersome and relies heavily on human experts. End-to-end deep learning methods have emerged as an effective way to address this disadvantage with the help of raw signal features and time-frequency spectra. Here, we investigated the application of several deep learning models to EEG-based emotion recognition, including deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid model of CNN and LSTM (CNN-LSTM). The experiments were carried out on the well-known DEAP dataset. Experimental results show that the CNN and CNN-LSTM models achieve high classification performance in EEG-based emotion recognition, with accuracies on raw data of 90.12% and 94.17%, respectively. The DNN model is not as accurate as the other models, but its training speed is fast. The LSTM model is not as stable as the CNN and CNN-LSTM models; moreover, with the same number of parameters, its training is much slower and it is difficult to reach convergence.
Additional comparison experiments on hyperparameters, including epoch count, learning rate, and dropout probability, are also conducted in the paper. The comparison results show that the DNN model can converge to the optimum with fewer epochs and a higher learning rate, whereas the CNN model needs more epochs to learn. As for dropout probability, dropping about 50% of the units each time is appropriate.
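The hybrid CNN-LSTM idea described in the abstract can be sketched as follows: convolutional layers extract local features from the raw multi-channel EEG signal, and an LSTM then models temporal dependencies across the resulting feature sequence. This is only an illustrative sketch in PyTorch; the layer sizes, kernel widths, segment length, and 32-channel DEAP-style input shape are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM hybrid for EEG emotion classification.

    Hyperparameters here are assumptions for demonstration, not the
    configuration reported in the paper.
    """
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        # 1-D convolutions over the time axis extract local features
        # from the raw EEG channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # The LSTM consumes the CNN feature maps as a time sequence.
        self.lstm = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
        self.dropout = nn.Dropout(0.5)  # ~0.5, in line with the abstract's finding
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        f = self.cnn(x)              # (batch, 128, time // 16)
        f = f.permute(0, 2, 1)       # (batch, time // 16, 128)
        _, (h, _) = self.lstm(f)     # h[-1]: last hidden state, (batch, 64)
        return self.fc(self.dropout(h[-1]))

# DEAP-like input: 32 channels, 1 s at 128 Hz (assumed segment length).
x = torch.randn(8, 32, 128)
logits = CNNLSTM()(x)
print(logits.shape)  # torch.Size([8, 2])
```

A DNN or plain CNN baseline drops the LSTM and feeds the (flattened) features directly to the classifier head, which matches the abstract's observation that such models train faster but, in the CNN-only case, need more epochs to reach comparable accuracy.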