MINI REVIEW article

Front. Syst. Neurosci., 23 November 2021

Volume 15 - 2021 | https://doi.org/10.3389/fnsys.2021.729707

Application of Electroencephalography-Based Machine Learning in Emotion Recognition: A Review

  • College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China


Abstract

Emotion recognition has become increasingly prominent in the medical field and in human-computer interaction. When a person's emotions change under external stimuli, various physiological signals of the body fluctuate. Electroencephalography (EEG) is closely related to brain activity, making it possible to infer a subject's emotional changes from EEG signals. Meanwhile, machine learning algorithms, which excel at extracting data features from a statistical perspective and making judgments on them, have developed by leaps and bounds. Using machine learning to extract emotion-related feature vectors from EEG signals and constructing a classifier that separates emotions into discrete states therefore has broad prospects. Following the typical processing pipeline, this paper introduces the acquisition, preprocessing, feature extraction, and classification of EEG signals for EEG-based machine learning in emotion recognition, and it may help beginners in this field understand its current state of development. The articles we selected were all retrieved from the Web of Science platform, and the publication dates of most of them are concentrated in 2016–2021.

Introduction

Emotions are changes in people's psychological and physiological states in response to external stimuli such as sounds, images, smells, and temperature, and they play a vital role in mental and physical health, decision-making, and social communication. To make emotion recognition tractable, Ekman regarded emotions as six discrete and measurable states related to physiological information, namely happiness, sadness, anger, fear, surprise, and disgust (Ekman, 1999; Gilda et al., 2018). Subsequent studies on emotion recognition mostly followed this classification, but some researchers added new emotional states, including neutral, aroused, and relaxed (Bong et al., 2012; Selvaraj et al., 2013; Walter et al., 2014; Goshvarpour et al., 2017; Minhad et al., 2017; Wei et al., 2018), and others proposed new classification standards, such as relaxation, mental stress, physical load, and mental stress combined with physical load (Mikuckas et al., 2014). Treating emotions as discrete states means that emotion recognition can be cast directly as a classification problem in machine learning. The overall process is as follows: a subject's facial expressions, speech sounds, body movements (Kessous et al., 2010), electromyography (EMG), respiration (RSP) (Wei, 2013), galvanic skin response (GSR) (Tarnowski et al., 2018), blood volume pulsation (BVP), skin temperature (SKT) (Gouizi et al., 2011), photoplethysmography (PPG) (Lee et al., 2019), electrocardiogram (ECG) (Hsu et al., 2020), heart rate (HR) (Wen et al., 2014), and electroencephalography (EEG) exhibit corresponding changes when the subject is stimulated by external audio, visual, audio-visual, and other stimuli. Beyond these external factors, autonomic nervous system (ANS) activity is viewed as a major component of the emotion response (Kreibig, 2010). Ekman (1992) analyzed six basic emotions by recording six ANS parameters, and Levenson (2014) discussed how different emotions activate distinct patterns of ANS response.

The physiological information above can be collected with specific devices, features related to emotional states can then be extracted after preprocessing the collected data, and finally emotion recognition is realized by classifying these features. Compared with external bodily changes such as facial expressions and speech, internal physiological signals such as EMG, SKT, ANS activity, and EEG reflect the subject's emotional changes more genuinely because they cannot be deliberately concealed. Among the many physiological signals, EEG has attracted a vast number of studies, as it contains relatively rich information for recognizing emotions through machine learning algorithms. Aiming to classify the emotional expressions of physically disabled people and children with autism, Hassouneh et al. (2020) achieved a maximum emotion recognition rate of 87.25% by applying a long short-term memory (LSTM) classifier to EEG signals. Aiming to distinguish Parkinson's disease (PD) patients from healthy controls, Yuvaraj et al. (2014) presented a computational framework using emotional information from the brain's electrical activity. Facing the situation that the diagnosis of depression depends almost exclusively on doctor-patient communication and scale analysis, with obvious disadvantages such as patient denial, poor sensitivity, subjective bias, and inaccuracy, Li et al. (2019) worked on automatic and accurate depression recognition using transformed EEG features and machine learning methods.

This paper summarizes the development of EEG-based machine learning methods for emotion recognition from four aspects: acquisition, preprocessing, feature extraction, and feature classification. It should help beginners who rely on EEG-based machine learning algorithms for emotion recognition to understand the current development of the field and then find their own breakthrough points in it.

Acquisition of Electroencephalography Signals for Emotion Recognition

There are generally two ways to acquire EEG signals related to emotions. One is to induce emotional changes in the subject by playing audio, video, or other materials and to record the EEG signal through a device worn by the subject. Yuvaraj et al. (2014) obtained EEG data with the Emotiv EPOC 14-channel wireless recording headset (Emotiv Systems, Inc., San Francisco, CA) at a 128 Hz sampling frequency per channel from 20 PD patients and 20 healthy controls, inducing the six basic emotions of happiness, sadness, fear, anger, surprise, and disgust with multimodal (audio and visual) stimuli. Bhatti et al. (2016) used music tracks as stimuli to evoke different emotions and created a new dataset of EEG responses to audio music tracks using a single-channel EEG headset (NeuroSky) with a 512 Hz sampling rate. Chai et al. (2016) recorded EEG signals in response to audio-visual stimuli using a BioSemi ActiveTwo system, with the signals digitized by a 24-bit analog-to-digital converter at a 512 Hz sampling rate. Chen et al. (2018) used a 16-lead Emotiv brainwave instrument (14 EEG acquisition channels and two reference electrodes) at a frequency of 128 Hz. Later, Seo et al. (2019) used video stimuli to evoke boredom and non-boredom and collected EEG data with the Muse EEG headband from 28 Korean adult participants, and Li et al. (2019) conducted an experiment based on emotional face stimuli and recorded 28 subjects' EEG data from a 128-channel HydroCel Geodesic Sensor Net with Net Station software. In Hou et al. (2020), the Cerebus system (Blackrock Microsystems, United States) was used to collect EEG data at a 1 kHz sampling rate through a 32-channel EEG cap. In the same year, Maeng et al. (2020) introduced a new multimodal dataset called MERTI-Apps, based on Asian physiological signals recorded with Biopac's M150 equipment, and Gupta et al. (2020) used an HTC Vive VR display to let participants interact with immersive 360° videos in VR, collecting EEG signals with a 16-channel OpenBCI EEG cap at a 125 Hz sampling frequency. Later, Keelawat et al. (2021) acquired EEG data with a Waveguard EEG cap at a 250 Hz sampling rate from 12 Osaka University students to whom song samples were presented. Moreover, to collect EEG signals effectively, the electrode placement in many studies follows the international 10–20 system (Chai et al., 2016; Seo et al., 2019; Hou et al., 2020; Huang, 2021).

The other way is to use existing, well-known EEG emotion databases, including DEAP (Izquierdo-Reyes et al., 2018), MAHNOB-HCI (Izquierdo-Reyes et al., 2018), GAMEEMO (Özerdem and Polat, 2017), SEED (Lu et al., 2020), LUMED (Cimtay and Ekmekcioglu, 2020), AMIGOS (Galvão et al., 2021), and DREAMER (Galvão et al., 2021). After the raw emotion-related EEG signal is obtained, the next step is to preprocess it to improve data quality.

Preprocessing Method of Electroencephalography Signal

Raw EEG data collected through EEG equipment are mixed with electronic equipment noise as well as potential artifacts from electrooculography (EOG), electromyography (EMG), respiration, and body movements. Therefore, a series of preprocessing operations is usually performed before feature extraction to improve the signal-to-noise ratio.

Bandpass filtering is used by most research groups as a simple and effective noise-removal method. However, since there is no precise definition of the effective frequency band of the EEG signal, the bandpass filters used in different studies have different cutoff frequencies. Setting the low cutoff frequency at about 4 Hz (Özerdem and Polat, 2017; Chao et al., 2018; Pane et al., 2019; Yin et al., 2020) serves to remove electrooculography (EOG) artifacts (0–4 Hz) and potential respiration and body-movement artifacts within 0–3 Hz. Other studies set the low cutoff frequency at about 1 Hz (Yuvaraj et al., 2014; Bhatti et al., 2016; Liang et al., 2019; Hou et al., 2020; Keelawat et al., 2021) to remove the baseline drift (DC component) of the EEG signal and the 1/f noise introduced by the acquisition equipment. For the high cutoff frequency, most researchers set it to about 45 Hz (Kessous et al., 2010; Yuvaraj et al., 2014; Liang et al., 2019; Yin et al., 2020) to remove high-frequency artifact noise, while some recent studies (Hou et al., 2020; Lu et al., 2020; Rahman et al., 2020) set it around 70–75 Hz to preserve more emotion-related EEG features and thereby improve recognition accuracy.
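As a minimal sketch (not drawn from any of the cited studies), a zero-phase Butterworth bandpass with a 1–45 Hz passband can be written with SciPy; the cutoffs, filter order, and 128 Hz sampling rate are illustrative choices:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_eeg(x, fs, low=1.0, high=45.0, order=4):
    """Zero-phase Butterworth bandpass (second-order sections for stability)."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Synthetic 2 s channel at 128 Hz: a 10 Hz "alpha" wave plus
# slow baseline drift (0.2 Hz) and 50 Hz line-noise contamination.
fs = 128
t = np.arange(0, 2, 1 / fs)
raw = (np.sin(2 * np.pi * 10 * t)
       + 2.0 * np.sin(2 * np.pi * 0.2 * t)
       + 0.5 * np.sin(2 * np.pi * 50 * t))
clean = bandpass_eeg(raw, fs)
```

Applying the filter forward and backward (`sosfiltfilt`) avoids the phase shift that a single pass would introduce, which matters when EEG segments are later aligned to stimulus timestamps.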

In addition to bandpass filtering for noise suppression, scholars have adopted many other methods for preprocessing EEG signals. For example, in the work of Aguiñaga and Ramirez (2018), the Laplacian filter described by Murugappan (2012) was implemented to mitigate the natural contamination of EEG signals with noise and artifacts, and a blind source separation (BSS) algorithm was then applied to remove redundancy between active elements while preserving the information of non-active elements. In the study of Chen et al. (2018), independent component analysis (ICA) was used to suppress noise. Furthermore, the study conducted in Cimtay and Ekmekcioglu (2020) compared three types of smoothing filters (smooth filter, median filter, and Savitzky-Golay) on EEG data and concluded that the most useful was the classical Savitzky-Golay filter, which smoothed the data without distorting the shape of the waves. The main contribution of Alhalaseh and Alasasfeh (2020) lay in using empirical mode decomposition/intrinsic mode functions (EMD/IMF) and variational mode decomposition (VMD) for signal processing. Besides, Keelawat et al. (2021) used EEGLAB, an open-source MATLAB environment for EEG processing, to remove contaminating artifacts based on ICA.
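The Savitzky-Golay smoothing mentioned above fits a low-order polynomial in a sliding window, which is why it preserves wave shape better than a plain moving average. A minimal SciPy sketch (the window length and polynomial order here are our illustrative choices, not those of Cimtay and Ekmekcioglu):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
fs = 128
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t)               # slow underlying wave
noisy = signal + 0.3 * rng.standard_normal(t.size)

# window_length must be odd and polyorder < window_length
smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
```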

In addition to removing noise and artifacts, other tasks are performed during preprocessing. Since the effective frequency band of the EEG signal does not exceed 75 Hz while the sampling rate of some acquisition devices is as high as 1,000 Hz, far exceeding what is required, down-sampling is usually applied to reduce the amount of data and speed up algorithm execution (Chao et al., 2018; Rahman et al., 2020). Besides, to correlate EEG data easily with brain events, the continuously recorded EEG data are usually segmented into time windows of different lengths according to event timestamps (Cimtay and Ekmekcioglu, 2020). Moreover, considering that the EEG signal is composed of different rhythmic components, including the delta rhythm (<3 Hz), theta rhythm (4–7 Hz), alpha rhythm (8–12 Hz), beta rhythm (13–30 Hz), and gamma rhythm (>31 Hz), some studies used bandpass filters to separate these rhythms during preprocessing to facilitate later feature extraction (Yulita et al., 2019).
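The rhythm separation and windowed segmentation steps can be sketched as follows; the band edges follow the rhythm definitions above, while the 1 s window with 0.5 s step and the gamma upper edge of 45 Hz are arbitrary illustrative values:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = {"delta": (0.5, 3), "theta": (4, 7), "alpha": (8, 12),
         "beta": (13, 30), "gamma": (31, 45)}

def band_decompose(x, fs):
    """Split one EEG channel into the classical rhythm bands."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, x)
    return out

def segment(x, fs, win_s=1.0, step_s=0.5):
    """Cut a continuous recording into (possibly overlapping) windows."""
    win, step = int(win_s * fs), int(step_s * fs)
    return np.stack([x[i:i + win] for i in range(0, x.size - win + 1, step)])

fs = 128
x = np.random.default_rng(1).standard_normal(10 * fs)  # 10 s of fake EEG
rhythms = band_decompose(x, fs)
epochs = segment(x, fs)
```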

Feature Extraction of Emotion-Related Electroencephalography Signals

Feature extraction is the process of extracting characteristic features from the EEG signals. These distinctive features describe each emotion in a unique way, and converting the complex input signal into a crisp dataset also reduces the complexity of emotion recognition (Hemanth et al., 2018). Features from the time domain, frequency domain, and wavelet domain are usually extracted. Frequency-domain features include power spectral density (PSD); time-domain features include the latency-to-amplitude ratio (LAR), peak-to-peak value, kurtosis, mean value, peak-to-peak time window, and signal power; and wavelet-domain features include entropy and energy (Bhatti et al., 2016). Besides, fractal dimension and statistical features were used by Nawaz et al. (2020), and several non-linear features such as correlation dimension (CD), approximate entropy (AP), largest Lyapunov exponent (LLE), higher-order spectra (HOS), and the Hurst exponent (HE) have been widely used to characterize emotional EEG signals (Balli and Palaniappan, 2010; Chua et al., 2011).
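A hypothetical feature vector combining a few of the features named above (Welch band power plus simple time-domain statistics) might look like the sketch below; the selection and ordering of features is ours for illustration, not that of any cited study:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis

def eeg_features(x, fs):
    """Small illustrative feature vector for one EEG segment."""
    freqs, psd = welch(x, fs, nperseg=fs)           # 1 Hz resolution
    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs <= hi)
        return float(psd[mask].sum())               # summed PSD in the band
    return np.array([
        band_power(8, 12),        # alpha band power
        band_power(13, 30),       # beta band power
        x.mean(),                 # mean value
        np.ptp(x),                # peak-to-peak amplitude
        kurtosis(x),              # kurtosis
        np.mean(x ** 2),          # signal power
    ])

fs = 128
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)    # a pure 10 Hz "alpha" oscillation
f = eeg_features(x, fs)
```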

To extract emotion-related features from EEG signals, a large number of feature extraction algorithms have emerged. Chai et al. (2016) proposed a novel feature extraction method called the subspace alignment auto-encoder (SAAE), which combined an auto-encoder network and a subspace alignment solution in a unified framework, taking advantage of both a non-linear transformation and a consistency constraint. Özerdem and Polat (2017) used the discrete wavelet transform (DWT) for feature extraction from EEG signals. Later, Li et al. (2018) organized differential entropy features from different channels as two-dimensional maps to train a hierarchical convolutional neural network (HCNN). In the same year, Izquierdo-Reyes et al. (2018) applied the Welch algorithm with a Hanning window of 128 samples to estimate the PSD of each EEG channel. Soroush et al. (2018) extracted non-linear features from EEG data and suggested using feature variability over time intervals instead of absolute feature values, with discriminant features selected by a genetic algorithm (GA). Chen et al. (2018) leveraged EMD to obtain several intrinsic mode functions (IMFs) and used the approximate entropy (AE) of the first four IMFs as features for learning and recognition. Later, in Chao et al. (2019), the frequency-domain, frequency-band, and spatial characteristics of multichannel EEG signals were combined to construct a multiband feature matrix (MFM). Considering that the rhythmic patterns of an EEG series can differ between subjects and between different mental states of the same subject, Liang et al. (2019) used a segment-based feature extraction method to obtain EEG features in three domains (frequency, time, and wavelet). In Li et al. (2019), PSD and activity were extracted as original features using an autoregressive model and the Hjorth algorithm with different time windows, and Qing et al. (2019) used an autoencoder to further process differential features and improve their discriminative power. Besides, Yulita et al. (2019) used principal component analysis (PCA) to transform the largely correlated original variables into a smaller set of mutually independent ones. Later, Alhalaseh and Alasasfeh (2020) used entropy and Higuchi's fractal dimension (HFD) in the feature extraction stage, and Salankar et al. (2021) first applied EMD to decompose the signals into several oscillatory IMFs and then extracted features, including the area, mean, and central tendency measure of the elliptical region, from second-order difference plots (SODPs). In the same year, Wang et al. (2021) proposed an emotion quantification analysis (EQA) method based on an emotional similarity quantification (ESQ) algorithm, in which each emotion is mapped into the valence-arousal domain according to emotional similarity matrixes.
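For the differential entropy features mentioned above, a commonly used closed form (under the assumption that a band-filtered EEG segment is approximately Gaussian) is DE = ½ ln(2πeσ²). A sketch, with synthetic data standing in for a band-filtered segment:

```python
import numpy as np

def differential_entropy(x):
    """Differential entropy of a segment under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(0)
seg = rng.normal(0.0, 2.0, 4096)   # pretend band-filtered segment, sigma = 2
de = differential_entropy(seg)
# analytic value for sigma = 2: 0.5 * ln(2 * pi * e * 4) ~ 2.11
```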

After feature extraction, some studies also reduce the feature space with feature selection (FS) techniques, both to avoid the over-specification that comes with a large number of extracted features and to make online feature extraction feasible. In the study of Jirayucharoensak et al. (2014), the input features of the deep learning network (DLN) were the power spectral densities of 32-channel EEG signals from 32 subjects; to alleviate overfitting, PCA was applied to extract the most important components of the initial input features. Later, Rahman et al. (2020) implemented spatial PCA to reduce signal dimensionality and selected suitable features based on t-statistical inference. Zhang et al. (2020) proposed a shared-subspace feature elimination (SSFE) approach to identify EEG variables with common characteristics across multiple individuals, and Yin et al. (2020) proposed a new locally robust feature selection (LRFS) method to determine generalizable EEG features within several subsets of accessible subjects. Besides, Maeng et al. (2020) used a GA to determine the active feature group among the extracted features, and other FS algorithms, including the correlation ratio (CR), mutual information (MI), and random forest (RF), were used in Suzuki et al. (2021). After extracting the emotion-related feature vectors from the EEG signal, the next important step is to classify them to achieve emotion recognition.
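A minimal PCA-based feature-space reduction in scikit-learn, in the spirit of the studies above; the synthetic correlated feature matrix and the 99% explained-variance threshold are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 trials x 40 extracted "features" with strong column correlations:
# 30 of the columns are noisy linear mixtures of the first 10.
base = rng.standard_normal((200, 10))
features = np.hstack([base, base @ rng.standard_normal((10, 30)) * 0.5])
features += 0.01 * rng.standard_normal(features.shape)

pca = PCA(n_components=0.99)       # keep components explaining 99% of variance
reduced = pca.fit_transform(features)
```

Because the 40 columns effectively span a 10-dimensional subspace, PCA collapses the feature matrix to roughly a quarter of its width while losing almost no variance.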

Classification of Emotion-Related Electroencephalography Signals

The concept of classification is to construct a classifier based on existing data. A classifier is a general term for methods that assign samples to classes; for emotion recognition using EEG signals, it is the crucial component that takes the features extracted in the process above as input and outputs the recognized emotional state.

Many classifiers have been applied to emotion recognition, including the support vector machine (SVM), multilayer perceptron (MLP), circular back propagation neural network (CBPN), deep Kohonen neural network (DKNN), deep belief networks with glia chains (DBN-GCs), artificial neural network (ANN), linear discriminant analysis (LDA), capsule network (CapsNet), convolutional neural network (CNN), and multi-scale frequency bands ensemble learning (MSFBEL). Their emotion recognition accuracies are listed in Table 1.

TABLE 1

Classification item | Author | Model | Accuracy (%)
Arousal and valence | Jirayucharoensak et al., 2014 | DLN | Arousal: 46.03; Valence: 49.52
Arousal and valence | Choi and Kim, 2018 | LSTM | Arousal: 74.65; Valence: 78.00
Arousal and valence | Chao et al., 2018 | DBN-GCs | Arousal: 75.92; Valence: 76.83
Arousal and valence | Maeng et al., 2020 | GA-LSTM | Arousal: 94.8; Valence: 91.3
Arousal and valence | Keelawat et al., 2021 | CNN | Arousal: 56.85; Valence: 73.34
Arousal, valence, and dominance | Chao et al., 2019 | CapsNet | Arousal: 68.285; Valence: 66.73; Dominance: 67.25
Positive and negative | Özerdem and Polat, 2017 | MLP | 77.14
Positive and negative | Lu et al., 2020 | SVM | 85.11
Positive, negative, and neutral | Li et al., 2018 | HCNN | 97
Positive, negative, and neutral | Rahman et al., 2020 | ANN | 86.57 ± 4.08
Positive, negative, and neutral | Wang et al., 2020 | CNN | 90.59
Boredom and non-boredom | Seo et al., 2019 | KNN | 86.73
Pleasant and unpleasant | Gupta et al., 2020 | KNN, SVM | KNN: 96.5; SVM: 83.7
Happy, calm, sad, and fear | Chen et al., 2018 | DBN-SVM | 87.32
Happy, sad, angry, and astounded | Li et al., 2019 | SVM | 89.02
Happy, angry, sad, and relaxed | Pane et al., 2019 | RF | 75.6
Happy, angry, sad, and relaxed | Kessous et al., 2010 | DKNN, CBPN | 95–98
Sad, disgust, angry, and surprise | Sakalle et al., 2021 | LSTM | 94.12
Happy, fear, sad, and neutral | Galvão et al., 2021 | MSFBEL | 74.22
Happy, sad, surprise, fear, disgust, and angry | Hassouneh et al., 2020 | LSTM | 87.25

Classifiers and their performance.
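As a generic sketch of the classification stage (not a reproduction of any study in Table 1), an SVM pipeline can be trained on feature vectors labeled with discrete emotion classes; the two-class synthetic features below stand in for extracted EEG features:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic 6-dimensional "feature vectors" for two emotion classes
# whose means are shifted apart (e.g. 0 = negative, 1 = positive).
X = np.vstack([rng.normal(0.0, 1.0, (100, 6)),
               rng.normal(1.5, 1.0, (100, 6))])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Standardizing features before an RBF-kernel SVM is standard practice,
# since the kernel is sensitive to feature scales.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```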

Liu et al. (2020) combined a CNN, a sparse autoencoder (SAE), and a DNN and trained them separately; the proposed network proved efficient, converging faster than a conventional CNN, and its best recognition accuracy on the SEED dataset reached 96.77%. Topic and Russo (2021) proposed a new model for emotion recognition based on topographic (TOPO-FM) and holographic (HOLO-FM) representations of EEG signal characteristics, and their experimental results showed that the proposed methods improve the emotion recognition rate on datasets of different sizes.

Unlike the studies listed in Table 1, which identify only a limited set of emotional states (e.g., happiness, sadness, anger), Galvão et al. (2021) dedicated themselves to predicting exact valence and arousal values in a subject-independent scenario. Their systematic analysis revealed that the best prediction model was a KNN regressor (K = 1) with Manhattan distance, using features from the alpha, beta, and gamma bands together with the differential asymmetry of the alpha band. Results on the DEAP, AMIGOS, and DREAMER datasets showed that this model could predict valence and arousal values with low error (MAE < 0.06, RMSE < 0.16).
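The regressor configuration Galvão et al. (2021) found best (K = 1 with Manhattan distance) can be set up in scikit-learn as follows; the synthetic band-power-style features and valence/arousal targets here are placeholders, not their data or feature set:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8))               # stand-in band-power features
y = np.column_stack([X[:, :4].mean(axis=1),     # stand-in "valence" target
                     X[:, 4:].mean(axis=1)])    # stand-in "arousal" target

# K = 1 with Manhattan (L1) distance, predicting both targets jointly.
knn = KNeighborsRegressor(n_neighbors=1, metric="manhattan")
knn.fit(X[:250], y[:250])
pred = knn.predict(X[250:])
mae = np.abs(pred - y[250:]).mean()
```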

Conclusion and Discussion

To improve the accuracy of EEG-based machine learning algorithms for emotion recognition, researchers have made great efforts in the acquisition, preprocessing, feature extraction, and classification of EEG signals. From the summary above, it can be seen that the current stage of machine-learning-based emotion recognition focuses mainly on improving accuracy; some combinations of feature extraction algorithms and classifiers can even achieve 100% accuracy in binary emotion classification. We believe the next two goals for machine-learning-based emotion recognition are: (1) perception of smaller changes in emotion; and (2) reduction of the complexity of emotion recognition algorithms so that they can be ported to wearable devices for real-time emotion recognition.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Statements

Author contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Funding

The work described in this manuscript was supported by the Science and Technology Development Plan Project of Jilin Province (20190303043SF) and the "13th Five-Year Plan" Science and Technology Project of the Education Department of Jilin Province (JJKH20200964KJ).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  • 1

    AguiñagaA. R.RamirezM. A. L. (2018). Emotional states recognition, implementing a low computational complexity strategy.Health Informatics J.24146170. 10.1177/1460458216661862

  • 2

    AlhalasehR.AlasasfehS. (2020). Machine-learning-based emotion recognition system using EEG signals.Computers9:95. 10.3390/computers9040095

  • 3

    BalliT.PalaniappanR. (2010). Classification of biological signals using linear and nonlinear features.Physiol. Meas.31903920. 10.1088/0967-3334/31/7/003

  • 4

    BhattiA. M.MajidM.AnwarS. M.KhanB. (2016). Human emotion recognition and analysis in response to audio music using brain signals.Comput. Hum. Behav.65267275. 10.1016/j.chb.2016.08.029

  • 5

    BongS. Z.MurugappanM.YaacobS. (2012). “Analysis of electrocardiogram (ECG) signals for human emotional stress classification,” in Communications in Computer and Information Science, edsPonnambalamS. G.ParkkinenJ.RamanathanK. C. (Berlin: Springer), 198205. 10.1007/978-3-642-35197-6_22

  • 6

    ChaiX.WangQ.ZhaoY.LiuX.BaiO.LiY. (2016). Unsupervised domain adaptation techniques based on auto-encoder for non-stationary EEG-based emotion recognition.Comput. Biol. Med.79205214. 10.1016/j.compbiomed.2016.10.019

  • 7

    ChaoH.DongL.LiuY.LuB. (2019). Emotion Recognition from Multiband EEG Signals Using CapsNet.Sensors19:2212. 10.3390/s19092212

  • 8

    ChaoH.ZhiH.DongL.LiuY. (2018). Recognition of Emotions Using Multichannel EEG Data and DBN-GC-Based Ensemble Deep Learning Framework.Comput. Intell. Neurosci.2018:9750904.

  • 9

    ChenT.JuS.YuanX.ElhosenyM.RenF.FanM.et al (2018). Emotion recognition using empirical mode decomposition and approximation entropy.Comput. Electr. Eng.72383392. 10.1016/j.compeleceng.2018.09.022

  • 10

    ChoiE. J.KimD. K. (2018). Arousal and valence classification model based on long short-term memory and DEAP data for mental healthcare management.Healthc. Inform. Res.24309316. 10.4258/hir.2018.24.4.309

  • 11

    ChuaK. C.ChandranV.AcharyaU. R.LimC. M. (2011). Application of higher order spectra to identify epileptic EEG.J. Med. Syst.3515631571. 10.1007/s10916-010-9433-z

  • 12

    CimtayY.EkmekciogluE. (2020). Investigating the Use of Pretrained Convolutional Neural Network on Cross-Subject and Cross-Dataset EEG Emotion Recognition.Sensors20:2034. 10.3390/s20072034

  • 13

    EkmanP. (1992). An argument for basic emotions.Cogn. Emot.6169200. 10.1080/02699939208411068

  • 14

    EkmanP. (1999). “Basic emotions,” in Handbook of Cognition and Emotion, Vol.1, edsDalgleishT.PowerM. J. (Hoboken: John Wiley & Sons Ltd), 4560.

  • 15

    GalvãoF.AlarcãoS. M.FonsecaM. J. (2021). Predicting Exact Valence and Arousal Values from EEG.Sensors21:3414. 10.3390/s21103414

  • 16

    GildaS.ZafarH.SoniC.WaghurdekarK. (2018). “Smart music player integrating facial emotion recognition and music mood recommendation,” in Proceedings of the 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), (Chennai: IEEE), 154158. 10.1109/WiSPNET.8299738

  • 17

    GoshvarpourA.AbbasiA.GoshvarpourA. (2017). An accurate emotion recognition system using ECG and GSR signals and matching pursuit method.Biomed. J.40355368. 10.1016/j.bj.2017.11.001

  • 18

    GouiziK.Bereksi ReguigF.MaaouiC. (2011). Emotion recognition from physiological signals.J. Med. Eng. Technol.35300307. 10.3109/03091902.2011.601784

  • 19

    GuptaK.LazarevicJ.PaiY. S.BillinghurstM. (2020). ““Affectively VR: Towards VR Personalized Emotion Recognition,” in Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST), (New York: ACM), 14. 10.1145/3385956.3422122

  • 20

    HassounehA.MutawaA. M.MurugappanM. (2020). Development of a Real-Time Emotion Recognition System Using Facial Expressions and EEG based on machine learning and deep neural network methods.Inform. Med. Unlocked20:100372. 10.1016/j.imu.2020.100372

  • 21

    HemanthD. J.AnithaJ.SonL. H. (2018). Brain signal based human emotion analysis by circular back propagation and Deep Kohonen Neural Networks.Comput. Electr. Eng.68170180. 10.1016/j.compeleceng.2018.04.006

  • 22

    HouH. R.ZhangX. N.MengQ. H. (2020). Odor-induced emotion recognition based on average frequency band division of EEG signals.J. Neurosci. Methods334:108599. 10.1016/j.jneumeth.2020.108599

  • 23

    HsuY. L.WangJ. S.ChiangW. C.HungC. H. (2020). Automatic ECG-Based Emotion Recognition in Music Listening.IEEE Trans. Affect. Comput.118599. 10.1109/TAFFC.2017.2781732

  • 24

    HuangC. (2021). Recognition of psychological emotion by EEG features.Netw. Model. Anal. Health Inform. Bioinform.10:12. 10.1007/s13721-020-00283-2

  • 25

    Izquierdo-ReyesJ.Ramirez-MendozaR. A.Bustamante-BelloM. R.Pons-RoviraJ. L.Gonzalez-VargasJ. E. (2018). Emotion recognition for semi-autonomous vehicles framework.Int. J. Interact. Des. Manuf.1214471454. 10.1007/s12008-018-0473-9

  • 26

    JirayucharoensakS.Pan-NgumS.IsrasenaP. (2014). EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation.Sci. World J.2014:627892. 10.1155/2014/627892

  • 27

    KeelawatP.ThammasanN.NumaoM.KijsirikulB. (2021). A Comparative Study of Window Size and Channel Arrangement on EEG-Emotion Recognition Using Deep CNN.Sensors21:1678. 10.3390/s21051678

  • 28

    KessousL.CastellanoG.CaridakisG. (2010). Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis.J. Multimodal User Interfaces33348. 10.1007/s12193-009-0025-5

  • 29

    KreibigS. D. (2010). Autonomic nervous system activity in emotion: a review.Biol. Psychol.84394421. 10.1016/j.biopsycho.2010.03.010

  • 30

    LeeM. S.LeeY. K.PaeD. S.LimM. T.KimD. W.KangT. K. (2019). Fast emotion recognition based on single pulse PPG signal with convolutional neural network.Appl. Sci.9:3355. 10.3390/app9163355

  • 31

    LevensonR. W. (2014). The Autonomic Nervous System and Emotion.Emot. Rev.6100112. 10.1177/1754073913512003

  • 32

    LiJ.ZhangZ.HeH. (2018). Hierarchical Convolutional Neural Networks for EEG-Based Emotion Recognition.Cogn. Comput.10368380. 10.1007/s12559-017-9533-x

  • 33

    LiX.ZhangX.ZhuJ.MaoW.SunS.WangZ.et al (2019). Depression recognition using machine learning methods with different feature generation strategies.Artif. Intell. Med.99:101696. 10.1016/j.artmed.2019.07.004

  • 34

    LiangZ.ObaS.IshiiS. (2019). An unsupervised EEG decoding system for human emotion recognition.Neural Netw.116257268. 10.1016/j.neunet.2019.04.003

  • 35

    LiuJ. X.WuG. P.LuoY. L.QiuS. H.YangS.LiW.et al (2020). EEG-Based Emotion Classification Using a Deep Neural Network and Sparse Autoencoder.Front. Syst. Neurosci.14:43. 10.3389/fnsys.2020.00043

  • 36

    LuY.WangM.WuW.HanY.ZhangQ.ChenS. (2020). Dynamic entropy-based pattern learning to identify emotions from EEG signals across individuals.Measurement150:107003. 10.1016/j.measurement.2019.107003

  • 37

    MaengJ. H.KangD. H.KimD. H. (2020). Deep Learning Method for Selecting Effective Models and Feature Groups in Emotion Recognition Using an Asian Multimodal Database.Electronics9:1988. 10.3390/electronics9121988

  • 38

    MikuckasA.MikuckieneI.VenckauskasA.KazanaviciusE.LukasR.PlauskaI. (2014). Emotion recognition in human computer interaction systems.Elektron. Elektrotech.205156. 10.5755/j01.eee.20.10.8878

  • 39

    MinhadK. N.AliS. H. M. D.ReazM. B. I. (2017). A design framework for human emotion recognition using electrocardiogram and skin conductance response signals.J. Eng. Sci. Technol.1231023119.

  • 40

    NawazR.CheahK. H.NisarH.YapV. V. (2020). Comparison of different feature extraction methods for EEG-based emotion recognition.Biocybern. Biomed. Eng.40910926. 10.1016/j.bbe.2020.04.005

  • 41

    ÖzerdemM. S.PolatH. (2017). Emotion recognition based on EEG features in movie clips with channel selection.Brain Inform.4241252. 10.1007/s40708-017-0069-3

  • 42

    PaneE. S.WibawaA. D.PurnomoM. H. (2019). Improving the accuracy of EEG emotion recognition by combining valence lateralization and ensemble learning with tuning parameters.Cogn. Process.20405417. 10.1007/s10339-019-00924-z

  • 43

    Qing, C., Qiao, R., Xu, X., Cheng, Y. (2019). Interpretable emotion recognition using EEG signals. IEEE Access 7, 94160–94170. doi: 10.1109/ACCESS.2019.2928691

  • 44

    Rahman, M. A., Hossain, M. F., Hossain, M., Ahmmed, R. (2020). Employing PCA and t-statistical approach for feature extraction and classification of emotion from multichannel EEG signal. Egypt. Inform. J. 21, 23–35. doi: 10.1016/j.eij.2019.10.002

  • 45

    Sakalle, A., Tomar, P., Bhardwaj, H., Acharya, D., Bhardwaj, A. (2021). A LSTM based deep learning network for recognizing emotions using wireless brainwave driven system. Expert Syst. Appl. 173:114516. doi: 10.1016/j.eswa.2020.114516

  • 46

    Salankar, N., Mishra, P., Garg, L. (2021). Emotion recognition from EEG signals using empirical mode decomposition and second-order difference plot. Biomed. Signal Process. Control 65:102389. doi: 10.1016/j.bspc.2020.102389

  • 47

    Selvaraj, J., Murugappan, M., Wan, K., Yaacob, S. (2013). Classification of emotional states from electrocardiogram signals: a nonlinear approach based on Hurst. Biomed. Eng. Online 12:44. doi: 10.1186/1475-925X-12-44

  • 48

    Seo, J., Laine, T. H., Sohn, K. A. (2019). Machine learning approaches for boredom classification using EEG. J. Ambient Intell. Humaniz. Comput. 10, 3831–3846. doi: 10.1007/s12652-019-01196-3

  • 49

    Soroush, M. Z., Maghooli, K., Setarehdan, S. K., Nasrabadi, A. M. (2018). A novel method of EEG-based emotion recognition using nonlinear features variability and Dempster-Shafer theory. Biomed. Eng. Appl. Basis Commun. 30:1850026. doi: 10.4015/S1016237218500266

  • 50

    Suzuki, K., Laohakangvalvit, T., Matsubara, R., Sugaya, M. (2021). Constructing an emotion estimation model based on EEG/HRV indexes using feature extraction and feature selection algorithms. Sensors 21:2910. doi: 10.3390/s21092910

  • 51

    Tarnowski, P., Kołodziej, M., Majkowski, A., Rak, R. J. (2018). “Combined analysis of GSR and EEG signals for emotion recognition,” in International Interdisciplinary PhD Workshop (IIPhDW), (Poland: IEEE), 137–141. doi: 10.1109/IIPHDW.2018.8388342

  • 52

    Topic, A., Russo, M. (2021). Emotion recognition based on EEG feature maps through deep learning network. Eng. Sci. Technol. 24, 1442–1454. doi: 10.1016/j.jestch.2021.03.012

  • 53

    Walter, S., Gruss, S., Limbrecht-Ecklundt, K., Traue, H. C., Werner, P., Al-Hamadi, A., et al. (2014). Automatic pain quantification using autonomic parameters. Psychol. Neurosci. 7, 363–380. doi: 10.3922/j.psns.2014.041

  • 54

    Wang, F., Wu, S., Zhang, W., Xu, Z., Zhang, Y., Wu, C., et al. (2020). Emotion recognition with convolutional neural network and EEG-based EFDMs. Neuropsychologia 146:107506. doi: 10.1016/j.neuropsychologia.2020.107506

  • 55

    Wang, L., Liu, H., Zhou, T., Liang, W., Shan, M. (2021). Multidimensional emotion recognition based on semantic analysis of biomedical EEG signal for knowledge discovery in psychological healthcare. Appl. Sci. 11:1338. doi: 10.3390/app11031338

  • 56

    Wei, C. Z. (2013). Stress emotion recognition based on RSP and EMG signals. Adv. Mater. Res. 709, 827–831. doi: 10.4028/www.scientific.net/AMR.709.827

  • 57

    Wei, W., Jia, Q., Feng, Y., Chen, G. (2018). Emotion recognition based on weighted fusion strategy of multichannel physiological signals. Comput. Intell. Neurosci. 2018:5296523. doi: 10.1155/2018/5296523

  • 58

    Wen, W., Liu, G., Cheng, N., Wei, J., Shangguan, P., Huang, W. (2014). Emotion recognition based on multi-variant correlation of physiological signals. IEEE Trans. Affect. Comput. 5, 126–140. doi: 10.1109/TAFFC.2014.2327617

  • 59

    Yin, Z., Liu, L., Chen, J., Zhao, B., Wang, Y. (2020). Locally robust EEG feature selection for individual-independent emotion recognition. Expert Syst. Appl. 162:113768. doi: 10.1016/j.eswa.2020.113768

  • 60

    Yulita, I. N., Julviar, R. R., Triwahyuni, A., Widiastuti, T. (2019). Multichannel electroencephalography-based emotion recognition using machine learning. J. Phys. Conf. Ser. 1230:012008. doi: 10.1088/1742-6596/1230/1/012008

  • 61

    Yuvaraj, R., Murugappan, M., Mohamed Ibrahim, N., Sundaraj, K., Omar, M. I., Mohamad, K., et al. (2014). Detection of emotions in Parkinson’s disease using higher order spectral features from brain’s electrical activity. Biomed. Signal Process. Control 14, 108–116. doi: 10.1016/j.bspc.2014.07.005

  • 62

    Zhang, W., Yin, Z., Sun, Z., Tian, Y., Wang, Y. (2020). Selecting transferrable neurophysiological features for inter-individual emotion recognition via a shared-subspace feature elimination approach. Comput. Biol. Med. 123:103875. doi: 10.1016/j.compbiomed.2020.103875

Keywords

EEG, machine learning, emotion recognition, feature extraction, classification

Citation

Cai J, Xiao R, Cui W, Zhang S and Liu G (2021) Application of Electroencephalography-Based Machine Learning in Emotion Recognition: A Review. Front. Syst. Neurosci. 15:729707. doi: 10.3389/fnsys.2021.729707

Received

23 June 2021

Accepted

08 November 2021

Published

23 November 2021

Volume

15 - 2021

Edited by

Yan Mark Yufik, Virtual Structures Research Inc., United States

Reviewed by

Oksana Zayachkivska, Danylo Halytsky Lviv National Medical University, Ukraine; Wellington Pinheiro dos Santos, Federal University of Pernambuco, Brazil

*Correspondence: Guangda Liu,

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
