<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Comput. Neurosci.</journal-id>
<journal-title>Frontiers in Computational Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Comput. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5188</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fncom.2022.1006763</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Status of deep learning for EEG-based brain&#x02013;computer interface applications</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Hossain</surname> <given-names>Khondoker Murad</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1649828/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Islam</surname> <given-names>Md. Ariful</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2126508/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Hossain</surname> <given-names>Shahera</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Nijholt</surname> <given-names>Anton</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/81342/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Ahad</surname> <given-names>Md Atiqur Rahman</given-names></name>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/402251/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County</institution>, <addr-line>Baltimore, MD</addr-line>, <country>United States</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Robotics and Mechatronics Engineering, University of Dhaka</institution>, <addr-line>Dhaka</addr-line>, <country>Bangladesh</country></aff>
<aff id="aff3"><sup>3</sup><institution>Kyushu Institute of Technology</institution>, <addr-line>Kitakyushu</addr-line>, <country>Japan</country></aff>
<aff id="aff4"><sup>4</sup><institution>Human Media Interaction, University of Twente</institution>, <addr-line>Enschede</addr-line>, <country>Netherlands</country></aff>
<aff id="aff5"><sup>5</sup><institution>Department of Computer Science and Digital Technology, University of East London</institution>, <addr-line>London</addr-line>, <country>United Kingdom</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Md. Kafiul Islam, Independent University, Bangladesh</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Shinji Kawakura, Osaka City University, Japan; Amir Rastegarnia, Malayer University, Iran</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Md Atiqur Rahman Ahad &#x02709; <email>mahad&#x00040;uel.ac.uk</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>16</day>
<month>01</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>16</volume>
<elocation-id>1006763</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>07</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>23</day>
<month>12</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2023 Hossain, Islam, Hossain, Nijholt and Ahad.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Hossain, Islam, Hossain, Nijholt and Ahad</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract>
<p>In the previous decade, breakthroughs in central nervous system bioinformatics and computational innovation have prompted significant developments in the brain&#x02013;computer interface (BCI), elevating it to the forefront of applied science and research. BCI enables neurorehabilitation strategies for patients with physical disabilities (e.g., hemiplegia) and patients with brain injury (e.g., stroke). Different methods have been developed for electroencephalogram (EEG)-based BCI applications. In the absence of large EEG datasets, methods using matrix factorization and classical machine learning were the most popular. This has changed recently, as a number of large, high-quality EEG datasets have been made public and used in deep learning-based BCI applications. Deep learning is demonstrating great prospects for solving complex relevant tasks such as motor imagery classification, epileptic seizure detection, and driver attention recognition using EEG data, and researchers are currently doing a great deal of work on deep learning-based approaches in the BCI field. There is therefore a great demand for a study that emphasizes only deep learning models for EEG-based BCI applications. In this study, we review the recently proposed deep learning-based approaches in BCI using EEG data (from 2017 to 2022) and introduce their main differences, such as merits, drawbacks, and applications. Furthermore, we point out current challenges and directions for future studies. We argue that this review study will help the EEG research community in their future research.</p></abstract>
<kwd-group>
<kwd>deep learning</kwd>
<kwd>EEG</kwd>
<kwd>BCI</kwd>
<kwd>future challenge</kwd>
<kwd>convolutional neural network (CNN)</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="3"/>
<equation-count count="0"/>
<ref-count count="132"/>
<page-count count="17"/>
<word-count count="13674"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>BCI is an interdisciplinary field that brings together psychology, electronics, computing, neuroscience, signal processing, and pattern recognition. It generates control signals or commands from recorded brain signals of neural responses in order to determine the intention of a medically challenged subject to perform a motor action, helping to restore quality of life. In a nutshell, a BCI turns the neural responses of the human brain into control signals or commands that can be used to control things such as prosthetic limbs, walking aids, neurorehabilitation, and movement. It is also used to assist medically challenged people with severe motor disorders, as well as healthy people, in their daily activities.</p>
<p>A generic BCI system (Schalk et al., <xref ref-type="bibr" rid="B101">2004</xref>; Hassanien and Azar, <xref ref-type="bibr" rid="B49">2015</xref>) comprises: (i) electrodes to obtain electrophysiological scheme patterns from a human subject; (ii) signal acquisition devices to record the neural responses of the subject&#x00027;s brain scheme; (iii) feature extraction to generate the discriminative nature of brain signals to decrease the size of data needed to classify the neural scheme; (iv) a translation algorithm to generate operative control signals; (v) a control interface to convert into output device commands; and (vi) a feedback system to guide the subject to refine specific neural activity to ensure a better control mechanism.</p>
<p>There are two types of signal acquisition methods to trace neural activity, namely invasive and non-invasive methods (Schalk et al., <xref ref-type="bibr" rid="B101">2004</xref>). A generic EEG-based BCI architecture is shown in <xref ref-type="fig" rid="F1">Figure 1</xref>. In an invasive method, microelectrodes are neurosurgically implanted on the surface of the cerebral cortex or within the cerebrum under the scalp (Abdulkader et al., <xref ref-type="bibr" rid="B1">2015</xref>). Even though this method yields high-resolution neural signals, it is not the preferred way to record neural activity from a human brain because it can cause scar tissue and infections.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>A generic brain&#x02013;computer interface system.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-16-1006763-g0001.tif"/>
</fig>
<p>For this reason, the non-invasive method is preferred due to its flexibility and reduced risk. There are many techniques (Lotte et al., <xref ref-type="bibr" rid="B76">2018</xref>) by which neural activity is recorded, such as magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI) (Acar et al., <xref ref-type="bibr" rid="B3">2022</xref>; Hossain et al., <xref ref-type="bibr" rid="B52">2022</xref>), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIRS). The EEG method is preferred due to its robustness and user-friendly approach (Bi et al., <xref ref-type="bibr" rid="B23">2013</xref>).</p>
<p>Artificial intelligence (AI) refers to systems or computers that imitate human intelligence to carry out tasks and can (iteratively) improve themselves based on the information they acquire. AI takes several forms, including machine learning and deep learning. Machine learning refers to the form of AI that can adapt automatically with only minimal human intervention. Deep learning, in turn, is a subset of machine learning that learns from large data by exploiting more neural network layers than classical machine learning schemes. There are several reviews on EEG-based BCI using signal processing and machine learning (Craik et al., <xref ref-type="bibr" rid="B32">2019</xref>; Al-Saegh et al., <xref ref-type="bibr" rid="B10">2021</xref>; Alzahab et al., <xref ref-type="bibr" rid="B11">2021</xref>; Rahman et al., <xref ref-type="bibr" rid="B95">2021</xref>; Wang and Wang, <xref ref-type="bibr" rid="B117">2021</xref>). Nevertheless, these machine learning reviews devote only a small part to deep learning modalities, and no review has focused exclusively on deep learning. One of the key strengths of deep learning is that it performs feature engineering on its own: an algorithm combs through the data for features that correlate with one another and combines them to facilitate faster learning, without explicit instructions. A comprehensive review is therefore much anticipated, as deep learning is the state-of-the-art classification pipeline. In this review, we report the most recent deep learning-based BCI research studies of the last 6 years. <xref ref-type="fig" rid="F2">Figure 2</xref> shows the PRISMA flow diagram of our literature review process.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>PRISMA flow diagram of the literature review process for studies on deep learning-based EEG-based BCI.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-16-1006763-g0002.tif"/>
</fig>
<p>We used PubMed, ERIC, JSTOR, IEEE Xplore, and Google Scholar as the electronic databases to retrieve the articles. As our goal is to include studies that relate to the three keywords EEG data, BCI applications, and deep learning, we looked for studies that included all three. From the 245 studies, we removed 31 that were either full duplicates or subversions of other articles. After screening the remaining 214 papers, we excluded 57 because they used deep learning only for related work or as only part of the full pipeline, leaving 157 studies. Of these, 34 could not be fully retrieved, leaving 123 articles; five of these lack a clear dataset description, and the tasks of eight are irrelevant to our review. Finally, we explored 110 articles for this review.</p>
<p>To show the importance of this review, we compare it with recently published reviews in <xref ref-type="table" rid="T1">Table 1</xref>. As comparison criteria, we selected the coverage period, the number of studies included, the presence of dataset-specific analysis, whether the review is BCI application-specific, whether it offers future recommendations for researchers, and whether it is based only on deep learning. Our study is the most recent, covering articles until late 2022, and comprises the highest number of studies for the past 6 years. No other review study has performed dataset-specific filtration of EEG-based BCI research, whereas we report the number of studies and results for each dataset separately. Furthermore, supported by a rich tabular comparison, we consider only EEG data classification for BCI applications. Finally, in contrast to most reviews, we concentrate only on deep learning algorithms for EEG classification.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Comparison between previous review works and our proposed review study.</p></caption>
<table frame="box" rules="all">
<thead><tr>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>References</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Coverage</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>No. of studies</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Dataset specific studies</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Only BCI application?</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Deep learning specific</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Future recommendation</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Cao (<xref ref-type="bibr" rid="B26">2020</xref>)</td>
<td valign="top" align="left">2017&#x02013;2020</td>
<td valign="top" align="left">Unspecified</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
</tr> <tr>
<td valign="top" align="left">Abiri et al. (<xref ref-type="bibr" rid="B2">2019</xref>)</td>
<td valign="top" align="left">1991&#x02013;2017</td>
<td valign="top" align="left">Unspecified</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
</tr> <tr>
<td valign="top" align="left">Rahman et al. (<xref ref-type="bibr" rid="B95">2021</xref>)</td>
<td valign="top" align="left">2009&#x02013;2021</td>
<td valign="top" align="left">54</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Yes</td>
</tr> <tr>
<td valign="top" align="left">Craik et al. (<xref ref-type="bibr" rid="B32">2019</xref>)</td>
<td valign="top" align="left">2014&#x02013;2018</td>
<td valign="top" align="left">90</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">No</td>
</tr> <tr>
<td valign="top" align="left">Alzahab et al. (<xref ref-type="bibr" rid="B11">2021</xref>)</td>
<td valign="top" align="left">2015&#x02013;2020</td>
<td valign="top" align="left">47</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes (hybrid deep learning)</td>
<td valign="top" align="left">No</td>
</tr> <tr>
<td valign="top" align="left">Al-Saegh et al. (<xref ref-type="bibr" rid="B10">2021</xref>)</td>
<td valign="top" align="left">2016&#x02013;2020</td>
<td valign="top" align="left">40</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">No</td>
</tr> <tr>
<td valign="top" align="left">Wang and Wang (<xref ref-type="bibr" rid="B117">2021</xref>)</td>
<td valign="top" align="left">2016&#x02013;2020</td>
<td valign="top" align="left">Unspecified</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
</tr> <tr>
<td valign="top" align="left">Our study</td>
<td valign="top" align="left">2017&#x02013;2022</td>
<td valign="top" align="left">110</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The study is organized as follows: After the introduction in Section 1, we introduce the core elements of EEG-based BCI in Section 2. Section 3 covers the classical methods that have been exploited for EEG-based BCI tasks. We then analyze the implementation of deep learning and related parts of this domain in Section 4. Sections 5 and 6 are the discussion and conclusion of this article, respectively.</p></sec>
<sec id="s2">
<title>2. EEG-based BCI preliminaries</title>
<p>To translate human intentions into real-time equipment control signals, the cognitive responses of humans must be related to the physical world. In <xref ref-type="fig" rid="F3">Figure 3</xref>, we depict the usage of EEG data in BCI applications. Electrophysiological activity patterns of human subjects are recorded by the acquisition device. Scalp electrodes mounted on a headset capture the neural responses of the subject (Sakkalis, <xref ref-type="bibr" rid="B100">2011</xref>). A pre-amplifier strengthens the brain signals, and the amplified signal is then passed through a filter to remove unwanted components, noise, and interference. After that, an analog-to-digital converter (ADC) converts the filtered analog signal to a digital signal. The recorded electrical activities are then standardized to improve the signal-to-noise ratio (SNR) of the digital signal.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>An architecture of BCI based on EEG data.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-16-1006763-g0003.tif"/>
</fig>
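<p>The acquisition chain described above (amplify, band-pass filter, digitize, standardize) can be sketched in a few lines. The sampling rate, filter band, and synthetic signal below are illustrative assumptions, not parameters from any cited system.</p>

```python
# Sketch of the pre-processing stage: band-pass filtering followed by
# standardization (z-scoring) to improve the SNR of the digitized signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                      # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# synthetic "EEG": 10 Hz alpha rhythm + slow drift + broadband noise
rng = np.random.default_rng(0)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * t + 0.3 * rng.standard_normal(t.size)

# 4th-order Butterworth band-pass (1-40 Hz), applied zero-phase
b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, raw)

# standardize the filtered channel (zero mean, unit variance)
standardized = (filtered - filtered.mean()) / filtered.std()
```

The band edges (1&#x02013;40 Hz) are a common but not universal choice; real systems select them per application.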
<p>It is important to note that feature extraction captures the discriminative characteristics of the neural activity, so less data are needed to classify the neural strategy. The data are then assigned to a specific group or category of brain patterns. After this stage, the retrieved feature set is transformed into operational control signals, which are used to drive the external interface device. Thus, BCI applications can be controlled by these command signals.</p></sec>
<sec id="s3">
<title>3. Classical methods for EEG-based BCI applications</title>
<p>EEG is by far the most prevalent strategy due to its high efficiency and usability (Schalk et al., <xref ref-type="bibr" rid="B101">2004</xref>). However, pattern-based control using EEG signals is difficult because the signals are extremely noisy and contain numerous outliers. The human neural impulses acquired from an EEG-based BCI include noise and other attributes in addition to the signal of neural activity. The challenges are removing the noise, extracting relevant characteristics, and accurately classifying the signal. By translating the extracted feature set into a proper class label, operational commands can be generated. There are two categories of classification algorithms: linear and non-linear classifiers (Guger et al., <xref ref-type="bibr" rid="B47">2021</xref>).</p>
<p>The goal of classification is to determine an object&#x00027;s class from its observed characteristics. To recognize distinct types of brain activity, linear classifiers attempt to establish a linear relationship (function) between the dependent and independent variables of a classification method (Schalk et al., <xref ref-type="bibr" rid="B101">2004</xref>). This set of classifiers includes linear discriminant analysis (LDA) and support vector machines (Wang et al., <xref ref-type="bibr" rid="B116">2009</xref>). Such a classifier sets up a hyperplane, a linear numerical function that separates the different brain states in the extracted feature space.</p>
<p>Because of its simplicity, robustness, resistance to overfitting, and low computing needs, LDA, which presumes a Gaussian distribution of the data, has been implemented in several BCI platforms (Wang et al., <xref ref-type="bibr" rid="B116">2009</xref>). The support vector machine (SVM) is a supervised learning algorithm that can be used for both regression and classification (Wang et al., <xref ref-type="bibr" rid="B116">2009</xref>), although it is best suited to classification. The primary goal of the SVM algorithm is to find a hyperplane in an N-dimensional space that cleanly separates the data points. When no linear function can be found between the dependent and independent variables of the classification method, nonlinear classifiers are used instead. These machine learning approaches include artificial neural networks (ANNs), k-nearest neighbor (KNN), and kernel SVMs (Lotte et al., <xref ref-type="bibr" rid="B76">2018</xref>; Akhter et al., <xref ref-type="bibr" rid="B6">2020</xref>; Islam et al., <xref ref-type="bibr" rid="B60">2020</xref>).</p>
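<p>The contrast between a linear hyperplane (LDA) and a kernel-based nonlinear classifier (SVM with an RBF kernel) can be illustrated with scikit-learn. The Gaussian feature vectors below are synthetic stand-ins for extracted EEG features, not real data.</p>

```python
# Fit a linear (LDA) and a nonlinear (RBF-kernel SVM) classifier on two
# synthetic Gaussian classes in a 4-D feature space (e.g., band powers).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 4))   # class 0 features
X1 = rng.normal(loc=1.5, scale=1.0, size=(100, 4))   # class 1 features
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

lda = LinearDiscriminantAnalysis().fit(X, y)   # separating hyperplane
svm = SVC(kernel="rbf").fit(X, y)              # nonlinear decision boundary

lda_acc = lda.score(X, y)
svm_acc = svm.score(X, y)
```

On such well-separated Gaussian data both classifiers do well; the nonlinear kernel pays off mainly when no single hyperplane can separate the classes.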
<p>ANNs are broadly utilized in an assortment of classification and pattern recognition tasks, as they can learn from training samples and classify new inputs accordingly. Multilayer ANNs are the most broadly utilized networks for effectively classifying multiclass neurological actions. They operate by running a training procedure that adjusts the weights of the input and hidden layer neurons to minimize the mean squared error (Wang et al., <xref ref-type="bibr" rid="B116">2009</xref>).</p>
<p>Herman et al. (<xref ref-type="bibr" rid="B51">2008</xref>) conducted a classification of EEG-based BCI by investigating the type-2 fuzzy logic approach. They claimed that their model exhibited better classification accuracy than the type-1 fuzzy logic model, and they also compared this method with a well-known LDA-based classifier. On the other hand, Aznan and Yang (<xref ref-type="bibr" rid="B20">2013</xref>) applied the Kalman filter to an EEG-based BCI for recognizing motor imagery in an attempt to optimize the system&#x00027;s accuracy and consistency.</p>
<p>Common spatial patterns (CSP) were used to extract the necessary information, and a radial basis function (RBF) classifier was used to categorize the signal. They also compared their results with the LDA method and claimed that their RBF method showed a better result.</p>
<p>Zhang H. et al. (<xref ref-type="bibr" rid="B125">2013</xref>) linked Bayes classification error to spatial filtering, which is an important tool to extract and classify the EEG signal. They claimed that by validating the positive relationship between the Bayes error and the Rayleigh quotient, a spatial filter with a lower Rayleigh quotient measuring the ratio of power features could reduce the Bayes error. Zhang R. et al. (<xref ref-type="bibr" rid="B126">2013</xref>) proposed z-score LDA, an updated version of LDA that introduces a new decision boundary capable of effectively handling heteroscedastic class distribution-related classification.</p>
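<p>The link between spatial filtering and the Rayleigh quotient of class covariances can be made concrete with a minimal common spatial patterns (CSP) sketch. The trial dimensions and variance structure below are assumptions for illustration, not the setup of any cited study.</p>

```python
# CSP as a Rayleigh-quotient optimization: find a spatial filter w that
# maximizes (w' C1 w) / (w' (C1 + C2) w) over the two class covariances.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 500

def class_cov(scale):
    # average normalized covariance over synthetic trials; channel 0 has a
    # class-dependent variance so there is a discriminative spatial pattern
    covs = []
    for _ in range(20):
        X = rng.standard_normal((n_channels, n_samples))
        X[0] *= scale
        C = X @ X.T
        covs.append(C / np.trace(C))
    return np.mean(covs, axis=0)

C1, C2 = class_cov(3.0), class_cov(1.0)

# generalized eigenproblem C1 w = lambda (C1 + C2) w; eigenvalues are the
# Rayleigh quotients, eigenvectors are the spatial filters
vals, vecs = eigh(C1, C1 + C2)
w = vecs[:, -1]                           # filter maximizing class-1 variance
ratio = (w @ C1 @ w) / (w @ (C1 + C2) @ w)
```

A filter with a quotient near 1 passes mostly class-1 variance; one near 0 passes mostly class-2 variance, which is why the extreme eigenvectors are the useful CSP filters.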
<p>Agrawal and Bajaj (<xref ref-type="bibr" rid="B4">2020</xref>) proposed a brain state signal measuring method based on non-muscular channel EEG to record the brain activity acting as a source to facilitate communication between a patient and the outside environment. They used the fast Fourier transform and the short-time Fourier transform to decompose the signals obtained from neural activity into smaller segments, and implemented the classification tasks using a support vector machine. Depending on the values of the evaluation grades, the overall accuracy of the system was approximately 92%. Pan et al. (<xref ref-type="bibr" rid="B88">2016</xref>) suggested a framework for a sentiment state detection system based on EEG-based BCI technology. They categorized two emotional responses, happiness and sadness, using SVM, observing a precision of roughly 74.17% for the two classes.</p>
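<p>The decompose-then-classify pipeline of this kind (short-time Fourier transform band powers fed to an SVM) can be sketched as follows. The signals, labels, and band limits are synthetic assumptions rather than the authors&#x00027; data or parameters.</p>

```python
# STFT band-power features + SVM: class 0 trials are alpha-dominated (10 Hz),
# class 1 trials are beta-dominated (20 Hz).
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC

fs = 128.0
rng = np.random.default_rng(7)

def make_trial(f):
    # one-channel, 2-second trial dominated by an oscillation at frequency f
    t = np.arange(0, 2.0, 1.0 / fs)
    return np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)

def band_power_features(x):
    f, _, Z = stft(x, fs=fs, nperseg=64)
    psd = np.abs(Z) ** 2
    bands = [(4, 8), (8, 13), (13, 30)]   # theta, alpha, beta (Hz)
    return np.array([psd[(f >= lo) & (f < hi)].mean() for lo, hi in bands])

X = np.array([band_power_features(make_trial(10)) for _ in range(30)] +
             [band_power_features(make_trial(20)) for _ in range(30)])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf").fit(X, y)
train_acc = clf.score(X, y)
```

A real evaluation would of course use held-out trials rather than training accuracy; the point here is only the shape of the feature pipeline.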
<p>Bousseta et al. (<xref ref-type="bibr" rid="B24">2018</xref>) proposed an EEG-based BCI system to control a robot arm by decoding a disabled person&#x00027;s thoughts. They combined principal component analysis with the fast Fourier transform to perform feature extraction and then fed the features to a radial basis function-based support vector machine as a classifier. The outputs of this classifier were turned into commands that the robot arm followed.</p>
<p>Amarasinghe et al. (<xref ref-type="bibr" rid="B12">2014</xref>) proposed a three-step method based on self-organizing maps to recognize neural activities through unsupervised clustering. They identified two thought patterns, moving forward and resting, and also implemented a classification process based on feed-forward ANNs. They claimed that their mapping method showed an approximately 8% improvement over ANN-based classification.</p>
<p>Korovesis et al. (<xref ref-type="bibr" rid="B67">2019</xref>) established an electroencephalography BCI system that controls the movement of a mobile robot in response to the eye blinking of a human operator. They extracted appropriate features from the EEG signals of brain activity and fed those features into a well-trained neural network to guide the mobile robot, achieving an accuracy of 92.1%. Sulaiman et al. (<xref ref-type="bibr" rid="B104">2011</xref>) extracted distinguishing features for human stress from EEG-based BCI neural activity. They combined the power spectrum ratio of the EEG and spectral centroid techniques to enhance the accuracy (88.89%) of the k-nearest neighbor (kNN) classifier, detecting and classifying human stress in two states, closed-eye and open-eye.</p>
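<p>The two features combined by Sulaiman et al., a power-spectrum ratio and the spectral centroid, can be computed directly from a Welch power spectral density estimate. The test signal and band limits below are assumptions for this sketch, not the authors&#x00027; configuration.</p>

```python
# Spectral centroid (power-weighted mean frequency) and an alpha-band power
# ratio, computed from a Welch PSD of a synthetic alpha-dominated signal.
import numpy as np
from scipy.signal import welch

fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

f, psd = welch(x, fs=fs, nperseg=512)

# spectral centroid: power-weighted mean frequency of the spectrum
centroid = np.sum(f * psd) / np.sum(psd)

# example power ratio: alpha band (8-13 Hz) relative to total power
alpha = psd[(f >= 8) & (f < 13)].sum()
ratio = alpha / psd.sum()
```

For this alpha-dominated signal the centroid sits near 10&#x02013;15 Hz and the alpha ratio is close to 1; broadband or stressed-state signals would shift both features.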
<p>Wang et al. (<xref ref-type="bibr" rid="B116">2009</xref>) conducted a review of various classification approaches for motor imagery (BCI competition III) and finger movement (BCI competition IV) on EEG signals, comparing the results in terms of classification accuracy. The Gaussian SVM (GSVM) and k-NN show the desired performance, as these nonlinear classifiers are more robust than the linear ones, as shown in <xref ref-type="fig" rid="F4">Figure 4</xref>. However, learning vector quantization neural networks (LVQNN) and quadratic discriminant analysis (QDA) demonstrate the lowest accuracy. In addition, the performances of linear discriminant analysis (LDA) and linear SVM are almost identical. These results demonstrate that classical machine learning methods are not yet optimal for this domain; therefore, deep learning methods need to be tried on large datasets in EEG-based BCI applications.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Classification algorithms and the corresponding accuracies of different classical classification methods based on a study.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-16-1006763-g0004.tif"/>
</fig></sec>
<sec id="s4">
<title>4. Utilizing deep learning in EEG-based BCI</title>
<p><xref ref-type="table" rid="T2">Table 2</xref> lists all the EEG-based BCI studies using deep learning from the last 6 years. We list the five most important aspects of each study: dataset, number of subjects, deep learning modality, BCI application, and classification result. This table will assist future researchers in determining the state of the art in this domain.</p>
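<p>Since the CNN dominates the studies in <xref ref-type="table" rid="T2">Table 2</xref>, the forward pass such models apply to an EEG trial (temporal convolution, nonlinearity, pooling, dense readout) can be sketched with NumPy alone. The shapes and random weights are illustrative assumptions, not any published architecture.</p>

```python
# Forward pass of a toy EEG CNN: temporal convolution across channels,
# ReLU, average pooling, then a dense softmax readout over two classes.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples, n_filters, k = 4, 128, 8, 7

x = rng.standard_normal((n_channels, n_samples))      # one EEG trial
W = rng.standard_normal((n_filters, n_channels, k)) * 0.1

# temporal convolution over all channels ("valid" mode)
out_len = n_samples - k + 1
conv = np.zeros((n_filters, out_len))
for i in range(out_len):
    conv[:, i] = np.tensordot(W, x[:, i:i + k], axes=([1, 2], [0, 1]))

relu = np.maximum(conv, 0.0)                          # nonlinearity
pooled = relu.reshape(n_filters, -1, 2).mean(axis=2)  # average pooling, stride 2
features = pooled.reshape(-1)                         # flatten to a vector

Wd = rng.standard_normal((2, features.size)) * 0.01   # dense 2-class readout
logits = Wd @ features
probs = np.exp(logits) / np.exp(logits).sum()         # softmax probabilities
```

In the surveyed studies these weights are learned end-to-end from EEG trials; the key point is that the convolutional filters replace hand-crafted feature extraction.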
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>EEG-based BCI studies using deep learning for the last 6 years.</p></caption>
<table frame="box" rules="all">
<thead><tr>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>References</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Dataset (No. of subjects)</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Deep learning modality</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Application</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Classification result</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" colspan="5" style="background-color:#e0e1e3"><bold>EEG-based BCI with deep learning</bold></td>
</tr> <tr>
<td valign="top" align="left">Tang et al. (<xref ref-type="bibr" rid="B108">2017</xref>)</td>
<td valign="top" align="left">2 able-body subjects (2)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying left hand and right hand movement</td>
<td valign="top" align="left">86.41%</td>
</tr> <tr>
<td valign="top" align="left">Aznan et al. (<xref ref-type="bibr" rid="B19">2018</xref>)</td>
<td valign="top" align="left">4 subjects (4)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying SSVEP frequencies</td>
<td valign="top" align="left">96.00%</td>
</tr> <tr>
<td valign="top" align="left">Dose et al. (<xref ref-type="bibr" rid="B38">2018</xref>)</td>
<td valign="top" align="left">Physionet EEG MI Dataset (109)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Stroke rehabilitation</td>
<td valign="top" align="left">80.38%</td>
</tr> <tr>
<td valign="top" align="left">El-Fiqi et al. (<xref ref-type="bibr" rid="B40">2018</xref>)</td>
<td valign="top" align="left">2 datasets (5 and 12)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Person identification</td>
<td valign="top" align="left">96.80%</td>
</tr> <tr>
<td valign="top" align="left">Amber et al. (<xref ref-type="bibr" rid="B13">2019</xref>)</td>
<td valign="top" align="left">DRYAD (30)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Lie detection</td>
<td valign="top" align="left">99.60%</td>
</tr> <tr>
<td valign="top" align="left">Nguyen and Chung (<xref ref-type="bibr" rid="B84">2018</xref>)</td>
<td valign="top" align="left">8 healthy subjects (8)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Developing a speller system</td>
<td valign="top" align="left">99.20%</td>
</tr> <tr>
<td valign="top" align="left">Shoeibi et al. (<xref ref-type="bibr" rid="B102">2021</xref>)</td>
<td valign="top" align="left">21 patients with focal epilepsy (21)</td>
<td valign="top" align="left">CNN, LSTM</td>
<td valign="top" align="left">Diagnosing epileptic seizures</td>
<td valign="top" align="left">99.10% (CNN), 100% (LSTM)</td>
</tr> <tr>
<td valign="top" align="left">Antoniades et al. (<xref ref-type="bibr" rid="B16">2018</xref>)</td>
<td valign="top" align="left">17 subjects (17)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Detecting epileptic discharges</td>
<td valign="top" align="left">68.00%</td>
</tr> <tr>
<td valign="top" align="left">V&#x000F6;lker et al. (<xref ref-type="bibr" rid="B115">2018</xref>)</td>
<td valign="top" align="left">Flanker task dataset (31)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Decoding error</td>
<td valign="top" align="left">81.70%</td>
</tr> <tr>
<td valign="top" align="left">Behncke et al. (<xref ref-type="bibr" rid="B22">2018</xref>)</td>
<td valign="top" align="left">5 males and 6 females (11)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Decoding robot errors</td>
<td valign="top" align="left">75.00%</td>
</tr> <tr>
<td valign="top" align="left">Oh et al. (<xref ref-type="bibr" rid="B85">2020</xref>)</td>
<td valign="top" align="left">20 Parkinson patients (20)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Identifying Parkinson Disease</td>
<td valign="top" align="left">88.25%</td>
</tr> <tr>
<td valign="top" align="left">Zeng et al. (<xref ref-type="bibr" rid="B122">2018</xref>)</td>
<td valign="top" align="left">10 healthy subjects (10)</td>
<td valign="top" align="left">LSTM</td>
<td valign="top" align="left">Predicting mental states of drivers</td>
<td valign="top" align="left">91.79%</td>
</tr> <tr>
<td valign="top" align="left">Hussein et al. (<xref ref-type="bibr" rid="B56">2019</xref>)</td>
<td valign="top" align="left">BCI (7)</td>
<td valign="top" align="left">LSTM</td>
<td valign="top" align="left">Detecting epileptic seizures</td>
<td valign="top" align="left">100%</td>
</tr> <tr>
<td valign="top" align="left">Vilamala et al. (<xref ref-type="bibr" rid="B114">2017</xref>)</td>
<td valign="top" align="left">10 males and 10 females (20)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Scoring sleep stage</td>
<td valign="top" align="left">89&#x02013;97%</td>
</tr> <tr>
<td valign="top" align="left">Tabar and Halici (<xref ref-type="bibr" rid="B106">2016</xref>)</td>
<td valign="top" align="left">BCI competition IV dataset 2b (9)</td>
<td valign="top" align="left">CNN&#x0002B;SAE</td>
<td valign="top" align="left">Classifying right and left hand movement</td>
<td valign="top" align="left">72.40%</td>
</tr> <tr>
<td valign="top" align="left">Olivas-Padilla and Chacon-Murguia (<xref ref-type="bibr" rid="B86">2019</xref>)</td>
<td valign="top" align="left">BCI competition IV dataset 2a (9)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">67.50% - 82.09%</td>
</tr> <tr>
<td valign="top" align="left">Tayeb et al. (<xref ref-type="bibr" rid="B109">2019</xref>)</td>
<td valign="top" align="left">BCI competition IV dataset 2b (9)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Decoding MI movements</td>
<td valign="top" align="left">77.72%</td>
</tr> <tr>
<td valign="top" align="left">Sundaresan et al. (<xref ref-type="bibr" rid="B105">2021</xref>)</td>
<td valign="top" align="left">8 with autism and 5 healthy subjects (13)</td>
<td valign="top" align="left">CNN&#x0002B;RNN</td>
<td valign="top" align="left">Classifying mental stress with autism</td>
<td valign="top" align="left">93.27%</td>
</tr> <tr>
<td valign="top" align="left">Cai et al. (<xref ref-type="bibr" rid="B25">2021</xref>)</td>
<td valign="top" align="left">26 healthy subjects (26)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying attentive state</td>
<td valign="top" align="left">72.73%</td>
</tr> <tr>
<td valign="top" align="left">Ieracitano et al. (<xref ref-type="bibr" rid="B58">2021</xref>)</td>
<td valign="top" align="left">15 subjects (15)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Discriminating hand motion planning</td>
<td valign="top" align="left">76.21%</td>
</tr> <tr>
<td valign="top" align="left">Reddy et al. (<xref ref-type="bibr" rid="B97">2021</xref>)</td>
<td valign="top" align="left">27 subjects (27)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Detecting drowsiness</td>
<td valign="top" align="left">85.42%</td>
</tr> <tr>
<td valign="top" align="left">Petoku and Capi (<xref ref-type="bibr" rid="B91">2021</xref>)</td>
<td valign="top" align="left">462 trials of a single subject (1)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Detecting object movement</td>
<td valign="top" align="left">60.00%</td>
</tr> <tr>
<td valign="top" align="left">Zhang et al. (<xref ref-type="bibr" rid="B124">2021</xref>)</td>
<td valign="top" align="left">BCI Competition IV dataset 2a and 2b (18)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">88.40%</td>
</tr> <tr>
<td valign="top" align="left">Mai et al. (<xref ref-type="bibr" rid="B77">2021</xref>)</td>
<td valign="top" align="left">4 males and 2 females (6)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Detecting emotional states</td>
<td valign="top" align="left">93.34%</td>
</tr> <tr>
<td valign="top" align="left">Deng et al. (<xref ref-type="bibr" rid="B37">2021</xref>)</td>
<td valign="top" align="left">BCI Competition IV 2a, III (12)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI tasks</td>
<td valign="top" align="left">85.30%</td>
</tr> <tr>
<td valign="top" align="left">Huang et al. (<xref ref-type="bibr" rid="B55">2022</xref>)</td>
<td valign="top" align="left">PhysioNet dataset (109)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">92.00%</td>
</tr> <tr>
<td valign="top" align="left">Cho et al. (<xref ref-type="bibr" rid="B31">2021</xref>)</td>
<td valign="top" align="left">12 subjects (12)</td>
<td valign="top" align="left">Bi-LSTM</td>
<td valign="top" align="left">Classifying MI task</td>
<td valign="top" align="left">68.00%</td>
</tr> <tr>
<td valign="top" align="left">Atilla and Alimardani (<xref ref-type="bibr" rid="B18">2021</xref>)</td>
<td valign="top" align="left">14 subjects while driving (14)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying drivers attention</td>
<td valign="top" align="left">89.00%</td>
</tr> <tr>
<td valign="top" align="left">Mammone et al. (<xref ref-type="bibr" rid="B79">2021</xref>)</td>
<td valign="top" align="left">15 participants (15)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Decoding motion planning</td>
<td valign="top" align="left">90.77%</td>
</tr> <tr>
<td valign="top" align="left">Ak et al. (<xref ref-type="bibr" rid="B5">2022</xref>)</td>
<td valign="top" align="left">5 subjects (5)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Controlling robot manipulator</td>
<td valign="top" align="left">90.00%</td>
</tr> <tr>
<td valign="top" align="left">Huang et al. (<xref ref-type="bibr" rid="B54">2021</xref>)</td>
<td valign="top" align="left">BCI competition IV dataset 2a (9)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">90.00%</td>
</tr> <tr>
<td valign="top" align="left">Aldayel et al. (<xref ref-type="bibr" rid="B8">2020</xref>)</td>
<td valign="top" align="left">DEAP (32)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying preference in neuromarketing</td>
<td valign="top" align="left">94.00%</td>
</tr> <tr>
<td valign="top" align="left">Le&#x000F3;n et al. (<xref ref-type="bibr" rid="B70">2020</xref>)</td>
<td valign="top" align="left">10 subjects (10)</td>
<td valign="top" align="left">CNN, RNN</td>
<td valign="top" align="left">Classifying SSMVEP signals</td>
<td valign="top" align="left">96.83%</td>
</tr> <tr>
<td valign="top" align="left">Miao et al. (<xref ref-type="bibr" rid="B82">2020</xref>)</td>
<td valign="top" align="left">BCI competition IVa (5), right index finger MI dataset (10)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">90.00%</td>
</tr> <tr>
<td valign="top" align="left">Ko et al. (<xref ref-type="bibr" rid="B65">2020</xref>)</td>
<td valign="top" align="left">SEED-VIG dataset (15)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Estimating driver vigilance</td>
<td valign="top" align="left">96.00%</td>
</tr> <tr>
<td valign="top" align="left">Penchina et al. (<xref ref-type="bibr" rid="B90">2020</xref>)</td>
<td valign="top" align="left">11 subjects (11)</td>
<td valign="top" align="left">RNN, LSTM</td>
<td valign="top" align="left">Classifying anxiety in adolescents with autism</td>
<td valign="top" align="left">93.27%</td>
</tr> <tr>
<td valign="top" align="left">Tortora et al. (<xref ref-type="bibr" rid="B111">2020</xref>)</td>
<td valign="top" align="left">11 healthy subjects walking on a treadmill (8)</td>
<td valign="top" align="left">LSTM</td>
<td valign="top" align="left">Decoding gait</td>
<td valign="top" align="left">AUC=90%</td>
</tr>
<tr>
<td valign="top" align="left">Rammy et al. (<xref ref-type="bibr" rid="B96">2020</xref>)</td>
<td valign="top" align="left">BCI Competition IV dataset 2a (9)</td>
<td valign="top" align="left">LSTM</td>
<td valign="top" align="left">Recognizing motor imagination</td>
<td valign="top" align="left">Mean kappa: 0.64</td>
</tr> <tr>
<td valign="top" align="left">Liu J. et al. (<xref ref-type="bibr" rid="B74">2020</xref>)</td>
<td valign="top" align="left">DEAP (32), SEED (15)</td>
<td valign="top" align="left">CNN&#x0002B;SAE</td>
<td valign="top" align="left">Classifying emotion</td>
<td valign="top" align="left">92.86% (DEAP), 96.77% (SEED)</td>
</tr> <tr>
<td valign="top" align="left">Li Y. et al. (<xref ref-type="bibr" rid="B73">2020</xref>)</td>
<td valign="top" align="left">EEGMMIDB (109)</td>
<td valign="top" align="left">Recurrent-CNN</td>
<td valign="top" align="left">Recognizing intention</td>
<td valign="top" align="left">97.36%</td>
</tr> <tr>
<td valign="top" align="left">Maiorana (<xref ref-type="bibr" rid="B78">2020</xref>)</td>
<td valign="top" align="left">40 subjects (40)</td>
<td valign="top" align="left">RNN, CNN</td>
<td valign="top" align="left">Recognizing biometric</td>
<td valign="top" align="left">75.00%</td>
</tr> <tr>
<td valign="top" align="left">Gao et al. (<xref ref-type="bibr" rid="B45">2020b</xref>)</td>
<td valign="top" align="left">DEAP (32), SEED (15)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Recognizing emotion</td>
<td valign="top" align="left">90.63%</td>
</tr> <tr>
<td valign="top" align="left">Hwang et al. (<xref ref-type="bibr" rid="B57">2020</xref>)</td>
<td valign="top" align="left">SEED dataset (15)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Recognizing emotion</td>
<td valign="top" align="left">90.41%</td>
</tr> <tr>
<td valign="top" align="left">Gao et al. (<xref ref-type="bibr" rid="B44">2020a</xref>)</td>
<td valign="top" align="left">15 right-handed healthy students (15)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Recognizing emotion</td>
<td valign="top" align="left">92.44%</td>
</tr> <tr>
<td valign="top" align="left">Yang et al. (<xref ref-type="bibr" rid="B120">2020</xref>)</td>
<td valign="top" align="left">BCI competition IV dataset 1 (7)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Decoding MI EEG</td>
<td valign="top" align="left">86.40%</td>
</tr> <tr>
<td valign="top" align="left">Fahimi et al. (<xref ref-type="bibr" rid="B42">2019</xref>)</td>
<td valign="top" align="left">120 healthy subjects performed the Stroop color test (120)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Detecting attention</td>
<td valign="top" align="left">79.26%</td>
</tr> <tr>
<td valign="top" align="left">Tang et al. (<xref ref-type="bibr" rid="B107">2019</xref>)</td>
<td valign="top" align="left">BCI competition data IV 2a (9)</td>
<td valign="top" align="left">CNN&#x0002B;SAE</td>
<td valign="top" align="left">Classifying eMI task</td>
<td valign="top" align="left">79.90%</td>
</tr> <tr>
<td valign="top" align="left">Roy et al. (<xref ref-type="bibr" rid="B98">2019</xref>)</td>
<td valign="top" align="left">BCI competition IV 2b dataset (9)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying brain states</td>
<td valign="top" align="left">80.32%</td>
</tr> <tr>
<td valign="top" align="left">Fares et al. (<xref ref-type="bibr" rid="B43">2019</xref>)</td>
<td valign="top" align="left">ImageNet-EEG (1)</td>
<td valign="top" align="left">Bi-LSTM</td>
<td valign="top" align="left">Classifying image</td>
<td valign="top" align="left">97.30%</td>
</tr> <tr>
<td valign="top" align="left">Wilaiprasitporn et al. (<xref ref-type="bibr" rid="B118">2019</xref>)</td>
<td valign="top" align="left">DEAP dataset (32)</td>
<td valign="top" align="left">CNN, RNN</td>
<td valign="top" align="left">Identifying person</td>
<td valign="top" align="left">99.90%</td>
</tr> <tr>
<td valign="top" align="left">Qiao and Bi (<xref ref-type="bibr" rid="B94">2019</xref>)</td>
<td valign="top" align="left">BCI competition IV 2a dataset (9)</td>
<td valign="top" align="left">CNN&#x0002B;Bi-GRU</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">76.62%</td>
</tr> <tr>
<td valign="top" align="left">Zgallai et al. (<xref ref-type="bibr" rid="B123">2019</xref>)</td>
<td valign="top" align="left">10 volunteers (10)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">EEG-driven BCI smart wheelchair</td>
<td valign="top" align="left">70.00 (raw EEG), 96.00% (Fourier)</td>
</tr> <tr>
<td valign="top" align="left">Gao et al. (<xref ref-type="bibr" rid="B46">2019</xref>)</td>
<td valign="top" align="left">8 subjects in fatigue states (8)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Evaluating driver fatigue</td>
<td valign="top" align="left">97.37%</td>
</tr> <tr>
<td valign="top" align="left">Puengdang et al. (<xref ref-type="bibr" rid="B93">2019</xref>)</td>
<td valign="top" align="left">20 subjects (20)</td>
<td valign="top" align="left">LSTM</td>
<td valign="top" align="left">Authenticating person</td>
<td valign="top" align="left">91.44%</td>
</tr> <tr>
<td valign="top" align="left">Song et al. (<xref ref-type="bibr" rid="B103">2019</xref>)</td>
<td valign="top" align="left">BCI Competition IV dataset 2a (9)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">73.40%</td>
</tr> <tr>
<td valign="top" align="left">Zhao et al. (<xref ref-type="bibr" rid="B131">2019</xref>)</td>
<td valign="top" align="left">BCI Competition IV dataset 2a (9)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">Mean kappa: 0.64</td>
</tr> <tr>
<td valign="top" align="left">Chen et al. (<xref ref-type="bibr" rid="B30">2019b</xref>)</td>
<td valign="top" align="left">DEAP dataset (32)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Recognizing emotion</td>
<td valign="top" align="left">AUC: 99.88%</td>
</tr> <tr>
<td valign="top" align="left">Chen et al. (<xref ref-type="bibr" rid="B29">2019a</xref>)</td>
<td valign="top" align="left">157 subjects (157)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Identifying biometric</td>
<td valign="top" align="left">96.00%</td>
</tr> <tr>
<td valign="top" align="left">Dai et al. (<xref ref-type="bibr" rid="B34">2019</xref>)</td>
<td valign="top" align="left">BCI Competition IV dataset 2b (9)</td>
<td valign="top" align="left">CNN&#x0002B;VAE</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">Kappa = 0.60</td>
</tr> <tr>
<td valign="top" align="left">Amin et al. (<xref ref-type="bibr" rid="B15">2019b</xref>)</td>
<td valign="top" align="left">BCI Competition IV dataset 2a (9)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">75.7%</td>
</tr> <tr>
<td valign="top" align="left">Saha et al. (<xref ref-type="bibr" rid="B99">2019</xref>)</td>
<td valign="top" align="left">KARA (14)</td>
<td valign="top" align="left">CNN&#x0002B;LSTM</td>
<td valign="top" align="left">Categorizing phonology</td>
<td valign="top" align="left">77.90%</td>
</tr> <tr>
<td valign="top" align="left">Ozdemir et al. (<xref ref-type="bibr" rid="B87">2019</xref>)</td>
<td valign="top" align="left">DEAP dataset (32)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Estimating emotional state</td>
<td valign="top" align="left">95.96%</td>
</tr> <tr>
<td valign="top" align="left">Tiwari et al. (<xref ref-type="bibr" rid="B110">2021</xref>)</td>
<td valign="top" align="left">BCI competition V dataset (3), Emotiv dataset (16)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying left hand and right hand task</td>
<td valign="top" align="left">72.51% (BCI V), 72.00% (Emotiv)</td>
</tr> <tr>
<td valign="top" align="left">Dang et al. (<xref ref-type="bibr" rid="B35">2021</xref>)</td>
<td valign="top" align="left">CHB-MIT datasets (24)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Detecting epilepsy</td>
<td valign="top" align="left">99.56%</td>
</tr> <tr>
<td valign="top" align="left">Polat and &#x000D6;zerdem (<xref ref-type="bibr" rid="B92">2020</xref>)</td>
<td valign="top" align="left">BCI competition 2003 (1)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Detecting cursor movements</td>
<td valign="top" align="left">90.38%</td>
</tr> <tr>
<td valign="top" align="left">Chakladar et al. (<xref ref-type="bibr" rid="B28">2020</xref>)</td>
<td valign="top" align="left">STEW dataset (48)</td>
<td valign="top" align="left">Bi-LSTM</td>
<td valign="top" align="left">Estimating mental workload</td>
<td valign="top" align="left">82.57%</td>
</tr> <tr>
<td valign="top" align="left">Li F. et al. (<xref ref-type="bibr" rid="B71">2020</xref>)</td>
<td valign="top" align="left">BCI Competition IV 2b (9)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">83.20%</td>
</tr> <tr>
<td valign="top" align="left">Alazrai et al. (<xref ref-type="bibr" rid="B7">2019</xref>)</td>
<td valign="top" align="left">22 subjects (22)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Decoding MI tasks of the same hand</td>
<td valign="top" align="left">73.70%</td>
</tr> <tr>
<td valign="top" align="left">Liu Y. et al. (<xref ref-type="bibr" rid="B75">2020</xref>)</td>
<td valign="top" align="left">DEAP dataset (32)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Recognizing emotion</td>
<td valign="top" align="left">95.27%</td>
</tr> <tr>
<td valign="top" align="left">Arnau-Gonz&#x000E1;lez et al. (<xref ref-type="bibr" rid="B17">2017</xref>)</td>
<td valign="top" align="left">DREAMER dataset (23)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Identifying subject</td>
<td valign="top" align="left">94.01%</td>
</tr> <tr>
<td valign="top" align="left">Zhu et al. (<xref ref-type="bibr" rid="B132">2022</xref>)</td>
<td valign="top" align="left">MBT-42 (42), Med-62 (62)</td>
<td valign="top" align="left">ConvNet, 3D-CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">73.12% (MBT-42), 72.66% (Med-62)</td>
</tr> <tr>
<td valign="top" align="left">Mattioli et al. (<xref ref-type="bibr" rid="B81">2022</xref>)</td>
<td valign="top" align="left">EEG Motor Movement Dataset V 1.0.0 (109)</td>
<td valign="top" align="left">1D-CNN</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">99.38%</td>
</tr> <tr>
<td valign="top" align="left">Du and Liu (<xref ref-type="bibr" rid="B39">2022</xref>)</td>
<td valign="top" align="left">MRCP (12)</td>
<td valign="top" align="left">InceptionEEG-Net (CNN)</td>
<td valign="top" align="left">Classifying MI</td>
<td valign="top" align="left">AUC: 0.91%</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>The table summarizes each study's key features, namely: dataset, number of subjects, deep learning modality, BCI application, and classification result.</p>
</table-wrap-foot>
</table-wrap>
<sec>
<title>4.1. Data preprocessing</title>
<p>Due to the presence of artifacts and contamination, EEG data are still not widely used for large-scale BCI studies (Pedroni et al., <xref ref-type="bibr" rid="B89">2019</xref>). Although some deep learning studies for EEG-based BCI report using no preprocessing, preprocessing is important in most cases. Some works fold the preprocessing steps into the deep learning pipeline and call it an end-to-end framework (Antoniades et al., <xref ref-type="bibr" rid="B16">2018</xref>; Aznan et al., <xref ref-type="bibr" rid="B19">2018</xref>; Zhang et al., <xref ref-type="bibr" rid="B124">2021</xref>). Moreover, in some cases an additional CNN layer has been used for the preprocessing (Amin et al., <xref ref-type="bibr" rid="B14">2019a</xref>; Tang et al., <xref ref-type="bibr" rid="B107">2019</xref>).</p>
<p>In most studies, frequency-domain filters were used to limit the bandwidth of the EEG signal. This is useful when a specific frequency range is of interest, so that the rest can be safely ignored (Islam et al., <xref ref-type="bibr" rid="B61">2016</xref>; Kilicarslan et al., <xref ref-type="bibr" rid="B63">2016</xref>). In 30% of the studies, the signal was low-pass filtered below 45 Hz, i.e., below the typical low-gamma band. Grouping the filtered frequency ranges by task type and artifact reduction method shows that most studies paired an artifact removal technique with a restriction of the frequency range under study.</p>
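<p>As an illustration of such band-limiting (a minimal sketch, not any one study's pipeline; the sampling rate, channel count, and cutoffs below are hypothetical), a zero-phase band-pass filter below 45 Hz can be written with SciPy:</p>

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(data, fs, low=1.0, high=45.0, order=4):
    """Zero-phase band-pass filter applied along the time axis.

    data: (n_channels, n_samples) EEG array; fs: sampling rate in Hz.
    The 1-45 Hz band (below low gamma) is one common choice.
    """
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, data, axis=-1)  # filtfilt avoids phase distortion

# Synthetic 2-channel recording: 10 Hz alpha rhythm plus 60 Hz line noise.
fs = 250
t = np.arange(0, 4, 1 / fs)
eeg = np.vstack([np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)] * 2)
clean = bandpass_eeg(eeg, fs)  # 60 Hz component is strongly attenuated
```

Filtering both forward and backward (`filtfilt`) doubles the effective filter order while keeping event latencies intact, which matters for cue-locked BCI paradigms.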
<p>From our observation, 20% of the studies eliminated artifacts manually (Rammy et al., <xref ref-type="bibr" rid="B96">2020</xref>; Atilla and Alimardani, <xref ref-type="bibr" rid="B18">2021</xref>; Sundaresan et al., <xref ref-type="bibr" rid="B105">2021</xref>). Unexpected outliers, such as missing data or prominent EEG artifacts, are easy to spot visually, but it can be hard to distinguish channels that are consistently noisy from channels that are noisy only intermittently. Furthermore, because manual processing is largely subjective, it is difficult for other researchers to replicate the methods. In addition, 10% of the studies did not remove EEG artifacts at all. Independent component analysis (ICA) (Delorme et al., <xref ref-type="bibr" rid="B36">2007</xref>) and the discrete wavelet transform (DWT) (Kline et al., <xref ref-type="bibr" rid="B64">2015</xref>) were the most common artifact removal methods in the remaining two-thirds of the analyzed studies.</p>
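<p>The ICA approach can be sketched on synthetic data as follows; the mixing matrix, blink waveform, and kurtosis-based component selection here are illustrative assumptions, not a reconstruction of any cited study's method:</p>

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs, n = 250, 2500
t = np.arange(n) / fs

# Two synthetic sources: a neural-like oscillation and a blink-like artifact.
neural = np.sin(2 * np.pi * 10 * t)
blink = (np.abs(t % 1.0 - 0.5) < 0.05).astype(float) * 5.0  # sparse, high-amplitude
sources = np.vstack([neural, blink])
mixing = np.array([[1.0, 0.8], [0.6, 1.0]])  # how sources project onto 2 channels
eeg = mixing @ sources + 0.01 * rng.standard_normal((2, n))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg.T).T  # (n_components, n_samples)

# Flag the blink component by its kurtosis (artifacts are "peaky"),
# zero it out, and project back to channel space.
centered = components - components.mean(axis=1, keepdims=True)
kurt = (centered ** 4).mean(axis=1) / components.var(axis=1) ** 2
components[np.argmax(kurt)] = 0.0
cleaned = ica.inverse_transform(components.T).T
```

In practice, component selection is the subjective step: automated criteria (kurtosis, correlation with EOG channels) differ across studies, which is one source of the replication difficulty noted above.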
<p>EEG electrodes also pick up unwanted electrophysiological signals from eye blinks and neck muscles (Crespo-Garcia et al., <xref ref-type="bibr" rid="B33">2008</xref>; Amin et al., <xref ref-type="bibr" rid="B15">2019b</xref>). Additionally, motion artifacts arise from cable movement and electrode displacement while the individual is moving (Arnau-Gonz&#x000E1;lez et al., <xref ref-type="bibr" rid="B17">2017</xref>; Chen et al., <xref ref-type="bibr" rid="B29">2019a</xref>; Gao et al., <xref ref-type="bibr" rid="B46">2019</xref>). Many studies have addressed the detection and removal of EEG artifacts (Nathan and Contreras-Vidal, <xref ref-type="bibr" rid="B83">2016</xref>), but this is not the primary focus of our review. In summary, each study handles artifacts in one of three ways: manual removal, automated removal, or no removal.</p></sec>
<sec>
<title>4.2. Datasets</title>
<p>One of the main limitations of classical EEG-based BCI studies is the small number of participating subjects. This review focuses on EEG-based datasets, and that scope was used as the keyword set for locating relevant research articles on Google Scholar and ResearchGate. Using these criteria, more than 100 research studies were found on the two websites. Among these, around 47% were conducted on BCI competition datasets, while 9%, 16%, and 7% of the studies used the DRYAD, SEED-VIG, and EEG MI datasets, respectively (<xref ref-type="fig" rid="F5">Figure 5</xref>).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Distributions of datasets that are explored for EEG-based BCI applications.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-16-1006763-g0005.tif"/>
</fig>
<p>Deep learning has enabled larger datasets and more rigorous experiments in BCI. &#x0201C;How much data is enough data?&#x0201D; remains a significant question when using DL with EEG data. We examined several descriptive dimensions to investigate this question: the number of participants, the amount of EEG data collected, and the tasks in the datasets. A few studies make use of their own collected datasets (Tang et al., <xref ref-type="bibr" rid="B108">2017</xref>; Vilamala et al., <xref ref-type="bibr" rid="B114">2017</xref>; Antoniades et al., <xref ref-type="bibr" rid="B16">2018</xref>; Aznan et al., <xref ref-type="bibr" rid="B19">2018</xref>; Behncke et al., <xref ref-type="bibr" rid="B22">2018</xref>; El-Fiqi et al., <xref ref-type="bibr" rid="B40">2018</xref>; Nguyen and Chung, <xref ref-type="bibr" rid="B84">2018</xref>; Alazrai et al., <xref ref-type="bibr" rid="B7">2019</xref>; Chen et al., <xref ref-type="bibr" rid="B30">2019b</xref>; Fahimi et al., <xref ref-type="bibr" rid="B42">2019</xref>; Hussein et al., <xref ref-type="bibr" rid="B56">2019</xref>; Zgallai et al., <xref ref-type="bibr" rid="B123">2019</xref>; Gao et al., <xref ref-type="bibr" rid="B45">2020b</xref>; Le&#x000F3;n et al., <xref ref-type="bibr" rid="B70">2020</xref>; Maiorana, <xref ref-type="bibr" rid="B78">2020</xref>; Penchina et al., <xref ref-type="bibr" rid="B90">2020</xref>; Tortora et al., <xref ref-type="bibr" rid="B111">2020</xref>; Atilla and Alimardani, <xref ref-type="bibr" rid="B18">2021</xref>; Cai et al., <xref ref-type="bibr" rid="B25">2021</xref>; Cho et al., <xref ref-type="bibr" rid="B31">2021</xref>; Mai et al., <xref ref-type="bibr" rid="B77">2021</xref>; Mammone et al., <xref ref-type="bibr" rid="B79">2021</xref>; Petoku and Capi, <xref ref-type="bibr" rid="B91">2021</xref>; Reddy et al., <xref ref-type="bibr" rid="B97">2021</xref>; Shoeibi et al., <xref ref-type="bibr" rid="B102">2021</xref>; Sundaresan et al., <xref ref-type="bibr" rid="B105">2021</xref>; Ak et al., <xref ref-type="bibr" rid="B5">2022</xref>). However, most deep learning studies have been conducted on publicly available EEG datasets, such as:</p>
<list list-type="bullet">
<list-item><p>The dataset used to validate the classification method and signal processing for brain&#x02013;computer interfaces was obtained from the BCI competition (Tabar and Halici, <xref ref-type="bibr" rid="B106">2016</xref>; Amin et al., <xref ref-type="bibr" rid="B15">2019b</xref>; Dai et al., <xref ref-type="bibr" rid="B34">2019</xref>; Olivas-Padilla and Chacon-Murguia, <xref ref-type="bibr" rid="B86">2019</xref>; Qiao and Bi, <xref ref-type="bibr" rid="B94">2019</xref>; Roy et al., <xref ref-type="bibr" rid="B98">2019</xref>; Song et al., <xref ref-type="bibr" rid="B103">2019</xref>; Tang et al., <xref ref-type="bibr" rid="B107">2019</xref>; Tayeb et al., <xref ref-type="bibr" rid="B109">2019</xref>; Zhao et al., <xref ref-type="bibr" rid="B131">2019</xref>; Li Y. et al., <xref ref-type="bibr" rid="B73">2020</xref>; Miao et al., <xref ref-type="bibr" rid="B82">2020</xref>; Polat and &#x000D6;zerdem, <xref ref-type="bibr" rid="B92">2020</xref>; Rammy et al., <xref ref-type="bibr" rid="B96">2020</xref>; Yang et al., <xref ref-type="bibr" rid="B120">2020</xref>; Deng et al., <xref ref-type="bibr" rid="B37">2021</xref>; Huang et al., <xref ref-type="bibr" rid="B54">2021</xref>, <xref ref-type="bibr" rid="B55">2022</xref>; Tiwari et al., <xref ref-type="bibr" rid="B110">2021</xref>; Zhang et al., <xref ref-type="bibr" rid="B124">2021</xref>). This dataset comprises EEG data recorded in a cue-based BCI paradigm with four classes: left hand (class 1), right hand (class 2), both feet (class 3), and tongue (class 4). For each subject, two sessions were recorded on different days. Each session consisted of six runs separated by short breaks; with 48 trials per run, each session comprises 288 trials.</p></list-item>
<list-item><p>DRYAD dataset contains five studies that investigate natural speech understanding using a diversity of activities along with acoustic, cinematic, and envisioned verbal sensations (Amber et al., <xref ref-type="bibr" rid="B13">2019</xref>).</p></list-item>
<list-item><p>CHB-MIT dataset contains EEG recordings from children with intractable seizures (Dang et al., <xref ref-type="bibr" rid="B35">2021</xref>). After withdrawal of anti-seizure medication, patients were monitored for up to several days to characterize their seizures and assess their candidacy for surgery. The dataset contains 23 patients separated into 24 cases (one patient has two recordings, 1.5 years apart), with 969 h of scalp EEG comprising 173 seizures of various types (clonic, atonic, and tonic).</p></list-item>
<list-item><p>DEAP dataset (Koelstra et al., <xref ref-type="bibr" rid="B66">2011</xref>) includes 32 individuals who saw 1-min long music video snippets and judged arousal/valence/like&#x02013;dislike/dominance/familiarity, as well as the frontal facial recording of 22 out of 32 subjects (Chen et al., <xref ref-type="bibr" rid="B30">2019b</xref>; Ozdemir et al., <xref ref-type="bibr" rid="B87">2019</xref>; Wilaiprasitporn et al., <xref ref-type="bibr" rid="B118">2019</xref>; Aldayel et al., <xref ref-type="bibr" rid="B8">2020</xref>; Gao et al., <xref ref-type="bibr" rid="B44">2020a</xref>; Liu J. et al., <xref ref-type="bibr" rid="B74">2020</xref>).</p></list-item>
<list-item><p>The SEED-VIG dataset pairs EEG recordings with vigilance annotations collected during a simulated driving task; the recordings comprise 18 electrode channels and are accompanied by eye-tracking data (Ko et al., <xref ref-type="bibr" rid="B65">2020</xref>).</p></list-item>
<list-item><p>The SEED dataset, in which EEG was recorded from 62 channels while 15 participants watched short videos eliciting positive, negative, or neutral emotions (Gao et al., <xref ref-type="bibr" rid="B44">2020a</xref>; Hwang et al., <xref ref-type="bibr" rid="B57">2020</xref>; Liu J. et al., <xref ref-type="bibr" rid="B74">2020</xref>).</p></list-item>
<list-item><p>The STEW dataset includes raw EEG data from 48 participants who performed a multitasking workload experiment based on the SIMKAP test (Chakladar et al., <xref ref-type="bibr" rid="B28">2020</xref>).</p></list-item>
<list-item><p>In this dataset, a participant views a randomly chosen picture (selected from the 14k pictures of the ImageNet ILSVRC2013 training set) for 3 s while their EEG signals are recorded; more than 70,000 samples are included (Fares et al., <xref ref-type="bibr" rid="B43">2019</xref>).</p></list-item>
</list></sec>
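As a concrete illustration of how such cue-based recordings are typically prepared for deep learning, the sketch below segments a continuous multichannel recording into fixed-length trials around cue onsets. The sampling rate, trial window, and all names are illustrative assumptions, not specifications of any dataset listed above.

```python
# Minimal sketch: epoching a continuous EEG recording into cue-aligned trials.
# The 250 Hz rate and 4-s window mirror common motor-imagery setups, but are
# illustrative assumptions only.

FS = 250           # sampling rate in Hz (assumed)
TRIAL_SECONDS = 4  # window length after each cue (assumed)

def epoch_trials(eeg, cue_onsets, fs=FS, trial_seconds=TRIAL_SECONDS):
    """Cut a (channels x samples) recording into (channels x window) trials."""
    window = fs * trial_seconds
    trials = []
    for onset in cue_onsets:
        if onset + window <= len(eeg[0]):          # keep only complete windows
            trials.append([ch[onset:onset + window] for ch in eeg])
    return trials

# Toy recording: 3 channels, 30 s of samples (values are just sample indices).
eeg = [list(range(30 * FS)) for _ in range(3)]
cues = [0, 2000, 5000, 7400]                       # cue onsets in samples
trials = epoch_trials(eeg, cues)

print(len(trials))        # 3 complete trials (the cue at 7400 is truncated)
print(len(trials[0][0]))  # 1000 samples per channel (4 s at 250 Hz)
```

The resulting (trials, channels, samples) structure is what most of the deep learning pipelines cited above consume as input.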
<sec>
<title>4.3. Deep learning modality</title>
<p>Deep Neural Networks (DNNs) are highly expressive and can therefore learn features from raw or minimally preprocessed data, reducing the need for domain-specific preprocessing and feature-extraction pipelines; moreover, DNN-learned features may be more discriminative or expressive than human-designed ones. Second, as in many domains where DL has surpassed the previous state of the art, it has the potential to improve the effectiveness of other analysis and classification methods. Third, DL facilitates tasks such as generative modeling and domain adaptation, which have so far been attempted less often with EEG data. Deep learning techniques have made it feasible to synthesize high-dimensional structured data, such as images and audio.</p>
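The "features from raw data" point can be made concrete with a minimal sketch: a temporal convolution applied directly to a raw EEG channel, the operation that forms the first layer of many EEG DNNs. The kernel here is a fixed moving average purely for illustration (in a trained network its weights would be learned), and the signal parameters are assumptions.

```python
import numpy as np

# Sketch: the first layer of a typical EEG DNN is a temporal convolution
# applied directly to the raw signal, with no hand-crafted features.
# The kernel below is a fixed 100-ms moving average for illustration only.

def temporal_conv(signal, kernel):
    """1-D 'valid' convolution of a raw EEG channel with a temporal kernel."""
    return np.convolve(signal, kernel, mode="valid")

rng = np.random.default_rng(0)
fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(2 * fs) / fs                 # 2 s of synthetic "EEG"
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

kernel = np.ones(25) / 25                  # 25 samples = 100 ms at 250 Hz
feature = temporal_conv(raw, kernel)

print(raw.size, feature.size)              # 500 476 ('valid' shrinks by 24)
```

Stacking many such learned kernels, followed by nonlinearities and pooling, is what lets a DNN replace the hand-designed filter banks of classical EEG pipelines.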
<p>Deep learning-based methods have also been used to synthesize high-dimensional structured content such as images and speech. Generative approaches can help researchers inspect intermediate representations or augment data. Deep neural networks combined with techniques such as adversarial training make it possible to learn domain-invariant representations while retaining task-relevant information for domain adaptation. Similar methods can be applied to EEG data to obtain more robust representations and, as a result, improve the performance of EEG-based models across a wide range of subjects and tasks.</p>
<p>Various deep learning algorithms have been employed in EEG-based BCI applications, with CNN clearly being the most frequent. For example, Arnau-Gonz&#x000E1;lez et al. (<xref ref-type="bibr" rid="B17">2017</xref>), Tang et al. (<xref ref-type="bibr" rid="B108">2017</xref>), Vilamala et al. (<xref ref-type="bibr" rid="B114">2017</xref>), Antoniades et al. (<xref ref-type="bibr" rid="B16">2018</xref>), Aznan et al. (<xref ref-type="bibr" rid="B19">2018</xref>), Behncke et al. (<xref ref-type="bibr" rid="B22">2018</xref>), Dose et al. (<xref ref-type="bibr" rid="B38">2018</xref>), El-Fiqi et al. (<xref ref-type="bibr" rid="B40">2018</xref>), Nguyen and Chung (<xref ref-type="bibr" rid="B84">2018</xref>), V&#x000F6;lker et al. (<xref ref-type="bibr" rid="B115">2018</xref>), Alazrai et al. (<xref ref-type="bibr" rid="B7">2019</xref>), Amber et al. (<xref ref-type="bibr" rid="B13">2019</xref>), Amin et al. (<xref ref-type="bibr" rid="B15">2019b</xref>), Chen et al. (<xref ref-type="bibr" rid="B29">2019a</xref>,<xref ref-type="bibr" rid="B30">b</xref>), Fahimi et al. (<xref ref-type="bibr" rid="B42">2019</xref>), Gao et al. (<xref ref-type="bibr" rid="B46">2019</xref>), Olivas-Padilla and Chacon-Murguia (<xref ref-type="bibr" rid="B86">2019</xref>), Ozdemir et al. (<xref ref-type="bibr" rid="B87">2019</xref>), Roy et al. (<xref ref-type="bibr" rid="B98">2019</xref>), Song et al. (<xref ref-type="bibr" rid="B103">2019</xref>), Tayeb et al. (<xref ref-type="bibr" rid="B109">2019</xref>), Zgallai et al. (<xref ref-type="bibr" rid="B123">2019</xref>), Zhao et al. (<xref ref-type="bibr" rid="B131">2019</xref>), Aldayel et al. (<xref ref-type="bibr" rid="B8">2020</xref>), Gao et al. (<xref ref-type="bibr" rid="B44">2020a</xref>,<xref ref-type="bibr" rid="B45">b</xref>), Hwang et al. (<xref ref-type="bibr" rid="B57">2020</xref>), Ko et al. (<xref ref-type="bibr" rid="B65">2020</xref>), Li Y. et al. (<xref ref-type="bibr" rid="B73">2020</xref>), Liu J. et al.
(<xref ref-type="bibr" rid="B74">2020</xref>), Miao et al. (<xref ref-type="bibr" rid="B82">2020</xref>), Oh et al. (<xref ref-type="bibr" rid="B85">2020</xref>), Polat and &#x000D6;zerdem (<xref ref-type="bibr" rid="B92">2020</xref>), Atilla and Alimardani (<xref ref-type="bibr" rid="B18">2021</xref>), Cai et al. (<xref ref-type="bibr" rid="B25">2021</xref>), Dang et al. (<xref ref-type="bibr" rid="B35">2021</xref>), Deng et al. (<xref ref-type="bibr" rid="B37">2021</xref>), Huang et al. (<xref ref-type="bibr" rid="B54">2021</xref>), Ieracitano et al. (<xref ref-type="bibr" rid="B58">2021</xref>), Mai et al. (<xref ref-type="bibr" rid="B77">2021</xref>), Mammone et al. (<xref ref-type="bibr" rid="B79">2021</xref>), Petoku and Capi (<xref ref-type="bibr" rid="B91">2021</xref>), Reddy et al. (<xref ref-type="bibr" rid="B97">2021</xref>), Tiwari et al. (<xref ref-type="bibr" rid="B110">2021</xref>), Zhang et al. (<xref ref-type="bibr" rid="B124">2021</xref>), Ak et al. (<xref ref-type="bibr" rid="B5">2022</xref>), and Huang et al. (<xref ref-type="bibr" rid="B55">2022</xref>) have explored deep learning-based algorithms. However, more recent BCI studies have implemented other deep learning modalities, including:</p>
<list list-type="bullet">
<list-item><p>Long short-term memory (LSTM) (Zeng et al., <xref ref-type="bibr" rid="B122">2018</xref>; Fares et al., <xref ref-type="bibr" rid="B43">2019</xref>; Hussein et al., <xref ref-type="bibr" rid="B56">2019</xref>; Puengdang et al., <xref ref-type="bibr" rid="B93">2019</xref>; Saha et al., <xref ref-type="bibr" rid="B99">2019</xref>; Chakladar et al., <xref ref-type="bibr" rid="B28">2020</xref>; Penchina et al., <xref ref-type="bibr" rid="B90">2020</xref>; Rammy et al., <xref ref-type="bibr" rid="B96">2020</xref>; Tortora et al., <xref ref-type="bibr" rid="B111">2020</xref>; Cho et al., <xref ref-type="bibr" rid="B31">2021</xref>; Shoeibi et al., <xref ref-type="bibr" rid="B102">2021</xref>),</p></list-item>
<list-item><p>Recurrent neural network (RNN) (Wilaiprasitporn et al., <xref ref-type="bibr" rid="B118">2019</xref>; Le&#x000F3;n et al., <xref ref-type="bibr" rid="B70">2020</xref>; Li F. et al., <xref ref-type="bibr" rid="B71">2020</xref>; Penchina et al., <xref ref-type="bibr" rid="B90">2020</xref>; Mai et al., <xref ref-type="bibr" rid="B77">2021</xref>; Sundaresan et al., <xref ref-type="bibr" rid="B105">2021</xref>), and</p></list-item>
<list-item><p>Autoencoders (AE) and variational AE (VAE) (Tabar and Halici, <xref ref-type="bibr" rid="B106">2016</xref>; Dai et al., <xref ref-type="bibr" rid="B34">2019</xref>; Tang et al., <xref ref-type="bibr" rid="B107">2019</xref>).</p></list-item>
</list></sec></sec>
<sec id="s5">
<title>5. Results and discussion</title>
<sec>
<title>5.1. Dataset-specific studies</title>
<p>Different classification algorithms yield different maximum accuracy values on different datasets, as shown in <xref ref-type="table" rid="T3">Table 3</xref>. The highest accuracy was obtained by an LSTM on the BCI competition dataset. All studies using this dataset achieved accuracies above 80%, making it the highest-accuracy dataset so far. <xref ref-type="fig" rid="F6">Figure 6</xref> shows the highest classification accuracy reported for each algorithm on the BCI competition dataset across studies.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Maximum accuracy obtained from different algorithms.</p></caption>
<table frame="box" rules="all">
<thead><tr>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>References</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Dataset</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Max. accuracy (%)</bold></th>
<th valign="top" align="left" style="background-color:#919497;color:#ffffff"><bold>Algorithms used</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Rammy et al. (<xref ref-type="bibr" rid="B96">2020</xref>)</td>
<td valign="top" align="left">BCI competition IV</td>
<td valign="top" align="left">100</td>
<td valign="top" align="left">LSTM</td>
</tr> <tr>
<td valign="top" align="left">Wilaiprasitporn et al. (<xref ref-type="bibr" rid="B118">2019</xref>)</td>
<td valign="top" align="left">DEAP</td>
<td valign="top" align="left">99.90</td>
<td valign="top" align="left">CNN, RNN</td>
</tr> <tr>
<td valign="top" align="left">Amber et al. (<xref ref-type="bibr" rid="B13">2019</xref>)</td>
<td valign="top" align="left">DRYAD</td>
<td valign="top" align="left">99.60</td>
<td valign="top" align="left">CNN</td>
</tr> <tr>
<td valign="top" align="left">Dang et al. (<xref ref-type="bibr" rid="B35">2021</xref>)</td>
<td valign="top" align="left">CHB-MIT</td>
<td valign="top" align="left">99.56</td>
<td valign="top" align="left">CNN</td>
</tr> <tr>
<td valign="top" align="left">Li Y. et al. (<xref ref-type="bibr" rid="B73">2020</xref>)</td>
<td valign="top" align="left">EEGMMIDB</td>
<td valign="top" align="left">97.36</td>
<td valign="top" align="left">R-CNN</td>
</tr> <tr>
<td valign="top" align="left">Fares et al. (<xref ref-type="bibr" rid="B43">2019</xref>)</td>
<td valign="top" align="left">ImageNet-EEG</td>
<td valign="top" align="left">97.30</td>
<td valign="top" align="left">Bi-LSTM</td>
</tr> <tr>
<td valign="top" align="left">Hwang et al. (<xref ref-type="bibr" rid="B57">2020</xref>)</td>
<td valign="top" align="left">SEED</td>
<td valign="top" align="left">96.77</td>
<td valign="top" align="left">CNN</td>
</tr> <tr>
<td valign="top" align="left">Arnau-Gonz&#x000E1;lez et al. (<xref ref-type="bibr" rid="B17">2017</xref>)</td>
<td valign="top" align="left">DREAMER</td>
<td valign="top" align="left">94.01</td>
<td valign="top" align="left">CNN</td>
</tr> <tr>
<td valign="top" align="left">Huang et al. (<xref ref-type="bibr" rid="B55">2022</xref>)</td>
<td valign="top" align="left">Physionet</td>
<td valign="top" align="left">92.00</td>
<td valign="top" align="left">CNN</td>
</tr> <tr>
<td valign="top" align="left">Chakladar et al. (<xref ref-type="bibr" rid="B28">2020</xref>)</td>
<td valign="top" align="left">STEW</td>
<td valign="top" align="left">82.57</td>
<td valign="top" align="left">Bi-LSTM</td>
</tr> <tr>
<td valign="top" align="left">V&#x000F6;lker et al. (<xref ref-type="bibr" rid="B115">2018</xref>)</td>
<td valign="top" align="left">Flanker task</td>
<td valign="top" align="left">81.70</td>
<td valign="top" align="left">CNN</td>
</tr> <tr>
<td valign="top" align="left">Saha et al. (<xref ref-type="bibr" rid="B99">2019</xref>)</td>
<td valign="top" align="left">KARA</td>
<td valign="top" align="left">77.90</td>
<td valign="top" align="left">CNN&#x0002B;LSTM</td>
</tr> <tr>
<td valign="top" align="left">Tiwari et al. (<xref ref-type="bibr" rid="B110">2021</xref>)</td>
<td valign="top" align="left">Emotiv</td>
<td valign="top" align="left">72.00</td>
<td valign="top" align="left">CNN</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>A comparative schematic of accuracies by various deep learning approaches [i.e., Convolutional neural network (CNN) (Islam et al., <xref ref-type="bibr" rid="B59">2021</xref>), long short-term memory (LSTM), stacked autoencoder (SAE), and variational autoencoder (VAE)] on the BCI competition dataset.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-16-1006763-g0006.tif"/>
</fig>
<p>For the DEAP dataset (Koelstra et al., <xref ref-type="bibr" rid="B66">2011</xref>), all studies achieved accuracies of roughly 90% or higher (<xref ref-type="fig" rid="F7">Figure 7</xref>), making it the most reliable dataset so far. Unlike the BCI competition dataset, however, it has received comparatively little attention in terms of deep learning applications. Likewise, only a few works use the SEED dataset, but those published have achieved over 90% accuracy based on CNN or CNN&#x0002B;SAE (Gao et al., <xref ref-type="bibr" rid="B44">2020a</xref>; Hwang et al., <xref ref-type="bibr" rid="B57">2020</xref>; Liu J. et al., <xref ref-type="bibr" rid="B74">2020</xref>). More sophisticated algorithms could be applied to this dataset for further exploration.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>A graph of accuracies by various deep learning approaches on the DEAP dataset.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-16-1006763-g0007.tif"/>
</fig>
<p>Due to insufficient work on the remaining datasets shown in <xref ref-type="fig" rid="F8">Figure 8</xref>, we cannot draw conclusions about them. Whether accuracy can be improved on these datasets is a question worth pursuing in future work.</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>The accuracies of the SEED dataset based on CNN or CNN&#x0002B;SAE.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-16-1006763-g0008.tif"/>
</fig></sec>
<sec>
<title>5.2. Deep learning models for BCI studies</title>
<p>Among the 110 publications examined in this study, discriminative models, particularly CNN, are used most frequently. This is expected, since almost all BCI problems can be cast as classification problems. More than 75% of the models are powered by CNN algorithms, which we can summarize as follows: (i) CNN can exploit EEG data to discover hidden features and spatial correlations that are useful for classification; as a result, some studies use CNN structures directly for classification, while others use them for feature engineering; (ii) CNN has had considerable success in various research areas (especially in imaging and computer vision), making it exceedingly well-known and simple to apply through publicly available code. Conveniently, several BCI techniques naturally produce two-dimensional images that CNN can process, and EEG data can likewise be converted into two-dimensional images for additional processing by CNN.</p>
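One common way to obtain the two-dimensional inputs mentioned above is to convert each EEG channel into a time-frequency image via a short-time Fourier transform. The sketch below uses a synthetic 10 Hz oscillation and arbitrary window parameters; these are illustrative assumptions, not the settings of any cited study.

```python
import numpy as np

# Sketch: turning a 1-D EEG channel into a 2-D time-frequency "image" that a
# CNN can consume. Window length (64 samples) and hop (32) are illustrative.

def spectrogram(signal, win=64, hop=32):
    """Return a (frequency x time) magnitude array from a 1-D signal."""
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    mags = [np.abs(np.fft.rfft(f * np.hanning(win))) for f in frames]
    return np.stack(mags, axis=1)          # rows: frequency bins, cols: frames

fs = 128
t = np.arange(4 * fs) / fs                 # 4 s of synthetic "EEG"
x = np.sin(2 * np.pi * 10 * t)             # 10 Hz alpha-band oscillation

img = spectrogram(x)
print(img.shape)                           # (33, 15): rfft bins x time frames
```

With one such image per channel, a trial becomes a (channels, frequency, time) tensor, which maps directly onto the multi-channel image format CNNs were designed for.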
<p>On the contrary, only 15% of the model-based articles used a recurrent neural network (RNN), even though RNNs are well suited to temporal feature learning. One likely reason is that an RNN takes a long time to process a long sequence, and EEG signals are typically long sequences: a 10-s segment sampled at 250 Hz, for example, already contains 2,500 time points. Moreover, an RNN can take more than 25 times as long as a CNN to train on a sequence of 2,500 points.</p>
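A common workaround for the long-sequence cost described above is to split each recording into shorter, downsampled sub-sequences before feeding an RNN. The sketch below shows the bookkeeping; the window count and decimation factor are illustrative choices, not a prescription from the cited studies.

```python
# Sketch: shortening a long EEG sequence before RNN training by splitting it
# into sub-windows and decimating. The 2,500-point length corresponds to a
# 10-s segment at 250 Hz; the factor-of-5 decimation is an arbitrary example.

def shorten(sequence, n_windows=5, decimate=5):
    """Split a long sequence into n_windows sub-sequences, keeping every
    `decimate`-th sample, so each RNN input is far shorter than the original."""
    step = len(sequence) // n_windows
    return [sequence[i * step:(i + 1) * step:decimate] for i in range(n_windows)]

long_seq = list(range(2500))          # one 10-s segment at 250 Hz
subs = shorten(long_seq)

print(len(subs), len(subs[0]))        # 5 windows of 100 points each
```

Each 100-point sub-sequence is then a tractable RNN input, at the cost of discarding fine temporal detail.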
<p>Furthermore, among the representative models, the deep belief network (DBN), particularly the DBN built from restricted Boltzmann machines (DBN-RBM), is the most often used model for feature extraction. DBN is commonly utilized in BCI for two reasons: (1) it is an efficient way to learn the top-down generative parameters that capture how variables in one layer depend on variables in the layer above; (2) the values of the latent variables in each layer can be inferred by a single bottom-up pass that starts with an observed data vector in the bottom layer and uses the generative weights in the reverse direction. However, most of the work using the DBN-RBM model was published before 2018, indicating that DBN has since fallen out of favor. Before 2018, researchers typically used DBN for feature learning followed by a non-deep-learning classifier; more recent studies instead use CNN or hybrid models for both feature learning and classification.</p>
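The bottom-up inference and generative update described above can be sketched as a single contrastive-divergence (CD-1) step for a Bernoulli RBM, the building block stacked to form a DBN. All sizes and data below are toy values, and the learning rate is an arbitrary assumption.

```python
import numpy as np

# Sketch: one CD-1 update for a Bernoulli RBM (the unit stacked into a DBN).
# A real DBN for EEG would train such layers greedily on feature vectors.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 update: bottom-up inference, sampled hidden state, top-down
    reconstruction, then a gradient step on weights and biases."""
    p_h0 = sigmoid(v0 @ W + b_hid)                   # bottom-up pass
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_vis)                 # top-down reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

n_vis, n_hid = 6, 3
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)

batch = rng.integers(0, 2, size=(4, n_vis)).astype(float)  # toy binary inputs
W, b_vis, b_hid = cd1_step(batch, W, b_vis, b_hid)
print(W.shape)    # (6, 3)
```

Stacking trained RBMs, each layer's hidden activities becoming the next layer's visible input, yields the DBN feature extractor discussed above.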
<p>Finally, nine articles propose hybrid models for BCI research, of which combinations of RNN and CNN account for approximately a third. Integrating RNN and CNN for joint temporal and spatial feature learning is logical, given that the two architectures are renowned for their temporal and spatial feature extraction capabilities, respectively. Combining representative and discriminative models is another kind of hybrid, and is easy to motivate: the former extracts features while the latter performs classification. Nine articles use this form of hybrid deep learning model, covering almost all types of BCI signals. In addition, 12 studies have presented other hybrid models, including combinations of two discriminative models. Several studies, for instance, have advocated combining CNN with MLP, in which the CNN structure extracts spatial features that are then passed to an MLP for classification.</p>
<sec>
<title>5.3. BCI applications and deep learning</title>
<p>Deep learning-based BCI systems are mostly used in the healthcare industry to identify and diagnose neurological and mental disorders, including epilepsy, Alzheimer&#x00027;s disease, and other conditions (Dose et al., <xref ref-type="bibr" rid="B38">2018</xref>). First, research on sleep-stage recognition from spontaneous sleep EEG is used to identify sleep disorders (Vallabhaneni et al., <xref ref-type="bibr" rid="B113">2021</xref>); conveniently, researchers do not need to recruit patients with sleep problems, since sleep EEG signals are easily collected from healthy people. The diagnosis of epileptic seizures has also garnered a great deal of interest. The majority of seizure detection relies on spontaneous EEG and clinical signs (Antoniades et al., <xref ref-type="bibr" rid="B16">2018</xref>; Hussein et al., <xref ref-type="bibr" rid="B56">2019</xref>; Dang et al., <xref ref-type="bibr" rid="B35">2021</xref>; Shoeibi et al., <xref ref-type="bibr" rid="B102">2021</xref>). CNN and RNN are common deep learning models in this context, as are hybrid models combining the two. Several methods (Turner et al., <xref ref-type="bibr" rid="B112">2014</xref>) combined deep learning models for feature extraction with classical classifiers for detection; for example, to diagnose seizures, researchers used a VAE for feature engineering followed by an SVM.</p>
<p>Smart environments are a possible future application scenario for BCI. With the rise of the Internet of Things (IoT), BCI can be linked to a growing number of smart settings. For instance, an assistive robot may be used in a smart house (Zhang et al., <xref ref-type="bibr" rid="B130">2018c</xref>) in which the robot is controlled by brain signals. In addition, Behncke et al. (<xref ref-type="bibr" rid="B22">2018</xref>) examined how to operate a robot using visually evoked spontaneous EEG and fNIRS data. BCI-controlled exoskeletons might assist individuals with compromised lower-limb motor control in walking and everyday activities (Kwak et al., <xref ref-type="bibr" rid="B68">2017</xref>). In the future, research on brain-controlled equipment may be useful for developing smart homes and smart hospitals for older adults and people with motor disabilities.</p>
<p>In comparison to other human&#x02013;machine interface approaches, the greatest benefit of BCI is that it allows patients who have lost most motor functions, such as speaking, to interact with the outside world (Nguyen and Chung, <xref ref-type="bibr" rid="B84">2018</xref>). Deep learning technology has considerably enhanced the efficiency of brain-signal-based communication. The P300 speller is a common paradigm that allows individuals to type without using the motor system, turning the user&#x00027;s intent into text (Cecotti and Graser, <xref ref-type="bibr" rid="B27">2010</xref>). In addition, Zhang et al. (<xref ref-type="bibr" rid="B129">2018b</xref>) suggested a hybrid model combining RNN, CNN, and AE to extract relevant characteristics from MI EEG and detect the letter the user intends to write. The proposed interface consists of 27 characters (the 26 English letters and the space bar) split, in the first interface, into three blocks of nine characters each. There are three possible choices, and each leads to a separate sub-interface containing nine characters.</p>
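The two-step selection logic of the 27-character interface described above can be sketched as follows. The block layout and command encoding are a plausible reading of the description, not the exact implementation of the cited study, and the decoding of EEG into command indices is omitted entirely.

```python
import string

# Sketch of a two-step speller: 27 symbols (26 letters + space) arranged as
# 3 blocks of 9. The first decoded command selects a block; the second
# selects a character within it. Decoding EEG into commands is out of scope.

SYMBOLS = list(string.ascii_uppercase) + [" "]     # 27 symbols
BLOCKS = [SYMBOLS[i * 9:(i + 1) * 9] for i in range(3)]

def select(block_cmd, char_cmd):
    """Map two decoded commands (block 0-2, character 0-8) to a symbol."""
    return BLOCKS[block_cmd][char_cmd]

print(select(0, 0))   # 'A'  (first block, first character)
print(select(2, 8))   # ' '  (last block, last character: the space bar)
```

This hierarchy reduces a 27-way decision to two 3-way or 9-way decisions, which are easier to decode reliably from EEG.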
<p>A prominent topic of interest for BCI researchers is the security industry. A security problem may be broken down into authentication (also known as &#x0201C;verification&#x0201D;) and identification (also known as &#x0201C;recognition&#x0201D;) components (Arnau-Gonz&#x000E1;lez et al., <xref ref-type="bibr" rid="B17">2017</xref>; El-Fiqi et al., <xref ref-type="bibr" rid="B40">2018</xref>; Chen et al., <xref ref-type="bibr" rid="B30">2019b</xref>; Puengdang et al., <xref ref-type="bibr" rid="B93">2019</xref>; Maiorana, <xref ref-type="bibr" rid="B78">2020</xref>). The goal of the latter, often a multi-class classification task, is to identify the test subject (Zhang et al., <xref ref-type="bibr" rid="B127">2017</xref>); the former is usually a simple yes-or-no decision about whether the test subject is authorized. Existing biometric identification/authentication systems rely primarily on the unique inherent physiological characteristics of people (e.g., face, iris, retina, voice, and fingerprint), but all of these are vulnerable: anti-surveillance prosthetic masks can defeat face recognition, contact lenses can fool iris detection, vocoders can compromise speaker identification, and fingerprint films can deceive fingerprint sensors. Due to their great attack resilience, EEG-based biometric person identification systems are emerging as attractive alternatives. Individual EEG waves are almost impossible for an impostor to replicate, making this method extremely resistant to the spoofing attacks faced by other identification methods. Deep neural networks were used by Mao et al. (<xref ref-type="bibr" rid="B80">2017</xref>) to identify a user&#x00027;s ID from EEG signals, with a CNN used for personal identification. Zhang et al. (<xref ref-type="bibr" rid="B127">2017</xref>) presented and analyzed an attention-based LSTM model on both public and local datasets.
The researchers (Zhang et al., <xref ref-type="bibr" rid="B128">2018a</xref>) subsequently merged EEG signals with gait data to develop a dual-authentication system using a hybrid deep learning model.</p>
<p>Several articles simply aim to categorize the user&#x00027;s emotional state as a binary (positive/negative) or three-category (positive, neutral, and negative) problem using deep learning algorithms (Chen et al., <xref ref-type="bibr" rid="B30">2019b</xref>; Ozdemir et al., <xref ref-type="bibr" rid="B87">2019</xref>; Gao et al., <xref ref-type="bibr" rid="B44">2020a</xref>,<xref ref-type="bibr" rid="B45">b</xref>; Hwang et al., <xref ref-type="bibr" rid="B57">2020</xref>; Liu J. et al., <xref ref-type="bibr" rid="B74">2020</xref>; Liu Y. et al., <xref ref-type="bibr" rid="B75">2020</xref>; Sundaresan et al., <xref ref-type="bibr" rid="B105">2021</xref>). Various articles employed CNN and its variants to identify emotional EEG data (Li et al., <xref ref-type="bibr" rid="B72">2016</xref>) and for lie detection (Amber et al., <xref ref-type="bibr" rid="B13">2019</xref>). Most often, a CNN-RNN deep learning model is used to learn hidden features from spontaneous emotional EEG. Using EEG data, Xu and Plataniotis (<xref ref-type="bibr" rid="B119">2016</xref>) employed a deep belief network (DBN) as a dedicated feature extractor for the emotional-state categorization task. Moreover, on a more basic level, some studies seek to identify a positive/negative condition for each emotional dimension. For identifying emotions, Yin et al. (<xref ref-type="bibr" rid="B121">2017</xref>) suggested a multiple-fusion-layer-based ensemble classifier of AEs, where each AE comprises three hidden layers that remove unwanted noise from the physiological data and yield accurate feature representations.</p>
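The binary (positive/negative) framing mentioned above is commonly obtained by thresholding continuous self-assessment ratings. DEAP-style ratings run from 1 to 9, and the midpoint threshold of 5 used below is a common convention taken as an assumption here, not a rule drawn from any single cited study.

```python
# Sketch: converting continuous valence ratings (1-9 scale) into binary
# emotion labels. The threshold of 5 is a conventional, assumed choice.

def binarize(ratings, threshold=5.0):
    """Map per-trial valence ratings to 1 (positive) or 0 (negative)."""
    return [1 if r > threshold else 0 for r in ratings]

valence = [7.1, 2.3, 5.0, 8.9, 4.4]
print(binarize(valence))    # [1, 0, 0, 1, 0]
```

The same thresholding applied independently to arousal and valence yields the per-dimension positive/negative tasks mentioned at the end of the paragraph.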
<p>For traffic safety to be assured, a driver must be able to maintain peak performance and sustained attention. EEG signals have been shown to be useful for assessing people&#x00027;s cognitive state during such activities (Almogbel et al., <xref ref-type="bibr" rid="B9">2018</xref>). A motorist is often considered alert if their reaction time is at most 0.7 s and fatigued if their reaction time is at least 2.1 s. By extracting distinctive features from the EEG data, Hajinoroozi et al. (<xref ref-type="bibr" rid="B48">2015</xref>) investigated the prediction of driver fatigue, exploring a DBN-based dimensionality-reduction strategy. Detecting driver fatigue matters because fatigue increases the likelihood of accidents, and detection is practical in daily life: the equipment used to record EEG is readily available and small enough to use in a car, and an EEG headset is affordable for most individuals. Deep learning algorithms have greatly improved the accuracy of fatigue detection; in short, driving drowsiness can be identified from EEG with high accuracy (83&#x02013;98%) (Fahimi et al., <xref ref-type="bibr" rid="B42">2019</xref>; Ko et al., <xref ref-type="bibr" rid="B65">2020</xref>; Atilla and Alimardani, <xref ref-type="bibr" rid="B18">2021</xref>; Cai et al., <xref ref-type="bibr" rid="B25">2021</xref>). Self-driving scenarios are where driver fatigue monitoring will likely be applied in the future: since the human driver is expected to respond correctly to a request to intervene in most self-driving scenarios, the driver must remain alert at all times. We therefore believe that BCI-based driver fatigue detection can support the development of autonomous vehicles.</p>
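The reaction-time thresholds quoted above translate directly into a trial-labeling rule for training fatigue classifiers. The sketch below applies those thresholds; discarding the intermediate trials as ambiguous is a common convention assumed here, not a detail from the cited studies.

```python
# Sketch: labeling driving trials from reaction times using the thresholds
# stated in the text (alert if RT <= 0.7 s, fatigued if RT >= 2.1 s).
# Trials in between are marked None and would be excluded from training.

def label_trials(reaction_times, alert_max=0.7, fatigued_min=2.1):
    labels = []
    for rt in reaction_times:
        if rt <= alert_max:
            labels.append("alert")
        elif rt >= fatigued_min:
            labels.append("fatigued")
        else:
            labels.append(None)       # ambiguous: neither alert nor fatigued
    return labels

rts = [0.5, 1.4, 2.6, 0.7, 2.1]
print(label_trials(rts))   # ['alert', None, 'fatigued', 'alert', 'fatigued']
```

EEG segments aligned with the labeled trials then form the supervised training set for the deep models cited above.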
<p>Human operators play an important role in automation systems for decision-making and strategy formulation. Unlike machines or computers, human functional states cannot always meet the demands of a task, because working memory is limited and psycho-physiological state changes over time. Many researchers have concentrated on this subject, since mental workload can be estimated from spontaneous EEG. Bashivan et al. (<xref ref-type="bibr" rid="B21">2015</xref>) introduced a DBN model, a statistical technique for predicting cognitive load from single-trial EEG.</p>
<sec>
<title>5.4. Recommendation for future research</title>
<p>There remain plenty of deep learning paradigms and domains yet to be applied to EEG-based BCI, which would not only improve performance but also make models more generalizable. Here are a few suggestions for where future researchers may uncover novelty using deep learning.</p>
<list list-type="bullet">
<list-item><p><bold>Graph Convolutional Networks (GCNs)</bold>: One of the fundamental functions of the BCI is controlling machines using only MI and no physical motion. For the development of such BCI devices, it is very important to classify MI brain activity reliably. Even though previous research has shown promising results, classification accuracy still needs to improve before BCI applications become useful and cost-effective. One obstacle to building an EEG MI-based wheelchair, for instance, is making it flexible and robust to inter-subject variability. Traditional techniques for decoding EEG data ignore the topological relationships between electrodes, so the Euclidean arrangement of EEG electrodes may not adequately capture how the signals interact. To address this, graph convolutional neural networks (GCNs) have been proposed for decoding EEG data. GCN is a semi-supervised model often used to extract topological properties from data in non-Euclidean space, and GCNs have been applied successfully in a number of graph-based applications, since graphs can represent complicated relationships between entities. GCN not only effectively extracts topological information from data but is also interpretable and operable. Recently, researchers have been shifting from CNNs to GCNs for various applications, as GCNs can capture relational structure better. Though some studies have recently reported GCNs in EEG-based BCI (Hou et al., <xref ref-type="bibr" rid="B53">2020</xref>; Jia et al., <xref ref-type="bibr" rid="B62">2020</xref>), the area remains largely unexplored; research in this domain using GCNs could be the breakthrough needed to trigger a new wave of deep learning-based BCI studies.</p></list-item>
<list-item><p><bold>Transfer Learning</bold>: The study of deep neural network-based methods for successfully transferring knowledge from related domains is known as &#x0201C;deep transfer learning&#x0201D;. Transfer learning addresses data that violate the assumption that training and test data are identically distributed, by reusing knowledge acquired on one task for a different but related task. Transfer learning effectively enlarges the dataset with previously collected data: there is no need to calibrate from scratch, transferred information is less noisy, and TL can relax BCI constraints. Session-to-session transfer learning in BCIs is based on the idea that features and algorithms learned by the training module can help a subject perform the same task in a different session. To find the best way to share decisions among the different training sessions, it is important to examine what they have in common. As TL offers many more opportunities in BCI applications, we have a few recommendations for future researchers.</p>
<p>The majority of TL research has focused on inter-subject and inter-session transfer. Cross-device transfers are beginning to gain interest, although cross-task transfers remain mostly unexplored; since 2016, there has, to the best of our knowledge, been only one such study (He and Wu, <xref ref-type="bibr" rid="B50">2020</xref>). Transfers between devices and tasks would make EEG-based BCIs far more practical.</p>
<p>Utilizing the transferability of adversarial examples, adversarial attacks, one of the most recent developments in EEG-based BCIs, may be carried out across several machine learning models. Explicitly considering TL across domains, however, may boost an attack&#x00027;s performance further. In black-box attacks, for example, TL can use publicly available datasets to reduce the number of queries to the victim model, or to better approximate the victim model with the same number of queries.</p>
<p>Regression problems and affective BCI are two fresh applications of EEG-based BCIs that have been attracting interest among researchers; interestingly, both are passive BCIs. Although affective BCI can be framed as both classification and regression problems, the majority of past research has addressed classification.</p></list-item>
<list-item><p><bold>Generative Deep Learning</bold>: The primary purpose of generative deep learning models is to produce training samples or augment data. In other words, generative deep learning models benefit BCI by improving both the quality and the quantity of training data; after augmentation, discriminative models are used for classification. This approach aims to make trained deep learning networks more reliable and effective, especially when training data are scarce. In short, generative models use the input data to produce output data that resemble the input. This section presents two common generative deep learning models: the variational autoencoder (VAE) and generative adversarial networks (GANs).</p>
<p>The VAE is an important variant of the AE and one of the most effective generative algorithms. The standard AE and its variants can be used for representation learning but not for generation, since the learned code (or representation) may not be continuous. It is therefore impossible to draw a random sample that resembles a given input; in other words, the standard AE does not support interpolation, so it can reproduce an input sample but cannot construct a similar new one. This property is what makes the VAE so valuable for generative modeling: its latent space is designed to be continuous, which can contribute substantially to capturing EEG data features for BCI applications (Lee et al., <xref ref-type="bibr" rid="B69">2022</xref>).</p>
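<p>The two VAE ingredients mentioned above, a continuous latent space sampled via the reparameterization trick and a KL penalty toward a standard normal prior, can be sketched as follows. The linear &#x0201C;encoder&#x0201D; and &#x0201C;decoder&#x0201D; here are random stand-ins for a trained network, chosen only to show the mechanics.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in linear "encoder" and "decoder" (a real VAE would learn these).
d_x, d_z = 16, 2                                # 16-sample window, 2-D latent
W_enc = rng.normal(size=(d_x, 2 * d_z)) * 0.1   # outputs [mu, log_var]
W_dec = rng.normal(size=(d_z, d_x)) * 0.1

def encode(x):
    h = x @ W_enc
    return h[:d_z], h[d_z:]                     # mu, log_var

def reparameterize(mu, log_var):
    # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu, sigma.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) term of the VAE loss.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

x = rng.normal(size=d_x)                        # one synthetic "EEG window"
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_rec = z @ W_dec                               # decoder reconstruction

# The continuity of the latent space is what supports interpolation:
mu2, _ = encode(rng.normal(size=d_x))
z_mid = 0.5 * (mu + mu2)                        # a valid in-between latent point
x_interp = z_mid @ W_dec                        # decodes to a plausible sample
```

<p>The KL term is what pulls the encodings toward a dense, continuous region of latent space, which is exactly the property a plain AE lacks.</p>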
<p>Machine learning and deep learning models must be trained on a significant amount of real-world data to perform classification tasks; however, obtaining enough real data may be restricted, or the time and resources required may simply be too great. GANs have seen increasing activity in recent years and are primarily used for data augmentation: generative models produce synthetic yet realistic-looking samples that mimic real-world data, so that the number of training samples can be increased. Compared with CNNs, GANs have, to the best of our knowledge, been studied much less in BCIs, primarily because the viability of using a GAN to generate time-series data has not been fully evaluated. Nevertheless, the spatial, spectral, and temporal properties of EEG data produced by a GAN are comparable to those of actual EEG signals (Fahimi et al., <xref ref-type="bibr" rid="B41">2020</xref>). This opens new avenues for future research on GANs in EEG-based BCIs.</p></list-item>
</list>
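<p>One practical way to check the spectral comparability of GAN-generated EEG mentioned above is to compare band power between real and generated epochs. The sketch below uses a synthetic 10 Hz &#x0201C;alpha&#x0201D; epoch and a perturbed copy as a stand-in for a generator sample; in a real study the second signal would come from the trained GAN.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 128
t = np.arange(2 * fs) / fs   # a 2-s "epoch" at 128 Hz

# Stand-ins: a "real" alpha-band (10 Hz) EEG epoch and a "GAN-generated"
# one. Here the generated epoch is just a phase-shifted noisy copy.
real = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
fake = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * rng.normal(size=t.size)

def band_power(sig, lo, hi):
    # Mean FFT power over the [lo, hi] Hz band.
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / sig.size
    mask = np.logical_and(freqs >= lo, hi >= freqs)
    return psd[mask].mean()

alpha_real = band_power(real, 8, 12)
alpha_fake = band_power(fake, 8, 12)
# Small relative difference in alpha power indicates spectral similarity.
rel_diff = abs(alpha_real - alpha_fake) / alpha_real
```

<p>The same comparison can be repeated per frequency band and per channel to characterize how closely generated epochs match the real recordings before adding them to the training set.</p>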
</sec></sec>
<sec sec-type="conclusions" id="s6">
<title>6. Conclusion</title>
<p>Deep learning (DL) has historically produced significant breakthroughs in supervised classification tasks, which were accordingly the focus of the majority of the research selected for assessment. Remarkably, numerous studies highlighted new use cases enabled by their results, such as generating visual effects from EEG, synthesizing EEG, learning from other participants, and learning feature attributes. One of the main reasons for using DL is that it can handle raw EEG data without requiring a substantial preprocessing step, referred to in the literature as an &#x0201C;end-to-end&#x0201D; structure. Given the clearly temporal nature of EEG, we expected RNNs to be much more widespread than models that do not explicitly take time into account.</p>
<p>Adding to its prospects is the ability of deep learning on EEG to generalize across subjects and to facilitate transfer learning across tasks and domains. Although intra-subject models remain the most effective when only limited data are available, ensemble and transfer learning may well be the best way to overcome this restriction, given how costly EEG data are to collect. One can pre-train a neural network on a sample of subjects and then fine-tune it on a single individual, which is likely to yield favorable results with less data from that individual. DNNs are typically regarded as &#x0201C;black boxes&#x0201D; compared with more conventional methods; it is therefore crucial to scrutinize trained DL models. Indeed, simple model-inspection techniques, such as displaying the weights of a linear classifier, do not apply to deep neural networks, making their decisions far more difficult to comprehend.</p>
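<p>The pre-train-then-fine-tune strategy described above can be sketched with a toy logistic-regression &#x0201C;network&#x0201D; on synthetic per-subject data. Every name, dataset, and parameter here is an illustrative assumption, not a recipe from the reviewed studies: source subjects share a perturbed common decision rule, the model is pre-trained on their pooled data, and only a few gradient steps on a small target-subject set are then needed.</p>

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6  # toy feature dimension

def make_subject(w_subj, n):
    # Synthetic "subject": features with labels from that subject's rule.
    X = rng.normal(size=(n, d))
    y = (np.sign(X @ w_subj) + 1.0) / 2.0
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd(X, y, w, lr=0.1, steps=200):
    # Plain gradient descent on the logistic loss.
    for _ in range(steps):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / y.size
    return w

# Pre-train on pooled data from several source subjects whose rules are
# perturbed copies of a shared rule (an assumption for illustration).
w_shared = rng.normal(size=d)
X_pool, y_pool = [], []
for _ in range(5):
    Xs, ys = make_subject(w_shared + 0.2 * rng.normal(size=d), 200)
    X_pool.append(Xs)
    y_pool.append(ys)
w_pre = sgd(np.vstack(X_pool), np.hstack(y_pool), np.zeros(d))

# Fine-tune on a small amount of data from the target individual.
w_target = w_shared + 0.2 * rng.normal(size=d)
X_ft, y_ft = make_subject(w_target, 30)
w_tuned = sgd(X_ft, y_ft, w_pre.copy(), steps=50)

def accuracy(w, X, y):
    return float((np.round(sigmoid(X @ w)) == y).mean())

X_test, y_test = make_subject(w_target, 500)
acc_pre = accuracy(w_pre, X_test, y_test)      # pooled model on the target
acc_tuned = accuracy(w_tuned, X_test, y_test)  # after brief fine-tuning
```

<p>Because the pre-trained weights already encode the shared structure, the fine-tuning stage needs only 30 target samples, which is the practical appeal of this strategy for data-scarce EEG settings.</p>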
<p>This study presents an overview of EEG-based BCIs incorporating deep learning, with a focus on the methodological advantages and pitfalls as well as the invaluable efforts in this area of study. It shows that more research is needed on how much data are required to use deep learning in EEG processing to its full potential; such research could examine the relationships among performance, data volume, data augmentation, and network depth. For each BCI application, researchers have examined measurement techniques, control signals, EEG feature extraction, classification techniques, and performance evaluation metrics. Tuning hyper-parameters may be the key to increasing the efficiency of deeper frameworks; given the lack of systematic hyper-parameter search in this domain noted earlier, this issue should be addressed in future studies.</p></sec>
<sec sec-type="author-contributions" id="s7">
<title>Author contributions</title>
<p>KH, SH, and MI contributed the core writing and analysis. AN and MA edited and partially wrote the paper. All authors contributed to the article and approved the submitted version.</p></sec>
</body>
<back>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s8">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abdulkader</surname> <given-names>S. N.</given-names></name> <name><surname>Atia</surname> <given-names>A.</given-names></name> <name><surname>Mostafa</surname> <given-names>M.-S. M.</given-names></name></person-group> (<year>2015</year>). <article-title>Brain computer interfacing: applications and challenges</article-title>. <source>Egyptian Inf. J</source>. <volume>16</volume>, <fpage>213</fpage>&#x02013;<lpage>230</lpage>. <pub-id pub-id-type="doi">10.1016/j.eij.2015.06.002</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abiri</surname> <given-names>R.</given-names></name> <name><surname>Borhani</surname> <given-names>S.</given-names></name> <name><surname>Sellers</surname> <given-names>E. W.</given-names></name> <name><surname>Jiang</surname> <given-names>Y.</given-names></name> <name><surname>Zhao</surname> <given-names>X.</given-names></name></person-group> (<year>2019</year>). <article-title>A comprehensive review of eeg-based brain-computer interface paradigms</article-title>. <source>J. Neural Eng</source>. <volume>16</volume>, <fpage>011001</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/aaf12e</pub-id><pub-id pub-id-type="pmid">30523919</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Acar</surname> <given-names>E.</given-names></name> <name><surname>Roald</surname> <given-names>M.</given-names></name> <name><surname>Hossain</surname> <given-names>K. M.</given-names></name> <name><surname>Calhoun</surname> <given-names>V. D.</given-names></name> <name><surname>Adali</surname> <given-names>T.</given-names></name></person-group> (<year>2022</year>). <article-title>Tracing evolving networks using tensor factorizations vs. ica-based approaches</article-title>. <source>Front. Neurosci</source>. <volume>16</volume>, <fpage>861402</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2022.861402</pub-id><pub-id pub-id-type="pmid">35546891</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Agrawal</surname> <given-names>R.</given-names></name> <name><surname>Bajaj</surname> <given-names>P.</given-names></name></person-group> (<year>2020</year>). <article-title>EEG based brain state classification technique using support vector machine-a design approach,</article-title> in <source>2020 3rd International Conference on Intelligent Sustainable Systems (ICISS)</source> (<publisher-loc>Thoothukudi</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>895</fpage>&#x02013;<lpage>900</lpage>.</citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ak</surname> <given-names>A.</given-names></name> <name><surname>Topuz</surname> <given-names>V.</given-names></name> <name><surname>Midi</surname> <given-names>I.</given-names></name></person-group> (<year>2022</year>). <article-title>Motor imagery eeg signal classification using image processing technique over googlenet deep learning algorithm for controlling the robot manipulator</article-title>. <source>Biomed. Signal Process. Control</source>. <volume>72</volume>, <fpage>103295</fpage>. <pub-id pub-id-type="doi">10.1016/j.bspc.2021.103295</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Akhter</surname> <given-names>T.</given-names></name> <name><surname>Islam</surname> <given-names>M. A.</given-names></name> <name><surname>Islam</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Artificial neural network based COVID-19 suspected area identification</article-title>. <source>J. Eng. Adv</source>. <volume>1</volume>, <fpage>188</fpage>&#x02013;<lpage>194</lpage>. <pub-id pub-id-type="doi">10.38032/jea.2020.04.010</pub-id><pub-id pub-id-type="pmid">32984796</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alazrai</surname> <given-names>R.</given-names></name> <name><surname>Abuhijleh</surname> <given-names>M.</given-names></name> <name><surname>Alwanni</surname> <given-names>H.</given-names></name> <name><surname>Daoud</surname> <given-names>M. I.</given-names></name></person-group> (<year>2019</year>). <article-title>A deep learning framework for decoding motor imagery tasks of the same hand using eeg signals</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>109612</fpage>&#x02013;<lpage>109627</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2934018</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aldayel</surname> <given-names>M.</given-names></name> <name><surname>Ykhlef</surname> <given-names>M.</given-names></name> <name><surname>Al-Nafjan</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep learning for eeg-based preference classification in neuromarketing</article-title>. <source>Appl. Sci</source>. <volume>10</volume>, <fpage>1525</fpage>. <pub-id pub-id-type="doi">10.3390/app10041525</pub-id><pub-id pub-id-type="pmid">35634118</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Almogbel</surname> <given-names>M. A.</given-names></name> <name><surname>Dang</surname> <given-names>A. H.</given-names></name> <name><surname>Kameyama</surname> <given-names>W.</given-names></name></person-group> (<year>2018</year>). <article-title>EEG-signals based cognitive workload detection of vehicle driver using deep learning,</article-title> in <source>2018 20th International Conference on Advanced Communication Technology (ICACT)</source> (<publisher-loc>Chuncheon</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>256</fpage>&#x02013;<lpage>259</lpage>.</citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Al-Saegh</surname> <given-names>A.</given-names></name> <name><surname>Dawwd</surname> <given-names>S. A.</given-names></name> <name><surname>Abdul-Jabbar</surname> <given-names>J. M.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep learning for motor imagery eeg-based classification: a review</article-title>. <source>Biomed. Signal Process. Control</source> <volume>63</volume>, <fpage>102172</fpage>. <pub-id pub-id-type="doi">10.1016/j.bspc.2020.102172</pub-id><pub-id pub-id-type="pmid">35957360</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alzahab</surname> <given-names>N. A.</given-names></name> <name><surname>Apollonio</surname> <given-names>L.</given-names></name> <name><surname>Di Iorio</surname> <given-names>A.</given-names></name> <name><surname>Alshalak</surname> <given-names>M.</given-names></name> <name><surname>Iarlori</surname> <given-names>S.</given-names></name> <name><surname>Ferracuti</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Hybrid deep learning (hdl)-based brain-computer interface (bci) systems: a systematic review</article-title>. <source>Brain Sci</source>. <volume>11</volume>, <fpage>75</fpage>. <pub-id pub-id-type="doi">10.3390/brainsci11010075</pub-id><pub-id pub-id-type="pmid">33429938</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Amarasinghe</surname> <given-names>K.</given-names></name> <name><surname>Wijayasekara</surname> <given-names>D.</given-names></name> <name><surname>Manic</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>EEG based brain activity monitoring using artificial neural networks,</article-title> in <source>2014 7th International Conference on Human System Interactions (HSI)</source> (<publisher-loc>Costa da Caparica</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>61</fpage>&#x02013;<lpage>66</lpage>.<pub-id pub-id-type="pmid">32545622</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Amber</surname> <given-names>F.</given-names></name> <name><surname>Yousaf</surname> <given-names>A.</given-names></name> <name><surname>Imran</surname> <given-names>M.</given-names></name> <name><surname>Khurshid</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>P300 based deception detection using convolutional neural network,</article-title> in <source>2019 2nd International Conference on Communication, Computing and Digital Systems (C-CODE)</source> (<publisher-loc>Islamabad</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>201</fpage>&#x02013;<lpage>204</lpage>.</citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amin</surname> <given-names>S. U.</given-names></name> <name><surname>Alsulaiman</surname> <given-names>M.</given-names></name> <name><surname>Muhammad</surname> <given-names>G.</given-names></name> <name><surname>Bencherif</surname> <given-names>M. A.</given-names></name> <name><surname>Hossain</surname> <given-names>M. S.</given-names></name></person-group> (<year>2019a</year>). <article-title>Multilevel weighted feature fusion using convolutional neural networks for EEG motor imagery classification</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>18940</fpage>&#x02013;<lpage>18950</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2895688</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amin</surname> <given-names>S. U.</given-names></name> <name><surname>Alsulaiman</surname> <given-names>M.</given-names></name> <name><surname>Muhammad</surname> <given-names>G.</given-names></name> <name><surname>Mekhtiche</surname> <given-names>M. A.</given-names></name> <name><surname>Hossain</surname> <given-names>M. S.</given-names></name></person-group> (<year>2019b</year>). <article-title>Deep learning for EEG motor imagery classification based on multi-layer cnns feature fusion</article-title>. <source>Future Generat. Comput. Syst</source>. <volume>101</volume>, <fpage>542</fpage>&#x02013;<lpage>554</lpage>. <pub-id pub-id-type="doi">10.1016/j.future.2019.06.027</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Antoniades</surname> <given-names>A.</given-names></name> <name><surname>Spyrou</surname> <given-names>L.</given-names></name> <name><surname>Martin-Lopez</surname> <given-names>D.</given-names></name> <name><surname>Valentin</surname> <given-names>A.</given-names></name> <name><surname>Alarcon</surname> <given-names>G.</given-names></name> <name><surname>Sanei</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Deep neural architectures for mapping scalp to intracranial EEG</article-title>. <source>Int. J. Neural Syst</source>. <volume>28</volume>, <fpage>1850009</fpage>. <pub-id pub-id-type="doi">10.1142/S0129065718500090</pub-id><pub-id pub-id-type="pmid">29631503</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Arnau-Gonz&#x000E1;lez</surname> <given-names>P.</given-names></name> <name><surname>Katsigiannis</surname> <given-names>S.</given-names></name> <name><surname>Ramzan</surname> <given-names>N.</given-names></name> <name><surname>Tolson</surname> <given-names>D.</given-names></name> <name><surname>Arevalillo-Herrez</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Es1d: a deep network for eeg-based subject identification,</article-title> in <source>2017 IEEE 17th International Conference on Bioinformatics and Bioengineering (BIBE)</source> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>81</fpage>&#x02013;<lpage>85</lpage>.</citation>
</ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Atilla</surname> <given-names>F.</given-names></name> <name><surname>Alimardani</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>EEG-based classification of drivers attention using convolutional neural network,</article-title> in <source>2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS)</source> (<publisher-loc>Magdeburg</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation>
</ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Aznan</surname> <given-names>N. K. N.</given-names></name> <name><surname>Bonner</surname> <given-names>S.</given-names></name> <name><surname>Connolly</surname> <given-names>J.</given-names></name> <name><surname>Al Moubayed</surname> <given-names>N.</given-names></name> <name><surname>Breckon</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>On the classification of ssvep-based dry-EEG signals via convolutional neural networks,</article-title> in <source>2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)</source> (<publisher-loc>Miyazaki</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>3726</fpage>&#x02013;<lpage>3731</lpage>.</citation>
</ref>
<ref id="B20">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Aznan</surname> <given-names>N. K. N.</given-names></name> <name><surname>Yang</surname> <given-names>Y.-M.</given-names></name></person-group> (<year>2013</year>). <article-title>Applying kalman filter in EEG-based brain computer interface for motor imagery classification,</article-title> in <source>2013 International Conference on ICT Convergence (ICTC)</source> (<publisher-loc>Jeju</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>688</fpage>&#x02013;<lpage>690</lpage>.<pub-id pub-id-type="pmid">21683346</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bashivan</surname> <given-names>P.</given-names></name> <name><surname>Yeasin</surname> <given-names>M.</given-names></name> <name><surname>Bidelman</surname> <given-names>G. M.</given-names></name></person-group> (<year>2015</year>). <article-title>Single trial prediction of normal and excessive cognitive load through eeg feature fusion,</article-title> in <source>2015 IEEE Signal Processing in Medicine and Biology Symposium (SPMB)</source> (<publisher-loc>Philadelphia, PA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>5</lpage>.</citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Behncke</surname> <given-names>J.</given-names></name> <name><surname>Schirrmeister</surname> <given-names>R. T.</given-names></name> <name><surname>Burgard</surname> <given-names>W.</given-names></name> <name><surname>Ball</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>The signature of robot action success in eeg signals of a human observer: decoding and visualization using deep convolutional neural networks,</article-title> in <source>2018 6th International Conference on Brain-Computer Interface (BCI)</source> (<publisher-loc>Gangwon</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bi</surname> <given-names>L.</given-names></name> <name><surname>Fan</surname> <given-names>X.-A.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name></person-group> (<year>2013</year>). <article-title>EEG-based brain-controlled mobile robots: a survey</article-title>. <source>IEEE Trans. Hum. Mach. Syst</source>. <volume>43</volume>, <fpage>161</fpage>&#x02013;<lpage>176</lpage>. <pub-id pub-id-type="doi">10.1109/TSMCC.2012.2219046</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bousseta</surname> <given-names>R.</given-names></name> <name><surname>El Ouakouak</surname> <given-names>I.</given-names></name> <name><surname>Gharbi</surname> <given-names>M.</given-names></name> <name><surname>Regragui</surname> <given-names>F.</given-names></name></person-group> (<year>2018</year>). <article-title>Eeg based brain computer interface for controlling a robot arm movement through thought</article-title>. <source>Irbm</source> <volume>39</volume>, <fpage>129</fpage>&#x02013;<lpage>135</lpage>. <pub-id pub-id-type="doi">10.1016/j.irbm.2018.02.001</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cai</surname> <given-names>H.</given-names></name> <name><surname>Xia</surname> <given-names>M.</given-names></name> <name><surname>Nie</surname> <given-names>L.</given-names></name> <name><surname>Wu</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep learning models with time delay embedding for eeg-based attentive state classification,</article-title> in <source>International Conference on Neural Information Processing</source> (<publisher-name>Springer</publisher-name>), <fpage>307</fpage>&#x02013;<lpage>314</lpage>.</citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>Z.</given-names></name></person-group> (<year>2020</year>). <article-title>A review of artificial intelligence for eeg-based brain- computer interfaces and applications</article-title>. <source>Brain Sci. Adv</source>. <volume>6</volume>, <fpage>162</fpage>&#x02013;<lpage>170</lpage>. <pub-id pub-id-type="doi">10.26599/BSA.2020.9050017</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cecotti</surname> <given-names>H.</given-names></name> <name><surname>Graser</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Convolutional neural networks for p300 detection with application to brain-computer interfaces</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell</source>. <volume>33</volume>, <fpage>433</fpage>&#x02013;<lpage>445</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2010.125</pub-id><pub-id pub-id-type="pmid">20567055</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chakladar</surname> <given-names>D. D.</given-names></name> <name><surname>Dey</surname> <given-names>S.</given-names></name> <name><surname>Roy</surname> <given-names>P. P.</given-names></name> <name><surname>Dogra</surname> <given-names>D. P.</given-names></name></person-group> (<year>2020</year>). <article-title>Eeg-based mental workload estimation using deep blstm-lstm network and evolutionary algorithm</article-title>. <source>Biomed. Signal Process. Control</source>. <volume>60</volume>, <fpage>101989</fpage>. <pub-id pub-id-type="doi">10.1016/j.bspc.2020.101989</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Mao</surname> <given-names>Z.</given-names></name> <name><surname>Yao</surname> <given-names>W.</given-names></name> <name><surname>Huang</surname> <given-names>Y.</given-names></name></person-group> (<year>2019a</year>). <article-title>EEG-based biometric identification with convolutional neural network,</article-title> in <source>Multimedia Tools and Applications</source> (<publisher-loc>Dordrecht</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="pmid">31281339</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>P.</given-names></name> <name><surname>Mao</surname> <given-names>Z.</given-names></name> <name><surname>Huang</surname> <given-names>Y.</given-names></name> <name><surname>Jiang</surname> <given-names>D.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name></person-group> (<year>2019b</year>). <article-title>Accurate eeg-based emotion recognition on combined features using deep convolutional neural networks</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>44317</fpage>&#x02013;<lpage>44328</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2908285</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cho</surname> <given-names>J.-H.</given-names></name> <name><surname>Jeong</surname> <given-names>J.-H.</given-names></name> <name><surname>Lee</surname> <given-names>S.-W.</given-names></name></person-group> (<year>2021</year>). <article-title>Neurograsp: Real-time eeg classification of high-level motor imagery tasks using a dual-stage deep learning framework</article-title>. <source>IEEE Trans. Cybern</source>. <volume>52</volume>, <fpage>13279</fpage>&#x02013;<lpage>13292</lpage>. <pub-id pub-id-type="doi">10.1109/TCYB.2021.3122969</pub-id><pub-id pub-id-type="pmid">34748509</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Craik</surname> <given-names>A.</given-names></name> <name><surname>He</surname> <given-names>Y.</given-names></name> <name><surname>Contreras-Vidal</surname> <given-names>J. L.</given-names></name></person-group> (<year>2019</year>). <article-title>Deep learning for electroencephalogram (eeg) classification tasks: a review</article-title>. <source>J. Neural Eng</source>. <volume>16</volume>, <fpage>031001</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/ab0ab5</pub-id><pub-id pub-id-type="pmid">30808014</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Crespo-Garcia</surname> <given-names>M.</given-names></name> <name><surname>Atienza</surname> <given-names>M.</given-names></name> <name><surname>Cantero</surname> <given-names>J. L.</given-names></name></person-group> (<year>2008</year>). <article-title>Muscle artifact removal from human sleep eeg by using independent component analysis</article-title>. <source>Ann. Biomed. Eng</source>. <volume>36</volume>, <fpage>467</fpage>&#x02013;<lpage>475</lpage>. <pub-id pub-id-type="doi">10.1007/s10439-008-9442-y</pub-id><pub-id pub-id-type="pmid">18228142</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dai</surname> <given-names>M.</given-names></name> <name><surname>Zheng</surname> <given-names>D.</given-names></name> <name><surname>Na</surname> <given-names>R.</given-names></name> <name><surname>Wang</surname> <given-names>S.</given-names></name> <name><surname>Zhang</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>EEG classification of motor imagery using a novel deep learning framework</article-title>. <source>Sensors</source> <volume>19</volume>, <fpage>551</fpage>. <pub-id pub-id-type="doi">10.3390/s19030551</pub-id><pub-id pub-id-type="pmid">30699946</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dang</surname> <given-names>W.</given-names></name> <name><surname>Lv</surname> <given-names>D.</given-names></name> <name><surname>Rui</surname> <given-names>L.</given-names></name> <name><surname>Liu</surname> <given-names>Z.</given-names></name> <name><surname>Chen</surname> <given-names>G.</given-names></name> <name><surname>Gao</surname> <given-names>Z.</given-names></name></person-group> (<year>2021</year>). <article-title>Studying multi-frequency multilayer brain network via deep learning for eeg-based epilepsy detection</article-title>. <source>IEEE Sens. J</source>. <volume>21</volume>, <fpage>27651</fpage>&#x02013;<lpage>27658</lpage>. <pub-id pub-id-type="doi">10.1109/JSEN.2021.3119411</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delorme</surname> <given-names>A.</given-names></name> <name><surname>Sejnowski</surname> <given-names>T.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name></person-group> (<year>2007</year>). <article-title>Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis</article-title>. <source>Neuroimage</source> <volume>34</volume>, <fpage>1443</fpage>&#x02013;<lpage>1449</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.11.004</pub-id><pub-id pub-id-type="pmid">17188898</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>B.</given-names></name> <name><surname>Yu</surname> <given-names>N.</given-names></name> <name><surname>Liu</surname> <given-names>K.</given-names></name> <name><surname>Sun</surname> <given-names>K.</given-names></name></person-group> (<year>2021</year>). <article-title>Advanced tsgl-eegnet for motor imagery EEG-based brain-computer interfaces</article-title>. <source>IEEE Access</source> <volume>9</volume>, <fpage>25118</fpage>&#x02013;<lpage>25130</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2021.3056088</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dose</surname> <given-names>H.</given-names></name> <name><surname>M&#x000F8;ller</surname> <given-names>J. S.</given-names></name> <name><surname>Iversen</surname> <given-names>H. K.</given-names></name> <name><surname>Puthusserypady</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>An end-to-end deep learning approach to MI-EEG signal classification for BCIs</article-title>. <source>Expert. Syst. Appl</source>. <volume>114</volume>, <fpage>532</fpage>&#x02013;<lpage>542</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2018.08.031</pub-id><pub-id pub-id-type="pmid">31341093</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Du</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>J.</given-names></name></person-group> (<year>2022</year>). <article-title>IENet: a robust convolutional neural network for EEG-based brain-computer interfaces</article-title>. <source>J. Neural Eng</source>. <volume>19</volume>, <fpage>036031</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/ac7257</pub-id><pub-id pub-id-type="pmid">35605585</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>El-Fiqi</surname> <given-names>H.</given-names></name> <name><surname>Wang</surname> <given-names>M.</given-names></name> <name><surname>Salimi</surname> <given-names>N.</given-names></name> <name><surname>Kasmarik</surname> <given-names>K.</given-names></name> <name><surname>Barlow</surname> <given-names>M.</given-names></name> <name><surname>Abbass</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>Convolution neural networks for person identification and verification using steady state visual evoked potential,</article-title> in <source>2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)</source> (<publisher-loc>Miyazaki</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1062</fpage>&#x02013;<lpage>1069</lpage>.</citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fahimi</surname> <given-names>F.</given-names></name> <name><surname>Dosen</surname> <given-names>S.</given-names></name> <name><surname>Ang</surname> <given-names>K. K.</given-names></name> <name><surname>Mrachacz-Kersting</surname> <given-names>N.</given-names></name> <name><surname>Guan</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Generative adversarial networks-based data augmentation for brain-computer interface</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>32</volume>, <fpage>4039</fpage>&#x02013;<lpage>4051</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2020.3016666</pub-id><pub-id pub-id-type="pmid">32841127</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fahimi</surname> <given-names>F.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Goh</surname> <given-names>W. B.</given-names></name> <name><surname>Lee</surname> <given-names>T.-S.</given-names></name> <name><surname>Ang</surname> <given-names>K. K.</given-names></name> <name><surname>Guan</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <article-title>Inter-subject transfer learning with an end-to-end deep convolutional neural network for EEG-based BCI</article-title>. <source>J. Neural Eng</source>. <volume>16</volume>, <fpage>026007</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/aaf3f6</pub-id><pub-id pub-id-type="pmid">30524056</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fares</surname> <given-names>A.</given-names></name> <name><surname>Zhong</surname> <given-names>S.-H.</given-names></name> <name><surname>Jiang</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>EEG-based image classification via a region-level stacked bi-directional deep learning framework</article-title>. <source>BMC Med. Inform. Decis. Mak</source>. <volume>19</volume>, <fpage>1</fpage>&#x02013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1186/s12911-019-0967-9</pub-id><pub-id pub-id-type="pmid">31856818</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Dong</surname> <given-names>N.</given-names></name> <name><surname>Chiang</surname> <given-names>H.-D.</given-names></name></person-group> (<year>2020a</year>). <article-title>A GPSO-optimized convolutional neural networks for EEG-based emotion recognition</article-title>. <source>Neurocomputing</source> <volume>380</volume>, <fpage>225</fpage>&#x02013;<lpage>235</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2019.10.096</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>Z.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Ma</surname> <given-names>K.</given-names></name> <name><surname>Chen</surname> <given-names>G.</given-names></name></person-group> (<year>2020b</year>). <article-title>A channel-fused dense convolutional network for EEG-based emotion recognition</article-title>. <source>IEEE Trans. Cognit. Dev. Syst</source>. <volume>13</volume>, <fpage>945</fpage>&#x02013;<lpage>954</lpage>. <pub-id pub-id-type="doi">10.1109/TCDS.2020.2976112</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>Z.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Mu</surname> <given-names>C.</given-names></name> <name><surname>Cai</surname> <given-names>Q.</given-names></name> <name><surname>Dang</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>EEG-based spatio-temporal convolutional neural network for driver fatigue evaluation</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>30</volume>, <fpage>2755</fpage>&#x02013;<lpage>2763</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2018.2886414</pub-id><pub-id pub-id-type="pmid">30640634</pub-id></citation></ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Guger</surname> <given-names>C.</given-names></name> <name><surname>Allison</surname> <given-names>B. Z.</given-names></name> <name><surname>Gunduz</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Brain-computer interface research: a state-of-the-art summary 10,</article-title> in <source>Brain-Computer Interface Research</source> (<publisher-name>Springer</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>11</lpage>.</citation>
</ref>
<ref id="B48">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hajinoroozi</surname> <given-names>M.</given-names></name> <name><surname>Jung</surname> <given-names>T.-P.</given-names></name> <name><surname>Lin</surname> <given-names>C.-T.</given-names></name> <name><surname>Huang</surname> <given-names>Y.</given-names></name></person-group> (<year>2015</year>). <article-title>Feature extraction with deep belief networks for driver&#x00027;s cognitive states prediction from EEG data,</article-title> in <source>2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP)</source> (<publisher-loc>Chengdu</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>812</fpage>&#x02013;<lpage>815</lpage>.</citation>
</ref>
<ref id="B49">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hassanien</surname> <given-names>A. E.</given-names></name> <name><surname>Azar</surname> <given-names>A.</given-names></name></person-group> (<year>2015</year>). <source>Brain-Computer Interfaces</source>, Vol. 74. <publisher-loc>Switzerland</publisher-loc>: <publisher-name>Springer</publisher-name>.</citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>H.</given-names></name> <name><surname>Wu</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>Different set domain adaptation for brain-computer interfaces: a label alignment approach</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng</source>. <volume>28</volume>, <fpage>1091</fpage>&#x02013;<lpage>1108</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2020.2980299</pub-id><pub-id pub-id-type="pmid">32167903</pub-id></citation></ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Herman</surname> <given-names>P.</given-names></name> <name><surname>Prasad</surname> <given-names>G.</given-names></name> <name><surname>McGinnity</surname> <given-names>T. M.</given-names></name> <name><surname>Coyle</surname> <given-names>D.</given-names></name></person-group> (<year>2008</year>). <article-title>Comparative analysis of spectral approaches to feature extraction for EEG-based motor imagery classification</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng</source>. <volume>16</volume>, <fpage>317</fpage>&#x02013;<lpage>326</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2008.926694</pub-id><pub-id pub-id-type="pmid">18701380</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hossain</surname> <given-names>K. M.</given-names></name> <name><surname>Bhinge</surname> <given-names>S.</given-names></name> <name><surname>Long</surname> <given-names>Q.</given-names></name> <name><surname>Calhoun</surname> <given-names>V. D.</given-names></name> <name><surname>Adali</surname> <given-names>T.</given-names></name></person-group> (<year>2022</year>). <article-title>Data-driven spatio-temporal dynamic brain connectivity analysis using fALFF: application to sensorimotor task data,</article-title> in <source>2022 56th Annual Conference on Information Sciences and Systems (CISS)</source> (<publisher-loc>Princeton, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>200</fpage>&#x02013;<lpage>205</lpage>.</citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hou</surname> <given-names>Y.</given-names></name> <name><surname>Jia</surname> <given-names>S.</given-names></name> <name><surname>Lun</surname> <given-names>X.</given-names></name> <name><surname>Shi</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep feature mining via attention-based BiLSTM-GCN for human motor imagery recognition</article-title>. <source>arXiv preprint</source> arXiv:2005.00777. <pub-id pub-id-type="doi">10.48550/arXiv.2005.00777</pub-id><pub-id pub-id-type="pmid">35223807</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>E.</given-names></name> <name><surname>Zheng</surname> <given-names>X.</given-names></name> <name><surname>Fang</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name></person-group> (<year>2021</year>). <article-title>Classification of motor imagery EEG based on time-domain and frequency-domain dual-stream convolutional neural network</article-title>. <source>IRBM</source> <volume>43</volume>, <fpage>107</fpage>&#x02013;<lpage>113</lpage>. <pub-id pub-id-type="doi">10.1016/j.irbm.2021.04.004</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>W.</given-names></name> <name><surname>Chang</surname> <given-names>W.</given-names></name> <name><surname>Yan</surname> <given-names>G.</given-names></name> <name><surname>Yang</surname> <given-names>Z.</given-names></name> <name><surname>Luo</surname> <given-names>H.</given-names></name> <name><surname>Pei</surname> <given-names>H.</given-names></name></person-group> (<year>2022</year>). <article-title>EEG-based motor imagery classification using convolutional neural networks with local reparameterization trick</article-title>. <source>Expert. Syst. Appl</source>. <volume>187</volume>, <fpage>115968</fpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2021.115968</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hussein</surname> <given-names>R.</given-names></name> <name><surname>Palangi</surname> <given-names>H.</given-names></name> <name><surname>Ward</surname> <given-names>R. K.</given-names></name> <name><surname>Wang</surname> <given-names>Z. J.</given-names></name></person-group> (<year>2019</year>). <article-title>Optimized deep neural network architecture for robust detection of epileptic seizures using EEG signals</article-title>. <source>Clin. Neurophysiol</source>. <volume>130</volume>, <fpage>25</fpage>&#x02013;<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2018.10.010</pub-id><pub-id pub-id-type="pmid">30472579</pub-id></citation></ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hwang</surname> <given-names>S.</given-names></name> <name><surname>Hong</surname> <given-names>K.</given-names></name> <name><surname>Son</surname> <given-names>G.</given-names></name> <name><surname>Byun</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>Learning CNN features from DE features for EEG-based emotion recognition</article-title>. <source>Pattern Anal. Appl</source>. <volume>23</volume>, <fpage>1323</fpage>&#x02013;<lpage>1335</lpage>. <pub-id pub-id-type="doi">10.1007/s10044-019-00860-w</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ieracitano</surname> <given-names>C.</given-names></name> <name><surname>Morabito</surname> <given-names>F. C.</given-names></name> <name><surname>Hussain</surname> <given-names>A.</given-names></name> <name><surname>Mammone</surname> <given-names>N.</given-names></name></person-group> (<year>2021</year>). <article-title>A hybrid-domain deep learning-based BCI for discriminating hand motion planning from EEG sources</article-title>. <source>Int. J. Neural Syst</source>. <volume>31</volume>, <fpage>2150038</fpage>. <pub-id pub-id-type="doi">10.1142/S0129065721500386</pub-id><pub-id pub-id-type="pmid">34376121</pub-id></citation></ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Islam</surname> <given-names>M.</given-names></name> <name><surname>Shampa</surname> <given-names>M.</given-names></name> <name><surname>Alim</surname> <given-names>T.</given-names></name></person-group> (<year>2021</year>). <article-title>Convolutional neural network based marine cetaceans detection around the Swatch of No Ground in the Bay of Bengal</article-title>. <source>Int. J. Comput. Digit. Syst</source>. <volume>12</volume>, <fpage>173</fpage>. <pub-id pub-id-type="doi">10.12785/ijcds/120173</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Islam</surname> <given-names>M. A.</given-names></name> <name><surname>Hasan</surname> <given-names>M. R.</given-names></name> <name><surname>Begum</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Improvement of the handover performance and channel allocation scheme using fuzzy logic, artificial neural network and neuro-fuzzy system to reduce call drop in cellular network</article-title>. <source>J. Eng. Adv</source>. <volume>1</volume>, <fpage>130</fpage>&#x02013;<lpage>138</lpage>. <pub-id pub-id-type="doi">10.38032/jea.2020.04.004</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Islam</surname> <given-names>S.</given-names></name> <name><surname>Reza</surname> <given-names>R.</given-names></name> <name><surname>Hasan</surname> <given-names>M. M.</given-names></name> <name><surname>Mishu</surname> <given-names>N. D.</given-names></name> <name><surname>Hossain</surname> <given-names>K. M.</given-names></name> <name><surname>Mahmood</surname> <given-names>Z. H.</given-names></name></person-group> (<year>2016</year>). <article-title>Effects of various filter parameters on the myocardial perfusion with polar plot image</article-title>. <source>Int. J. Eng. Res</source>. <volume>4</volume>, <fpage>1</fpage>&#x02013;<lpage>10</lpage>.</citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jia</surname> <given-names>S.</given-names></name> <name><surname>Hou</surname> <given-names>Y.</given-names></name> <name><surname>Shi</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name></person-group> (<year>2020</year>). <article-title>Attention-based graph ResNet for motor intent detection from raw EEG signals</article-title>. <source>arXiv preprint</source> arXiv:2007.13484. <pub-id pub-id-type="doi">10.48550/arXiv.2007.13484</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kilicarslan</surname> <given-names>A.</given-names></name> <name><surname>Grossman</surname> <given-names>R. G.</given-names></name> <name><surname>Contreras-Vidal</surname> <given-names>J. L.</given-names></name></person-group> (<year>2016</year>). <article-title>A robust adaptive denoising framework for real-time artifact removal in scalp EEG measurements</article-title>. <source>J. Neural Eng</source>. <volume>13</volume>, <fpage>026013</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/13/2/026013</pub-id><pub-id pub-id-type="pmid">26863159</pub-id></citation></ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kline</surname> <given-names>J. E.</given-names></name> <name><surname>Huang</surname> <given-names>H. J.</given-names></name> <name><surname>Snyder</surname> <given-names>K. L.</given-names></name> <name><surname>Ferris</surname> <given-names>D. P.</given-names></name></person-group> (<year>2015</year>). <article-title>Isolating gait-related movement artifacts in electroencephalography during human walking</article-title>. <source>J. Neural Eng</source>. <volume>12</volume>, <fpage>046022</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/12/4/046022</pub-id><pub-id pub-id-type="pmid">26083595</pub-id></citation></ref>
<ref id="B65">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ko</surname> <given-names>W.</given-names></name> <name><surname>Oh</surname> <given-names>K.</given-names></name> <name><surname>Jeon</surname> <given-names>E.</given-names></name> <name><surname>Suk</surname> <given-names>H.-I.</given-names></name></person-group> (<year>2020</year>). <article-title>VIGNet: a deep convolutional neural network for EEG-based driver vigilance estimation,</article-title> in <source>2020 8th International Winter Conference on Brain-Computer Interface (BCI)</source> (<publisher-loc>Gangwon</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>3</lpage>.</citation>
</ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koelstra</surname> <given-names>S.</given-names></name> <name><surname>Muhl</surname> <given-names>C.</given-names></name> <name><surname>Soleymani</surname> <given-names>M.</given-names></name> <name><surname>Lee</surname> <given-names>J.-S.</given-names></name> <name><surname>Yazdani</surname> <given-names>A.</given-names></name> <name><surname>Ebrahimi</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>DEAP: a database for emotion analysis; using physiological signals</article-title>. <source>IEEE Trans. Affect. Comput</source>. <volume>3</volume>, <fpage>18</fpage>&#x02013;<lpage>31</lpage>. <pub-id pub-id-type="doi">10.1109/T-AFFC.2011.15</pub-id></citation>
</ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Korovesis</surname> <given-names>N.</given-names></name> <name><surname>Kandris</surname> <given-names>D.</given-names></name> <name><surname>Koulouras</surname> <given-names>G.</given-names></name> <name><surname>Alexandridis</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Robot motion control via an EEG-based brain-computer interface by using neural networks and alpha brainwaves</article-title>. <source>Electronics</source> <volume>8</volume>, <fpage>1387</fpage>. <pub-id pub-id-type="doi">10.3390/electronics8121387</pub-id></citation>
</ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kwak</surname> <given-names>N.-S.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>K.-R.</given-names></name> <name><surname>Lee</surname> <given-names>S.-W.</given-names></name></person-group> (<year>2017</year>). <article-title>A convolutional neural network for steady state visual evoked potential classification under ambulatory environment</article-title>. <source>PLoS ONE</source> <volume>12</volume>, <fpage>e0172578</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0172578</pub-id><pub-id pub-id-type="pmid">28225827</pub-id></citation></ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>D.-Y.</given-names></name> <name><surname>Jeong</surname> <given-names>J.-H.</given-names></name> <name><surname>Lee</surname> <given-names>B.-H.</given-names></name> <name><surname>Lee</surname> <given-names>S.-W.</given-names></name></person-group> (<year>2022</year>). <article-title>Motor imagery classification using inter-task transfer learning via a channel-wise variational autoencoder-based convolutional neural network</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng</source>. <volume>30</volume>, <fpage>226</fpage>&#x02013;<lpage>237</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2022.3143836</pub-id><pub-id pub-id-type="pmid">35041605</pub-id></citation></ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Le&#x000F3;n</surname> <given-names>J.</given-names></name> <name><surname>Escobar</surname> <given-names>J. J.</given-names></name> <name><surname>Ortiz</surname> <given-names>A.</given-names></name> <name><surname>Ortega</surname> <given-names>J.</given-names></name> <name><surname>Gonz&#x000E1;lez</surname> <given-names>J.</given-names></name> <name><surname>Mart&#x000ED;n-Smith</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Deep learning for eeg-based motor imagery classification: accuracy-cost trade-off</article-title>. <source>PLoS ONE</source> <volume>15</volume>, <fpage>e0234178</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0234178</pub-id><pub-id pub-id-type="pmid">32525885</pub-id></citation></ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>F.</given-names></name> <name><surname>He</surname> <given-names>F.</given-names></name> <name><surname>Wang</surname> <given-names>F.</given-names></name> <name><surname>Zhang</surname> <given-names>D.</given-names></name> <name><surname>Xia</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name></person-group> (<year>2020</year>). <article-title>A novel simplified convolutional neural network classification algorithm of motor imagery EEG signals based on deep learning</article-title>. <source>Appl. Sci</source>. <volume>10</volume>, <fpage>1605</fpage>. <pub-id pub-id-type="doi">10.3390/app10051605</pub-id></citation>
</ref>
<ref id="B72">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>He</surname> <given-names>H.</given-names></name></person-group> (<year>2016</year>). <article-title>Implementation of EEG emotion recognition system based on hierarchical convolutional neural networks,</article-title> in <source>International Conference on Brain Inspired Cognitive Systems</source> (<publisher-name>Springer</publisher-name>), <fpage>22</fpage>&#x02013;<lpage>33</lpage>.</citation>
</ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Yang</surname> <given-names>H.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>D.</given-names></name> <name><surname>Du</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>EEG-based intention recognition with deep recurrent-convolution neural network: performance and channel selection by Grad-CAM</article-title>. <source>Neurocomputing</source> <volume>415</volume>, <fpage>225</fpage>&#x02013;<lpage>233</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2020.07.072</pub-id></citation>
</ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>J.</given-names></name> <name><surname>Wu</surname> <given-names>G.</given-names></name> <name><surname>Luo</surname> <given-names>Y.</given-names></name> <name><surname>Qiu</surname> <given-names>S.</given-names></name> <name><surname>Yang</surname> <given-names>S.</given-names></name> <name><surname>Li</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>EEG-based emotion classification using a deep neural network and sparse autoencoder</article-title>. <source>Front. Syst. Neurosci</source>. <volume>14</volume>, <fpage>43</fpage>. <pub-id pub-id-type="doi">10.3389/fnsys.2020.00043</pub-id><pub-id pub-id-type="pmid">32982703</pub-id></citation></ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Ding</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Cheng</surname> <given-names>J.</given-names></name> <name><surname>Song</surname> <given-names>R.</given-names></name> <name><surname>Wan</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network</article-title>. <source>Comput. Biol. Med</source>. <volume>123</volume>, <fpage>103927</fpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2020.103927</pub-id><pub-id pub-id-type="pmid">32768036</pub-id></citation></ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lotte</surname> <given-names>F.</given-names></name> <name><surname>Bougrain</surname> <given-names>L.</given-names></name> <name><surname>Cichocki</surname> <given-names>A.</given-names></name> <name><surname>Clerc</surname> <given-names>M.</given-names></name> <name><surname>Congedo</surname> <given-names>M.</given-names></name> <name><surname>Rakotomamonjy</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>A review of classification algorithms for EEG-based brain-computer interfaces: a 10 year update</article-title>. <source>J. Neural Eng</source>. <volume>15</volume>, <fpage>031005</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/aab2f2</pub-id><pub-id pub-id-type="pmid">29488902</pub-id></citation></ref>
<ref id="B77">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mai</surname> <given-names>N.-D.</given-names></name> <name><surname>Long</surname> <given-names>N. M. H.</given-names></name> <name><surname>Chung</surname> <given-names>W.-Y.</given-names></name></person-group> (<year>2021</year>). <article-title>1D-CNN-based BCI system for detecting emotional states using a wireless and wearable 8-channel custom-designed EEG headset,</article-title> in <source>2021 IEEE International Conference on Flexible and Printable Sensors and Systems (FLEPS)</source> (<publisher-loc>Manchester</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation>
</ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maiorana</surname> <given-names>E.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep learning for EEG-based biometric recognition</article-title>. <source>Neurocomputing</source> <volume>410</volume>, <fpage>374</fpage>&#x02013;<lpage>386</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2020.06.009</pub-id></citation>
</ref>
<ref id="B79">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mammone</surname> <given-names>N.</given-names></name> <name><surname>Ieracitano</surname> <given-names>C.</given-names></name> <name><surname>Morabito</surname> <given-names>F. C.</given-names></name></person-group> (<year>2021</year>). <article-title>MPNNet: a motion planning decoding convolutional neural network for EEG-based brain computer interfaces,</article-title> in <source>2021 International Joint Conference on Neural Networks (IJCNN)</source> (<publisher-loc>Shenzhen</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation>
</ref>
<ref id="B80">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mao</surname> <given-names>Z.</given-names></name> <name><surname>Yao</surname> <given-names>W. X.</given-names></name> <name><surname>Huang</surname> <given-names>Y.</given-names></name></person-group> (<year>2017</year>). <article-title>EEG-based biometric identification with deep learning,</article-title> in <source>2017 8th International IEEE/EMBS Conference on Neural Engineering (NER)</source> (<publisher-loc>Shanghai</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>609</fpage>&#x02013;<lpage>612</lpage>.</citation>
</ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mattioli</surname> <given-names>F.</given-names></name> <name><surname>Porcaro</surname> <given-names>C.</given-names></name> <name><surname>Baldassarre</surname> <given-names>G.</given-names></name></person-group> (<year>2022</year>). <article-title>A 1D CNN for high accuracy classification and transfer learning in motor imagery EEG-based brain-computer interface</article-title>. <source>J. Neural Eng</source>. <volume>18</volume>, <fpage>066053</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/ac4430</pub-id><pub-id pub-id-type="pmid">34920443</pub-id></citation></ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miao</surname> <given-names>M.</given-names></name> <name><surname>Hu</surname> <given-names>W.</given-names></name> <name><surname>Yin</surname> <given-names>H.</given-names></name> <name><surname>Zhang</surname> <given-names>K.</given-names></name></person-group> (<year>2020</year>). <article-title>Spatial-frequency feature learning and classification of motor imagery EEG based on deep convolution neural network</article-title>. <source>Comput. Math. Methods Med</source>. <volume>2020</volume>, <fpage>1981728</fpage>. <pub-id pub-id-type="doi">10.1155/2020/1981728</pub-id><pub-id pub-id-type="pmid">32765639</pub-id></citation></ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nathan</surname> <given-names>K.</given-names></name> <name><surname>Contreras-Vidal</surname> <given-names>J. L.</given-names></name></person-group> (<year>2016</year>). <article-title>Negligible motion artifacts in scalp electroencephalography (EEG) during treadmill walking</article-title>. <source>Front. Hum. Neurosci</source>. <volume>9</volume>, <fpage>708</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2015.00708</pub-id><pub-id pub-id-type="pmid">26793089</pub-id></citation></ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nguyen</surname> <given-names>T.-H.</given-names></name> <name><surname>Chung</surname> <given-names>W.-Y.</given-names></name></person-group> (<year>2018</year>). <article-title>A single-channel SSVEP-based BCI speller using deep learning</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>1752</fpage>&#x02013;<lpage>1763</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2886759</pub-id></citation>
</ref>
<ref id="B85">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oh</surname> <given-names>S. L.</given-names></name> <name><surname>Hagiwara</surname> <given-names>Y.</given-names></name> <name><surname>Raghavendra</surname> <given-names>U.</given-names></name> <name><surname>Yuvaraj</surname> <given-names>R.</given-names></name> <name><surname>Arunkumar</surname> <given-names>N.</given-names></name> <name><surname>Murugappan</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>A deep learning approach for Parkinson&#x00027;s disease diagnosis from EEG signals</article-title>. <source>Neural Comput. Appl</source>. <volume>32</volume>, <fpage>10927</fpage>&#x02013;<lpage>10933</lpage>. <pub-id pub-id-type="doi">10.1007/s00521-018-3689-5</pub-id></citation>
</ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Olivas-Padilla</surname> <given-names>B. E.</given-names></name> <name><surname>Chacon-Murguia</surname> <given-names>M. I.</given-names></name></person-group> (<year>2019</year>). <article-title>Classification of multiple motor imagery using deep convolutional neural networks and spatial filters</article-title>. <source>Appl. Soft Comput</source>. <volume>75</volume>, <fpage>461</fpage>&#x02013;<lpage>472</lpage>. <pub-id pub-id-type="doi">10.1016/j.asoc.2018.11.031</pub-id></citation>
</ref>
<ref id="B87">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ozdemir</surname> <given-names>M. A.</given-names></name> <name><surname>Degirmenci</surname> <given-names>M.</given-names></name> <name><surname>Guren</surname> <given-names>O.</given-names></name> <name><surname>Akan</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>EEG based emotional state estimation using 2-d deep learning technique,</article-title> in <source>2019 Medical Technologies Congress (TIPTEKNO)</source> (<publisher-loc>Izmir</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation>
</ref>
<ref id="B88">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pan</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>An EEG-based brain-computer interface for emotion recognition,</article-title> in <source>2016 International Joint Conference on Neural Networks (IJCNN)</source> (<publisher-loc>Vancouver, BC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2063</fpage>&#x02013;<lpage>2067</lpage>.</citation>
</ref>
<ref id="B89">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pedroni</surname> <given-names>A.</given-names></name> <name><surname>Bahreini</surname> <given-names>A.</given-names></name> <name><surname>Langer</surname> <given-names>N.</given-names></name></person-group> (<year>2019</year>). <article-title>Automagic: standardized preprocessing of big EEG data</article-title>. <source>Neuroimage</source> <volume>200</volume>, <fpage>460</fpage>&#x02013;<lpage>473</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2019.06.046</pub-id><pub-id pub-id-type="pmid">31233907</pub-id></citation></ref>
<ref id="B90">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Penchina</surname> <given-names>B.</given-names></name> <name><surname>Sundaresan</surname> <given-names>A.</given-names></name> <name><surname>Cheong</surname> <given-names>S.</given-names></name> <name><surname>Grace</surname> <given-names>V.</given-names></name> <name><surname>Valero-Cabr&#x000E9;</surname> <given-names>A.</given-names></name> <name><surname>Martel</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Evaluating deep learning EEG-based anxiety classification in adolescents with autism for breathing entrainment BCI</article-title>. <source>Brain Inform</source>. <volume>8</volume>, <fpage>13</fpage>. <pub-id pub-id-type="doi">10.21203/rs.3.rs-112880/v1</pub-id><pub-id pub-id-type="pmid">34255197</pub-id></citation></ref>
<ref id="B91">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Petoku</surname> <given-names>E.</given-names></name> <name><surname>Capi</surname> <given-names>G.</given-names></name></person-group> (<year>2021</year>). <article-title>Object movement motor imagery for EEG based BCI system using convolutional neural networks,</article-title> in <source>2021 9th International Winter Conference on Brain-Computer Interface (BCI)</source> (<publisher-loc>Gangwon</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>5</lpage>.</citation>
</ref>
<ref id="B92">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Polat</surname> <given-names>H.</given-names></name> <name><surname>&#x000D6;zerdem</surname> <given-names>M. S.</given-names></name></person-group> (<year>2020</year>). <article-title>Automatic detection of cursor movements from the EEG signals via deep learning approach,</article-title> in <source>2020 5th International Conference on Computer Science and Engineering (UBMK)</source> (<publisher-loc>Diyarbakir</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>327</fpage>&#x02013;<lpage>332</lpage>.</citation>
</ref>
<ref id="B93">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Puengdang</surname> <given-names>S.</given-names></name> <name><surname>Tuarob</surname> <given-names>S.</given-names></name> <name><surname>Sattabongkot</surname> <given-names>T.</given-names></name> <name><surname>Sakboonyarat</surname> <given-names>B.</given-names></name></person-group> (<year>2019</year>). <article-title>EEG-based person authentication method using deep learning with visual stimulation,</article-title> in <source>2019 11th International Conference on Knowledge and Smart Technology (KST)</source> (<publisher-loc>Phuket</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>6</fpage>&#x02013;<lpage>10</lpage>.</citation>
</ref>
<ref id="B94">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Qiao</surname> <given-names>W.</given-names></name> <name><surname>Bi</surname> <given-names>X.</given-names></name></person-group> (<year>2019</year>). <article-title>Deep spatial-temporal neural network for classification of EEG-based motor imagery,</article-title> in <source>Proceedings of the 2019 International Conference on Artificial Intelligence and Computer Science</source>, <fpage>265</fpage>&#x02013;<lpage>272</lpage>.</citation></ref>
<ref id="B95">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rahman</surname> <given-names>M. M.</given-names></name> <name><surname>Sarkar</surname> <given-names>A. K.</given-names></name> <name><surname>Hossain</surname> <given-names>M. A.</given-names></name> <name><surname>Hossain</surname> <given-names>M. S.</given-names></name> <name><surname>Islam</surname> <given-names>M. R.</given-names></name> <name><surname>Hossain</surname> <given-names>M. B.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Recognition of human emotions using EEG signals: a review</article-title>. <source>Comput. Biol. Med</source>. <volume>136</volume>, <fpage>104696</fpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2021.104696</pub-id><pub-id pub-id-type="pmid">34388471</pub-id></citation></ref>
<ref id="B96">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rammy</surname> <given-names>S. A.</given-names></name> <name><surname>Abrar</surname> <given-names>M.</given-names></name> <name><surname>Anwar</surname> <given-names>S. J.</given-names></name> <name><surname>Zhang</surname> <given-names>W.</given-names></name></person-group> (<year>2020</year>). <article-title>Recurrent deep learning for EEG-based motor imagination recognition,</article-title> in <source>2020 3rd International Conference on Advancements in Computational Sciences (ICACS)</source> (<publisher-loc>Lahore</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B97">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reddy</surname> <given-names>T. K.</given-names></name> <name><surname>Arora</surname> <given-names>V.</given-names></name> <name><surname>Gupta</surname> <given-names>V.</given-names></name> <name><surname>Biswas</surname> <given-names>R.</given-names></name> <name><surname>Behera</surname> <given-names>L.</given-names></name></person-group> (<year>2021</year>). <article-title>EEG-based drowsiness detection with fuzzy independent phase-locking value representations using Lagrangian-based deep neural networks</article-title>. <source>IEEE Trans. Syst. Man Cybern. Syst</source>. <volume>52</volume>, <fpage>101</fpage>&#x02013;<lpage>111</lpage>. <pub-id pub-id-type="doi">10.1109/TSMC.2021.3113823</pub-id></citation>
</ref>
<ref id="B98">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Roy</surname> <given-names>S.</given-names></name> <name><surname>McCreadie</surname> <given-names>K.</given-names></name> <name><surname>Prasad</surname> <given-names>G.</given-names></name></person-group> (<year>2019</year>). <article-title>Can a single model deep learning approach enhance classification accuracy of an EEG-based brain-computer interface?</article-title> in <source>2019 IEEE International Conference on Systems, Man and Cybernetics (SMC)</source> (<publisher-loc>Bari</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1317</fpage>&#x02013;<lpage>1321</lpage>.</citation>
</ref>
<ref id="B99">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Saha</surname> <given-names>P.</given-names></name> <name><surname>Fels</surname> <given-names>S.</given-names></name> <name><surname>Abdul-Mageed</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>Deep learning the EEG manifold for phonological categorization from active thoughts,</article-title> in <source>ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</source> (<publisher-loc>Brighton, UK</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2762</fpage>&#x02013;<lpage>2766</lpage>.</citation>
</ref>
<ref id="B100">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sakkalis</surname> <given-names>V.</given-names></name></person-group> (<year>2011</year>). <article-title>Review of advanced techniques for the estimation of brain connectivity measured with EEG/MEG</article-title>. <source>Comput. Biol. Med</source>. <volume>41</volume>, <fpage>1110</fpage>&#x02013;<lpage>1117</lpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2011.06.020</pub-id><pub-id pub-id-type="pmid">21794851</pub-id></citation></ref>
<ref id="B101">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schalk</surname> <given-names>G.</given-names></name> <name><surname>McFarland</surname> <given-names>D. J.</given-names></name> <name><surname>Hinterberger</surname> <given-names>T.</given-names></name> <name><surname>Birbaumer</surname> <given-names>N.</given-names></name> <name><surname>Wolpaw</surname> <given-names>J. R.</given-names></name></person-group> (<year>2004</year>). <article-title>BCI2000: a general-purpose brain-computer interface (BCI) system</article-title>. <source>IEEE Trans. Biomed. Eng</source>. <volume>51</volume>, <fpage>1034</fpage>&#x02013;<lpage>1043</lpage>. <pub-id pub-id-type="doi">10.1109/TBME.2004.827072</pub-id><pub-id pub-id-type="pmid">15188875</pub-id></citation></ref>
<ref id="B102">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shoeibi</surname> <given-names>A.</given-names></name> <name><surname>Khodatars</surname> <given-names>M.</given-names></name> <name><surname>Ghassemi</surname> <given-names>N.</given-names></name> <name><surname>Jafari</surname> <given-names>M.</given-names></name> <name><surname>Moridian</surname> <given-names>P.</given-names></name> <name><surname>Alizadehsani</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Epileptic seizures detection using deep learning techniques: a review</article-title>. <source>Int. J. Environ. Res. Public Health</source> <volume>18</volume>, <fpage>5780</fpage>. <pub-id pub-id-type="doi">10.3390/ijerph18115780</pub-id><pub-id pub-id-type="pmid">34072232</pub-id></citation></ref>
<ref id="B103">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Song</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Yue</surname> <given-names>K.</given-names></name> <name><surname>Zheng</surname> <given-names>N.</given-names></name> <name><surname>Shen</surname> <given-names>Z.-J. M.</given-names></name></person-group> (<year>2019</year>). <article-title>EEG-based motor imagery classification with deep multi-task learning,</article-title> in <source>2019 International Joint Conference on Neural Networks (IJCNN)</source> (<publisher-loc>Budapest</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation>
</ref>
<ref id="B104">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sulaiman</surname> <given-names>N.</given-names></name> <name><surname>Taib</surname> <given-names>M. N.</given-names></name> <name><surname>Lias</surname> <given-names>S.</given-names></name> <name><surname>Murat</surname> <given-names>Z. H.</given-names></name> <name><surname>Aris</surname> <given-names>S. A. M.</given-names></name> <name><surname>Hamid</surname> <given-names>N. H. A.</given-names></name></person-group> (<year>2011</year>). <article-title>EEG-based stress features using spectral centroids technique and k-nearest neighbor classifier,</article-title> in <source>2011 UkSim 13th International Conference on Computer Modelling and Simulation</source> (<publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>69</fpage>&#x02013;<lpage>74</lpage>.</citation>
</ref>
<ref id="B105">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sundaresan</surname> <given-names>A.</given-names></name> <name><surname>Penchina</surname> <given-names>B.</given-names></name> <name><surname>Cheong</surname> <given-names>S.</given-names></name> <name><surname>Grace</surname> <given-names>V.</given-names></name> <name><surname>Valero-Cabr&#x000E9;</surname> <given-names>A.</given-names></name> <name><surname>Martel</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Evaluating deep learning EEG-based mental stress classification in adolescents with autism for breathing entrainment BCI</article-title>. <source>Brain Inform</source>. <volume>8</volume>, <fpage>1</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1186/s40708-021-00133-5</pub-id><pub-id pub-id-type="pmid">34255197</pub-id></citation></ref>
<ref id="B106">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tabar</surname> <given-names>Y. R.</given-names></name> <name><surname>Halici</surname> <given-names>U.</given-names></name></person-group> (<year>2016</year>). <article-title>A novel deep learning approach for classification of EEG motor imagery signals</article-title>. <source>J. Neural Eng</source>. <volume>14</volume>, <fpage>016003</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/14/1/016003</pub-id><pub-id pub-id-type="pmid">27900952</pub-id></citation></ref>
<ref id="B107">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tang</surname> <given-names>X.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Fu</surname> <given-names>W.</given-names></name> <name><surname>Pan</surname> <given-names>J.</given-names></name> <name><surname>Zhou</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>A novel classification algorithm for MI-EEG based on deep learning,</article-title> in <source>2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC)</source> (<publisher-loc>Chongqing</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>606</fpage>&#x02013;<lpage>611</lpage>.</citation></ref>
<ref id="B108">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tang</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Sun</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>Single-trial EEG classification of motor imagery using deep convolutional neural networks</article-title>. <source>Optik</source> <volume>130</volume>, <fpage>11</fpage>&#x02013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijleo.2016.10.117</pub-id></citation></ref>
<ref id="B109">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tayeb</surname> <given-names>Z.</given-names></name> <name><surname>Fedjaev</surname> <given-names>J.</given-names></name> <name><surname>Ghaboosi</surname> <given-names>N.</given-names></name> <name><surname>Richter</surname> <given-names>C.</given-names></name> <name><surname>Everding</surname> <given-names>L.</given-names></name> <name><surname>Qu</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Validating deep neural networks for online decoding of motor imagery movements from EEG signals</article-title>. <source>Sensors</source> <volume>19</volume>, <fpage>210</fpage>. <pub-id pub-id-type="doi">10.3390/s19010210</pub-id><pub-id pub-id-type="pmid">30626132</pub-id></citation></ref>
<ref id="B110">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tiwari</surname> <given-names>S.</given-names></name> <name><surname>Goel</surname> <given-names>S.</given-names></name> <name><surname>Bhardwaj</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>MIDNN: a classification approach for the EEG-based motor imagery tasks using deep neural network</article-title>. <source>Appl. Intell</source>. <volume>52</volume>, <fpage>4824</fpage>&#x02013;<lpage>4843</lpage>. <pub-id pub-id-type="doi">10.1007/s10489-021-02622-w</pub-id></citation>
</ref>
<ref id="B111">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tortora</surname> <given-names>S.</given-names></name> <name><surname>Ghidoni</surname> <given-names>S.</given-names></name> <name><surname>Chisari</surname> <given-names>C.</given-names></name> <name><surname>Micera</surname> <given-names>S.</given-names></name> <name><surname>Artoni</surname> <given-names>F.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep learning-based BCI for gait decoding from EEG with LSTM recurrent neural network</article-title>. <source>J. Neural Eng</source>. <volume>17</volume>, <fpage>046011</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/ab9842</pub-id><pub-id pub-id-type="pmid">32480381</pub-id></citation></ref>
<ref id="B112">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Turner</surname> <given-names>J.</given-names></name> <name><surname>Page</surname> <given-names>A.</given-names></name> <name><surname>Mohsenin</surname> <given-names>T.</given-names></name> <name><surname>Oates</surname> <given-names>T.</given-names></name></person-group> (<year>2014</year>). <article-title>Deep belief networks used on high resolution multichannel electroencephalography data for seizure detection,</article-title> in <source>2014 AAAI Spring Symposium Series</source>.</citation>
</ref>
<ref id="B113">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vallabhaneni</surname> <given-names>R. B.</given-names></name> <name><surname>Sharma</surname> <given-names>P.</given-names></name> <name><surname>Kumar</surname> <given-names>V.</given-names></name> <name><surname>Kulshreshtha</surname> <given-names>V.</given-names></name> <name><surname>Reddy</surname> <given-names>K. J.</given-names></name> <name><surname>Kumar</surname> <given-names>S. S.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Deep learning algorithms in EEG signal decoding application: a review</article-title>. <source>IEEE Access</source> <volume>9</volume>, <fpage>125778</fpage>&#x02013;<lpage>125786</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2021.3105917</pub-id></citation>
</ref>
<ref id="B114">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Vilamala</surname> <given-names>A.</given-names></name> <name><surname>Madsen</surname> <given-names>K. H.</given-names></name> <name><surname>Hansen</surname> <given-names>L. K.</given-names></name></person-group> (<year>2017</year>). <article-title>Deep convolutional neural networks for interpretable analysis of EEG sleep stage scoring,</article-title> in <source>2017 IEEE 27th International Workshop on Machine Learning For Signal Processing (MLSP)</source> (<publisher-loc>Tokyo</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B115">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>V&#x000F6;lker</surname> <given-names>M.</given-names></name> <name><surname>Schirrmeister</surname> <given-names>R. T.</given-names></name> <name><surname>Fiederer</surname> <given-names>L. D.</given-names></name> <name><surname>Burgard</surname> <given-names>W.</given-names></name> <name><surname>Ball</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Deep transfer learning for error decoding from non-invasive EEG,</article-title> in <source>2018 6th International Conference on Brain-Computer Interface (BCI)</source> (<publisher-loc>Gangwon</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B116">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>B.</given-names></name> <name><surname>Wong</surname> <given-names>C. M.</given-names></name> <name><surname>Wan</surname> <given-names>F.</given-names></name> <name><surname>Mak</surname> <given-names>P. U.</given-names></name> <name><surname>Mak</surname> <given-names>P. I.</given-names></name> <name><surname>Vai</surname> <given-names>M. I.</given-names></name></person-group> (<year>2009</year>). <article-title>Comparison of different classification methods for eeg-based brain computer interfaces: a case study,</article-title> in <source>2009 International Conference on Information and Automation</source> (<publisher-loc>Zhuhai; Macau</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1416</fpage>&#x02013;<lpage>1421</lpage>.</citation>
</ref>
<ref id="B117">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>Review of the emotional feature extraction and classification using EEG signals</article-title>. <source>Cognit. Rob</source>. <volume>1</volume>, <fpage>29</fpage>&#x02013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1016/j.cogr.2021.04.001</pub-id></citation></ref>
<ref id="B118">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilaiprasitporn</surname> <given-names>T.</given-names></name> <name><surname>Ditthapron</surname> <given-names>A.</given-names></name> <name><surname>Matchaparn</surname> <given-names>K.</given-names></name> <name><surname>Tongbuasirilai</surname> <given-names>T.</given-names></name> <name><surname>Banluesombatkul</surname> <given-names>N.</given-names></name> <name><surname>Chuangsuwanich</surname> <given-names>E.</given-names></name></person-group> (<year>2019</year>). <article-title>Affective EEG-based person identification using the deep learning approach</article-title>. <source>IEEE Trans. Cognit. Dev. Syst</source>. <volume>12</volume>, <fpage>486</fpage>&#x02013;<lpage>496</lpage>. <pub-id pub-id-type="doi">10.1109/TCDS.2019.2924648</pub-id></citation>
</ref>
<ref id="B119">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>H.</given-names></name> <name><surname>Plataniotis</surname> <given-names>K. N.</given-names></name></person-group> (<year>2016</year>). <article-title>Affective states classification using EEG and semi-supervised deep learning approaches,</article-title> in <source>2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP)</source> (<publisher-loc>Montreal, QC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B120">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Ma</surname> <given-names>Z.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Fu</surname> <given-names>Y.</given-names></name></person-group> (<year>2020</year>). <article-title>A novel deep learning scheme for motor imagery EEG decoding based on spatial representation fusion</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>202100</fpage>&#x02013;<lpage>202110</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.3035347</pub-id></citation>
</ref>
<ref id="B121">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yin</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>Recognition of emotions using multimodal physiological signals and an ensemble deep learning model</article-title>. <source>Comput. Methods Programs Biomed</source>. <volume>140</volume>, <fpage>93</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1016/j.cmpb.2016.12.005</pub-id><pub-id pub-id-type="pmid">28254094</pub-id></citation></ref>
<ref id="B122">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zeng</surname> <given-names>H.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Dai</surname> <given-names>G.</given-names></name> <name><surname>Qin</surname> <given-names>F.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Kong</surname> <given-names>W.</given-names></name></person-group> (<year>2018</year>). <article-title>EEG classification of driver mental states by deep learning</article-title>. <source>Cogn. Neurodyn</source>. <volume>12</volume>, <fpage>597</fpage>&#x02013;<lpage>606</lpage>. <pub-id pub-id-type="doi">10.1007/s11571-018-9496-y</pub-id><pub-id pub-id-type="pmid">30483367</pub-id></citation></ref>
<ref id="B123">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zgallai</surname> <given-names>W.</given-names></name> <name><surname>Brown</surname> <given-names>J. T.</given-names></name> <name><surname>Ibrahim</surname> <given-names>A.</given-names></name> <name><surname>Mahmood</surname> <given-names>F.</given-names></name> <name><surname>Mohammad</surname> <given-names>K.</given-names></name> <name><surname>Khalfan</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Deep learning AI application to an EEG driven BCI smart wheelchair,</article-title> in <source>2019 Advances in Science and Engineering Technology International Conferences (ASET)</source> (<publisher-loc>Dubai</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>5</lpage>.</citation>
</ref>
<ref id="B124">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>C.</given-names></name> <name><surname>Kim</surname> <given-names>Y.-K.</given-names></name> <name><surname>Eskandarian</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>EEG-inception: an accurate and robust end-to-end neural network for EEG-based motor imagery classification</article-title>. <source>J. Neural Eng</source>. <volume>18</volume>, <fpage>046014</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/abed81</pub-id><pub-id pub-id-type="pmid">33691299</pub-id></citation></ref>
<ref id="B125">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>H.</given-names></name> <name><surname>Yang</surname> <given-names>H.</given-names></name> <name><surname>Guan</surname> <given-names>C.</given-names></name></person-group> (<year>2013</year>). <article-title>Bayesian learning for spatial filtering in an EEG-based brain-computer interface</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>24</volume>, <fpage>1049</fpage>&#x02013;<lpage>1060</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2013.2249087</pub-id><pub-id pub-id-type="pmid">24808520</pub-id></citation></ref>
<ref id="B126">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>R.</given-names></name> <name><surname>Xu</surname> <given-names>P.</given-names></name> <name><surname>Guo</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>P.</given-names></name> <name><surname>Yao</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>Z-score linear discriminant analysis for EEG-based brain-computer interfaces</article-title>. <source>PLoS ONE</source> <volume>8</volume>, <fpage>e74433</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0074433</pub-id><pub-id pub-id-type="pmid">24058565</pub-id></citation></ref>
<ref id="B127">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Yao</surname> <given-names>L.</given-names></name> <name><surname>Huang</surname> <given-names>C.</given-names></name> <name><surname>Gu</surname> <given-names>T.</given-names></name> <name><surname>Yang</surname> <given-names>Z.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name></person-group> (<year>2017</year>). <article-title>DeepKey: an EEG and gait based dual-authentication system</article-title>. <source>arXiv preprint</source> arXiv:1706.01606. <pub-id pub-id-type="doi">10.48550/arXiv.1706.01606</pub-id></citation>
</ref>
<ref id="B128">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Yao</surname> <given-names>L.</given-names></name> <name><surname>Kanhere</surname> <given-names>S. S.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Gu</surname> <given-names>T.</given-names></name> <name><surname>Chen</surname> <given-names>K.</given-names></name></person-group> (<year>2018a</year>). <article-title>MindID: person identification from brain waves through attention-based recurrent neural network</article-title>. <source>Proc. ACM Interact. Mobile Wearable Ubiquitous Technol</source>. <volume>2</volume>, <fpage>1</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1145/3264959</pub-id></citation>
</ref>
<ref id="B129">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Yao</surname> <given-names>L.</given-names></name> <name><surname>Sheng</surname> <given-names>Q. Z.</given-names></name> <name><surname>Kanhere</surname> <given-names>S. S.</given-names></name> <name><surname>Gu</surname> <given-names>T.</given-names></name> <name><surname>Zhang</surname> <given-names>D.</given-names></name></person-group> (<year>2018b</year>). <article-title>Converting your thoughts to texts: enabling brain typing via deep feature learning of EEG signals,</article-title> in <source>2018 IEEE International Conference on Pervasive Computing and Communications (PerCom)</source> (<publisher-loc>Athens</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>10</lpage>.</citation>
</ref>
<ref id="B130">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Yao</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Kanhere</surname> <given-names>S.</given-names></name> <name><surname>Sheng</surname> <given-names>M.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name></person-group> (<year>2018c</year>). <article-title>Internet of things meets brain-computer interface: a unified deep learning framework for enabling human-thing cognitive interactivity</article-title>. <source>IEEE Internet Things J</source>. <volume>6</volume>, <fpage>2084</fpage>&#x02013;<lpage>2092</lpage>. <pub-id pub-id-type="doi">10.1109/JIOT.2018.2877786</pub-id></citation>
</ref>
<ref id="B131">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>H.</given-names></name> <name><surname>Zhu</surname> <given-names>G.</given-names></name> <name><surname>You</surname> <given-names>F.</given-names></name> <name><surname>Kuang</surname> <given-names>S.</given-names></name> <name><surname>Sun</surname> <given-names>L.</given-names></name></person-group> (<year>2019</year>). <article-title>A multi-branch 3D convolutional neural network for EEG-based motor imagery classification</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng</source>. <volume>27</volume>, <fpage>2164</fpage>&#x02013;<lpage>2177</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2019.2938295</pub-id><pub-id pub-id-type="pmid">31478864</pub-id></citation></ref>
<ref id="B132">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>H.</given-names></name> <name><surname>Forenzo</surname> <given-names>D.</given-names></name> <name><surname>He</surname> <given-names>B.</given-names></name></person-group> (<year>2022</year>). <article-title>On the deep learning models for EEG-based brain-computer interface using motor imagery</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng</source>. <volume>30</volume>, <fpage>2283</fpage>&#x02013;<lpage>2291</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2022.3198041</pub-id><pub-id pub-id-type="pmid">35951573</pub-id></citation></ref>
</ref-list> 
</back>
</article> 