Original Research Article
A New Method to Generate Artificial Frames Using the Empirical Mode Decomposition for an EEG-Based Motor Imagery BCI
- 1Data and Signal Processing Research Group, Department of Engineering, University of Vic-Central University of Catalonia, Barcelona, Spain
- 2g.tec Medical Engineering Spain SL, Barcelona, Spain
- 3g.tec Medical Engineering GmbH, Schiedlberg, Austria
EEG-based Brain-Computer Interfaces (BCIs) are becoming a new tool for neurorehabilitation. BCIs are used to help stroke patients to improve the functional capability of the impaired limbs, and to communicate and assess the level of consciousness in Disorder of Consciousness (DoC) patients. BCIs based on a motor imagery paradigm typically require a training period to adapt the system to each user's brain, and the BCI then creates and uses a classifier created with the acquired EEG. The quality of this classifier relies on the amount of data used for training. More data can improve the classifier, but also increases the training time, which can be especially problematic for some patients. Training time might be reduced by creating new artificial frames by applying Empirical Mode Decomposition (EMD) on the EEG frames and mixing their Intrinsic Mode Functions (IMFs). The purpose of this study is to explore the use of artificial EEG frames as replacements for some real ones by comparing classifiers trained with some artificial frames to classifiers trained with only real data. Results showed that, in some subjects, it is possible to replace up to 50% of frames with artificial data, which reduces training time from 720 to 360 s. In the remaining subjects, at least 12.5% of the real EEG frames could be replaced, reducing the training time by 90 s. Moreover, the method can be used to replace EEG frames that contain artifacts, which reduces the impact of rejecting artifacted data. The method was also tested in an out-of-sample scenario with the best subjects from a public database, who yielded very good results using a frame collection with 87.5% artificial frames. These initial results with healthy users need to be further explored with patients' data, along with research into alternative IMF mixing strategies and using other BCI paradigms.
Brain-Computer Interfaces (BCI) are systems capable of controlling external devices using direct measures of the brain signals (Wolpaw et al., 2002; Wolpaw and Wolpaw, 2012). A BCI has three main parts:
1. Brain signals acquisition system.
2. Processing system.
3. Device/feedback control.
The selection of the brain signal acquisition system relies on the intended BCI application (Wolpaw et al., 2002; Shih et al., 2012; Wolpaw and Wolpaw, 2012). EEG is a non-invasive approach with a high temporal resolution that is suited for real-time application (Shih et al., 2012). EEG signals are electrical potential differences from different areas of the scalp caused by the firing of different neurons, often in response to an external stimulus. The resulting synchronized activity across large groups of neurons leads to electrical changes over different brain regions that can be recorded and sent to the processing system.
In a BCI system (Figure 1), EEG signals are processed by a computer or processing unit (processing system). These signals are highly noisy, and filtering and pattern recognition techniques are needed to extract useful information from them (Wolpaw et al., 2002; Wolpaw and Wolpaw, 2012). Paradigms are instructions that the BCI user must follow to elicit known brain responses that the processing system can detect and use to control an external device. Many BCIs are designed to control monitors, but BCIs have also been used with other external devices, such as a functional electrical stimulator (FES) or an orthosis as part of a BCI-based motor rehabilitation system.
Figure 1. Block diagram of a generic EEG-based BCI system. The BCI gets EEG data from the subject, processes it and generates the proper signals to control the external device and give feedback to the subject.
Recently, EEG-based BCIs have been extended to new tools for neurorehabilitation patients who have upper limb impairment due to a stroke (Ramos-Murguialday et al., 2013; Cho et al., 2016). They are also being used for patients with disorders of consciousness to assess their mental state and provide communication (Guger et al., 2013, 2017).
Different BCIs have used different paradigms (Farwell and Donchin, 1988; Pfurtscheller, 2001; Oehler et al., 2008), and one of the most widely used involves Motor Imagery or MI (Guger et al., 2015). In an MI BCI paradigm, the user is asked to imagine specific movements, such as left or right hand movements. This movement imagination activates areas of the motor cortex, much like the activation resulting from real movement. Thus, MI BCIs may determine whether a user is imagining left vs. right hand movement to provide a “yes” or “no” reply to a question or move a cursor horizontally.
In the MI paradigm, a trial is the time period during which the user imagines movement, as well as any additional time needed for instructions, cues, or other delays. The BCI presents real-time feedback to the user that indicates how well the MI task is being performed and classified. This feedback might be visual information displayed on a screen, auditory feedback through headphones, or proprioceptive or other feedback from other devices.
When using the MI BCI approach to help patients regain movement, the feedback often includes an avatar presented on a monitor that performs simulated hand/arm movements and FES electrodes placed over the affected limb. In conventional therapy, the patient is asked to imagine performing a movement such as wrist dorsiflexion while a therapist provides instructions and manages an FES device that triggers wrist dorsiflexion. By adding the MI BCI into the control loop, rewarding feedback such as avatar movement and FES activation is only possible when the patient performs the correct MI. This BCI-based feedback is much more tightly coupled to each patient's MI than conventional means, which should increase the functional improvement from therapy training (Remsik et al., 2016; Sabathiel et al., 2016).
BCIs, especially MI BCIs, usually require calibration for each user for at least two reasons. First, classifiers need time to learn the unique features of each new user's EEG activity, such as ERD/S used in MI BCIs. Second, these features may change within or across sessions or runs due to fatigue, medication, motivation, different cap placement, or other factors. Different cap placement from one session to another could be especially problematic if BCIs gain wider clinical adoption. Many therapists and other staff are not trained in precise cap positioning, and this process can require a few additional minutes. Calibration at the start of a session can lead to better classifier performance, but also requires additional time. Since MI BCIs typically require more calibration time than other BCIs, and patients with stroke may have limited time and motivation, new approaches to reduce calibration time with MI BCIs are needed.
In a typical BCI, a new EEG data frame is obtained from each trial. The quality of the classifier is directly proportional to the number of frames from each type of MI (such as left vs. right hand; Ramoser et al., 2000). This paper explores a new approach that creates artificial frames, which the classifier can use like real frames to reduce the need for calibration data. Because of the non-linear and non-stationary aspects of EEG signals, a new processing method based on Empirical Mode Decomposition (EMD; Huang et al., 1998) is proposed to generate those new artificial frames (Hawley et al., 2008; Huang et al., 2013; Riaz et al., 2015).
Materials and Methods
The experiment was performed on 7 healthy men aged 29.8 ± 5.76 years. All subjects reported no history of stroke or other cause of movement disability and signed an informed consent document prior to participating in the study.
The paradigm was implemented using a closed-loop system that provides real-time feedback to the user and saves the data for later analysis. This system uses a 16 EEG channel cap (g.SCARABEO, g.tec medical engineering GmbH) with the electrodes placed over the sensorimotor cortex according to the 10/10 international system: FC5, FC1, FCz, FC2, FC6, C5, C3, C1, Cz, C2, C4, C6, CP5, CP1, CP2, CP6. The Fpz electrode is connected to the ground and a reference electrode is placed on the right earlobe. The EEG cap is connected to a biomedical amplifier (g.USBamp, g.tec medical engineering GmbH), which is connected to a computer using a USB cable. The system provides two kinds of real-time feedback: visual feedback through an avatar displayed on a screen, and proprioceptive feedback through FES electrodes placed on the extensor digitorum communis muscles of each subject's left and right arms.
At the beginning of each session, each subject was seated in a comfortable chair about 1 m in front of a monitor. The EEG cap was mounted and FES electrodes were affixed to both arms to stimulate wrist dorsiflexion. The experimenter visually inspected the subject's real-time EEG to check data quality and calibrated the FES parameters (pulse width and current) for each subject. Each subject was then asked to sit in front of the monitor and follow the instructions provided by the system.
Each subject completed one session with two runs. A short break was provided between these two runs, during which the subjects remained seated with the cap and FES electrodes in place. Each run presented 80 trials (40 for each side) to each subject. During the first 2 s of each trial, the subject rested, after which an acoustic signal (beep) indicated whether the subject should imagine left or right wrist dorsiflexion. The subject imagined the movement from seconds 3 to 8 while the system provided real-time feedback through the monitor and FES electrodes. After second 8, the trial ended and a new trial began (Figure 2). There were an equal number of cues to the left vs. right wrist during each run, and the order was chosen pseudorandomly. Data were stored for later offline analysis.
Figure 2. Motor imagery paradigm trial. During the first 2 s, the user is asked to relax. After 2 s, a beep is played and then an auditory cue indicates whether the user should imagine left or right movement. One frame consists of the data resulting from one trial.
Empirical Mode Decomposition
Common analytical tools like the FFT and wavelets are not well-suited to processing EEG signals in this scenario because these signals are non-linear and non-stationary. The Empirical Mode Decomposition (EMD) method is a data-driven analysis that is better suited to non-stationary signals whose frequency structure changes within a short period of time.
The algorithm decomposes the original signal into a finite number of functions called Intrinsic Mode Functions (IMFs), each of which represents a non-linear oscillation of the signal (Huang et al., 1998). These intrinsic functions fulfill two conditions:
1. In the whole signal, the number of extrema is the same as the number of zero-crossings, or differs by at most one.
2. For any sample, the mean value between the envelope of the local maxima and the envelope of the local minima is zero.
The process to obtain the IMFs from a signal x(t) is:
1. Set s(t) = ri−1(t). Initially, i = 1 and r0(t) = x(t).
2. Detect the local maxima and the local minima of s(t).
3. Interpolate all local maxima to generate the upper envelope.
4. Interpolate all local minima to generate the lower envelope.
5. Obtain the local mean m(t) by averaging the upper and lower envelopes.
6. Get a candidate for IMF by subtracting the local mean m(t) from the signal: h(t) = s(t) − m(t).
7. If h(t) does not satisfy the IMF's conditions, begin a new loop from step 2, setting s(t) = h(t).
8. Otherwise, h(t) is defined as an IMF: IMFi(t) = h(t).
9. ri(t) = ri−1(t) − IMFi(t).
10. If ri(t) is a monotonic function or does not have enough extrema to calculate the upper and lower envelopes, then IMFi(t) is the last IMF function of x(t) and the decomposition ends.
11. Otherwise, set s(t) = ri(t) and start a new loop from step 2 in order to obtain IMFi+1(t).
Once all the IMFs have been calculated, the signal can be recovered from its IMFs and the final residue rn(t), where n is the number of extracted IMFs (Figure 3):
x(t) = IMF1(t) + IMF2(t) + ... + IMFn(t) + rn(t)    (1)
The number of IMFs depends on the structure of the EEG signal, and may vary among different EEG data samples. An EEG signal is completely restored by adding all its IMFs and the final residue. Likewise, if a single one of these IMFs is replaced with an IMF from another previously decomposed EEG signal, and formula (1) is applied, a different EEG signal is obtained.
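The sifting procedure above can be sketched in Python. This is a minimal illustration, not a production EMD implementation: the stopping rules (a fixed sift count and simple extrema-count tests) are simplified assumptions, and the function name is ours.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def emd(x, max_imfs=15, max_sift=10):
    """Decompose x into a list of IMFs plus a final residue (simplified sketch).

    The sifting stop criteria here are simplified with respect to
    Huang et al. (1998); boundary handling is plain spline extrapolation.
    """
    t = np.arange(len(x))
    imfs, residue = [], x.astype(float).copy()
    for _ in range(max_imfs):
        s = residue.copy()
        for _ in range(max_sift):                      # sifting loop (steps 2-7)
            maxima = argrelextrema(s, np.greater)[0]
            minima = argrelextrema(s, np.less)[0]
            if maxima.size < 4 or minima.size < 4:     # too few points to spline
                break
            upper = CubicSpline(maxima, s[maxima])(t)  # upper envelope (step 3)
            lower = CubicSpline(minima, s[minima])(t)  # lower envelope (step 4)
            s = s - (upper + lower) / 2.0              # remove local mean (steps 5-6)
        if argrelextrema(s, np.greater)[0].size < 2:   # residue ~ monotonic (step 10)
            break
        imfs.append(s)                                 # accept IMF (steps 8-9)
        residue = residue - s
    return imfs, residue
```

By construction, summing the returned IMFs with the residue recovers the original signal, which is the property that formula (1) expresses.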
New Artificial EEG Frames
Prior work has created artificial EEG frames using stationary approaches that feed Gaussian noise into an FFT-based system (Paris et al., 2017), but this approach lacks the temporal features of natural EEG signals. Alternatively, some studies created artificial EEG by mixing different parts of different temporal EEG signals (Lotte, 2011). This method keeps the temporal features of the signal, but offers no control over its frequency features.
Using the EMD approach, the new artificial EEG signals can be created by combining some IMFs from different real EEG signals. Although those new EEG signals will be different from the real ones, they will exhibit similar features and the same underlying structure. Unlike the other approaches described above, the EMD analysis can keep the features within temporal and frequency domains, because each IMF is a representation in the temporal domain of a specific non-linear oscillation of the signal.
In the paradigm used in this study, each MI frame is composed of 16 EEG signals, meaning that any new artificial frame needs 16 new artificial EEG signals.
Starting from a real frame collection, the new frame collection containing artificial frames is built following these steps:
1. Define the number of frames to be replaced. This requires replacing the same number of frames from each class (right-side and left-side) with a maximum of 40 frames.
2. Randomly select the frames to be replaced in the original frame collection. The rest of the frames contribute with their IMFs to build the new artificial frames.
3. The selected frames are split in two sets of frames according to their class (left vs. right).
4. To create an artificial frame of a specific class, N frames are selected randomly from the set of frames belonging to the same class (Figure 4). The first selected frame contributes all of its first IMFs (16 IMFs, one per channel), the second one its second IMFs, and so on until the Nth frame, which contributes its Nth IMFs.
5. Add up all the IMFs corresponding to the same channel to build each new EEG channel of the new artificial frame.
Figure 4. A new frame collection containing artificial frames is created using an original frame collection and randomly selecting the removed frames. The IMFs of the non-selected frames are randomly mixed to create the artificial frames that will replace the removed ones.
Repeat steps 4 and 5 for each new artificial frame needed to complete the frame collection.
As explained in section Empirical Mode Decomposition, different EEG signals might have different numbers of IMFs, so the number of IMFs of the new artificial frames must be established beforehand. In this study, we considered an EEG signal to be completely represented by its first 15 IMFs, because none of the decomposed signals had more than 12 IMFs. Thus, for every real decomposed EEG signal with fewer than 15 IMFs, additional zero-valued IMFs were appended, reaching 15 IMFs for every decomposed signal.
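Steps 4 and 5 can be sketched as follows, assuming each real frame of the class has already been EMD-decomposed per channel and zero-padded to 15 IMFs; the array layout and function name are illustrative assumptions.

```python
import numpy as np

def make_artificial_frame(frames_imfs, rng):
    """Assemble one artificial frame from the IMFs of real same-class frames.

    frames_imfs has shape (n_frames, 15, 16, n_samples): every real frame
    of the class, EMD-decomposed per channel and zero-padded to 15 IMFs.
    """
    n_frames, n_imfs, n_channels, n_samples = frames_imfs.shape
    # One donor frame per IMF level: donor k contributes its k-th IMFs
    # on all 16 channels (step 4)
    donors = rng.choice(n_frames, size=n_imfs, replace=False)
    artificial = np.zeros((n_channels, n_samples))
    for level, donor in enumerate(donors):
        artificial += frames_imfs[donor, level]   # channel-wise IMF sum (step 5)
    return artificial
```

Selecting donors without replacement matches the description of 15 distinct real frames each contributing one IMF level.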
We used this procedure to create new frame collections for each subject's data. Each of these new frame collections contained a different number of artificial frames: 2 (2.5%), 4 (5%), 6 (7.5%), 8 (10%), 10 (12.5%), 20 (25%), 30 (37.5%), or 40 (50%). This process created 9 frame collections: the original data with 0 artificial frames, and eight collections with artificial frames. For each of those 9 frame collections, we constructed a classifier and determined the error rate.
Classifier Training and Implementation
The classifier is based on Linear Discriminant Analysis (LDA). Initially, the frame collection is divided into two groups of frames according to their class (right or left wrist movement). Next, every signal is bandpass filtered (8–30 Hz) and artifact rejection is applied. With the non-rejected frames, a CSP spatial filter is calculated (Koles et al., 1990; Wang et al., 2005), keeping only the first two and last two resulting vectors as the spatial filter. The 16 EEG signals of a frame are thus spatially filtered into four signals. The variance over a 1.5 s window is calculated for each of these signals. Finally, these variances are normalized and scaled logarithmically, then used as features to build the LDA classifier (Cho et al., 2016).
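The CSP filtering, log-variance feature extraction, and LDA stages can be sketched compactly. This is an illustration of the pipeline's structure under simplifying assumptions (no bandpass filtering or artifact rejection, full-frame variance instead of a 1.5 s window, and a minimal two-class Fisher LDA rather than the study's exact implementation); all function names are ours.

```python
import numpy as np
from scipy.linalg import eigh

def csp(class_a, class_b, n_keep=2):
    """CSP sketch: each class is an array (n_frames, n_channels, n_samples).
    Keeps the first and last n_keep generalized eigenvectors as filters."""
    def avg_cov(frames):
        covs = [f @ f.T / np.trace(f @ f.T) for f in frames]
        return np.mean(covs, axis=0)
    Ca, Cb = avg_cov(class_a), avg_cov(class_b)
    _, vecs = eigh(Ca, Ca + Cb)          # generalized eigendecomposition
    W = np.concatenate([vecs[:, :n_keep], vecs[:, -n_keep:]], axis=1)
    return W.T                           # shape (2*n_keep, n_channels)

def log_var_features(frame, W):
    """Project a frame with the CSP filters; normalized log-variance features."""
    v = np.var(W @ frame, axis=1)
    return np.log(v / v.sum())

def fit_lda(X, y):
    """Minimal two-class Fisher LDA: weight vector and midpoint bias."""
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return w, b
```

A frame is then classified by the sign of `X @ w + b`, where `X` is its feature vector.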
A frame collection and classifier are needed to calculate the error rate. Each frame is passed through the classifier, which outputs a value indicating the estimated class of that frame for each of its 2,048 samples (256 samples per second). Each estimate is then compared to the true class and marked as correct if they match, and incorrect otherwise. After determining the error of every sample in a frame collection, the percentage of incorrect samples is calculated over the feedback period of each trial (from second 3.5 to second 8), providing the global error rate for that classifier. The error rate is expressed as two separate percentages: a right-side error rate and a left-side error rate.
Data from each subject's first run were used to build all the classifiers, and data from the second run were used to assess the performance of these classifiers with out-of-sample data (Figure 5). The out-of-sample error rates of the classifiers without artificial frames were also calculated.
Figure 5. The paradigm provided two datasets. The first dataset was used to build the classifier. Next, the classifier was assessed with both datasets: in-sample (dataset 1) and out-of-sample (dataset 2). Left-side and right-side error rate (ER) can then be determined to assess classifier performance.
The new frame creation process relies on the random selection of the removed frames and the IMFs. Repeating the experiment with a different random seed leads to different frame collections and, very likely, to slightly different results. Hence, the frame creation procedure described in section New Artificial EEG Frames and the classification process described in this section were repeated 100 times for each subject.
Median Absolute Deviation
The MAD (Median Absolute Deviation) is a method to detect outliers in a statistical sample when the sample is small and has a non-normal distribution (Leys et al., 2013); instead of using the mean value to fix the boundaries, it uses the median. Usually, the upper boundary is defined as the median plus three times the MAD, and the lower one as the median minus three times the MAD (2). All samples outside those boundaries are considered outliers, and all inside them inliers (3).
We used the MAD approach to validate the performance of each classifier with a specific number of artificial frames. We used the MAD and the sample's median to calculate the ratio R = |value − median| / MAD (4), and two values of this ratio (right-side and left-side) were obtained using the error rates of the classifiers built without artificial frames.
For example, after 100 repetitions of the experiment for a specific subject, 100 classifiers with N artificial frames were created (using a frame collection with N artificial frames), and their right and left error rates were calculated. From this sample, the median and the MAD values were obtained. Then, the ratio R was calculated using the error rates of the classifier created with the frame collection without artificial frames.
This process sought to determine whether the original classifier could be considered an inlier of the sample of classifiers with N artificial frames. Thus, a value of R below 3 meant that the original classifier was not an outlier, and that replacing up to N real frames with artificial ones yields similar performance for that specific subject.
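The inlier test of (2)–(4) can be sketched as follows, using the raw (unscaled) MAD as described above; note that Leys et al. (2013) also discuss a normal-consistency scaling constant, which we omit here. The function name and interface are ours.

```python
import numpy as np

def mad_ratio(sample, value):
    """Ratio (4): distance of `value` from the sample median, in MADs.

    `sample` would be the 100 error rates of the classifiers built with N
    artificial frames; `value` the error rate of the original classifier.
    A ratio below 3 marks `value` as an inlier of the sample.
    """
    med = np.median(sample)
    mad = np.median(np.abs(np.asarray(sample) - med))
    return abs(value - med) / mad
```

In the study, this ratio is computed twice per classifier, once for the right-side and once for the left-side error rate, and both must fall below 3.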
In-Sample Results
A classifier with a specific number of artificial frames is considered similar to its original if its right and left ratios are both below 3 (section Classifier Training and Implementation). Across all subjects and all classifiers, only one of the classifiers with 37.5% artificial frames of subject S02 is considered dissimilar (Table 1). From the same subject, the ratios of the classifiers with 25.0 and 50.0% are just below 3. Using lower maximum ratios applies stricter conditions to test the classifiers. If we apply a ratio threshold of 2.6 instead of 3, these two outcomes from S02 would be considered outliers. Further, subjects S03 and S06 also have high ratio values (above 2.6), but below 3. If a maximum ratio of 2 is applied, all classifiers for all subjects were acceptable if the frame collection used at most 12.5% artificial frames. All classifiers were statistically similar to their corresponding original classifiers for subjects S01, S04, S05, and S07.
Classifiers with more than 37.5% of artificial frames for subjects S01 and S06 showed a smaller ratio in the right-side class than the classifiers with fewer artificial frames. However, the left-side class of the same classifiers increased considerably.
Out-of-Sample Results
The previously created classifiers and the second recorded dataset were used to analyze performance with out-of-sample data. First, we calculated the error rates of the classifiers built without artificial frames. We only designated classifiers with an error rate below 33% on both sides as useful. Under these conditions, only subjects S01 and S03 had valid error rates on both sides (Table 2).
Table 3 presents additional details from subjects S01 and S03. Subject S01 showed very good results, with very small and similar error rates between the original classifiers and the rest of his classifiers. Subject S03 showed higher error rates than subject S01, and the error rates increased slightly with the number of artificial frames in the frame collection (Table 3). Nonetheless, the classifiers built with at most 37.5% artificial frames had error rates on both sides below the 33% threshold. However, the right-side error rate of the classifier with 50% artificial frames is 34.06%, meaning that this classifier should not be considered valid.
Considering that only 2 out of 7 subjects were deemed valid for analysis in an out-of-sample scenario, and that an error rate below 33% can still lead to a valid classifier, we also used an external EEG MI dataset (Cho et al., 2017) to increase the number of subjects. We selected the four subjects with the best accuracies and split each of their datasets into two different sets of data. The first set was used to create the classifier, and the second set was used to calculate the out-of-sample error rate. Table 4 shows the experimental results, which are very close to the results from the subjects recorded in the present study. Results are especially good for subjects E01 and E02. Subject E03 showed a non-valid value only in the classifier built with a density of 50%, meaning that all his other classifiers can be considered useful. On the other hand, subject E04 has no value below 33%, so none of his classifiers should be considered valid.
Additional Out-of-Sample Results
In the previous experiments we used a maximum density of artificial frames of 50%. Here we present new experiments increasing this density above 50% in order to determine the subject-specific maximum density that can still yield valid classifiers (both mean error rates below 33%). The experiment was repeated for densities of 62.5, 75, and 87.5%. As shown in Table 5, subjects S01, E01, and E02 had error rates below 33% with frame collections composed of up to 87.5% artificial frames. The other two subjects (S03 and E03) showed error rates above 33% with densities above 50%, and subject E04, whose data had already failed to yield any valid classifier in the previous experiments with densities up to 50%, again had no valid classifier.
Discussion
This paper introduced a new method to create artificial EEG data frames to reduce the calibration time required for an MI BCI paradigm. The results suggest that the maximum number of artificial frames that is advisable in a frame collection varies substantially across different people. This could occur because the subject's MI varies within and across each trial, meaning that the mixing of different IMFs might produce a less helpful artificial frame. Longer training should help subjects learn to generate more consistent and distinct MI activity, and shorter trials and improved feedback could also be helpful.
The in-sample results demonstrate that the method can create similar classifiers for four out of seven subjects when the frame collection has at most 50% artificial frames, which allows halving the training time for these subjects. This could reduce the fatigue, stress, and discouragement associated with training, when feedback is often inaccurate. Additional research might identify methods to determine a priori which subjects could tolerate frame collections with 50% or even more artificial frames.
While in-sample results are used to assess the capability of the neurorehabilitation patient or other users to control the BCI, out-of-sample processing is used to send the feedback to the patient. Typically, the BCI uses a classifier created from the preceding session from the patient. Reducing the error rate in out-of-sample data results in more accurate feedback, which should improve the closed-loop synergy between the user and the BCI. Out-of-sample results showed that subjects whose classifiers based on real data yielded acceptable error rates (below 33%) also had acceptable error rates when using the classifiers with artificial frames. However, only 2 out of the 7 subjects had original classifier error rates below 33%, which is insufficient to thoroughly validate this method on an out-of-sample environment.
Our study also included four subjects with good MI accuracy from an external database. Their out-of-sample error rates were very close to the ones achieved with the subjects of our study. Seeing these good out-of-sample results, we extended the experiment to densities beyond 50%. In 3 of these 6 subjects, classifiers built with 87.5% artificial frames still led to error rates below 33%. Additional research will be needed to explore whether the slight increase in error rate resulting from the increased proportion of artificial frames is worth the reduced training time. Further research could also increase the density of artificial frames, which may help increase the generalization of the classifiers and thereby decrease their out-of-sample error rates.
The study showed a similar in-sample behavior in all subjects' classifiers created with a maximum of 12.5% artificial frames in their frame collections and a strict ratio threshold of 2. Using 12.5% artificial frames would improve a motor imagery BCI system in two ways. First, it would reduce the training time from 720 to 630 s. Second, the method could be used to replace artifacted frames with artificial ones. In the CSP calculation, the number of frames for each side must be exactly the same, so if some frames in one class are artifacted, the number of frames in the other class must be reduced accordingly. This can reduce classifier accuracy and may necessitate additional training runs. Instead, up to 12.5% of artifacted frames could simply be replaced.
This study used an LDA classifier due to its widespread use in MI BCI paradigms. Further studies could test the artificial frame creation method with different classifiers. Another interesting direction is the IMF mixing strategy used to obtain the artificial frames. The described method mixes 15 IMFs from 15 randomly chosen real frames to build a new artificial frame. Mixing only the most significant IMFs (instead of fifteen), or even reducing the number of real frames to three or four, might both be worth exploring.
This approach might also be extended to other types of BCIs. For example, some passive approaches for evaluating alertness or fatigue might benefit. BCIs based on the P300 complex, steady-state evoked potentials, and similar BCI paradigms that require focused attention typically require much less training than MI and most other BCIs. However, this approach could still be useful for countering artifact or to improve classifier accuracy in some users, such as patients using a vibrotactile P300 system.
Most importantly, this new BCI method needs additional research with more subjects, especially to validate the out-of-sample behavior. These subjects should include target patients, including persons with stroke and other persons seeking rehabilitation. New paradigms could provide training of other types of rehabilitation, such as lower-limb training. Patients with locked-in syndrome (LIS) may also benefit from this approach for communication or other goals.
This study was carried out in accordance with the recommendations of the Ethics Committee of the Kepler Universitätsklinikum (Kepler University Hospital), Austria. The protocol was approved by the Ethikkommission des Landes OberÖsterreich (Ethics Committee of the Province of Upper Austria). All subjects gave written informed consent in accordance with the Declaration of Helsinki.
JS-C conceived and organized the artificial frame generation protocol and its application; JD-F collected the experimental data, implemented the protocol and the classification algorithm, and performed the statistical data analysis; RO contributed to the signal processing section; JS-C and CG had theoretical contributions on the analysis of the results; JD-F wrote the first draft of the paper. All authors reviewed the draft of the paper and approved the final manuscript.
Conflict of Interest Statement
CG is the co-CEO of g.tec Medical Engineering. JD-F and RO are both employed by g.tec Medical Engineering. JS-C declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
We thank Brendan Z. Allison for the critical comments and suggestions on the draft of the paper. We also thank the Ministry of Business and Knowledge of the Government of Catalonia that partially supported this study (ACCIO RD15-1-0020 project and the Industrial Doctorates Plan), and the SME Phase II Instrument recoveriX (No. 693928).
Keywords: brain-computer interface, motor imagery, empirical mode decomposition, artificial frames, EEG
Citation: Dinarès-Ferran J, Ortner R, Guger C and Solé-Casals J (2018) A New Method to Generate Artificial Frames Using the Empirical Mode Decomposition for an EEG-Based Motor Imagery BCI. Front. Neurosci. 12:308. doi: 10.3389/fnins.2018.00308
Received: 05 February 2018; Accepted: 20 April 2018;
Published: 11 May 2018.
Edited by: Ioan Opris, University of Miami, United States
Copyright © 2018 Dinarès-Ferran, Ortner, Guger and Solé-Casals. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jordi Solé-Casals, email@example.com