Event Abstract

Common Spatial Pattern Patches - an Optimized Spatial Filter for Adaptive BCIs

  • 1 Berlin Institute of Technology, Machine Learning Laboratory, Germany
  • 2 Berlin Institute of Technology, Bernstein Focus: Neurotechnology, Germany
  • 3 Fraunhofer FIRST, IDA, Germany

Laplacian filters are commonly used in Brain-Computer Interfacing (BCI). When data from only a few channels are available, or when, as at the beginning of an experiment, no previous data from the same user exist, complex features cannot be used. In this case, band power features calculated from Laplacian-filtered channels provide an easy, robust and general signal to control a BCI, since their calculation does not involve any class information. For the same reason, the performance obtained with Laplacian features is poor compared to subject-specific optimized spatial filters, such as those obtained by Common Spatial Patterns (CSP) analysis, which, on the other hand, can only be used in a later phase of the experiment, since it requires a considerable amount of training data to achieve stable and good performance (overfitting problem). This drawback is particularly evident for poorly performing BCI users, whose data are highly non-stationary and contain little class-relevant information. Therefore, Laplacian filtering is preferred to CSP, e.g., in the initial period of co-adaptive calibration [1], a novel BCI paradigm which eliminates the calibration session and counteracts the problem of BCI illiteracy.
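As an illustration of this kind of feature, the following Python sketch shows how a small Laplacian derivation and log band-power features might be computed for a single trial. The channel indices, sampling rate, filter order and exact band edges are assumptions chosen for illustration, not values taken from the abstract.

```python
import numpy as np
from scipy.signal import butter, lfilter

def small_laplacian(eeg, center_idx, neighbor_idx):
    """Small Laplacian derivation: subtract the mean of the four
    surrounding channels from the central channel [2].
    eeg: array of shape (n_channels, n_samples)."""
    return eeg[center_idx] - eeg[neighbor_idx].mean(axis=0)

def log_bandpower(signal, fs, band):
    """Band-pass filter a single-channel signal and return the log of
    its variance (a band power feature)."""
    b, a = butter(5, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = lfilter(b, a, signal)
    return np.log(np.var(filtered))

# Hypothetical usage: C3 with its four nearest neighbours, alpha and beta bands
# (indices, sampling rate and band edges are assumptions).
fs = 100
eeg = np.random.randn(15, 4 * fs)            # one 4-s trial, 15 channels
lap_c3 = small_laplacian(eeg, center_idx=0, neighbor_idx=[1, 2, 3, 4])
features = [log_bandpower(lap_c3, fs, band) for band in [(8, 12), (16, 24)]]
```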

Here, the use of an ensemble of local CSP patches (CSPP) is proposed, which can be considered a compromise between Laplacian filters and CSP. A “small” Laplacian derivation for one channel results from subtracting, with equal weights, the activity of the four surrounding channels from the activity of the channel itself [2]. The filter proposed here is instead a “small” CSP patch, i.e., a linear derivation of the surrounding channels plus the central one, where the weights are determined by CSP analysis on the five channels. Two of the five resulting CSP filters are chosen automatically [3], one representative for each class.
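For illustration, the sketch below shows one way such a CSP patch could be computed: class-wise covariance matrices of the five patch channels enter a generalized eigenvalue problem, and one filter per class is kept. Selecting the filters with the most extreme eigenvalues is a simplification of the automatic criterion of [3], and all data shapes are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_patch(trials_class1, trials_class2):
    """CSP analysis restricted to the 5 channels of one patch
    (central channel plus its 4 neighbours).
    trials_classX: arrays of shape (n_trials, 5, n_samples)."""
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    c1, c2 = avg_cov(trials_class1), avg_cov(trials_class2)
    # Generalized eigenvalue problem: c1 w = lambda (c1 + c2) w
    eigvals, eigvecs = eigh(c1, c1 + c2)
    # Simplified selection: keep the filters with the most extreme
    # eigenvalues, one representative per class (the abstract instead
    # uses the automatic selection criterion of [3]).
    return eigvecs[:, [0, -1]]           # shape (5, 2): two spatial filters

# Hypothetical usage with random data standing in for band-pass filtered EEG.
trials1 = np.random.randn(30, 5, 400)
trials2 = np.random.randn(30, 5, 400)
W = csp_patch(trials1, trials2)
patch_signals = W.T @ trials1[0]          # 2 surrogate channels for one trial
```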

In order to demonstrate that CSPP is particularly useful for the co-adaptive calibration design, the new algorithm is tested off-line on data from the first three runs of a previous co-adaptive BCI motor imagery study. The experiment starts directly with feedback, using a pre-defined subject-independent classifier calculated from previous BCI sessions. The classifier is then adapted to the newly acquired user data after each trial. In the original design, the features are small Laplacian derivations calculated on C3, Cz and C4 in the alpha and beta frequency bands, resulting in six features. In the proposed design, the features are an ensemble of six CSP features, obtained by three CSP patches on C3, Cz and C4 in a broad frequency band (8-35 Hz), i.e., using the same 15 channels.
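A minimal sketch of how the ensemble of patch features and a trial-by-trial adaptation step could look is given below. The patch channel indices, the log-variance feature definition and the simple bias-tracking rule are assumptions made for illustration and do not reproduce the actual co-adaptive scheme of [1].

```python
import numpy as np

# Hypothetical patch definitions: index of the central channel followed by its
# four neighbours within a 15-channel montage around C3, Cz and C4
# (the concrete indices are assumptions, not given in the abstract).
patches = {"C3": [0, 1, 2, 3, 4], "Cz": [5, 6, 7, 8, 9], "C4": [10, 11, 12, 13, 14]}

def cspp_features(trial, patch_filters):
    """Concatenate log-variance features of all patches:
    3 patches x 2 CSP filters each = 6 features per trial.
    trial: array of shape (15, n_samples);
    patch_filters: dict of (5, 2) filter matrices, e.g. from csp_patch."""
    feats = []
    for name, idx in patches.items():
        projected = patch_filters[name].T @ trial[idx]   # (2, n_samples)
        feats.extend(np.log(np.var(projected, axis=1)))
    return np.array(feats)

def adapt_bias(bias, weights, feature_vec, eta=0.05):
    """Toy adaptation step: slowly track the classifier output mean after
    each trial so the bias follows non-stationarities. The real co-adaptive
    calibration scheme of [1] is more elaborate."""
    return (1 - eta) * bias + eta * float(weights @ feature_vec)
```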

In Fig. 1 the performance of the two methods is shown for three categories of users, established according to their performance in a previously acquired standard BCI session: Cat. I) users with performance ≥70% in the feedback session, Cat. II) users with performance ≥70% in the calibration session but poor feedback performance (<70%), and Cat. III) users with poor performance (<70%) already in the calibration session. An accuracy of 70% is assumed to be the threshold required for successful control of a binary BCI application.

CSPP outperforms the Laplacian filter in all runs for all users, except for the third run for users of Cat. I. One possible explanation is that these users are able to modulate their sensorimotor rhythm depending on the feedback obtained during the experiment. The improvement is most pronounced for users of Cat. II and III.

Since CSPP offers better classification capacity than Laplacian channels from the very first run, it represents a successful alternative to Laplacian filters in all applications where only few channels and/or trials are available, even in combination with a subject-independent classifier.

Figure 1. Results for all three runs, averaged over user categories. Each point indicates the mean over 20 trials. Each bar indicates the mean over one run. Blue: Laplacian features. Pink: CSPP features.


Acknowledgements

This work was supported in part by grants of DFG (MU 987/3-1), BMBF (FKZ 01IB001A/B and 01GQ0850), and EU (ICT-2008-224631).

References

1. C. Vidaurre and B. Blankertz. Towards a cure for BCI illiteracy: Machine-learning based co-adaptive learning. Brain Topography, 2010, 23:194-198. Open Access.
2. D. J. McFarland, L. M. McCane, S. V. David, and J. R. Wolpaw. Spatial filter selection for EEG-based communication. Electroencephalography and Clinical Neurophysiology, 1997, 103(3):386-394.
3. B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K.-R. Müller. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Processing Magazine, 2008, 25(1):41-56.

Keywords: computational neuroscience

Conference: Bernstein Conference on Computational Neuroscience, Berlin, Germany, 27 Sep - 1 Oct, 2010.

Presentation Type: Presentation

Topic: Bernstein Conference on Computational Neuroscience

Citation: Sannelli C, Vidaurre C, Mueller KR and Blankertz B (2010). Common Spatial Pattern Patches - an Optimized Spatial Filter for Adaptive BCIs. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience. doi: 10.3389/conf.fncom.2010.51.00107


Received: 07 Sep 2010; Published Online: 23 Sep 2010.

* Correspondence: Ms. Claudia Sannelli, Berlin Institute of Technology, Machine Learning Laboratory, Berlin, Germany, claudia.sannelli@tu-berlin.de