Event Abstract

Zero-Training for Brain-Computer Interfaces

  • 1 TU-Berlin, Germany

In a Brain-Computer Interface (BCI), the computer must correctly interpret the neural control signals generated by the user. In the early days, the user was trained to generate the correct control signals, which required a tedious training program spanning several days to weeks. The introduction of machine learning to the field has shifted the training burden from the user to the computer. Current systems rely on the recording of a calibration dataset. During this recording, the user is instructed to perform a specific mental task at a specific point in time. In turn, this allows us to label the EEG with the user’s intention. The calibration dataset can then be used to teach the computer how to decode the user’s EEG. This machine-learning-based workflow has significantly reduced the training time: depending on the paradigm, a user can utilise a BCI within 10–30 minutes after the EEG cap has been set up. In spite of this vast reduction in training time, the calibration of the BCI remains a major hindrance. This holds for patients, who typically have a limited attention span, as well as for healthy users, who expect plug-and-play devices. For this reason, we developed a zero-training approach to BCI. Our approach, which is tailored to ERP-based BCIs, comprises an unsupervised learning component and a transfer learning component. The transfer learning component ensures that we start from a usable but suboptimal user-independent decoder. The unsupervised learning component, in turn, transforms this user-independent model into a high-quality subject-specific decoder. In this presentation, I will discuss how zero-training BCI is possible, how transfer learning can be integrated directly into the unsupervised model, and what makes transfer learning possible in ERP-based BCI.
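The interplay of the two components can be sketched in code. The abstract does not spell out the algorithm, so the snippet below is an illustrative assumption, not the authors' implementation: it mimics an EM-style scheme in the spirit of the unsupervised P300 decoder of reference [1], on synthetic data. A noisy linear decoder stands in for the transfer-learned, user-independent model; the unsupervised loop then refines it by treating the attended stimulus in each trial as a latent variable. All names, the data model, and the update rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ERP data: each trial presents K stimuli, one of which is attended.
n_trials, K, d = 200, 6, 8
w_true = rng.normal(size=d)                       # "true" discriminative direction
X = rng.normal(size=(n_trials, K, d))             # background EEG features
targets = rng.integers(0, K, size=n_trials)       # hidden attended stimulus per trial
X[np.arange(n_trials), targets] += w_true         # attended stimuli carry an ERP response

# Stand-in for the transfer-learned, user-independent decoder:
# usable but suboptimal (the true direction plus substantial noise).
w = w_true + rng.normal(size=d)

# Unsupervised refinement: alternate between
#  (E) a softmax posterior over which stimulus was attended in each trial, and
#  (M) re-fitting an LDA-like direction from the posterior-weighted class means.
for _ in range(20):
    scores = X @ w                                            # (n_trials, K)
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                         # E-step: per-stimulus posterior
    mu_t = (p[..., None] * X).sum(axis=(0, 1)) / p.sum()      # posterior-weighted target mean
    mu_n = ((1 - p)[..., None] * X).sum(axis=(0, 1)) / (1 - p).sum()  # non-target mean
    w = mu_t - mu_n                                           # M-step (identity covariance)

# Decode each trial with the refined, now subject-specific decoder.
acc = float((np.argmax(X @ w, axis=1) == targets).mean())
```

The key point the sketch illustrates is why transfer learning matters here: the unsupervised E-step only produces informative posteriors if the initial decoder is already better than chance, which is exactly what the user-independent model provides.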

Acknowledgements

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 657679. This work was also supported in part by the BMBF (01GQ1115).

References

[1] Kindermans PJ, Verstraeten D, Schrauwen B. A Bayesian model for exploiting application constraints to enable unsupervised training of a P300-based BCI. PLoS ONE 2012;7(4):e33758. doi:10.1371/journal.pone.0033758.
[2] Kindermans PJ, Verschore H, Verstraeten D, Schrauwen B. A P300 BCI for the masses: Prior information enables instant unsupervised spelling. In: Advances in Neural Information Processing Systems (NIPS). 2012. p. 719–27.
[3] Kindermans PJ, Tangermann M, Müller KR, Schrauwen B. Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller. Journal of Neural Engineering 2014;11(3):035005.
[4] Kindermans PJ, Tangermann M, Schreuder M, Braun M, Müller KR. Towards understanding transfer learning during unsupervised brain-machine interaction. Submitted.

Keywords: Brain-computer interface, zero-training, machine learning, unsupervised learning, Event-Related Potentials, P300

Conference: German-Japanese Adaptive BCI Workshop, Kyoto, Japan, 28 Oct - 29 Oct, 2015.

Presentation Type: Oral presentation (Invited speakers)

Topic: Adaptive BCI

Citation: Kindermans P (2015). Zero-Training for Brain-Computer Interfaces. Front. Comput. Neurosci. Conference Abstract: German-Japanese Adaptive BCI Workshop. doi: 10.3389/conf.fncom.2015.56.00017

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 01 Oct 2015; Published Online: 04 Nov 2015.

* Correspondence: Dr. Pieter-Jan Kindermans, TU-Berlin, Berlin, Germany, p.kindermans@tu-berlin.de