Event Abstract

Towards gaze-independent visual brain-computer interfaces

  • 1 Berlin Institute of Technology, Machine Learning Laboratory, Germany
  • 2 Fraunhofer FIRST, IDA, Germany
  • 3 Berlin Institute of Technology, Bernstein Focus: Neurotechnology, Germany

BACKGROUND
The goal of a brain-computer interface (BCI) is to establish a direct link from the brain to a machine, thereby circumventing motor behavior. A popular approach to BCIs is the use of the event-related potential (ERP). There is now ample evidence that conventional brain-computer interfaces based on event-related potentials can be operated efficiently only when targets are fixated with the eyes [1,2].


Critically, this restricts the scope of visual spellers to that of eye trackers: in effect, such a speller is merely a means of recovering eye gaze. Furthermore, users with impaired control of eye movements cannot operate conventional visual BCIs efficiently. To overcome these limitations, the aim of the present study was to develop visual spellers that are highly accurate, fast-paced, and independent of eye gaze. Three different variants of a visual speller based on the ERP Hex-o-Spell [1] were tested in an online experiment with 13 healthy participants. Participants could use both covert spatial attention and feature attention (i.e., attention to color and form) to attend to the targets.


METHOD AND RESULTS
For each kind of speller, participants traversed a calibration phase (on the basis of which a classifier was trained), a copy-spelling phase (wherein they spelled a pre-defined sentence), and a free-spelling phase (wherein they spelled a self-chosen phrase). For classification, we used linear discriminant analysis with shrinkage of the covariance matrix [3]. High-accuracy BCI control was achieved for all participants (see Figure 1). For selection of one out of thirty symbols (chance level 3.3%), the three spellers yielded mean accuracies of 90.43%, 87.95%, and 97.03%, respectively.
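For concreteness, a minimal Python sketch of a shrinkage-regularized LDA classifier of this kind is given below. It uses scikit-learn's Ledoit-Wolf shrinkage and invented data shapes; it illustrates the general technique described in [3], not the authors' actual analysis pipeline.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Illustrative data (shapes are assumptions for this sketch): one flattened
    # spatio-temporal feature vector per epoch (e.g., EEG channels x time
    # intervals); labels: 1 = attended target epoch, 0 = non-target epoch.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 310))   # 300 epochs, 310 features
    y = rng.integers(0, 2, 300)

    # solver='lsqr' with shrinkage='auto' applies Ledoit-Wolf shrinkage to the
    # covariance estimate, stabilizing it when the feature dimension is large
    # relative to the number of training epochs; this is one common realization
    # of "LDA with shrinkage of the covariance matrix".
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    clf.fit(X, y)
    print(clf.predict(X[:5]))             # predicted labels for five epochs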


CONCLUSION
The present results are a proof of concept: it is possible to realize fast-paced, high-accuracy visual spellers that have a large vocabulary and are independent of eye gaze. The most significant implication is that the scope of visual spellers is broadened from merely recovering eye gaze to recovering the focus of attention independently of eye gaze.

Figure 1: Symbol selection accuracies for the three types of spellers. Bars depict mean accuracy for all participants and colored dots represent accuracies obtained for individual participants.

References

[1] Treder MS, Blankertz B: (C)overt attention and visual speller design in an ERP-based brain-computer interface. Behav Brain Funct 2010, 6:28 [http://www.behavioralandbrainfunctions.com/content/6/1/28].
[2] Brunner P, Joshi S, Briskin S, Wolpaw JR, Bischof H, Schalk G: Does the "P300" Speller Depend on Eye Gaze? J Neural Eng 2010, in press.
[3] Blankertz B, Lemm S, Treder MS, Haufe S, Müller KR: Single-trial analysis and classification of ERP components – a tutorial. NeuroImage 2010, in press.

Keywords: computational neuroscience

Conference: Bernstein Conference on Computational Neuroscience, Berlin, Germany, 27 Sep - 1 Oct, 2010.

Presentation Type: Presentation

Topic: Bernstein Conference on Computational Neuroscience

Citation: Treder MS, Schmidt NM and Blankertz B (2010). Towards gaze-independent visual brain-computer interfaces. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience. doi: 10.3389/conf.fncom.2010.51.00117


Received: 09 Sep 2010; Published Online: 23 Sep 2010.

* Correspondence: Dr. Matthias S Treder, Berlin Institute of Technology, Machine Learning Laboratory, Berlin, Germany, matthias.treder@tu-berlin.de