OPINION article

Front. Neurosci., 25 April 2022
Sec. Neural Technology
This article is part of the Research Topic Frontiers in Neural Technology: how far have we gone?

Active Brain-Computer Interfacing for Healthy Users

Sergei L. Shishkin*

  • MEG Center, Moscow State University of Psychology and Education, Moscow, Russia

Introduction

Brain-computer interface (BCI) research and development continues to grow. In particular, BCI patent applications have been increasing exponentially in recent years (Greenberg et al., 2021). The situation differs, however, across kinds of BCI: invasive and non-invasive, active and passive, especially regarding possible use by healthy users. Invasive BCIs provide the best performance and may even provide access to early stages of motor decision formation, enabling faster interaction than usual input devices (Mirabella and Lebedev, 2017), but they are associated with high risk and cost and are unlikely to become available to healthy users in the near future. Existing non-invasive BCIs have low bandwidth, speed, and accuracy, which is why only passive, not active, BCIs have been considered a prospective technology for healthy users in the roadmap of brain/neural-computer interaction (BNCI Horizon 2020, 2015; Brunner et al., 2015). Passive BCIs are those that use "brain activity arising without the purpose of voluntary control" (Zander and Kothe, 2011). As they do not claim the user's attention, their low speed of interaction can be acceptable (Current Research in Neuroadaptive Technology, 2021).

In contrast, a user of an active BCI controls an application explicitly, via conscious control of his or her brain activity (Zander and Kothe, 2011)1. These BCIs have to compete with manual input devices (keyboard, mouse, touchscreen) and emerging touchless alternatives (voice-, gesture- and gaze-based), as they play the same role in human-computer interaction (HCI) (Lance et al., 2012; van Erp et al., 2012). Although attempts were announced to dramatically improve the performance of non-invasive BCIs by advancing brain sensor technology (most notably, Facebook's plans to enable fast text input "directly from your brain"—Constine, 2017), electroencephalography (EEG) remains the only widely used technology, and performance still falls short of what electromechanical input devices provide. For example, the best reported average activation time of a non-invasive asynchronous "brain switch" (a BCI requiring a low false positive rate but enabling detection of only one discrete command) is about 1.5 s (Zheng et al., 2022). Moreover, while some non-medical active BCIs use well-established non-invasive BCI paradigms—the motor imagery BCI, the P300 BCI, the steady-state visual evoked potential (SSVEP) BCI and the code-modulated visual evoked potential (c-VEP) BCI—many projects rely on even less precise control based on learned modulation of EEG rhythms (Nijholt, 2019; Prpa and Pasquier, 2019; Vasiljevic and de Miranda, 2020). Due to low performance, active BCIs remain a worthwhile option mainly for people who cannot use other input devices, such as paralyzed individuals.

Nevertheless, attempts to develop active BCIs for healthy people continue. In this Opinion, I briefly overview the application areas for which they are currently being developed, then try to figure out what motivates these attempts and what the near-term prospects are.

Applications

What types of non-medical applications of active BCIs have been developed and studied in recent years? In my view, most of them fall into one of the following groups:

1. Games—BCI gaming remains the most studied application of active BCIs for healthy users (Vasiljevic and de Miranda, 2020). In this application, the input imprecision inherent to non-invasive BCIs is not always as critical as in most real-life applications, and it can even serve as a part of intentionally constructed uncertainty within the gameplay (Nijholt et al., 2009). Commercial EEG devices for gaming have been produced for more than 10 years, and games developed for them are becoming increasingly user-friendly (Vasiljevic and de Miranda, 2020). Both active and passive BCIs are studied as means to interact with games, but both are still far from becoming a widely accepted input for games, partly due to low performance. The low popularity of BCI games in the gamer community may also be related to insufficient attention to studying interaction in BCI games and to developing relevant game design and software and hardware solutions (Vasiljevic and de Miranda, 2020; Cattan, 2021).

2. Art—Another BCI application for healthy users is the use of BCIs by enthusiast artists in performances and in creating pieces of art, i.e., "brain art" (Nijholt, 2019) or "BCI art" (Prpa and Pasquier, 2019). These projects are very diverse (Brain Art, 2019; Bernal et al., 2021) but, unfortunately, rarely documented in the scientific literature (Prpa and Pasquier, 2019; Friedman, 2020). Of 61 BCI art projects surveyed by Prpa and Pasquier (2019), mostly described in non-scientific sources such as YouTube videos, 18 used active or reactive control (Table 3.4 in Prpa and Pasquier, 2019). For brain art, as for BCI games, robustness and efficiency may be considered less important than experience (Nijholt et al., 2022).

3. Autonomous-driving vehicles—BCI control of autonomous vehicles is increasingly considered for healthy users (Rehman et al., 2018; Chai et al., 2021; Hekmatmanesh et al., 2021). One such BCI, presented by Mercedes-Benz in their concept car (Rosso, 2021), enabled "selecting the navigation destination by thought control, switching the ambient light in the interior or changing the radio station" (Mercedes-Benz VISION AVTR, 2021).

4. Augmented and virtual reality (AR/VR)—While these technologies are improving quickly, input in AR/VR is still far from perfect. Therefore, active BCIs have some chance of competing, either as a general-purpose AR/VR input means or in connection with BCI games and BCI art (Putze, 2019; Cattan et al., 2020; Paszkiel, 2020; Wen et al., 2021). Notably, NextMind, the company that provided their BCI for the above-mentioned Mercedes car (Rosso, 2021), was recently purchased by an AR developer (Heath, 2022).

Attempts have also been made to develop BCIs that could provide additional input when both arms are busy ("third arm"; Penaloza and Nishio, 2018), or even replace normal input devices in some tasks by providing more effortless and fluent control ("wish mouse," Shishkin et al., 2016). In these areas, BCI performance remains well below what is acceptable for practical applications.

Motivations

Why do some BCI developers expect that healthy users would prefer BCIs over other, more accurate, faster, and more robust input technologies?

1. Practical reasons—AR/VR and, less obviously, autonomous-driving cars are special cases where traditional input means do not fit the technology well. Here, BCIs compete with emerging control approaches based on movements of the head, body, hands (gestures), and gaze, each of which has its own shortcomings. Moreover, if a user already wears a head-mounted display, adding BCI control to it does not necessarily inflate the price or add inconvenience significantly. In an autonomous-driving car, the price increase would be even less noticeable; in this case, there is also a range of tasks where response time and accuracy are not critical (see the Mercedes example above). However, in almost all applications, productivity and efficiency are not what non-invasive BCIs are valued for (I refrain here from discussing neurofeedback-based training, which is typically based on technologies somewhat different from BCI—the only exception, to my knowledge, is Arvaneh et al., 2019).

2. Experience—In HCI, not only productivity and efficiency are valued, but also, increasingly, various aspects of interaction experience, such as "affect, comfort, family, community, or playfulness," where BCI technologies have certain advantages (Bernal et al., 2021; Nijholt et al., 2022). In some cases, BCI-based interaction brings a highly paradoxical experience: for example, a long-known feature of control based on the alpha rhythm is that "the more you try, the less likely is to succeed" (Lucier and Simon, 1980, cited by Prpa and Pasquier, 2019, p. 102). User experience is especially important for BCI art (Nijholt, 2019; Nijholt et al., 2022), but also for BCI games and AR/VR (Vasiljevic and de Miranda, 2020; Cattan, 2021; Nijholt et al., 2022), and even for autonomous driving (where the goal for a BCI is "to further enhance driving comfort in the future" and to open up "revolutionary possibilities for intuitive interaction with the vehicle," Mercedes-Benz VISION AVTR, 2021).

The unique BCI experience in BCI art and in some BCI games can be partly associated with one interesting feature of BCI-based control, not found in conventional computer input, where passive interaction is impossible: an active BCI also makes passive BCI control possible, and vice versa. As Anton Nijholt explained: "Obviously, when a subject is told to wear a BCI cap he or she can become aware and learn how changes are related to a mental state and can turn passive BCI into active BCI by producing different mental states. A subject's active and reactive BCI performance can be dependent on his or her mental state" (Nijholt, 2019, p. 6). It is tempting to hypothesize that this "fuzziness" of conscious control may open the door for the user's unconscious to cause desirable but suppressed actions. This could help artists express something that is difficult to express in other ways, and it possibly may lead to unusual, engaging experiences in games. To my knowledge, such "fuzziness" has never been addressed in experimental research.

Moreover, the experience of healthy users of active BCI control has been studied very little so far (Vasiljevic and de Miranda, 2020; Cattan, 2021). The most systematic study, to my knowledge, was conducted by Schmid and Jox (2021), who, apart from professional BCI researchers and developers, engaged only three participants with regular BCI use experience (BCI gamers).

Perspectives

As the previous two sections suggest, the development of active BCIs for healthy users has continued in recent years, but the focus has been on applications for which user experience is more valuable than productivity and efficiency. Greater attention from researchers and developers to experience-related issues could therefore strongly improve the appeal of these BCIs in the near future (Vasiljevic and de Miranda, 2020; Cattan, 2021).

Even though the unique experience of interaction mediated by active BCIs provides certain advantages in their competition with traditional input means, improving BCI performance is still highly desirable. One possible way is the use of deep neural networks as BCI classifiers (Craik et al., 2019; Roy et al., 2019). However, such classifiers often have many parameters and therefore can rarely be trained well on single-session data. The current trend toward increased availability of large datasets, on which more advanced classifiers can be trained, may therefore enable significant improvements in performance. Further development of transfer learning (e.g., Zanini et al., 2017; Fahimi et al., 2019; Dehghani et al., 2021) and of more recent meta-learning (Li et al., 2021; Bhosale et al., 2022; Wei et al., 2022) approaches makes it possible to apply a classifier trained on large multi-subject datasets to data from new users (a minimal sketch of this idea is given below). Additional opportunities can be found in combining different BCI modalities and in creating hybrid systems based on the joint use of a BCI and other input devices (Wen et al., 2021).
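To illustrate the transfer-learning route mentioned above, the sketch below shows one common pattern: pre-train a small convolutional EEG classifier on a multi-subject dataset, then adapt it to a new user by fine-tuning only the output layer on a short calibration recording. This is a minimal, hypothetical example in PyTorch; the architecture, epoch shapes, and placeholder random tensors are assumptions for illustration only and do not reproduce the method of any of the cited studies.

```python
# Minimal sketch (PyTorch) of cross-subject transfer learning for an EEG
# classifier. All shapes, sizes, and data are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS, N_SAMPLES, N_CLASSES = 32, 256, 2  # assumed EEG epoch shape


def make_classifier() -> nn.Sequential:
    # A deliberately tiny convolutional net; real systems would use dedicated
    # EEG architectures (see Craik et al., 2019; Roy et al., 2019).
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),  # temporal filters
        nn.Conv2d(8, 16, kernel_size=(N_CHANNELS, 1)),          # spatial filters
        nn.BatchNorm2d(16),
        nn.ELU(),
        nn.AdaptiveAvgPool2d((1, 16)),
        nn.Flatten(),
        nn.Linear(16 * 16, N_CLASSES),
    )


def train(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
          epochs: int = 20, lr: float = 1e-3) -> nn.Module:
    # Optimize only the parameters that are left trainable.
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model


# 1) Pre-train on a large multi-subject dataset (placeholder random tensors here).
x_multi = torch.randn(1000, 1, N_CHANNELS, N_SAMPLES)
y_multi = torch.randint(0, N_CLASSES, (1000,))
model = train(make_classifier(), x_multi, y_multi)

# 2) Transfer to a new user: freeze the feature extractor and fine-tune only a
#    fresh output layer on a short calibration session (a few dozen trials).
for p in model.parameters():
    p.requires_grad = False
model[-1] = nn.Linear(16 * 16, N_CLASSES)  # new head; trainable by default
x_new = torch.randn(40, 1, N_CHANNELS, N_SAMPLES)
y_new = torch.randint(0, N_CLASSES, (40,))
model = train(model, x_new, y_new, epochs=10)
```

Meta-learning approaches such as those cited above go one step further by explicitly optimizing the pre-trained weights for fast adaptation to a new subject, rather than relying on plain fine-tuning as in this sketch.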

Improved performance may make feasible modifications of existing BCI paradigms that provide a more intense experience. In BCI games, for example, better classification may help to turn the P300 paradigm into its single-trial (Finke et al., 2009; Ganin et al., 2013) and single-stimulus (Fedorova et al., 2014) modifications, enabling tighter integration with gameplay and higher immersion (Kaplan et al., 2013); the "quasi-movement" paradigm (Nikulin et al., 2008) may offer easier training and, possibly, a more intense experience than the traditional motor imagery BCI.

If passive BCIs become widely used by healthy users, their hardware could also be used for active BCIs. Similarly, wide use of gaze-based control by healthy users may make hybrid interfaces combining gaze and EEG more affordable.

In summary, while non-invasive active BCIs for healthy users are not yet a mature technology, further efforts of researchers and developers may soon lead to the creation of affordable products.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Funding

This work was supported by the Russian Science Foundation, grant 22-29-01361.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^Zander and Kothe (2011) suggested a distinction between active and reactive BCIs, the latter depending on "brain activity arising in reaction to external stimulation, which is indirectly modulated by the user". Here, I use the term "active BCI" for both of these BCIs, as they both enable explicit, intentional control with an active role of the user.

References

Arvaneh, M., Robertson, I. H., and Ward, T. E. (2019). A P300-based brain-computer interface for improving attention. Front. Hum. Neurosci. 12, 524. doi: 10.3389/fnhum.2018.00524

Bernal, G., Montgomery, S. M., and Maes, P. (2021). Brain-computer interfaces, open-source, and democratizing the future of augmented consciousness. Front. Comput. Sci. 3, 661300. doi: 10.3389/fcomp.2021.661300

Bhosale, S., Chakraborty, R., and Kopparapu, S. K. (2022). Calibration free meta learning based approach for subject independent EEG emotion recognition. Biomed. Signal Process. Control 72, 103289. doi: 10.1016/j.bspc.2021.103289

BNCI Horizon 2020. (2015). Roadmap - The Future in Brain/Neural-Computer Interaction: Horizon 2020. Available online at: http://bnci-horizon-2020.eu/roadmap (accessed February 21, 2022).

Brain Art: Brain-Computer Interfaces for Artistic Expression. (2019). Ed. by A. Nijholt. Springer Nature Switzerland AG.

Brunner, C., Birbaumer, N., Blankertz, B., Guger, C., Kübler, A., Mattia, D., et al. (2015). BNCI Horizon 2020: towards a roadmap for the BCI community. Brain Comput. Interfaces 2, 1–10. doi: 10.1080/2326263X.2015.1008956

Cattan, G. (2021). The use of brain–computer interfaces in games is not ready for the general public. Front. Comput. Sci. 3, 628773. doi: 10.3389/fcomp.2021.628773

Cattan, G., Andreev, A., and Visinoni, E. (2020). Recommendations for integrating a P300-based brain–computer interface in virtual reality environments for gaming: an update. Computers 9, 92. doi: 10.3390/computers9040092

Chai, Z., Nie, T., and Becker, J. (2021). “Top ten challenges facing autonomous driving,” in Autonomous Driving Changes the Future (Singapore: Springer). doi: 10.1007/978-981-15-6728-5

Constine, J. (2017). Facebook Is Building Brain-Computer Interfaces for Typing and Skin-Hearing, TechCrunch, April 19, 2017. Available online at: https://techcrunch.com/2017/04/19/facebook-brain-interface/ (accessed January 20, 2022).

Craik, A., He, Y., and Contreras-Vidal, J. L. (2019). Deep learning for electroencephalogram (EEG) classification tasks: a review. J. Neural Eng. 16, 031001. doi: 10.1088/1741-2552/ab0ab5

Current Research in Neuroadaptive Technology. (2021). Ed. by S.H. Fairclough, T.O. Zander. Elsevier.

Dehghani, M., Mobaien, A., and Boostani, R. (2021). A deep neural network-based transfer learning to enhance the performance and learning speed of BCI systems. Brain Comput. Interfaces 8, 14–25. doi: 10.1080/2326263X.2021.1943955

Fahimi, F., Zhang, Z., Goh, W. B., Lee, T. S., Ang, K. K., and Guan, C. (2019). Inter-subject transfer learning with an end-to-end deep convolutional neural network for EEG-based BCI. J. Neural Eng. 16, 026007. doi: 10.1088/1741-2552/aaf3f6

Fedorova, A. A., Shishkin, S. L., Nuzhdin, Y. O., Faskhiev, M. N., Vasilyevskaya, A. M., Ossadtchi, A. E., et al. (2014). “A fast “single-stimulus” brain switch,” in Proc. 6th Int. Brain-Computer Interface Conference, eds G. Muller-Putz, J. Huggins, and D. Steyrl (Graz: Verlag der Technischen Universitat Graz). doi: 10.3217/978-3-85125-378-8-52

Finke, A., Lenhardt, A., and Ritter, H. (2009). The MindGame: a P300-based brain-computer interface game. Neural Netw. 22, 1329–1333. doi: 10.1016/j.neunet.2009.07.003

Friedman, D. (2020). Brain art: brain-computer interfaces for artistic expression. Brain Comput. Interfaces 7, 36–37. doi: 10.1080/2326263X.2020.1756573

Ganin, I. P., Shishkin, S. L., and Kaplan, A. Y. (2013). A P300-based brain-computer interface with stimuli on moving objects: four-session single-trial and triple-trial tests with a game-like task design. PLoS ONE 8, e77755. doi: 10.1371/journal.pone.0077755

Greenberg, A., Cohen, A., and Grewal, M. (2021). Patent landscape of brain–machine interface technology. Nat. Biotechnol. 39, 1194–1199. doi: 10.1038/s41587-021-01071-7

Heath, A. (2022). Snap buys brain-computer interface startup for future AR glasses. The Verge, Mar 23, 2022, 9:00am EDT. Available online at: https://www.theverge.com/2022/3/23/22991667/snap-buys-nextmind-brain-computer-interface-spectacles-ar-glasses (accessed March 24, 2022).

Hekmatmanesh, A., Nardelli, P. H., and Handroos, H. (2021). Review of the state-of-the-art of brain-controlled vehicles. IEEE Access 9, 110173–110193. doi: 10.1109/ACCESS.2021.3100700

Kaplan, A. Y., Shishkin, S. L., Ganin, I. P., Basyul, I. A., and Zhigalov, A. Y. (2013). Adapting the P300-based brain-computer interface for gaming: a review. IEEE Trans. Comput. Intellig. AI Games 5, 141–149. doi: 10.1109/TCIAIG.2012.2237517

Lance, B. J., Kerick, S. E., Ries, A. J., Oie, K. S., and McDowell, K. (2012). Brain–computer interface technologies in the coming decades. Proc. IEEE 100, 1585–1599. doi: 10.1109/JPROC.2012.2184830

Li, D., Ortega, P., Wei, X., and Faisal, A. (2021). “Model-agnostic meta-learning for EEG motor imagery decoding in brain-computer-interfacing,” in 10th International IEEE/EMBS Conference on Neural Engineering (NER) 2021 May 4, 527–530. doi: 10.1109/NER49283.2021.9441077

Lucier, A., and Simon, D. (1980). Chambers. Scores by Alvin Lucier, interviews with the composer by Douglas Simon. Wesleyan University Press.

Mercedes-Benz VISION AVTR: Operating the User Interface With the Power of Thought. (2021). Available online at: https://media.daimler.com/marsMediaSite/en/instance/ko/Mercedes-Benz-VISION-AVTR-operating-the-user-interface-with-the-power-of-thought.xhtml?oid=51228086 (accessed January 20, 2022).

Mirabella, G., and Lebedev, M.A. (2017). Interfacing to the brain's motor decisions. J. Neurophysiol. 117, 1305–1319. doi: 10.1152/jn.00051.2016

Nijholt, A. (2019). “Introduction: brain-computer interfaces for artistic expression,” in Brain Art: Brain-Computer Interfaces for Artistic Expression, ed A. Nijholt (Cham: Springer Nature Switzerland AG), 1–29. doi: 10.1007/978-3-030-14323-7_1

Nijholt, A., Bos, D. P., and Reuderink, B. (2009). Turning shortcomings into challenges: brain–computer interfaces for games. Entertain. Comput. 1, 85–94. doi: 10.1016/j.entcom.2009.09.007

Nijholt, A., Contreras-Vidal, J. L., Jeunet, C., and Väljamäe, A. (2022). Editorial: brain-computer interfaces for non-clinical (home, sports, art, entertainment, education, well-being) applications. Front. Comput. Sci. 4, 860619. doi: 10.3389/fcomp.2022.860619

Nikulin, V. V., Hohlefeld, F. U., Jacobs, A. M., and Curio, G. (2008). Quasi-movements: a novel motor–cognitive phenomenon. Neuropsychologia 46, 727–742. doi: 10.1016/j.neuropsychologia.2007.10.008

Paszkiel, S. (2020). “Using BCI and VR technology in neurogaming,” in Analysis and Classification of EEG Signals for Brain–Computer Interfaces (Cham: Springer), 93–99. Available online at: https://link.springer.com/chapter/10.1007/978-3-030-30581-9_11 (accessed February 21, 2022).

Penaloza, C. I., and Nishio, S. (2018). BMI control of a third arm for multitasking. Sci. Robot. 3, eaat1228. doi: 10.1126/scirobotics.aat1228

Prpa, M., and Pasquier, P. (2019). “Brain-computer interfaces in contemporary art: a state of the art and taxonomy,” in Brain Art: Brain-Computer Interfaces for Artistic Expression, ed A. Nijholt (Cham: Springer Nature Switzerland AG), 1–29. doi: 10.1007/978-3-030-14323-7_3

Putze, F. (2019). “Methods and tools for using BCI with augmented and virtual reality,” in Brain Art: Brain-Computer Interfaces for Artistic Expression, ed A. Nijholt (Cham: Springer Nature Switzerland AG), 433–446. doi: 10.1007/978-3-030-14323-7_16

Rehman, A. U., Ghaffarianhoseini, A., Naismith, N., Zhang, T., Doan, D. T., Tookey, J., et al. (2018). “A review: harnessing immersive technologies prowess for autonomous vehicles,” in Proceedings of the 18th International Conference on Construction Applications of Virtual Reality (CONVR2018), eds R. Amor, and J. Dimyad (Auckland: The University of Auckland), 545–555.

Rosso, C. (2021). Autos to Integrate AI-Based Brain-Computer Interfaces (BCIs). Psychology Today. Available online at: https://www.psychologytoday.com/us/blog/the-future-brain/202109/autos-integrate-ai-based-brain-computer-interfaces-bcis-0 (accessed January 20, 2022).

Roy, Y., Banville, H., Albuquerque, I., Gramfort, A., Falk, T. H., and Faubert, J. (2019). Deep learning-based electroencephalography analysis: a systematic review. J. Neural Eng. 16, 051001. doi: 10.1088/1741-2552/ab260c

Schmid, J. R., and Jox, R. J. (2021). “The power of thoughts: a qualitative interview study with healthy users of brain-computer interfaces,” in Clinical Neurotechnology Meets Artificial Intelligence (Cham: Springer), 117–126. doi: 10.1007/978-3-030-64590-8_9

Shishkin, S. L., Nuzhdin, Y. O., Svirin, E. P., Trofimov, A. G., Fedorova, A. A., Kozyrskiy, B. L., et al. (2016). EEG negativity in fixations used for gaze-based control: toward converting intentions into actions with an eye-brain-computer interface. Front. Neurosci. 10, 528. doi: 10.3389/fnins.2016.00528

van Erp, J., Lotte, F., and Tangermann, M. (2012). Brain-computer interfaces: beyond medical applications. Computer 45, 26–34. doi: 10.1109/MC.2012.107

Vasiljevic, G. A., and de Miranda, L. C. (2020). Brain–computer interface games based on consumer-grade EEG devices: a systematic literature review. Int. J. Hum. Comput. Interact. 36, 105–142. doi: 10.1080/10447318.2019.1612213

Wei, W., Qiu, S., Zhang, Y., Mao, J., and He, H. (2022). ERP prototypical matching net: a meta-learning method for zero-calibration RSVP-based image retrieval. J. Neural Eng. 19, 026028. doi: 10.1088/1741-2552/ac5eb7

Wen, D., Liang, B., Zhou, Y., Chen, H., and Jung, T.-P. (2021). The current research of combining multi-modal brain-computer interfaces with virtual reality. IEEE J. Biomed. Health Inform. 25, 3278–3287. doi: 10.1109/JBHI.2020.3047836

Zander, T. O., and Kothe, C. (2011). Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general. J. Neural Eng. 8, 025005. doi: 10.1088/1741-2560/8/2/025005

Zanini, P., Congedo, M., Jutten, C., Said, S., and Berthoumieu, Y. (2017). Transfer learning: a Riemannian geometry framework with applications to brain–computer interfaces. IEEE Trans. Biomed. Eng. 65, 1107–1116. doi: 10.1109/TBME.2017.2742541

Zheng, L., Pei, W., Gao, X., Zhang, L., and Wang, Y. (2022). A high-performance brain switch based on code-modulated visual evoked potentials. J. Neural Eng. 19, 016002. doi: 10.1088/1741-2552/ac494f

Keywords: brain-computer interfaces, active BCI, human-computer interaction, human-machine interfaces, healthy users

Citation: Shishkin SL (2022) Active Brain-Computer Interfacing for Healthy Users. Front. Neurosci. 16:859887. doi: 10.3389/fnins.2022.859887

Received: 21 January 2022; Accepted: 30 March 2022;
Published: 25 April 2022.

Edited by:

Giovanni Mirabella, University of Brescia, Italy

Reviewed by:

Luca Falciati, University of Brescia, Italy

Copyright © 2022 Shishkin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sergei L. Shishkin, sergshishkin@mail.ru
