PERSPECTIVE article

Front. Hum. Neurosci., 01 June 2012
Sec. Brain-Computer Interfaces
Volume 6 - 2012 | https://doi.org/10.3389/fnhum.2012.00114

Brain-computer interfaces: a neuroscience paradigm of social interaction? A matter of perspective

Jérémie Mattout1,2*

  • 1 INSERM U1028, CNRS UMR5292, Brain Dynamics and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
  • 2 University Lyon 1, Lyon, France

A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment.

At first sight, the recent field of Brain-Computer Interfaces (BCI) and the even more recent field of Neuroscience of Social Interactions (NSI) do not have much in common and may even appear totally unrelated. The aim of the former is to create an interface between a brain and an artificial agent, while the latter is exclusively interested in the interaction between two or more human minds. They have also emerged from different scientific communities. BCI developed thanks to the efforts of a few adventurous engineers (Vidal, 1973), clinicians, and physiologists (Birbaumer et al., 1999), while social neuroscience has built on ethology, sociobiology, social psychology, and philosophy (Adolphs, 2003). Nevertheless, both have recently attracted neuroscientists, and while BCI rely on explicit, real-time, and often closed-loop connections, an emerging trend in the study of social cognition is the move toward online experiments, with realistic interactions between a subject and a social (human or human-like) environment (Redcay et al., 2010; Schilbach et al., 2010).

In BCI, the human brain is typically connected to a non-social (artificial) device, whose aim is to reinstate behavior, including social behavior. However, the strongest link between the two fields does not lie only in this ultimate objective; it might rather lie in the nature of the interaction itself. Indeed, both are essentially concerned with the instantiation and the study of a dynamical exchange between two agents. This shared core aspect provides strong ground for possible cross-fertilization between the two fields in the near future. This becomes particularly striking when looking at the main challenges faced by BCI.

What is BCI?

In a broad sense, a BCI refers to a direct interface between the brain and the outside world, bypassing the usual sensory or motor pathways. BCI provide the brain with a new way of interacting with the environment, where the environment may be the user’s own body (Moritz et al., 2008) or other people (Birbaumer, 2006).

Although one might also categorize as BCI those artificial systems that directly stimulate the brain (implants or deep brain stimulators), the term usually refers to devices that enable the brain to act upon or via a machine (Nicolelis, 2001). Here I will focus on the latter, in which feedback from the machine or the environment is usually obtained through normal sensation, although it could also be delivered to the brain directly (O’Doherty et al., 2011).

Essentially, such BCI rely on online decoding and conversion of brain activity into reliable commands or understandable information. As such, electrophysiology techniques are usually favored, although fMRI has been used successfully in real-time (DeCharms, 2008). EEG is by far the most widely used BCI technique, either with patients or healthy volunteers, simply because it is cheap, portable, and non-invasive and it offers a high temporal resolution (Millan and Carmena, 2009).

Brain-Computer Interface developments are mostly driven by clinical applications, to replace or restore lost communication or locomotion abilities in patients suffering from severe neuromuscular disorders. Another promising line of research is the use of BCI techniques in disorders of consciousness, to better diagnose non-responsive patients (Kübler and Kotchoubey, 2007) and possibly to communicate with those in a minimally conscious state (Cruse et al., 2011). Furthermore, in various pathologies (as diverse as attention disorders and hyperactivity, depression, motor deficits, tinnitus…), BCI could also prove useful in devising new therapies based upon neurophysiological training (Johnston et al., 2010).

Finally, BCI are also being investigated for general public applications such as gaming (Plass-Oude Bos et al., 2010). Altogether, BCI applications have been particularly effective in promoting the development of new, wireless, and gel-free EEG technologies (Zander et al., 2011). Such systems are very useful and are almost essential for data acquisition outside the laboratory, not only for clinical trials but also for ecological NSI experiments involving several brain-scanned participants (so-called hyperscanning; Dumas et al., 2011).

What is Social About BCI?

Brain-Computer Interfaces clearly overlap with social neuroscience, at least inasmuch as the two fields share common objectives. Even though they have not yet contributed to new therapies, BCI aim to improve the quality of life of patients who suffer due to an inability to interact with the environment and whose interactions with others are thereby severely limited. A successful BCI would enable such patients to recover social abilities, namely interacting, communicating, exchanging, and even playing with others. However, despite tremendous efforts and partial success, BCI research has yet to produce such a routine application. Even the widely explored P300-based (Perrin et al., 2011) and motor imagery protocols (Pfurtscheller et al., 2009) have proven limited in their robustness and efficiency, despite the fact that they rely on fairly reproducible neurophysiological markers associated with simple mental tasks. The reason for this might be that these markers do not directly reflect the user’s precise intention. Indeed, the P300-speller, for instance, exploits the EEG response evoked by an expected but rare stimulus (item) presented in a sequence of undesired events (other items). Hence the machine does not infer the intended words from their direct and transient neuronal representations but rather detects and compares the automatic, unspecific, and time-locked responses to a sequence of proposed items. Similarly, although the sensorimotor rhythms (SMR) elicited by mental imagery do reflect motor-related activity that is usually coherent with the intended movement (e.g., imagery of a right hand movement to move to the right), this activity can hardly be used online to infer all the fine parameters of the movement plan.
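
To make this detection-and-comparison logic concrete, the following minimal sketch shows how a P300-speller could select the intended item: a pre-trained linear classifier scores each post-flash EEG epoch, and the item whose flashes accumulate the strongest average score is chosen. The feature vectors, the classifier weights, and the per-item (rather than row/column) flashing scheme are all simplifying assumptions, not the protocol of the studies cited above.

```python
import numpy as np

def select_p300_item(epochs, flashed_items, w, b, n_items=36):
    """Pick the intended item from post-flash EEG epochs (illustrative sketch).

    epochs        : (n_flashes, n_features) array, one feature vector per flash
    flashed_items : (n_flashes,) array, index of the item flashed on each trial
    w, b          : weights and bias of a pre-trained linear classifier (e.g., LDA)
                    scoring how 'P300-like' an epoch is (assumed given)
    """
    scores = epochs @ w + b                      # classifier evidence per flash
    item_scores = np.full(n_items, -np.inf)
    for item in range(n_items):
        mask = flashed_items == item
        if mask.any():
            # average the evidence over all flashes of this item
            item_scores[item] = scores[mask].mean()
    return int(np.argmax(item_scores))           # item with strongest P300 evidence
```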

This incomplete or non-ecological mapping between the actual command and the ultimate action might contribute to the sub-optimality of BCI and could partially explain the high inter-subject performance variability and the so-called BCI illiteracy observed in healthy volunteers and patients (Vidaurre and Blankertz, 2010; Maby et al., 2011).

To overcome this lack of reliability, BCI research faces at least three crucial challenges:

• To deal with the complex, multidimensional, dynamical, non-linear, and highly distributed nature of the neural code;

• To endow the machine with adaptive behavior;

• To make use of rich, multidimensional, and robust feedback that favors learning and cooperation with the user.

Interestingly, each of these challenges points to a different part or perspective of the brain-computer interaction. As expounded below, these perspectives together emphasize the fundamental and technical challenges that BCI share with the field of NSI.

The Machine’s Perspective

In BCI, the machine or computer is the one that transforms brain activity into actions. It has to select relevant brain signals and decode them online. Although this decoding challenge is often circumvented by making use of substitution strategies (e.g., frequency tagging to create a “brain switch”; Pfurtscheller et al., 2010), it is reasonable to assume that decoding should improve as we progress in our ability to decipher the neural code in real-time. In other words, provided that one can measure the relevant signals, the performance of BCI should increase with our knowledge of how intentions, ensuing behaviors, and even perception of the consequences of our own actions map onto brain dynamics (Serences and Saproo, 2012).
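
As an illustration of such a substitution strategy, a frequency-tagged “brain switch” can be approximated by monitoring spectral power at the tagging frequency and firing only when it clearly exceeds the power at neighbouring frequencies. This is a minimal sketch; the sampling rate, tagging frequency, and threshold below are placeholder assumptions rather than values from the cited work.

```python
import numpy as np

def brain_switch(eeg_window, fs=250.0, tag_freq=15.0, threshold=2.0):
    """Crude frequency-tagging switch (illustrative sketch).

    eeg_window : 1-D array, a few seconds of single-channel EEG
    fs         : sampling rate in Hz (assumed)
    tag_freq   : tagging/stimulation frequency in Hz (assumed)
    threshold  : signal-to-noise ratio above which the switch fires (assumed)
    """
    windowed = eeg_window * np.hanning(len(eeg_window))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    target = spectrum[np.argmin(np.abs(freqs - tag_freq))]
    # neighbouring bins (excluding the tagged one) serve as a noise estimate
    noise = (np.abs(freqs - tag_freq) > 0.5) & (np.abs(freqs - tag_freq) < 3.0)
    return target / spectrum[noise].mean() > threshold
```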

In that respect, the future of BCI depends heavily on our ability to reveal and to interpret the neuronal mechanisms and mental processes underlying human perception, action, learning, and decision making but also imagination, prediction, and attention. Such processes are all core components of social behavior (Frith and Frith, 2012). It has even been suggested that the most complex forms of these processes emerged in human beings because of our very social nature (Dunbar, 2011). From this first point of view alone, BCI should benefit from future NSI studies.

However, beyond studies that aim at identifying the neural correlates of human mental processes, NSI protocols and BCI should take into consideration studies that incorporate and validate computational models of how the brain implements relevant cognitive and motor tasks (Wolpert et al., 2003). This suggests a paradigm switch and comes with methodological and technical challenges. Fortunately, such models and methods have recently emerged from computational neuroscience and have been used to shed light on neuroimaging data (Friston and Dolan, 2010), including experiments in social neuroscience (Behrens et al., 2009).

Importantly, for BCI and for NSI protocols, these models have a twofold interest:

• In NSI protocols they can be used to explain and question the specificity of social behavior in terms of underlying brain mechanisms. An elegant example is the work of Behrens and collaborators who showed that, although instantiated in different brain regions, reward and social information are processed with similar cognitive and neuronal mechanisms in order to optimize behavior (Behrens et al., 2008). Importantly for BCI, this so-called model-based fMRI approach has recently been applied successfully to non-invasive electrophysiology data (Philiastides et al., 2010). Alternatively, these models could also be used to emulate an avatar (or a robot; Wolpert and Flanagan, 2010) and to test subjects involved in a true social interaction with a well-controlled human-like environment.

• Similarly, in BCI, they could be used to refine the online decoding of brain activity. A promising example is the work by Brodersen and collaborators, who used a computational model of neurodynamics and thus improved decoding by restricting the relevant feature space to a sparse and biologically meaningful representation (Brodersen et al., 2011). Alternatively, computational models could also be used to endow the machine with (human-inspired) artificial intelligence, namely to relieve the strain on the user and implement shared control of continuous actions. Such models could be informed online by complementary neurophysiological markers. In a recent study for instance, we demonstrated that the user’s electrophysiological responses to the machine’s decisions reflected human learning and could also be used by the BCI system to distinguish erroneous from correct decisions (Perrin et al., 2011).
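
To illustrate how such feedback-locked responses could be exploited online, here is a minimal sketch, not the pipeline of the cited study: a linear classifier, trained beforehand on calibration data, labels each feedback-locked epoch as error-like or correct-like, and the interface could then veto or confirm its last decision accordingly.

```python
import numpy as np

def detect_error(feedback_epoch, w, b):
    """Classify one feedback-locked EEG epoch as error-like (illustrative sketch).

    feedback_epoch : (n_features,) feature vector extracted around the moment
                     the machine's decision is displayed to the user
    w, b           : pre-trained linear classifier (e.g., LDA), assumed given
    """
    return float(feedback_epoch @ w + b) > 0.0

# Hypothetical use: undo the last selection when an error response is detected.
# if detect_error(epoch_features, w, b):
#     speller.undo_last_selection()   # 'speller' and its method are assumptions
```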

The Experimenter’s Perspective

In BCI, the experimenter is the one who designs the whole interface. It is the experimenter who is in charge of endowing the machine with signal feature selection, classification or hidden-state inference as well as decision-making algorithms. An emerging trend in the BCI field is the design of adaptive methods in order to avoid the need for cumbersome initial calibrations and to accommodate the slow fluctuations of brain activity, due to physical drifts, drowsiness, or learning phenomena (Vidaurre et al., 2006). This is particularly relevant for applications in which BCI is used for monitoring (Blankertz et al., 2010). In this respect, model-based decoding approaches like those mentioned above can be thought of as adaptive methods. Relying on cognitive and neuronal generative models of relevant brain signals, they are adaptive in nature since they aim at mimicking the dynamics of mind and brain.
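
One generic way to picture such adaptivity, as an illustrative sketch rather than a specific published algorithm, is a linear classifier whose class means and covariance are updated recursively with an exponential forgetting factor, so that slow drifts in the signal statistics are tracked online; the adaptation rate is an assumed parameter.

```python
import numpy as np

class AdaptiveLDA:
    """Two-class linear classifier whose statistics track slow signal drifts."""

    def __init__(self, n_features, eta=0.05):
        self.mu = np.zeros((2, n_features))   # per-class feature means
        self.cov = np.eye(n_features)         # pooled covariance estimate
        self.eta = eta                        # adaptation rate (assumed)

    def update(self, x, label):
        """Exponentially weighted update with one new trial x of class 0 or 1."""
        self.mu[label] = (1 - self.eta) * self.mu[label] + self.eta * x
        centred = x - self.mu[label]
        self.cov = (1 - self.eta) * self.cov + self.eta * np.outer(centred, centred)

    def predict(self, x):
        w = np.linalg.solve(self.cov, self.mu[1] - self.mu[0])
        b = -0.5 * w @ (self.mu[1] + self.mu[0])
        return int(x @ w + b > 0)
```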

This puts the BCI experimenter into a rather new situation. Instead of considering the BCI user’s brain as a black box and instead of taking a static machine’s perspective, the experimenter is forced to adopt a systemic view and to consider the human and artificial agents as a whole. From a practical viewpoint, this means that he or she is now faced with two inter-dependent choices. The first one is about the model of the user’s brain activity that the machine should be endowed with. The second choice concerns the learning and decision-making algorithms that will generate actions from the machine, based on its ongoing perception and inference of the user’s mental states.

These choices will be guided by the targeted BCI application and the signal at hand. But most importantly, this procedure amounts to endowing the interacting computer in the BCI with some degree of theory of mind or mentalizing properties, a core and well-documented concept in social neuroscience (Frith and Frith, 2012). This brings BCI and NSI even closer; the latter being directly committed to studying, modeling, and testing the computational and neuronal mechanisms of mentalizing.

As a consequence, developing a mechanistic account of socially relevant processes such as reward learning and intention tracking (implicit mentalizing) will likely benefit BCI design in the long term.

Luckily enough, recent experimental and theoretical work has shed light on such mechanisms. Some have even paved the way toward generic frameworks that could be used to formalize, implement, test, and compare alternative models of such mechanisms. Just to mention a few, the predictive coding and Bayesian brain hypotheses are supported by a growing body of evidence from studies examining cognitive functions relevant to social neuroscience (e.g., Kilner et al., 2007; Peters and Büchel, 2010). Furthermore, a meta-Bayesian framework has been proposed to implement and test models of learning and decision making (Daunizeau et al., 2010). Hierarchical models have also been suggested as an optimal tool to incorporate constraints and to implement flexible and efficient control (Todorov, 2009). Finally, the free-energy principle proposed by Friston has been shown to enable the online inference of states and parameters of hierarchical dynamical models that can be used to either prescribe or recognize actions and intentions (Friston, 2010; Friston et al., 2011).
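
At its simplest, the Bayesian logic these frameworks build upon can be written as a sequential belief update over a small set of hidden states, here taken to be the user’s possible intentions. The sketch below is a didactic toy with made-up likelihood values, not an implementation of any of the cited models.

```python
import numpy as np

def update_belief(prior, likelihoods):
    """One step of Bayesian belief updating over discrete hidden intentions.

    prior       : (n,) current belief over intentions (sums to 1)
    likelihoods : (n,) probability of the current observation under each intention
    """
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# The machine refines its belief about the user's intention as evidence
# (e.g., successive single-trial decoder outputs) accumulates over time.
belief = np.ones(4) / 4                                   # flat prior over 4 targets
for lik in ([0.6, 0.2, 0.1, 0.1], [0.5, 0.3, 0.1, 0.1]):  # made-up likelihoods
    belief = update_belief(belief, np.array(lik))
print(belief)   # belief now concentrates on the first target
```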

To sum up, the explicit need for decoding models in BCI on the one hand, and the promising experimental and theoretical findings about mechanisms and processes relevant to social neuroscience, on the other hand, speak in favor of a new generation of BCI based on such advances and whose development might parallel that of NSI.

The Human’s Perspective

In BCI, the human is the end-user, the one who will benefit from the interaction and the one to whom it should adapt. The user will eventually validate the interface and adopt this new way of interacting with the world. This emphasizes a crucial need: the full cooperation of the adaptive interacting machine. Thus, while not all social interactions are relevant to BCI, cooperative ones definitely are. There is no real symmetry between the two agents here, and the user knows it. Nevertheless, the more sophisticated the machine, the more it might be perceived as helpful and the more the user might engage in the interaction (Krach et al., 2008). Note that in this context, sophistication could be understood as complexity in a broad and common sense, but could also refer to the degree of recursion in the machine’s representation of the user’s representation, that is, the order of the mentalizing machine (Yoshida et al., 2008).
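
The notion of recursion depth can be illustrated with a toy level-k scheme in a hide-and-seek setting, a purely didactic example unrelated to the cited studies’ actual models: a level-0 player responds only to the surface salience of the options, while a level-k player best-responds to a simulated level-(k-1) model of its partner.

```python
import numpy as np

def seeker_choice(salience, k):
    """Level-k seeker: searches where a level-(k-1) hider would hide.
    A level-0 seeker simply searches the most salient location."""
    if k == 0:
        return int(np.argmax(salience))
    return hider_choice(salience, k - 1)

def hider_choice(salience, k):
    """Level-k hider: avoids the location a level-(k-1) seeker would search.
    A level-0 hider naively hides at the least salient location."""
    if k == 0:
        return int(np.argmin(salience))
    avoided = seeker_choice(salience, k - 1)
    candidates = [i for i in range(len(salience)) if i != avoided]
    return min(candidates, key=lambda i: salience[i])

salience = np.array([0.1, 0.5, 0.4])
# Deeper recursion changes the prediction: a level-0 and a level-2 seeker
# search different locations for the same surface salience.
print(seeker_choice(salience, 0), seeker_choice(salience, 2))
```

In the BCI context, the order of the machine’s model would play a role analogous to k, determining how deeply it represents the user’s representation of the machine.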

Whether endowing the machine with advanced decoding and adaptive capacities based on mentalizing as well as human-inspired learning and decision-making models will be successful and sufficient to significantly improve current BCI is an open question that can only be answered with online experiments. As such, BCI could well become a peculiar but useful neuroscience paradigm of social interactions (Obbink, 2011), enabling researchers to tackle questions such as: how much control should the machine take over? What degree of sophistication would provoke a perceptual switch in the user and transform the machine or tool into an agent or partner (Johnson, 2003)? When does the interface turn into a dyadic interaction? What would be the conditions for optimal joint decision making, and how would they compare to known social situations, in animal models (Seeley et al., 2012) or in humans (Bahrami et al., 2010)?

Conclusion

The aim of both BCI and social neuroscience is to conceive and implement real-time interaction protocols, whether they involve online decoding of neural activity or simply make use of classical behavioral responses from the actor. They both call for computational models of an interacting mind, whether with an artificial but adaptive agent or with another human being. They will both benefit from uncovering the neural mechanisms of such an interaction to establish and later implement an optimal shared control that differs depending on the context of the interaction. They also both motivate the coupling of electrophysiology and neuroimaging techniques with advanced technologies such as robotics and immersive virtual environments. Therefore it is likely that BCI and NSI protocols will be mutually beneficial in the near future, with this unlikely collaboration answering diverse questions related to theoretical, technical, methodological, but also clinical and even ethical issues (Blanke and Aspell, 2009).

Central to these common needs and objectives are models of the brain as a computational machine, as well as models of neuronal dynamics (Friston and Dolan, 2010). Crucially, and especially in NSI and BCI protocols, our ability to use them online could yield new experimental paradigms and applications (Kelso et al., 2009).

In NSI protocols, these models would help in the study and characterization, from a neuronal and psychological point of view, of the dynamics of true interactions. Such NSI experiments would help identify realistic and efficient models of social interactions that BCI could then use to instantiate more productive interactions between an adaptive machine and a patient. In one category of clinical applications, the patient would perceive or even incorporate the adaptive BCI as a means to communicate with people or to act upon the world. This is typically the aim of neuroprosthetics. In the other category, the adaptive machine itself would be perceived as the world or agent to exchange with. This could be the case in future forms of neurofeedback training.

The latter is of particular interest with respect to NSI protocols. Indeed, it is a typical situation where the BCI is not meant to be fully cooperative but should trigger adaptation or learning from the patient in order to bring him or her to a stable and non-pathological state. This considerably widens the putative clinical scope of BCI. It could potentially even be used with patients with deficits in social interactions, such as people with autism. Indeed, whereas the existing evidence does not support the use of neurofeedback in the treatment of autism spectrum disorder (Holtmann et al., 2011), a new generation of adaptive and biologically informed systems could well prove reliable and efficient in treating such patients, as it is well known that they favor predictable or slowly varying agents, such as machines, to interact with and learn from (Qian and Lipkin, 2011).
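
To give a rough idea of what such an adaptive training loop involves, here is a generic sketch with placeholder band limits and a simple shaping rule, not a validated clinical protocol: band power is estimated on each short EEG window, converted into a bounded feedback value, and the training target is slowly shifted with the patient’s performance.

```python
import numpy as np
from scipy.signal import welch

def neurofeedback_step(eeg_window, target, fs=250.0, band=(8.0, 12.0), shaping=0.01):
    """One iteration of a toy neurofeedback loop (illustrative sketch).

    Returns (feedback, new_target): feedback in [0, 1] proportional to how far
    current band power exceeds the target, and a slowly drifting target that
    keeps the task challenging (simple 'shaping' rule, assumed).
    """
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 256))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    power = psd[in_band].mean()
    feedback = float(np.clip(power / (2.0 * target), 0.0, 1.0))
    new_target = target + shaping * (power - target)   # shift target with performance
    return feedback, new_target
```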

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

Jérémie Mattout was funded by the French ANR-DEFIS program under the grant ANR-09-EMER-002 in the context of the Co-Adapt BCI project. I would like to thank Christina Schmitz, Karen Reilly, Perrine Ruby, and Marie-Anne Henaff for insightful discussions. I am also grateful to Jennie Gallaine for the original illustration.

References

Adolphs, R. (2003). Cognitive neuroscience of human social behaviour. Nat. Rev. Neurosci. 4, 165–178.

Bahrami, B., Olsen, K., Latham, P. E., Roepstorff, A., Rees, G., and Frith, C. D. (2010). Optimally interacting minds. Science 329, 1081–1085.

Behrens, T. E., Hunt, L. T., and Rushworth, M. F. (2009). The computation of social behavior. Science 324, 1160–1164.

Behrens, T. E., Hunt, L. T., Woolrich, M. W., and Rushworth, M. F. (2008). Associative learning of social value. Nature 456, 245–249.

Birbaumer, N. (2006). Breaking the silence: brain-computer interfaces (BCI) for communication and motor control. Psychophysiology 43, 517–532.

Birbaumer, N., Ghanayim, N., Hinterberger, T., Iversen, I., Kotchoubey, B., Kübler, A., Perelmouter, J., Taub, E., and Flor, H. (1999). A spelling device for the paralyzed. Nature 398, 297–298.

Blanke, O., and Aspell, J. E. (2009). Brain technologies raise unprecedented ethical challenges. Nature 458, 703.

Blankertz, B., Tangermann, M., Vidaurre, C., Fazli, S., Sannelli, C., Haufe, S., Maeder, C., Ramsey, L., Sturm, I., Curio, G., and Müller, K.-R. (2010). The Berlin brain–computer interface: non-medical uses of BCI technology. Front. Neurosci. 4:198. doi:10.3389/fnins.2010.00198

Brodersen, K. H., Haiss, F., Soon Ong, C., Jung, F., Tittgemeyer, M., Buhmann, J. M., Weber, B., and Stephan, K. E. (2011). Model-based feature construction for multivariate decoding. Neuroimage 56, 601–615.

Cruse, D., Chennu, S., Chatelle, C., Bekinschtein, T. A., Fernández-Espejo, D., Pickard, J., Laureys, S., and Owen, A. (2011). Bedside detection of awareness in the vegetative state: a cohort study. Lancet 378, 2088–2094.

Daunizeau, J., den Ouden, H. E. M., Pessiglione, M., Kiebel, S. J., Stephan, K. E., and Friston, K. J. (2010). Observing the observer (I): meta-Bayesian models of learning and decision-making. PLoS ONE 5, e15554. doi:10.1371/journal.pone.0015554

DeCharms, R. C. (2008). Applications of real-time fMRI. Nat. Rev. Neurosci. 9, 720–729.

Dumas, G., Lachat, F., Martinerie, J., Nadel, J., and George, N. (2011). From social behaviour to brain synchronization: review and perspectives in hyperscanning. IRBM 32, 48–53.

Dunbar, R. I. M. (2011). The social brain meets neuroimaging. Trends Cogn. Sci. (Regul. Ed.) 16, 101–102.

Friston, K. (2010). The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127–138.

Friston, K., and Dolan, R. (2010). Computational and dynamic models in neuroimaging. Neuroimage 52, 752–765.

Friston, K., Mattout, J., and Kilner, J. (2011). Action understanding and active inference. Biol. Cybern. 104, 137–160.

Frith, C. D., and Frith, U. (2012). Mechanisms of social cognition. Annu. Rev. Psychol. 63, 8.1–8.27.

Holtmann, M., Steiner, S., Hohmann, S., Luise, P., Banaschewski, T., and Bölte, S. (2011). Neurofeedback in autism spectrum disorder. Dev. Med. Child Neurol. 53, 986–993.

Johnson, S. C. (2003). Detecting agents. Philos. Trans. R. Soc. Lond. B Biol. Sci. 358, 549–559.

Johnston, S. J., Boehm, S. G., Healy, D., Goebel, R., and Linden, D. E. J. (2010). Neurofeedback: a promising tool for the self-regulation of emotion networks. Neuroimage 49, 1066–1072.

Kelso, J. A. S., de Guzman, G. C., Reveley, C., and Tognoli, E. (2009). Virtual partner interaction (VPI): exploring novel behaviors via coordination dynamics. PLoS ONE 4, e5749. doi:10.1371/journal.pone.0005749

Kilner, J. M., Friston, K. J., and Frith, C. D. (2007). Predictive coding: an account of the mirror neuron system. Cogn. Process. 8, 159–166.

Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., and Kircher, T. (2008). Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 3, e2597. doi:10.1371/journal.pone.0002597

Kübler, A., and Kotchoubey, B. (2007). Brain-computer interfaces in the continuum of consciousness. Curr. Opin. Neurol. 20, 643–649.

Maby, E., Perrin, M., Morlet, D., Ruby, P., Bertrand, O., Ciancia, S., Gallifet, N., Luaute, J., and Mattout, J. (2011). “Evaluation in a locked-in patient of the OpenViBE P300-speller,” in Proceedings of the 5th International Brain-Computer Interface, Graz, 272–275.

Millan, J. R., and Carmena, J. M. (2009). Invasive or noninvasive: understanding brain-machine interface technology. IEEE Eng. Med. Biol. Mag. 29, 16–22.

Moritz, C. T., Perlmutter, S. I., and Fetz, E. E. (2008). Direct control of paralysed muscles by cortical neurons. Nature 456, 639–642.

Nicolelis, M. A. L. (2001). Actions from thoughts. Nature 409, 403–407.

Obbink, M. (2011). Social Interaction in a Cooperative Brain-Computer Interface Game. Ph.D. thesis. Available at: http://purl.utwente.nl/essays/61043

O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., and Nicolelis, M. A. L. (2011). Active tactile exploration using a brain–machine–brain interface. Nature 479, 228–231.

Perrin, M., Maby, E., Bouet, R., Bertrand, O., and Mattout, J. (2011). “Detecting and interpreting responses to feedback in BCI,” in Graz BCI International Workshop, Graz, 116–119.

Peters, J., and Büchel, C. (2010). Neural representations of subjective reward value. Behav. Brain Res. 213, 135–141.

Pfurtscheller, G., Allison, B. Z., Brunner, C., Bauernfeind, G., Solis-Escalante, T., Scherer, R., Zander, T. O., Mueller-Putz, G., Neuper, C., and Birbaumer, N. (2010). The hybrid BCI. Front. Neurosci. 4:30. doi:10.3389/fnpro.2010.00003

Pfurtscheller, G., Linortner, P., Winkler, R., Korisek, G., and Müller-Putz, G. (2009). Discrimination of motor imagery-induced EEG patterns in patients with complete spinal cord injury. Comput. Intell. Neurosci. 2009, 104180.

Philiastides, M. G., Biele, G., Vavatzanidis, N., Kazzer, P., and Heekeren, H. R. (2010). Temporal dynamics of prediction error processing during reward-based decision making. Neuroimage 53, 221–232.

Plass-Oude Bos, D., Reuderink, B., Laar, B., Gürkök, H., Mühl, C., Poel, M., Nijholt, A., and Heylen, D. (2010). “Brain-computer interfacing and games,” in Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction (Human-Computer Interaction Series), eds D. Tan and A. Nijholt (London: Springer-Verlag), 149–178.

Qian, N., and Lipkin, R. M. (2011). A learning-style theory for understanding autistic behaviors. Front. Hum. Neurosci. 5:77. doi:10.3389/fnhum.2011.00077

Redcay, E., Dodell-Feder, D., Pearrow, M. J., Mavros, P. L., Kleiner, M., Gabrieli, J. D., and Saxe, R. (2010). Live face-to-face interaction during fMRI: a new tool for social cognitive neuroscience. Neuroimage 50, 1639–1647.

Schilbach, L., Wilms, M., Eickhoff, S. B., Romanzetti, S., Tepest, R., Bente, G., Jon Shah, N., Fink, G. R., and Vogeley, K. (2010). Minds made for sharing: initiating joint attention recruits reward-related neurocircuitry. J. Cogn. Neurosci. 22, 2702–2715.

Seeley, T. D., Visscher, P. K., Schlegel, T., Hogan, P. M., Franks, N. R., and Marshall, J. A. (2012). Stop signals provide cross inhibition in collective decision-making by honeybee swarms. Science 335, 108–111.

Serences, J. T., and Saproo, S. (2012). Computational advances towards linking BOLD and behavior. Neuropsychologia 50, 435–446.

Todorov, E. (2009). “Compositionality of optimal control laws,” in Advances in Neural Information Processing Systems 22, eds Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta (Vancouver: MIT Press), 1856–1864.

Vidal, J. J. (1973). Toward direct brain-computer communication. Annu. Rev. Biophys. Bioeng. 2, 157–180.

Vidaurre, C., and Blankertz, B. (2010). Towards a cure for BCI illiteracy. Brain Topogr. 23, 194–198.

Vidaurre, C., Schlögl, A., Cabeza, R., Scherer, R., and Pfurtscheller, G. (2006). A fully on-line adaptive BCI. IEEE Trans. Biomed. Eng. 53, 1214–1219.

Wolpert, D. M., Doya, K., and Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philos. Trans. R. Soc. Lond. B Biol. Sci. 358, 593–602.

Wolpert, D. M., and Flanagan, R. (2010). Q&A: robotics as a tool to understand the brain. BMC Biol. 8, 92. doi:10.1186/1741-7007-8-92

Yoshida, W., Dolan, R. J., and Friston, K. J. (2008). Game theory of mind. PLoS Comput. Biol. 4, e1000254. doi:10.1371/journal.pcbi.1000254

Zander, T. O., Lehne, M., Ihme, K., Jatzev, S., Correia, J., Kothe, C., Picht, B., and Nijboer, F. (2011). A dry EEG-system for scientific research and brain–computer interfaces. Front. Neurosci. 5:53. doi:10.3389/fnins.2011.00053

Keywords: brain-computer interfaces, social interactions, artificial agent, computational neuroscience, real-time decoding

Citation: Mattout J (2012) Brain-computer interfaces: a neuroscience paradigm of social interaction? A matter of perspective. Front. Hum. Neurosci. 6:114. doi: 10.3389/fnhum.2012.00114

Received: 29 February 2012; Accepted: 13 April 2012;
Published online: 01 June 2012.

Edited by:

Chris Frith, Wellcome Trust Centre for Neuroimaging at University College London, UK

Reviewed by:

James Kilner, University College London, UK

Copyright: © 2012 Mattout. This is an open-access article distributed under the terms of the Creative Commons Attribution Non Commercial License, which permits non-commercial use, distribution, and reproduction in other forums, provided the original authors and source are credited.

*Correspondence: Jérémie Mattout, INSERM 1028, Equipe Bertrand, Lyon Neuroscience Research Center, CHS Le Vinatier, 95 Boulevard Pinel, 69500 Bron, Lyon, France. e-mail: jeremie.mattout@inserm.fr
