ORIGINAL RESEARCH article
Sec. Social Neuroscience
Volume 11 - 2020 | https://doi.org/10.3389/fpsyt.2020.00279
Imaging Real-Time Tactile Interaction With Two-Person Dual-Coil fMRI
- 1Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- 2Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
- 3Department of Art, Aalto University School of Arts, Design and Architecture, Espoo, Finland
- 4Turku PET Centre and Department of Psychology, University of Turku, Turku, Finland
Studies of brain mechanisms supporting social interaction are demanding because real interaction only occurs when persons are in contact; yet most brain imaging studies scan subjects individually. Here we present a proof-of-concept demonstration of two-person blood-oxygen-level-dependent (BOLD) imaging of brain activity from two individuals interacting inside the bore of a single MRI scanner. We developed a custom 16-channel (8 + 8 channels) two-helmet coil with two separate receiver-coil pairs providing whole-brain coverage, while bringing the participants into a shared physical space and realistic face-to-face contact. Ten subject pairs were scanned with this setup. During the experiment, subjects took turns in tapping each other’s lip versus observing and feeling the taps, timed by auditory instructions. Networks of sensorimotor brain areas were engaged alternately in the two subjects as they executed the motor actions and as they observed and felt them; these responses were clearly distinguishable from the auditory responses that occurred similarly in both participants. Even though the signal-to-noise ratio of our coil system was compromised compared with standard 32-channel head coils, our results show that two-person fMRI scanning is feasible for studying the brain basis of social interaction.
Humans are embedded in complex social networks where individuals interact at different temporal scales. Most social interactions, such as verbal and nonverbal communication, occur in dyads or groups, where people constantly strive to predict, understand, and influence each other. During the interaction, sensory, cognitive, and emotional information is constantly remapped in the observers’ brain and used for motor actions as responses attuned to the received input (1). Thus the interlocutors’ minds are intertwined into a shared system facilitating reciprocation (2–4) as well as anticipation of the other person’s acts, allowing distribution of neural processing across brains to aid, for example, problem solving.
Some aspects of human social behavior—in particular perceptual and decision-making processes—can be studied by measuring single brains in isolation. Conventional BOLD-fMRI experiments allow localization of brain processes related to different social functions, while intersubject correlation (ISC) analysis based on voxelwise temporal correlation of BOLD-fMRI time series (5–7) or of neuromagnetic activity with higher temporal resolution (8) across subject pairs can be used to index the similarity of sensory and socioemotional information processing across subjects (9, 10). Recently, this approach has also been extended to quantifying the similarity of person preferences and social relationships (11). Although such data-driven analyses can be used to map the brain basis of social perception with high-dimensional stimulus spaces, they are still essentially based on the measurement of extrinsic, fixed stimuli and lack the defining feature of social interaction, as the subjects have no influence whatsoever on other people’s minds during the experiment. This is a critical limitation, as social interaction cannot be reduced to sequential, partially parallel processing of the input of the interacting brains: social interaction only emerges when the two brains (via their owners) are hooked up together (4, 8, 12). Simply put, real-time social interaction does not exist when two or more individuals are not engaged in the same physical or virtual space (13).
Reciprocal social cognitive processes thus cannot be understood completely without studying the complete interaction unit consisting of two individuals (14). Behavioral work suggests that social interaction tunes the individuals into a self-organizing, interactive state. For example, humans automatically mimic others’ emotional expressions (15), gaze direction (16), and postures (17). Social signals, such as laughter, also automatically attune individuals not just at the level of motor responses, but also in terms of activation of specific neurotransmitter systems (18). Moreover, many social processes, such as gaze following (19, 20) and turn taking during conversation (21), take place with gaps shorter than 250 ms, and social interaction may lead to episodes of two-person flow with neither partner consciously leading or following (22). Yet, most of what we know about human social brain functions comes from “spectator” studies where the brains are assumed to generate responses to pre-defined stimulation (8). Even though this approach has been successful in delineating the brain basis of social perception, and on some occasions of social communication, it tells relatively little about the actual mechanisms of dynamic social interaction. Consequently, several researchers have suggested that the spectator paradigm and offline social cognition studies should be complemented with real-time two-person paradigms, where two interacting individuals constantly generate “stimuli” for each other (1, 2, 4, 23, 24).
Some aspects of human communication can be investigated using alternated scanning of the subjects sending and receiving information. In such an approach, the senders convey some social information via, for example, speech or gestures, while their brain activity as well as the communicative information are recorded. The communicative information can then be presented to the receiver subjects as stimuli during brain imaging, allowing joint analysis of the brain activity of the sender and receiver subjects. This line of work has revealed how successful communication via speech (25, 26), hand gestures (3, 27), and facial expressions (28) enhances similarity of neural activation patterns across the interlocutors in a task-specific manner. This approach however lacks any interactivity, as the receiver subjects are essentially viewing pre-recorded stimuli and need not generate any responses to them. Recently, different neuroimaging techniques have been proposed for studying dynamic “live” interaction. In the hyperscanning approach, two individuals are scanned with two MRI (29–31) or MEG (32) devices connected with an audio-video link, thus enabling interaction of two subjects in independent devices. Furthermore, with EEG recordings real face-to-face interaction can be achieved in reasonably unconstrained social interaction tasks (33).
Such natural sense of presence of another individual might be critical for understanding the brain basis of social interaction. For example, resting-state brain activity in nonhuman primates is different when conspecifics are present versus absent (34). In humans, interaction with real rather than recorded persons elicits stronger hemodynamic responses (35), and even early electrophysiological responses such as the face-sensitive N170 responses are amplified for real human faces versus those of a human-like dummy (36). These findings highlight the importance of co-presence with other people, and the consequent changes in the way the brain processes both internal and external cues. Consequently, to understand the intricacies of the brain basis of human social interaction and communication, we need techniques that allow simultaneous recording of two individuals in the same physical space. This has already been technically achieved with simultaneous EEG (e.g. 37) and NIRS (e.g. 38, 39) recordings, as these devices can be easily attached to subjects measured in conventional face-to-face settings. Nevertheless, neither of these techniques allows volumetric measurements of the deep brain structures, many of which are critical for human social processes (40–42).
The Current Study
One potentially powerful approach for studying the brain basis of social interaction involves simultaneous blood-oxygen-level-dependent (BOLD) imaging of two persons within one magnetic resonance imaging scanner. Such an approach would bring both subjects into the same physical space whilst allowing tomographic imaging of hemodynamic brain activation. Currently, one such solution has been published, based on a decoupled circular-polarized volume coil for two heads (43, 44). We have, in turn, developed a custom-built 16-channel (8 + 8 channels) two-helmet coil with two separate receiving elements (45), allowing experimental setups where the subjects were facing each other so that their feet pointed in opposite directions in the magnet bore. In the present proof-of-concept study we demonstrate how hemodynamic signals can be recorded during real-time social interaction using this novel MRI setup, in which the subjects lie parallel to each other while sharing the same physical space in realistic face-to-face contact. The setup thereby allows seamless interaction between the members of the dyad, while providing whole-brain coverage.
Because this was the very first proof-of-concept human experiment done with our dual-coil design, we wanted to benchmark the feasibility of the setup with a robust and simple social interaction task, rather than setting up an overly complex design without knowing the potential limitations of the coil setup. Consequently, we used social touching as the model task, as touching is an intimate way of conveying affect and trust in social relationships (46–48). During the fMRI experiments, subjects took turns in tapping each other’s lip versus observing and feeling the taps. We show that overlapping networks of sensorimotor brain areas are engaged during executing motor actions as well as observing and feeling social touching, suggesting that the two-person fMRI recordings are feasible for studying the brain basis of social interaction.
Materials and Methods
We scanned 10 pairs of volunteers with a mean age of 23 ± 3 years (20 subjects; 7 female–male pairs and 3 female–female pairs). One further pair was scanned but excluded due to excessive head motion: one of the subjects moved so that the detector array’s sensitivity was compromised, and repositioning was not possible due to time constraints. All subjects were right-handed per self-report, and none reported any history of neurological illness. All pairs were friends or romantic partners. The study was approved by the Aalto University Institutional Review Board. All subjects gave written informed consent and were screened for MRI exclusion criteria prior to scanning.
Data were acquired with a 3-T whole-body MRI system (MAGNETOM Skyra 3.0 T, Siemens Healthcare, Erlangen, Germany) with both a vendor-provided 32-channel receive head coil (reference scans) and a custom-built 16-channel (8 + 8 channels) two-head, two-helmet receive coil (anatomical images, task-based fMRI, and resting state scan). With both receive coils, the integrated body coil was used for transmit. Figure 1 shows an overview of the coil and subject setup. We originally experimented with a setup where the subjects were lying either sideways or in a supine position while entering the gantry from opposite ends, so that a second custom MRI bed was used for the backwards entry. This setup was, however, discarded due to subject discomfort and concomitant motion-related artifacts.
Figure 1 Coil and subject setup. (A, B) Illustration of the dual coil and its arrangement in the scanner. (C, D) Subject setup inside the scanner.
Every scanning session consisted of two parts. First both subjects were scanned one-by-one using normal one-person setup (head-first supine, 32-channel coil). T1-weighted MP-RAGE images were acquired for anatomical reference, and gradient echo (GRE) echo-planar imaging (EPI) data were acquired for evaluating the temporal signal-to-noise ratio (tSNR), especially in comparison with the two-person data. Imaging parameters for the MP-RAGE scans were as follows: repetition time (TR) = 2.53 s, echo time (TE) = 28 ms, readout flip angle (α) = 7°, 256 × 256 × 176 imaging matrix, isotropic 1-mm3 resolution, and GRAPPA reduction factor (R) = 2. The parameters for the GRE-EPI were: TR = 3 s, TE = 28 ms, α = 80°, fat saturation was used, in-plane imaging matrix (frequency encoding × phase encoding) = 70 × 70, field-of-view (FOV) = 21 × 21 cm2, in-plane resolution 3 × 3 mm2, R = 2, effective echo spacing (esp) = 0.26 ms, bandwidth = 2380 Hz/pixel (total bandwidth = 167 kHz), phase encoding in anterior–posterior direction, slice thickness = 3 mm with 10% slice gap, interleaved slice-acquisition order. Altogether 126 volumes, with 49 oblique axial slices in each, were acquired during the 6 min 18 s data-collection period. Three “dummy” scans were acquired at the beginning of each acquisition to stabilize the longitudinal magnetization.
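As a quick consistency check, the stated run length follows directly from the volume count and the TR quoted above (values taken from the text, not re-measured):

```python
# Functional run length from the acquisition parameters stated above.
n_volumes = 126   # EPI volumes per run (excluding the three dummy scans)
tr_s = 3          # repetition time (s)

total_s = n_volumes * tr_s            # 378 s
minutes, seconds = divmod(total_s, 60)
print(f"{minutes} min {seconds} s")   # 6 min 18 s
```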
Next the subjects were positioned in the scanner together with the two-head coil; the subjects were lying on their sides, facing each other at a close distance. Localizer and GRE-EPI data were acquired after shimming the magnet using a semi-automated workflow: B0 field maps were acquired and the shims recalculated iteratively until the shim was deemed acceptable. In this phase, the subjects could be repositioned if their field maps appeared excessively dissimilar. The scan parameters were the same as in the one-person setup, with the following exceptions: in-plane matrix = 160 × 70, FOV = 48.6 × 20.1 cm2, and bandwidth = 2404 Hz/pixel (total bandwidth = 385 kHz). Moreover, the phase encoding was in the subjects’ left–right direction to avoid aliasing ghosts from one subject’s brain into the other, and to reduce distortion and scan time by limiting the number of phase encoding steps (to 35 per slice). The 49 slices were oriented axially and tilted only to maximize the symmetry of the acquisition of the two brains. During the two-person measurements, the bodies of the subjects were in contact (without direct skin contact), and pillows and foam mattresses were used to make the subjects as comfortable as possible. The subjects’ heads were stabilized using small pillows with a non-slippery surface, and additional support was provided by a large vacuum pillow that, once deflated, retained its shape throughout the session.
Figure 2 summarizes the touching task. The subjects took turns in repeatedly tapping (“actor” subject) the lower lip of their partner (“receiver” subject) with the tip of the index finger, so that both partners could also clearly see the finger movement. This site was chosen so that the finger movements would be clearly visible to both subjects. Self-paced (∼2 Hz) tapping was performed throughout the 30-s task blocks. Subjects were instructed to minimize extraneous finger movements, because motion near the imaging volume perturbs the magnetic field and can interfere with the spatial encoding and introduce head motion. During the rest blocks the subjects were instructed to hold their finger close to, but not touching, the lower lip of their partner. Each task run contained six rest–task block cycles with an additional rest block at the end of the run. Except for the initial rest condition, transitions between blocks were cued by the pre-recorded voice commands “Touch” and “Rest.” These were delivered to the participants via the intercom system of the MRI scanner, by connecting the stimulus computer’s audio output to the magnet console. Presentation software (Neurobehavioral Systems, Berkeley, CA, USA) controlled stimulus presentation. After the first task run we confirmed that the subjects could hear the voice commands. For any given run, only one of the subjects performed the active touching task while the other focused on feeling the taps. The roles were switched between runs. Both subjects were always scanned twice in both roles, so that altogether four task runs with 126 EPI volumes in each were acquired.
Figure 2 Experimental design. Subjects took 30-s turns in tapping each other’s lower lip with their index finger, resulting in an alternating tapping–feeling boxcar design in complete antiphase across the subjects. Turns were indicated with commands relayed via headphones.
Resting state scans were obtained for inspecting signal quality. During the single-subject GRE-EPI data acquisition, the subjects were instructed to lie still and keep their eyes open; eye-blinking was allowed. The two-person resting-state scans were always acquired prior to the task scans, with the subjects asked to lie still with eyes open without actively looking at each other.
Image Preprocessing—One-Person Scans
The fMRI data were preprocessed in Matlab utilizing custom code and FSL functions (49). The one-person fMRI data were brain-extracted using BET (51), slice-timing corrected, motion-corrected frame-wise within each run using FSL MCFLIRT (50), and smoothed with the structure-preserving smoothing of SUSAN (52), which employed a 6-mm (full-width-at-half-maximum, FWHM) Gaussian smoothing kernel. The data were rigidly (six free parameters) aligned to the anatomical MP-RAGE scans, with a narrow search space for the alignment because the receiver intensity pattern was spatially atypical and prohibited the use of the standard options for several datasets. The anatomical images were normalized to MNI space, and the resulting warps were then applied to the EPI images. Data were finally smoothed using a Gaussian kernel with 8-mm FWHM.
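The kernel widths above are quoted as FWHM values, whereas Gaussian smoothing is often parameterized internally by the kernel's standard deviation; the two are related by a fixed constant, FWHM = 2√(2 ln 2) · σ ≈ 2.355 σ. A minimal sketch of the conversion (the exact kernel handling inside SUSAN is an FSL implementation detail not restated here):

```python
import math

def fwhm_to_sigma(fwhm_mm: float) -> float:
    """Convert a Gaussian kernel's full-width-at-half-maximum to its
    standard deviation, using FWHM = 2*sqrt(2*ln 2) * sigma."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# The 6-mm and 8-mm FWHM kernels used above correspond to:
print(round(fwhm_to_sigma(6.0), 3))  # 2.548 (mm)
print(round(fwhm_to_sigma(8.0), 3))  # 3.397 (mm)
```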
Image Preprocessing—Two-Person Scans
Individual heads were first separated and rotated to standard head-first supine orientation using a fixed coordinate transformation without resampling. Next both subjects’ data were preprocessed independently as described above. Preprocessing was concluded by recombining the data of each pair so that one subject’s data were in MNI space, and the other subject’s data were placed nose-to-nose with that to mimic the actual positioning during the scanning.
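The head-separation step can be illustrated with a toy sketch (our own illustration, not the authors' pipeline code): the combined in-plane matrix is split into two halves, one per subject, and one half is reoriented with a fixed 180° in-plane rotation, mimicking the mapping of both heads to a common head-first supine frame without resampling:

```python
# Illustrative sketch of separating a combined two-head slice into
# per-subject images. In the actual acquisition the combined in-plane
# matrix is 160 x 70 (left-right x anterior-posterior), each half
# holding one head; the transformation here is a toy stand-in.

def split_and_reorient(combined):
    """combined: list of rows (left-right axis), each a list of values."""
    half = len(combined) // 2
    subject_a = combined[:half]
    # Fixed 180-degree in-plane rotation: reverse row order and each row.
    subject_b = [row[::-1] for row in combined[half:][::-1]]
    return subject_a, subject_b

# Toy 4 x 2 "matrix" standing in for the 160 x 70 acquisition:
demo = [[1, 2], [3, 4], [5, 6], [7, 8]]
a, b = split_and_reorient(demo)
print(a)  # [[1, 2], [3, 4]]
print(b)  # [[8, 7], [6, 5]]
```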
Coil performance was assessed with the temporal signal-to-noise ratio (tSNR) of resting-state fMRI scans comprising 126 time points. The FSL BET program was used to extract the brain voxels from the images, after which the data were motion-corrected using FSL MCFLIRT. Next, voxelwise tSNR values were calculated as the mean of the signal over the measurement divided by its standard deviation (SD) at each voxel. For comparison, a similar analysis was carried out for the one-person resting-state data.
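The tSNR computation described above amounts to a one-line voxelwise operation; a minimal pure-Python sketch (using the population standard deviation for simplicity; the exact estimator used in the original analysis is not stated):

```python
import statistics

def voxel_tsnr(timeseries):
    """Temporal SNR of one voxel: mean of the signal over the run
    divided by its standard deviation across time points."""
    mean = statistics.fmean(timeseries)
    sd = statistics.pstdev(timeseries)
    return mean / sd if sd > 0 else float("inf")

# Toy 126-point time course: baseline 100 with unit fluctuations.
ts = [100 + (-1) ** t for t in range(126)]
print(round(voxel_tsnr(ts), 1))  # 100.0
```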
Task-Evoked BOLD Responses
Task-evoked BOLD responses were analyzed in FSL using the General Linear Model (GLM). The main blocks were modeled at the stimulus periodicity, and the voice instructions were modeled as 3-s events at the beginning and end of each block (see Figure 2). A canonical double-gamma hemodynamic response function (HRF) was convolved with the time series of tactile stimulus blocks and voice events. The motion parameter estimates of both simultaneously scanned heads were included as nuisance regressors for each head individually; in other words, each subject’s model included that subject’s own as well as the other subject’s motion parameters as nuisance covariates. The other head’s motion estimates were included to gain resilience against motion-related field or signal fluctuations extending from one head to the location of the other. The analyses included the entire two-head volumes, allowing quantification and visualization of subject-specific and shared activation patterns across the dyad. In a complementary methodological approach, we applied independent-component analysis with the GIFT toolbox (http://icatb.sourceforge.net/) to the joint dual-head EPI data, and assessed the temporal profiles of the top extracted components against the experimental stimulus model.
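The core of this block-design GLM can be sketched in a few lines. The sketch below assumes SPM-style canonical double-gamma HRF parameters (peak near 5 s, undershoot near 15 s, undershoot ratio 1/6), which the text does not specify; the boxcar, TR, and single-regressor fit are toy simplifications of the full FSL model with its event regressors and nuisance covariates:

```python
import math

def double_gamma_hrf(t):
    """Canonical double-gamma HRF (assumed SPM-style parameters:
    peak ~5 s, undershoot ~15 s, undershoot ratio 1/6)."""
    if t <= 0:
        return 0.0
    peak = t ** 5 * math.exp(-t) / math.gamma(6)
    undershoot = t ** 15 * math.exp(-t) / math.gamma(16)
    return peak - undershoot / 6.0

def convolve(x, h):
    """Discrete linear convolution, truncated to len(x)."""
    return [sum(x[j] * h[i - j] for j in range(i + 1) if i - j < len(h))
            for i in range(len(x))]

# 30-s rest / 30-s task boxcar sampled at TR = 3 s (10 volumes per block):
tr = 3.0
boxcar = ([0.0] * 10 + [1.0] * 10) * 3
hrf = [double_gamma_hrf(i * tr) for i in range(11)]
regressor = convolve(boxcar, hrf)

def ols_beta(x, y):
    """Single-regressor ordinary least squares on mean-centered data."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    xc = [v - mx for v in x]
    yc = [v - my for v in y]
    return sum(a * b for a, b in zip(xc, yc)) / sum(a * a for a in xc)

# A noiseless toy voxel that follows the model with gain 2 recovers beta = 2:
y = [2.0 * v for v in regressor]
print(round(ols_beta(regressor, y), 6))  # 2.0
```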
Figures 3A, B show a representative dyad’s normalized data for the T1 and EPI sequences. tSNR was compared between resting-state scans of the two subjects imaged simultaneously with the two-person coil and the same subjects imaged alone with the standard 32-channel head coil. Figure 3C shows the mean tSNR in a sagittal plane of a representative dyad and in a roughly corresponding plane of one of the subjects of this dyad measured individually. The scales of the color bars differ by a factor of 1.5, corresponding to the theoretical SNR scaling resulting from the differences in acquisition bandwidth (SNR being inversely proportional to the square root of the bandwidth). As expected, the tSNRs of the two-person measurements were almost 50% lower than those of the single-subject measurements, with the most salient signal drop in the frontal cortices.
Figure 3 Representative single-dyad T1- (A) and T2*-weighted (B) images acquired with the dual coil. (C) tSNR for the dual coil and (D) for the conventional Siemens 32-channel head coil. Note that, due to preprocessing, the data from the dual-coil pairs in panel (C) are further away from each other than they actually were (cf. panel B).
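The factor of 1.5 used for the color bars can be checked from the total bandwidths of the two protocols given in the Methods (167 kHz for the one-person and 385 kHz for the two-person acquisition), since SNR scales inversely with the square root of the readout bandwidth:

```python
import math

# SNR ∝ 1 / sqrt(total readout bandwidth); total bandwidths from the
# acquisition parameters reported in the Methods.
bw_single_khz = 167.0  # one-person setup
bw_dual_khz = 385.0    # two-person setup

scaling = math.sqrt(bw_dual_khz / bw_single_khz)
print(round(scaling, 2))  # 1.52, i.e. the ~1.5 color-bar scaling factor
```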
Regional Effects in the GLM
The voice cues modeled as 3-s events elicited reliable bilateral auditory-cortex activations similarly in both subjects regardless of their role as the actor or the receiver (Figure 4A). In turn, the touching task resulted in differential activation patterns in the somatosensory and motor cortices depending on whether the subject was tapping or receiving taps (Figure 4B and Figure S1).
Figure 4 Main effects of auditory cue (A) and the touching task (B) for the actor and receiver subjects. The data are thresholded at p < 0.05, FDR corrected.
We next evaluated the consistency of the auditory and somatosensory activations across individual subjects. To that end, we binarized the first-level activation maps for the verbal instructions and tactile tasks, and generated cumulative activation maps where each voxel indicates in how many subjects task-dependent activation was detected at the a priori threshold (Figure 5). This analysis confirmed that the evoked auditory responses could be detected in practically all subjects, while the magnitude and detectability of the somatosensory responses were considerably more variable.
Figure 5 Cumulative map of the binarized (active / inactive) single-pair level activation maps for the auditory cues and touching task. Color bar indicates the number of subjects where significant activations were observed in the first-level analyses. Note that this analysis does not differentiate which subject was active in the tapping task.
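The cumulative-map construction described above amounts to a voxelwise sum of thresholded first-level maps; a toy sketch (illustrative values and threshold only, not the FDR-corrected criterion actually used):

```python
# A cumulative activation map is the voxelwise sum of binarized
# first-level maps: each value counts in how many subjects a voxel
# exceeded the a priori statistical threshold.

def cumulative_map(stat_maps, threshold):
    """stat_maps: list of per-subject voxel statistic lists (same length)."""
    n_vox = len(stat_maps[0])
    return [sum(1 for m in stat_maps if m[v] > threshold)
            for v in range(n_vox)]

# Three toy subjects, four voxels, toy threshold z > 2.3:
maps = [[3.1, 1.0, 2.5, 0.2],
        [2.9, 2.8, 1.1, 0.0],
        [3.5, 0.4, 2.6, 1.9]]
print(cumulative_map(maps, 2.3))  # [3, 1, 2, 0]
```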
Independent-Component Analysis (ICA)
ICA (Figure 6) applied to the combined data of the two subjects revealed two clear components during the task: IC1 centrally involving the sensorimotor network, and IC2 involving the auditory cortices and lateral frontal cortices. Both components were shared between the subjects, implying that similar auditory and somatomotor activity patterns were present in both subjects, irrespective of whether they were currently executing or feeling the touches.
Figure 6 (A) Two representative independent components (ICs) and (B) their time courses extracted from the data.
Our results show that hemodynamic activity can be reliably measured from two interacting subjects’ brains within one scanner using a dual-helmet setup with two separate coil arrays, and that this technique can be used for studying elementary social cognitive functions, such as interpersonal communication via touching. Although the SNR of the dual-helmet coil was compromised (see Figure 3) compared with a conventional 32-channel head coil (53), the BOLD responses were task- and region-specific: auditory cues activated the auditory cortex similarly in both subjects (as they both heard the same cues), while the somatosensory and motor activations varied depending on which subject was actively tapping the other. The cues however appeared to alert the acting subject more than the reacting subject, as reflected by activation of the parieto-occipital cortex (precuneus). ICA also revealed activation of sensorimotor and auditory networks in both subjects. Altogether, our results highlight how sensorimotor networks “resonate” across individuals during tactile interaction and confirm that fMRI with our novel dual-coil design is a potentially useful tool for studying the brain basis of social interaction.
Performance of the Dual Coil
GLM analysis revealed that specific task-dependent fluctuations in hemodynamic activity can be picked up with the setup. Despite the relatively modest sample size, the contrasts of interest (tactile, motor, and auditory activations) were significant at the a priori FDR-corrected statistical threshold. However, the SNR of the dual coil was clearly inferior to that of a conventional 32-channel head coil. An important source of the discrepancy in tSNR between the two- and one-subject setups is the smaller number of coil elements in each helmet of the two-person coil compared with the one-person coil (8 vs. 32). The overall quality and geometry of the coil also matter: while the two-person coil is a working prototype, the 32-channel coil is a state-of-the-art product of the magnet vendor. The homogeneity of the main magnetic field (B0) is another important factor. The second-order shim coils cannot achieve the same degree of homogeneity for the two heads as for a single round object, and the B0 at the edges of the imaging volume is, to begin with, less homogeneous than at the center of the magnet. For these reasons, the water peak is wider in the two-person case.
Also, as the two heads are typically of somewhat different size, the flip angles differ between the heads. Moreover, as the heads remain in different magnetic fields after shimming (often resulting in a two-peaked water spectrum; the phase maps of the individual brains are relatively even but have different offsets), the magnetization transfer due to fat saturation tends to reduce the signal of one head more than of the other, with fat saturation performance varying correspondingly. The homogeneity of the tSNR within the brain is also compromised by the absence of coil elements over the anterior parts of the brains (see Figures 1 and 3). This drop is similar to what occurs when the anterior part of the 32-channel coil is removed and only the posterior elements are used for imaging. A final factor influencing the tSNR is subject comfort and stability, which are worse in the two-person setup than in the normal setup because the subjects need to be scanned in close proximity and in a sideways position. We tried to alleviate this problem by keeping the experimental runs short, by padding the subjects well, and by using both subjects’ motion parameters as nuisance regressors in the analysis. It is however obvious that future studies need to implement more effective prospective means for motion control, such as neck or head restraints.
Simultaneous Measurement of Interacting Individuals
In contrast to conventional single-person MR imaging, the present two-person functional imaging approach provides novel means for understanding the neural basis of human social interaction. During social interaction, the interaction partners’ brains need to continuously anticipate as well as respond and adjust to incoming signals. A critical question is whether these sensorimotor loops function merely recursively, as a cascade of third-person action-response processes. For example, a dialogue between two persons becomes fully incomprehensible if one person’s speech fragments are removed from the recording. Brains are coupled with each other via behavior, and they influence each other via extracranial loops: motor actions conveyed by one individual are interpreted by means of the sensory systems of another, and converted to a sensorimotor format for promoting action understanding (1). The present two-person fMRI setup provides means for studying how these loops are established during real-time interaction, as the evolving temporal cascade of sender-receiver operations in the social interaction can be measured continuously.
Intuitively, two-person neuroimaging sounds like an outstanding means for analyzing social interaction, because it allows quantifying the dynamic interaction between two brains as such interaction occurs in real life. Yet after the initial demonstrations of the feasibility of two-person hyperscanning fMRI (30), surprisingly little work has been conducted in this domain given the prominence of other individuals in practically all aspects of our lives (54). For example, at the time of writing this article, searching Web of Science for “fMRI and hyperscanning” yields only 52 hits (of which 15 are original articles actually using fMRI hyperscanning), whereas searching for “fMRI” yields no less than 70,460 hits. One likely reason for the paucity of fMRI hyperscanning studies is that such experiments are inherently difficult to carry out and analyze. The two-person approach adds significantly to the complexity of the data, not just due to the doubled number of analyzed voxels, but due to the interactive and temporally evolving nonlinear nature of real social interaction. It is thus possible that this line of work has not increased our knowledge on social interaction as much as the extra complexity would warrant. But it is also possible that we have not yet asked the best questions with the two-person neuroimaging setups, and that we need to adopt a new theoretical framework for measuring and analyzing brain signals emerging from social interaction, rather than just scanning two brains at the same time using the traditional approach with pre-determined stimulus models. During social interaction, the interlocutors constantly generate “stimuli” for each other in an adaptive fashion; one potentially powerful approach thus involves careful recording and annotation of the behavioral dimensions of the social interaction as it occurs during the experiment, and using those data for post-experiment generation of subject-specific stimulus models.
This approach obviously leads to a high-dimensional stimulus space that again can be capitalized in the analysis: we do not necessarily know which features of social interaction form the most important dimensions when generating a classic stimulation model (55). On the contrary, when the stimulus model is generated based on the subject behavior during the experiment, the critical dimensions do not need to be known in advance but the research may aim at constructing them based on the data.
Practicality of the Two-Person Imaging Setup
We had to position our subjects in close proximity to each other, partly due to the limited size of the transmitting body coil, but also to provide a shared interpersonal space allowing, for example, joint manipulation of objects. However, this intimate setting likely breached the subjects’ peripersonal spaces, potentially influencing social processes because close social proximity may feel uncomfortable (56, 57). Accordingly, this setup is best suited for scanning subjects who know each other well, and the required intimacy may also bias subject selection. For the same reasons, this type of dual-coil imaging might be impossible for patient populations with disorders involving social interaction. An optimized version of the coil design could involve a setup comparable to two conventional head coils with the subjects’ vertices aligned against each other, so that both subjects could be scanned in supine position while entering the scanner from opposite ends of the bore. Although the subjects could not directly see each other in such a setup, eye contact could be arranged using a mirror system. Our setup had only an external auditory stimulus delivery system for the subjects. In theory, it would also be possible to project visual stimuli to the subjects, but due to the close proximity of the subjects’ faces this was deemed impractical. Our proof-of-concept study also revealed that the dual-coil setup is considerably less comfortable than a conventional 32-channel head coil. Subject setup and shimming are slow, and the scanning position is difficult to maintain over prolonged periods of time. Interlocking of the head coils and the close proximity of the subjects also increased susceptibility to motion. Accordingly, we tried to maximize subject comfort by limiting the scanning time to short blocks; in our experience, the current scanning time (four 6-min sessions plus anatomical images and preparations) was close to the maximum that subjects can comfortably tolerate.
Limitations and Future Directions
In this study we resorted to conventional, moderately accelerated fMRI acquisitions. However, recent advances in multi-band excitation, which improves temporal resolution, and parallel transmission, which could even out the flip angles across the two potentially very differently sized heads, could greatly benefit the two-person MRI setup. The SNR of the dual coil was significantly worse than that of a conventional 32-channel head coil, particularly in the frontal cortex, owing to multiple factors related to coil geometry and the low number of channels. This signal loss in the frontal cortex limits investigations of social interaction, for which the frontal cortex acts as a central hub region (58). However, many social processes emerge in regions where the coil system has adequate signal, such as posterior temporal and parietal cortices (16, 40, 59); thus, care must be taken when deciding which social tasks can be studied with the present setup. Additionally, future benchmarking with varied tasks and experimental setups should be conducted to establish which types of tasks are ultimately feasible for this kind of dual-coil imaging. Future versions of the coil setup should strive to cover the scalp more evenly and with higher-density coil arrangements. Such devices would also allow more efficient control of subject motion: the limited scalp contact of the current coil design, combined with the sideways scanning position, makes the setup sensitive to head motion.
We conclude that two-person fMRI is a feasible and potentially powerful tool for studying the brain dynamics of real-time social interaction. Even though the signal quality was compromised compared with state-of-the-art head coils, our results show that the dual-head coil yields sufficient SNR for quantifying the dynamics of real-time two-person interaction. This proof-of-concept study demonstrated that good-quality hemodynamic signals can be measured simultaneously from two brains with a single scanner. The two-person fMRI approach presented here complements existing fMRI and MEG hyperscanning as well as face-to-face EEG and fNIRS techniques by allowing tomographic imaging of the brain activity of two interacting subjects in face-to-face settings. Even though both subjects delivered tactile stimuli to each other during the experiment, the task was still externally controlled. Our data nevertheless suggest that this methodology can in the future be used for quantifying brain activation during dyadic, unconstrained, and naturalistic social interaction.
Data Availability Statement
The datasets generated for this study will not be made publicly available. The institutional review board did not grant permission to share sensitive medical data (MR images); thus, a data-sharing waiver could not be included in the informed consent. Requests to access these datasets should be directed to the corresponding author.
Ethics Statement
The studies involving human participants were reviewed and approved by the Aalto University Institutional Review Board. The participants provided their written informed consent to participate in this study.
Author Contributions
VR, JK, RH, and LN designed research. VR, SM, and RH developed instruments. VR and JK acquired data. VR and JK analyzed data. VR, JK, RH, and LN interpreted data. VR, JK, SM, RH, and LN wrote the paper.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
We thank Ms. Anna Anttalainen, Dr. Toni Auranen, Ms. Marita Kattelus, Mr. Veli-Matti Saarinen, and Mr. Tuomas Tolvanen for assistance, and Insight MRI for the development of the dual coil. The research was made possible by the Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland. VR is grateful for the funding provided by the Swedish Cultural Foundation in Finland, the Instrumentarium Science Foundation, and the Kalle and Dagmar Välimaa Fund of the Finnish Cultural Foundation. The funding support of the Academy of Finland (grant #218072 to RH, grant #265917 to LN) and the European Research Council ("Brain2Brain" grant #232946 to RH and "SocialBrain" grant #313000 to LN) is gratefully acknowledged.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyt.2020.00279/full#supplementary-material
Figure S1 | (A) Alternating responses to active touching by left and right subjects. (B) Overlapping activations for touching and feeling touch.
References
1. Hari R, Kujala MV. Brain Basis of Human Social Interaction: From Concepts to Brain Imaging. Physiol Rev (2009) 89:453–79. doi: 10.1152/physrev.00041.2007
2. Hasson U, Ghazanfar AA, Galantucci B, Garrod S, Keysers C. Brain-To-Brain Coupling: A Mechanism for Creating and Sharing a Social World. Trends Cognit Sci (2012) 16:114–21. doi: 10.1016/j.tics.2011.12.007
3. Smirnov D, Lachat F, Peltola T, Lahnakoski JM, Koistinen O-P, Glerean E, et al. Brain-To-Brain Hyperclassification Reveals Action-Specific Motor Mapping of Observed Actions in Humans. PloS One (2017) 12:e0189508. doi: 10.1371/journal.pone.0189508
4. Nummenmaa L, Lahnakoski JM, Glerean E. Sharing the Social World Via Intersubject Neural Synchronisation. Curr Opin Psychol (2018) 24:7–14. doi: 10.1016/j.copsyc.2018.02.021
5. Bartels A, Zeki S. The Chronoarchitecture of the Human Brain—Natural Viewing Conditions Reveal a Time-Based Anatomy of the Brain. Neuroimage (2004) 22:419–33. doi: 10.1016/j.neuroimage.2004.01.007
6. Hasson U, Nir Y, Levy I, Fuhrmann G, Malach R. Intersubject Synchronization of Cortical Activity During Natural Vision. Science (2004) 303:1634–40. doi: 10.1126/science.1089506
7. Wilson SM, Molnar-Szakacs I, Iacoboni M. Beyond Superior Temporal Cortex: Intersubject Correlations in Narrative Speech Comprehension. Cereb Cortex (2008) 18:230–42. doi: 10.1093/cercor/bhm049
8. Hari R, Henriksson L, Malinen S, Parkkonen L. Centrality of Social Interaction in Human Brain Function. Neuron (2015) 88:181–93. doi: 10.1016/j.neuron.2015.09.022
9. Lahnakoski JM, Glerean E, Jääskeläinen IP, Hyönä J, Hari R, Sams M. Synchronous Brain Activity Across Individuals Underlies Shared Psychological Perspectives. Neuroimage (2014) 100:316–24. doi: 10.1016/j.neuroimage.2014.06.022
10. Nummenmaa L, Glerean E, Viinikainen M, Jaaskelainen IP, Hari R, Sams M. Emotions Promote Social Interaction by Synchronizing Brain Activity Across Individuals. Proc Natl Acad Sci U S A (2012) 109:9599–604. doi: 10.1073/pnas.1206095109
11. Parkinson C, Kleinbaum AM, Wheatley T. Similar Neural Responses Predict Friendship. Nat Commun (2018) 9:332. doi: 10.1038/s41467-017-02722-7
12. Hari R, Sams M, Nummenmaa L. Attending to and Neglecting People: Bridging Neuroscience, Psychology and Sociology. Phil Trans B (2016) 371:1–9. doi: 10.1098/rstb.2015.0365
13. De Jaegher H, Di Paolo E, Gallagher S. Can Social Interaction Constitute Social Cognition? Trends Cognit Sci (2010) 14:441–7. doi: 10.1016/j.tics.2010.06.009
14. Konvalinka I, Roepstorff A. The Two-Brain Approach: How Can Mutually Interacting Brains Teach Us Something About Social Interaction? Front Hum Neurosci (2012) 6:1–10. doi: 10.3389/fnhum.2012.00215
15. Dimberg U, Thunberg M, Elmehed K. Unconscious Facial Reactions to Emotional Facial Expressions. Psychol Sci (2000) 11:86–9. doi: 10.1111/1467-9280.00221
16. Nummenmaa L, Calder AJ. Neural Mechanisms of Social Attention. Trends Cognit Sci (2009) 13:135–43. doi: 10.1016/j.tics.2008.12.006
17. Lakin JL, Jefferis VE, Cheng CM, Chartrand TL. The Chameleon Effect as Social Glue: Evidence for the Evolutionary Significance of Nonconscious Mimicry. J Nonverbal Behav (2003) 27:145–62. doi: 10.1023/A:1025389814290
18. Manninen S, Tuominen L, Dunbar RIM, Karjalainen T, Hirvonen J, Arponen E, et al. Social Laughter Triggers Endogenous Opioid Release in Humans. J Neurosci (2017) 37:6125–31. doi: 10.1523/JNEUROSCI.0688-16.2017
19. Frischen A, Bayliss AP, Tipper SP. Gaze Cueing of Attention: Visual Attention, Social Cognition, and Individual Differences. Psychol Bull (2007) 133:694–724. doi: 10.1037/0033-2909.133.4.694
20. Pfeiffer UJ, Vogeley K, Schilbach L. From Gaze Cueing to Dual Eye-Tracking: Novel Approaches to Investigate the Neural Correlates of Gaze in Social Interaction. Neurosci Biobehav Rev (2013) 37:2516–28. doi: 10.1016/j.neubiorev.2013.07.017
21. Stivers T, Enfield NJ, Brown P, Englert C, Hayashi M, Heinemann T, et al. Universals and Cultural Variation in Turn-Taking in Conversation. Proc Nat Acad Sci USA (2009) 106:10587–92. doi: 10.1073/pnas.0903616106
22. Noy L, Dekel E, Alon UT. The Mirror Game as a Paradigm for Studying the Dynamics of Two People Improvising Motion Together. Proc Natl Acad Sci (2011) 108:20947–52. doi: 10.1073/pnas.1108155108
23. Redcay E, Schilbach L. Using Second-Person Neuroscience to Elucidate the Mechanisms of Social Interaction. Nat Rev Neurosci (2019) 20:495–505. doi: 10.1038/s41583-019-0179-4
24. Schilbach L, Timmermans B, Reddy V, Costall A, Bente G, Schlicht T, et al. Toward a Second-Person Neuroscience. Behav Brain Sci (2013) 36:393–414. doi: 10.1017/S0140525X12000660
25. Smirnov D, Saarimäki H, Glerean E, Hari R, Sams M, Nummenmaa L. Emotions Amplify Speaker–Listener Neural Alignment. Hum Brain Mapp (2019) 40:4777–88. doi: 10.1002/hbm.24736
26. Stephens GJ, Silbert LJ, Hasson U. Speaker-Listener Neural Coupling Underlies Successful Communication. Proc Natl Acad Sci U S A (2010) 107:14425–30. doi: 10.1073/pnas.1008662107
27. Schippers MB, Roebroeck A, Renken R, Nanetti L, Keysers C. Mapping the Information Flow From One Brain to Another During Gestural Communication. Proc Natl Acad Sci U S A (2010) 107:9388–93. doi: 10.1073/pnas.1001791107
28. Anders S, Heinzle J, Weiskopf N, Ethofer T, Haynes JD. Flow of Affective Information Between Communicating Brains. Neuroimage (2011) 54:439–46. doi: 10.1016/j.neuroimage.2010.07.004
29. King-Casas B, Sharp C, Lomax-Bream L, Lohrenz T, Fonagy P, Montague PR. The Rupture and Repair of Cooperation in Borderline Personality Disorder. Science (2008) 321:806–10. doi: 10.1126/science.1156902
30. Montague PR, Berns GS, Cohen JD, McClure SM, Pagnoni G, Dhamala M, et al. Hyperscanning: Simultaneous fMRI During Linked Social Interactions. Neuroimage (2002) 16:1159–64. doi: 10.1006/nimg.2002.1150
31. Saito DN, Tanabe HC, Izuma K, Hayashi MJ, Morito Y, Komeda H, et al. “Stay Tuned”: Inter-Individual Neural Synchronization During Mutual Gaze and Joint Attention. Front Integr Neurosci (2010) 4:1–12. doi: 10.3389/fnint.2010.00127
32. Baess P, Zhdanov A, Mandel A, Parkkonen L, Hirvenkari L, Mäkelä JP, et al. MEG Dual Scanning: A Procedure to Study Real-Time Auditory Interaction Between Two Persons. Front Hum Neurosci (2012) 6:1–7. doi: 10.3389/fnhum.2012.00083
33. Babiloni F, Astolfi L. Social Neuroscience and Hyperscanning Techniques: Past, Present and Future. Neurosci Biobehav Rev (2014) 44:76–93. doi: 10.1016/j.neubiorev.2012.07.006
34. Monfardini E, Redoute J, Hadj-Bouziane F, Hynaux C, Fradin J, Huguet P, et al. Others’ Sheer Presence Boosts Brain Activity in the Attention (But Not the Motivation) Network. Cereb Cortex (2016) 26:2427–39. doi: 10.1093/cercor/bhv067
35. Redcay E, Dodell-Feder D, Pearrow MJ, Mavros PL, Kleiner M, Gabrieli JDE, et al. Live Face-To-Face Interaction During fMRI: A New Tool for Social Cognitive Neuroscience. Neuroimage (2010) 50:1639–47. doi: 10.1016/j.neuroimage.2010.01.052
36. Ponkanen LM, Hietanen JK, Peltola MJ, Kauppinen PK, Haapalainen A, Leppanen JM. Facing a Real Person: an Event-Related Potential Study. Neuroreport (2008) 19:497–501. doi: 10.1097/WNR.0b013e3282f7c4d3
37. Dumas G, Nadel J, Soussignan R, Martinerie J, Garnero L. Inter-Brain Synchronization During Social Interaction. PloS One (2010) 5:e12166. doi: 10.1371/journal.pone.0012166
38. Cui X, Bryant DM, Reiss AL. NIRS-Based Hyperscanning Reveals Increased Interpersonal Coherence in Superior Frontal Cortex During Cooperation. Neuroimage (2012) 59:2430–7. doi: 10.1016/j.neuroimage.2011.09.003
39. Funane T, Kiguchi M, Atsumori H, Sato H, Kubota K, Koizumi H. Synchronous Activity of Two People’s Prefrontal Cortices During a Cooperative Task Measured by Simultaneous Near-Infrared Spectroscopy. J Biomed Opt (2011) 16:1–10. doi: 10.1117/1.3602853
40. Lahnakoski JM, Glerean E, Salmi J, Jaaskelainen I, Sams M, Hari R, et al. Naturalistic fMRI Mapping Reveals Superior Temporal Sulcus as the Hub for the Distributed Brain Network for Social Perception. Front Hum Neurosci (2012) 6:14. doi: 10.3389/fnhum.2012.00233
41. Saarimaki H, Ejtehadian LF, Glerean E, Jaaskelainen IP, Vuilleumier P, Sams M, et al. Distributed Affective Space Represents Multiple Emotion Categories Across the Human Brain. Soc Cognit Affect Neurosci (2018) 13:471–82. doi: 10.1093/scan/nsy018
42. Saarimäki H, Gotsopoulos A, Jääskeläinen IP, Lampinen J, Vuilleumier P, Hari R, et al. Discrete Neural Signatures of Basic Emotions. Cereb Cortex (2016) 6:2563–73. doi: 10.1093/cercor/bhv086
43. Lee RF. Dual Logic and Cerebral Coordinates for Reciprocal Interaction in Eye Contact. PloS One (2015) 10. doi: 10.1371/journal.pone.0121791
44. Lee RF, Dai W, Jones J. Decoupled Circular-Polarized Dual-Head Volume Coil Pair for Studying Two Interacting Human Brains With Dyadic Fmri. Magn Reson Med (2012) 68:1087–96. doi: 10.1002/mrm.23313
45. Renvall V, Malinen S. Setup and apparatus for two-person fMRI. (Beijing China: Poster presented at the 18th annual meeting of the Organization for Human Brain Mapping). (2012).
46. Nummenmaa L, Suvilehto JM, Glerean E, Santtila P, Hietanen JK. Topography of Human Erogenous Zones. Arch Sex Behav (2016) 1207–16. doi: 10.1007/s10508-016-0745-z
47. Suvilehto J, Glerean E, Dunbar RIM, Hari R, Nummenmaa L. Topography of Social Touching Depends on Emotional Bonds Between Humans. Proc Natl Acad Sci U S A (2015) 112:13811–6. doi: 10.1073/pnas.1519231112
48. Suvilehto J, Nummenmaa L, Harada T, Dunbar RIM, Hari R, Turner R, et al. Cross-Cultural Similarity in Relationship-Specific Social Touching. Proc R Soc Ser B-Biol Sci (2019) 1–10. doi: 10.1098/rspb.2019.0467
49. Jenkinson M, Beckmann CF, Behrens TE, Woolrich MW, Smith SM. FSL. Neuroimage (2012) 62:782–90. doi: 10.1016/j.neuroimage.2011.09.015
50. Jenkinson M, Bannister P, Brady M, Smith S. Improved Optimization for the Robust and Accurate Linear Registration and Motion Correction of Brain Images. Neuroimage (2002) 17:825–41. doi: 10.1006/nimg.2002.1132
51. Smith SM. Fast Robust Automated Brain Extraction. Hum Brain Mapp (2002) 17:143–55. doi: 10.1002/hbm.10062
52. Smith SM, Brady JM. SUSAN: A New Approach to Low Level Image Processing. Int J Comput Vis (1997) 23:45–78. doi: 10.1023/A:1007963824710
53. Kaza E, Klose U, Lotze M. Comparison of a 32-Channel With a 12-Channel Head Coil: Are There Relevant Improvements for Functional Imaging? J Magn Reson Imaging (2011) 34:173–83. doi: 10.1002/jmri.22614
54. Dunbar RIM. The Social Brain Hypothesis. Evol Anthropol (1998) 6:178–90. doi: 10.1002/(SICI)1520-6505(1998)6:5<178::AID-EVAN5>3.0.CO;2-8
55. Adolphs R, Nummenmaa L, Todorov A, Haxby JV. Data-Driven Approaches in the Investigation of Social Perception. Phil Trans B (2016) 371. doi: 10.1098/rstb.2015.0367
56. Kennedy DP, Glascher J, Tyszka JM, Adolphs R. Personal Space Regulation by the Human Amygdala. Nat Neurosci (2009) 12:1226–7. doi: 10.1038/nn.2381
57. Tsakiris M. My Body in the Brain: a Neurocognitive Model of Body-Ownership. Neuropsychologia (2010) 48:703–12. doi: 10.1016/j.neuropsychologia.2009.09.034
58. Amodio DM, Frith CD. Meeting of Minds: the Medial Frontal Cortex and Social Cognition. Nat Rev Neurosci (2006) 7:268–77. doi: 10.1038/nrn1884
59. Salmi J, Glerean E, Jaaskelainen IP, Lahnakoski JM, Kettunen J, Lampinen J, et al. Posterior Parietal Cortex Activity Reflects the Significance of Others’ Actions During Natural Viewing. Hum Brain Mapp (2014) 35:4767–76. doi: 10.1002/hbm.22510
Keywords: functional magnetic resonance imaging, touch, somatosensory, motor, two-person neuroscience
Citation: Renvall V, Kauramäki J, Malinen S, Hari R and Nummenmaa L (2020) Imaging Real-Time Tactile Interaction With Two-Person Dual-Coil fMRI. Front. Psychiatry 11:279. doi: 10.3389/fpsyt.2020.00279
Received: 01 December 2019; Accepted: 23 March 2020;
Published: 28 April 2020.
Edited by: Elizabeth Redcay, University of Maryland, United States
Reviewed by: Arjen Stolk, Dartmouth College, United States; Edda Bilek, University College London, United Kingdom
Copyright © 2020 Renvall, Kauramäki, Malinen, Hari and Nummenmaa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Lauri Nummenmaa, firstname.lastname@example.org