
ORIGINAL RESEARCH article

Front. Behav. Neurosci., 06 April 2018
Sec. Emotion Regulation and Processing
Volume 12 - 2018 | https://doi.org/10.3389/fnbeh.2018.00066

Neurophysiological Effects of Trait Empathy in Music Listening

  • 1Meadows School of the Arts, Southern Methodist University, Dallas, TX, United States
  • 2Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, Los Angeles, CA, United States
  • 3Academic Center for ECT and Neuromodulation, University Psychiatric Center, University of Leuven, Leuven, Belgium
  • 4Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States

The social cognitive basis of music processing has long been noted, and recent research has shown that trait empathy is linked to musical preferences and listening style. Does empathy modulate neural responses to musical sounds? We designed two functional magnetic resonance imaging (fMRI) experiments to address this question. In Experiment 1, subjects listened to brief isolated musical timbres while being scanned. In Experiment 2, subjects listened to excerpts of music in four conditions (familiar liked (FL), familiar disliked (FD), unfamiliar liked (UL) and unfamiliar disliked (UD)). For both types of musical stimuli, emotional and cognitive forms of trait empathy modulated activity in sensorimotor and cognitive areas: in Experiment 1, empathy was primarily correlated with activity in supplementary motor area (SMA), inferior frontal gyrus (IFG) and insula; in Experiment 2, empathy was mainly correlated with activity in prefrontal, temporo-parietal and reward areas. Taken together, these findings reveal interactions between bottom-up and top-down mechanisms of empathy in response to musical sounds, in line with recent findings from other cognitive domains.

Introduction

Music is a portal into the interior lives of others. By disclosing the affective and cognitive states of actual or imagined human actors, musical engagement can function as a mediated form of social encounter, even when listening by ourselves. It is commonplace for us to imagine music as a kind of virtual “persona,” with intentions and emotions of its own (Watt and Ash, 1998; Levinson, 2006): we resonate with certain songs just as we would with other people, while we struggle to identify with other music. Arguing from an evolutionary perspective, it has been proposed that the efficacy of music as a technology of social affiliation and bonding may have contributed to its adaptive value (Cross, 2001; Huron, 2001). As Leman (2007) indicates: “Music can be conceived as a virtual social agent … listening to music can be seen as a socializing activity in the sense that it may train the listener’s self in social attuning and empathic relationships.” In short, musical experience and empathy are psychological neighbors.

The concept of empathy has generated sustained interest in recent years among researchers seeking to better account for the social and affective valence of musical experience (for recent reviews see Clarke et al., 2015; Miu and Vuoskoski, 2017); it is also a popular topic of research in social neuroscience (Decety and Ickes, 2009; Coplan and Goldie, 2011). However, the precise neurophysiological relationship between music processing and empathy remains unexplored. Individual differences in trait empathy modulate how we process social stimuli—does empathy modulate music processing as well? If we consider music through a social-psychological lens (North and Hargreaves, 2008; Livingstone and Thompson, 2009; Aucouturier and Canonne, 2017), it is plausible that individuals with a greater dispositional capacity to empathize with others might also respond to music-as-social-stimulus differently on a neurophysiological level by preferentially engaging brain networks previously found to be involved in trait empathy (Preston and de Waal, 2002; Decety and Lamm, 2006; Singer and Lamm, 2009). In this article, we test this hypothesis in two experiments using functional magnetic resonance imaging (fMRI). In Experiment 1, we explore the neural correlates of trait empathy (as measured using the Interpersonal Reactivity Index) as participants listened to isolated instrument and vocal tones. In Experiment 2, excerpts of music in four conditions (familiar liked/disliked, unfamiliar liked/disliked) were used as stimuli, allowing us to examine correlations of neural activity with trait empathy in naturalistic listening contexts.

Measuring Trait Empathy

Trait empathy refers to the capacity for empathic reactions as a stable feature of personality. Individual differences in trait empathy have been shown to correlate with prosocial behavior (Litvack-Miller et al., 1997; Balconi and Canavesio, 2013) and situational, “state” empathic reactions to others (Bufalari et al., 2007; Avenanti et al., 2009). Trait empathy is commonly divided into two components: emotional empathy is the often unconscious tendency to share the emotions of others, while cognitive empathy is the ability to consciously detect and understand the internal states of others (Goldman, 2011).

A number of scales for measuring individual differences in trait empathy are currently in use, including the Toronto Empathy Questionnaire (TEQ), Balanced Emotional Empathy Scale (BEES), Empathy Quotient (EQ), Questionnaire of Cognitive and Affective Empathy (QCAE) and Interpersonal Reactivity Index (IRI). Here we use the IRI (Davis, 1980, 1996), which is the oldest and most widely validated of these scales and is frequently used in neurophysiological studies of empathy (Jackson et al., 2005; Gazzola et al., 2006; Pfeifer et al., 2008; Avenanti et al., 2009; Christov-Moore and Iacoboni, 2016). The IRI consists of 28 statements evaluated on a 5-point Likert scale (from “does not describe me well” to “describes me very well”). It is subdivided into four subscales meant to tap different dimensions of self-reported emotional and cognitive empathy. Emotional empathy is represented by two subscales: the empathic concern scale (hereafter EC) assesses trait-level “other-oriented” sympathy towards misfortunate others, and the personal distress scale (PD) measures “self-oriented” anxiety and distress towards misfortunate others. The two cognitive empathy subscales consist of perspective taking (PT), or the tendency to see oneself from another’s perspective, and fantasy (FS), the tendency to imaginatively project oneself into the situations of fictional characters.
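
Because all analyses below enter these subscales as continuous covariates, a minimal scoring sketch may clarify what those covariates are. The Python snippet below assumes 0–4 item coding and uses the commonly cited item key and reverse-keyed items for the IRI; both should be verified against Davis (1980) before any reuse.

```python
# Minimal IRI scoring sketch. The item-to-subscale mapping and the set of
# reverse-keyed items follow the commonly cited key for Davis's IRI, but
# should be verified against the original publication before use.
SUBSCALES = {
    "PT": [3, 8, 11, 15, 21, 25, 28],   # perspective taking
    "FS": [1, 5, 7, 12, 16, 23, 26],    # fantasy
    "EC": [2, 4, 9, 14, 18, 20, 22],    # empathic concern
    "PD": [6, 10, 13, 17, 19, 24, 27],  # personal distress
}
REVERSED = {3, 4, 7, 12, 13, 14, 15, 18, 19}  # reverse-keyed items (assumed key)

def score_iri(responses):
    """responses: dict mapping item number (1-28) to a 0-4 Likert rating."""
    keyed = {i: (4 - r if i in REVERSED else r) for i, r in responses.items()}
    return {name: sum(keyed[i] for i in items) for name, items in SUBSCALES.items()}
```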

Music and Empathy

Theories of empathy have long resonated with the arts. The father of the modern concept of empathy, philosopher Theodor Lipps (1907), originally devised the notion of Einfühlung (“feeling into”) in order to explain aesthetic experience. Contemporary psychological accounts have invoked mirror neurons as a possible substrate supporting Lipps’s “inner imitation” theory of the visual and performing arts (Molnar-Szakacs and Overy, 2006; Freedberg and Gallese, 2007). However, the incorporation of psychological models of empathy in empirical music research is still in its early stages. Empathy remains an ambiguous concept in general (Batson, 2009), but applications to music can appear doubly vexed. In an influential formulation, Eisenberg et al. (1991) define empathy as “an emotional response that stems from another’s emotional state or condition and is congruent with the other’s emotional state or condition.” Aspects of this definition, though, might seem incongruous when applied to music, which is inanimate and not capable of possessing an emotional “state” (Davies, 2011). To connect music processing to trait empathy, therefore, it is first necessary to determine the extent to which music comprises a social stimulus. Who or what do we empathize with when listening to music?

Scherer and Zentner (2001) proposed that empathy toward music is often achieved via identification and sympathy with the lived experiences and expressive intentions of composers and performers. Corroborating this view, in a large web-based experiment Egermann and McAdams (2013) found that “empathy for the musician” moderated between recognized and induced emotions in music: the greater the empathy, the more likely an individual was to exhibit a strong affective response when listening. In a related study, Wöllner (2012) presented participants with video of a string quartet performance in three conditions—audio/visual, visual only, and audio only—and reported a significant correlation between trait empathy measures and perceived expressiveness in both visual conditions (the audio-only condition was non-significant), leading him to conclude: “since music is the audible outcome of actions, empathic responses to the performer’s movements may enhance the enjoyment of music.” Similarly, Taruffi et al. (2017) found correlations between the EC and FS scales of the IRI and accuracy in emotion recognition relative to musicians’ self-reported expressive encodings in an audio-only task.

A music-specific manifestation of trait empathy was proposed by Kreutz et al. (2008), who defined “music empathizing” as a cognitive style of processing music that privileges emotional recognition and experience over the tendency to analyze and predict the rules of musical structure (or, “music systematizing”). Garrido and Schubert (2011) compared this “music empathy” scale alongside the IRI-EC subscale in a study exploring individual differences in preference for sad music. They found that people who tend towards music empathizing are more likely to enjoy sad music; however, high trait empathy was not significantly correlated with enjoyment of sad music. This would seem to suggest that the music empathizing cognitive style differs from general trait empathy. A number of other studies have investigated the relationship between trait empathy and enjoyment of sad music using the IRI. In a series of experiments, Vuoskoski and Eerola (2011), Vuoskoski et al. (2012) and Eerola et al. (2016) reported statistically significant correlations between EC and FS subscales and self-reported liking for sad and tender music. Similarly, Kawakami and Katahira (2015) found that FS and PT were associated with preference for and intensity of emotional reactions to sad music among children.

There is evidence that musical affect is often achieved through mechanisms of emotional empathy (Juslin and Västfjäll, 2008). According to this theory, composers and performers encode affective gestures into the musical signal, and listeners decode that signal by way of mimetic, mirroring processes; musical expression is conveyed transparently as affective bodily motions are internally reenacted in the listening process (Overy and Molnar-Szakacs, 2009). Schubert (2017), in his Common Coding Model of Prosocial Behavior Processing, suggests that musical and social processing draw upon shared neural resources: music, in this account, is a social stimulus capable of recruiting empathy systems, including the core cingulate-paracingulate-supplementary motor area (SMA)-insula network (Fan et al., 2011), along with possible sensorimotor, paralimbic and limbic representations. The cognitive empathy component, which can be minimal, is involved primarily in detecting the aesthetic context of listening, enabling the listener to consciously bracket the experience apart from the purely social. This model may help account for the perceived “virtuality” of musical experience, whereby music is commonly heard as manifesting the presence of an imagined other (Watt and Ash, 1998; Levinson, 2006).

In sum, trait empathy appears to modulate self-reported affective reactions to music. There is also peripheral psychophysiological evidence that primed situational empathy may increase emotional reactivity to music (Miu and Balteş, 2012). Following Schubert (2017), it is plausible that such a relationship is supported by shared social cognitive mechanisms that enable us to process music as a social stimulus; however, this hypothesis has not yet been explicitly tested at the neurophysiological level.

Neural Correlates of Trait Empathy

Corroborating the bipartite structure that appears in many behavioral models of trait empathy, two interrelated but distinct neural “routes” to empathy have been proposed (Goldman, 2011), one associated with emotional contagion and the other with cognitive perspective taking. Emotional empathy is conceived as a bottom-up process that enables “feeling with someone else” through perception-action coupling of affective cues (Preston and de Waal, 2002; Goldman, 2006). Such simulation or “mirroring” models maintain that empathy is subserved by the activation of similar sensorimotor, paralimbic and limbic representations both when one observes another and when one experiences the same action and emotional state oneself (Gallese, 2003; Iacoboni, 2009). This proposed mechanism is generally considered to be pre-reflective and phylogenetically ancient; it has also been linked behaviorally to emotional contagion, or the propensity to “catch” others’ feeling states and unconsciously co-experience them (Hatfield et al., 1994). For example, several imaging studies have found evidence for shared representation of observed/experienced pain in anterior cingulate and anterior insula (Singer et al., 2004; Decety and Lamm, 2006; de Vignemont and Singer, 2006), as well as somatosensory cortex (Bufalari et al., 2007). Similarly, disgust for smells and tastes has been shown to recruit the insula during both perception and action (Wicker et al., 2003; Jabbi et al., 2007), and insula has been proposed as a relay between a sensorimotor fronto-parietal circuit with mirror properties and the amygdala in observation and imitation of emotional facial expressions (Carr et al., 2003). There is also evidence that insula functions similarly in music-induced emotions (Molnar-Szakacs and Overy, 2006; Trost et al., 2015), particularly involving negative valence (Wallmark et al., 2018).

In contrast to emotional empathy, trait cognitive empathy has been conceived as a deliberative tendency to engage in top-down, imaginative transpositions of the self into the “other’s shoes,” with concomitant reliance upon areas of the brain associated with theory of mind (Saxe and Kanwisher, 2003; Goldman, 2006), executive control (Christov-Moore and Iacoboni, 2016), and contextual appraisal (de Vignemont and Singer, 2006), including medial, ventral and orbital parts of the prefrontal cortex (PFC; Chakrabarti et al., 2006; Banissy et al., 2012); anterior cingulate (Singer and Lamm, 2009); somatomotor areas (Gazzola et al., 2006); temporoparietal junction (TPJ; Lamm et al., 2011); and precuneus/posterior cingulate (Chakrabarti et al., 2006). As implied in the functional overlap between certain emotional and cognitive empathy circuits, some have argued that the two routes are neither hierarchical nor mutually exclusive (Decety and Lamm, 2006): cognitive perspective taking is premised upon emotional empathy, though it may, in turn, exert top-down control over contagion circuits, modifying emotional reactivity in light of contextual cues and more complex social appraisals (Christov-Moore and Iacoboni, 2016; Christov-Moore et al., 2017b).

Brain studies have converged upon the importance of the human mirror neuron system in action understanding, imitation and empathy (Iacoboni, 2009); its involvement has been demonstrated in multiple sensorimotor domains, including the perception of action sounds (for a review see Aglioti and Pazzaglia, 2010). Mirror properties were initially reported in the inferior frontal gyrus (IFG) and the inferior parietal lobule (IPL; Iacoboni et al., 1999; Shamay-Tsoory, 2010); consistent with simulation theories of trait empathy, moreover, activity in these and other sensorimotor mirror circuits has been found to correlate with IRI scales in a variety of experimental tasks, including viewing emotional facial expressions (all IRI scales; Pfeifer et al., 2008); video of grasping actions (EC and FS; Kaplan and Iacoboni, 2006); and video of hands injected with a needle (PT and PD; Bufalari et al., 2007; Avenanti et al., 2009). That is, high-empathy people tend to exhibit greater activation in mirror regions during the observation of others. Simulation mechanisms also appear to underpin prosocial decision-making (Christov-Moore and Iacoboni, 2016; Christov-Moore et al., 2017b). Implication of inferior frontal and inferior parietal mirror neuron areas is not a universal finding in the empathy literature, and some have suggested that it may reflect specific socially relevant tasks or stimulus types, not empathy in and of itself (Fan et al., 2011). However, evidence for mirror properties in single cells of the primate brain now exists in medial frontal and medial temporal cortex (Mukamel et al., 2010), dorsal premotor and primary motor cortex (Tkach et al., 2007), lateral intraparietal area (Shepherd et al., 2009), and ventral intraparietal area (Ishida et al., 2009). This means that in brain imaging data the activity of multiple brain areas may potentially be driven by cells with mirror properties.

In addition to studies using visual tasks, auditory studies have revealed correlations between mirror neuron activity and trait empathy. Gazzola et al. (2006), for instance, reported increased premotor and somatosensory activity associated with PT during a manual action sound listening task. A similar link was observed between IFG and PD scores while participants listened to emotional speech prosody (Aziz-Zadeh et al., 2010). To date, however, no studies have investigated whether individual differences in empathy modulate processing of more socially complex auditory stimuli, such as music.

Study Aim

To investigate the neural substrates underlying the relationship between trait empathy and music, we carried out two experiments using fMRI. In Experiment 1, we focused on a single low-level attribute of musical sound—timbre, or “tone color”—to investigate the effects of empathy on how listeners process isolated vocal and instrumental sounds outside of musical context. We tested two main hypotheses: First, we anticipated that trait empathy (measured with the IRI) would be correlated with increased recruitment of empathy circuits even when listening to brief isolated sounds out of musical context (Gazzola et al., 2006). Second, following an embodied cognitive view of timbre perception (Wallmark et al., 2018), we hypothesized that subjectively and acoustically “noisy” timbral qualities would preferentially engage the emotional empathy system among higher empathy listeners. Abrasive, noisy acoustic features in human and many non-human mammal vocalizations are often signs of distress, pain, or aggression (Tsai et al., 2010): such state cues may elicit heightened responses among people with higher levels of trait EC.

To explore the relationship between trait empathy and music processing, in Experiment 2 participants passively listened to excerpts of self-selected and experimenter-selected “liked” and “disliked” music in familiar and unfamiliar conditions while being scanned. Musical preference and familiarity have been shown to modulate neural response (Blood et al., 1999; Pereira et al., 2011). Extending previous research on the neural mechanisms of empathy, we predicted that music processing would involve circuitry shared with empathic response in non-musical contexts (Schubert, 2017). Unlike Experiment 1, we had no a priori hypotheses regarding modulatory effects of empathy specific to each of the four music conditions. However, we predicted in both experiments that emotional empathy scales (EC and PD) would be associated with regions of the emotional empathy system in music listening, including sensorimotor, paralimbic and limbic areas, while cognitive empathy scales (PT and FS) would primarily be correlated with activity in prefrontal areas implicated in previous cognitive empathy studies (Singer and Lamm, 2009). Imaging data for both experiments are available online: see Supplementary Material S1 Dataset and S2 Dataset in the online supporting information for NIFTI files of all contrasts reported here.

Experiment 1

Methods

Subjects

Fifteen UCLA undergraduate students were recruited for the study (eight female; 18–20 years old, M age = 19.1, SD = 0.72). All were non-music majors; self-reported years of musical training ranged from no experience to 10 years (M = 3.27, SD = 1.44). Subjects were ethnically diverse (six white, four east Asian, three south Asian, two black), right-handed, had normal or corrected-to-normal vision, and had no history of neuropsychiatric disorder. All were paid $25 for their participation. This study was carried out in accordance with the recommendations of the UCLA Office of the Human Research Protection Program with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. No vulnerable populations were involved as subjects in this research. The protocol was approved following expedited review by the UCLA Institutional Review Board in March 2012. The protocol expired in March 2014, following a 1-year renewal and the completion of all data gathering.

Stimuli

We recorded twelve approximately 2-s stimuli (1.8–2.1 s): three electric guitar, three tenor saxophone, three shakuhachi (Japanese bamboo flute), and three female vocals. For each sound generator, signals were divided into “normal” and “noisy” versions: (1) normal condition; (2) noisy condition #1; and (3) noisy condition #2. For example, the normal saxophone condition (1) consisted of a regular tone, while noisy conditions (2–3) were growled and overblown to create distortion, as shown in the spectrograms in Figure 1. Noisy signals were characterized acoustically by elevated inharmonicity, spectral centroid, spectral flatness, zero-crossing rate, and auditory roughness. Although stimuli were conceived ordinally (normal, medium-noise, high-noise), behavioral and neural evidence suggest that they are perceived dichotomously (i.e., as either not noisy or noisy), as reported in Wallmark et al. (2018). All signals were the same pitch (233 Hz, B♭3) and were manually equalized for loudness. Stimuli were identical to those used in our previous study: for complete details, see Wallmark et al. (2018; see Supplementary Material S1 Stimuli).
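
As an illustration, most of the descriptors named above can be computed with librosa; inharmonicity and auditory roughness require dedicated tools (e.g., the MIRtoolbox) and are not shown. This is a sketch rather than the authors' extraction pipeline, and the audio file name is hypothetical.

```python
import librosa

# Sketch of noise-related timbre descriptors for one stimulus.
# "sax_noisy1.wav" is a hypothetical file name, not one of the
# published stimuli; this is not the authors' analysis code.
y, sr = librosa.load("sax_noisy1.wav", sr=None)

centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()  # brightness
flatness = librosa.feature.spectral_flatness(y=y).mean()         # noise-likeness
zcr = librosa.feature.zero_crossing_rate(y).mean()               # high-frequency content

print(f"centroid {centroid:.1f} Hz, flatness {flatness:.3f}, ZCR {zcr:.3f}")
```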


Figure 1. Saxophone stimuli in three conditions: (1) normal, (2) noisy #1, (3) noisy #2. Spectrogram settings: window size = 1024 (Hanning), 50% overlap.

Behavioral Procedure

Fourteen of the fifteen subjects completed the IRI (Davis, 1980) following the scan (one subject was unable to do so due to scheduling conflicts). Additionally, in order to evaluate the effect of noisiness levels on valence, subjects rated the stimuli on three covarying perceptual scales using a 0–100 bipolar semantic differential rating scale (Wallmark et al., 2018): (1) bodily exertion required to produce each sound (“low exertion-high exertion”), which is generally correlated with acoustic noise in vocal sound production (Tsai et al., 2010); (2) negative valence (“like-dislike”); and (3) perceived noisiness (“not noisy-noisy”). Only 10 of the subjects were able to complete this ratings task, again due to scheduling conflicts.

MRI Procedure

Subjects (N = 15) listened to the randomized stimuli while being scanned. A sound check prior to the functional scan (conducted with earplugs inserted and the scanner running) allowed subjects to adjust the headphone volume to a subjectively determined comfortable listening level. Participants were then instructed to relax and keep their heads still while listening to the stimuli, and to keep their eyes open and their vision trained on a fixation cross presented through magnet-compatible LCD goggles.

We used a block design consisting of an alternation of 15–16 s baseline period of silence with randomized blocks of all 12 stimuli. Each signal was repeated five times in a row with 100 ms of silence between each onset (9.4–10.9 s total per signal in each block). The full block took approximately 135–140 s, and was repeated three times for a total duration of 405–420 s plus final baseline (around 7.25 min).
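
A back-of-envelope check of this timing, taking the midpoints of the ranges given above (mean stimulus duration ~1.95 s, baseline ~15.5 s):

```python
# Rough verification of the run timing described above; all values are
# approximate midpoints of the ranges reported in the text.
reps, gap, n_stimuli, baseline = 5, 0.1, 12, 15.5

per_signal = reps * 1.95 + (reps - 1) * gap   # ~10.2 s per signal (9.4-10.9 s range)
block = baseline + n_stimuli * per_signal     # ~137 s per block (135-140 s range)
total = 3 * block + baseline                  # three blocks plus final baseline

print(f"block ~{block:.0f} s, run ~{total / 60:.1f} min")  # ~137 s, ~7 min
```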

Data Acquisition, Preprocessing and Statistics

Images were acquired on a Siemens 3T Trio MRI scanner. Functional runs employed a continuous scanning protocol comprising 231 T2-weighted echoplanar images (EPIs; repetition time (TR) 2000 ms; echo time (TE) 28 ms; flip angle = 90°; 34 slices; slice thickness 4 mm; matrix 64 × 64; FOV 192 mm) sensitive to blood oxygenation level-dependent (BOLD) contrast. To enable T1 equilibrium, the first two volumes of each functional scan were automatically discarded before data collection commenced. Additionally, two sets of structural images were acquired for registration of functional data: a T2-weighted matched-bandwidth high-resolution scan with the same slice prescription as the EPI (TR 5000 ms; TE 34 ms; flip angle = 90°; 34 slices; slice thickness 4 mm; matrix 128 × 128; FOV 192 mm); and a T1-weighted magnetization prepared rapid-acquisition gradient echo image (MPRAGE; TR 1900 ms; TE 2.26 ms; flip angle = 9°; 176 sagittal slices; slice thickness 1 mm; matrix 256 × 256; FOV 250 mm).

Image preprocessing and data analysis were performed with FSL version 5.0.4. Images were realigned to the middle volume to compensate for any head motion using MCFLIRT (Jenkinson et al., 2002). Volumes were then examined manually for gross motion artifacts that could not be corrected with simple realignment. When motion artifacts were detected, a nuisance regressor for each affected volume was included in the general linear model (GLM). One run for one subject was excluded for excessive motion (more than 10% of volumes exhibiting motion artifacts). Data were temporally filtered with a high-pass filter cutoff of 100 s and spatially smoothed with an 8 mm full width at half maximum Gaussian kernel in three dimensions.
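
For readers who wish to approximate these steps outside FSL, the sketch below reproduces the temporal filtering and spatial smoothing in Python with nilearn; it assumes realignment (MCFLIRT) has already been run, and the file names are hypothetical.

```python
from nilearn import image

# Illustrative re-creation of the filtering/smoothing steps above.
# The authors used FSL 5.0.4; this nilearn version is an approximation,
# and "sub01_run1_mcf.nii.gz" is a hypothetical, already-realigned run.
func = image.load_img("sub01_run1_mcf.nii.gz")

# High-pass filter: 100 s cutoff = 0.01 Hz, with TR = 2 s (per the acquisition above)
func_hp = image.clean_img(func, high_pass=0.01, t_r=2.0,
                          standardize=False, detrend=False)

# Spatial smoothing with an 8 mm FWHM Gaussian kernel
func_smooth = image.smooth_img(func_hp, fwhm=8)
func_smooth.to_filename("sub01_run1_preproc.nii.gz")
```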

Statistical analyses were performed at the single-subject level using a GLM with fMRI Expert Analysis Tool (FEAT, version 6.00). Contrasts included the following: (1) all timbres (task) > baseline; (2) each of the 12 individual stimuli > baseline; (3) intra-instrument comparisons (e.g., Guitar 3 > Guitar 1); (4) inter-timbre comparisons (e.g., all condition 3 > all condition 1); and (5) each instrument > others (e.g., voice > others). Additionally, conditions 1 and 2 were combined for normal > noisy and noisy > normal comparisons. First-level contrast estimates were computed for each run and then registered to standard space (MNI) in three stages. The middle volume of each run of individual EPI data was registered first to the co-planar matched-bandwidth high-resolution T2-weighted image. Following this, the co-planar volume was registered to the T1-weighted MPRAGE. Both of these steps were carried out using FLIRT (affine transformations: EPI to co-planar, df = 6; co-planar to MPRAGE, df = 6; Jenkinson et al., 2002). Registration of the MPRAGE to MNI space (FSL’s MNI Avg152, T1 2 × 2 × 2 mm) was carried out with FLIRT (affine transformation, df = 12). Contrast estimates for each subject were then computed treating each of the three runs as a fixed effect.
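
The following is a minimal sketch of a comparable single-run GLM in nilearn (the analysis itself was carried out in FEAT); the events table is hypothetical and greatly simplified relative to the actual design.

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical, abbreviated events table: block onsets (s), durations (s),
# and condition labels. The real design had 12 stimuli x 3 blocks.
events = pd.DataFrame({
    "onset":      [16.0, 26.5, 37.0],
    "duration":   [10.0, 10.0, 10.0],
    "trial_type": ["normal", "noisy", "noisy"],
})

# TR and high-pass cutoff match the acquisition/preprocessing described above.
model = FirstLevelModel(t_r=2.0, hrf_model="spm", high_pass=0.01)
model = model.fit("sub01_run1_preproc.nii.gz", events=events)

# Contrasts analogous to noisy > normal and task > baseline
z_noisy_gt_normal = model.compute_contrast("noisy - normal", output_type="z_score")
z_task_gt_baseline = model.compute_contrast("normal + noisy", output_type="z_score")
```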

Next, group-level analysis was carried out using FSL FLAME stages 1 and 2 (Beckmann et al., 2003). All images were thresholded at Z > 2.3, p < 0.01, corrected for multiple comparisons using cluster-based Gaussian random field theory controlling family-wise error across the whole brain at p < 0.05 (Friston et al., 1994; Forman et al., 1995). In addition to basic group-level contrasts, IRI scores were added as continuous covariates to assess the neural correlates of the four subscales. Due to moderate (though non-significant) correlations between some of the scales, each of the four IRI covariates was entered into a separate second-level analysis. Finally, to isolate regions of covarying activation that correspond to the task, the region of activation from the task > baseline contrast was used to mask the subsequent contrasts, so that only regions that were also task-positive by that criterion are shown.
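
A sketch of this covariate analysis, under stated assumptions: one IRI subscale (here EC, with placeholder scores) is entered as a mean-centered continuous regressor over per-subject contrast images. FSL FLAME's Gaussian random field cluster correction has no exact nilearn equivalent, so the thresholding shown is only an approximation.

```python
import pandas as pd
from nilearn.glm import threshold_stats_img
from nilearn.glm.second_level import SecondLevelModel

# Hypothetical per-subject contrast images and placeholder EC scores
# (14 subjects completed the IRI in Experiment 1).
contrast_imgs = [f"sub{i:02d}_task_gt_baseline.nii.gz" for i in range(1, 15)]
ec = pd.Series([22, 18, 27, 20, 25, 19, 24, 21, 26, 23, 17, 28, 20, 22])

design = pd.DataFrame({
    "intercept": 1.0,
    "EC": ec - ec.mean(),  # mean-centered continuous covariate
})

model = SecondLevelModel().fit(contrast_imgs, design_matrix=design)
z_map = model.compute_contrast("EC", output_type="z_score")

# Voxelwise p < 0.01 (Z ~ 2.3) with a minimum cluster extent; this only
# approximates FLAME's GRF-based cluster correction.
thr_map, threshold = threshold_stats_img(z_map, alpha=0.01,
                                         height_control="fpr",
                                         cluster_threshold=50)
```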

Behavioral Results

Normality of the distributions of the IRI and perceptual data was assessed (Shapiro-Wilk); to correct for violations in the perceptual dataset, 5 of the 36 variables were transformed using an inverse-normal procedure (Templeton, 2011). Scores for the IRI subscales were then compared using repeated-measures analysis of variance (ANOVA). The test revealed a significant difference between the four scales, F(3,39) = 18.28, p < 0.0001, ηp² = 0.58; post hoc testing (Bonferroni) found that mean PD scores were significantly lower than the other three scales. However, subscales were only modestly reliable (M Cronbach’s α = 0.54). The four scales were not significantly correlated with one another.
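
For reference, Cronbach's α for a subscale can be computed directly from the item scores as α = k/(k − 1) × (1 − Σs²ᵢ/s²ₜ), where k is the number of items, s²ᵢ the variance of item i, and s²ₜ the variance of the subscale totals. A minimal implementation:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha. items: (n_subjects, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of subscale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)
```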

To verify whether there was a reliable difference in valence ratings between normal and noisy stimuli, we performed another repeated-measures ANOVA on the perceptual ratings data (a 3 × 4 × 2 design with three perceptual scales, four sound generators and binary timbral noisiness (normal condition 1 vs. the mean of noisy conditions 2 and 3)). As expected, no significant main effect of perceptual scale (bodily exertion, negative valence, noisiness) was revealed, F(2,12) = 0.6, p = 0.94, ηp² = 0.01, indicating that all three tapped a similar affective structure (for this reason, only valence was included in subsequent analyses). The main effect of sound generator was significant, F(3,18) = 3.34, p = 0.04, ηp² = 0.34; timbral noisiness also had a large effect on ratings, F(2,12) = 7.51, p < 0.01, ηp² = 0.56, with “noisier” timbres rated significantly lower than the normal condition. Two-way interactions between perceptual scale × sound generator and sound generator × timbral noisiness were likewise significant (p < 0.05), and appear to have been driven by the electric guitar and the female voice, which did not differ in bodily exertion and noisiness means but crossed substantially in negative valence and timbral noisiness (the voice was perceived as significantly noisier and more negatively valenced than the guitar).

Since many studies have shown a gender difference in IRI scores (Mehrabian et al., 1988; Davis, 1996), we next tested for behavioral and neural effects of gender. Females showed significantly higher EC scores than males, t(13) = 5.44, p < 0.0001, Cohen’s d = 2.91; no other subscales differed significantly between the sexes. To investigate the possible effect of the gender difference in EC on the imaging results (Derntl et al., 2010), we added sex as a covariate in another second-level analysis. This analysis revealed increased activation of the brain stem among females compared to males in the task > baseline contrast, which is consistent with other studies (Filkowski et al., 2017). However, this result did not survive masking for the task. No other significant differences were found. The same confirmatory analysis was carried out in Experiment 2, which likewise yielded no significant sex differences aside from EC. Though the sample size for this comparison was small, we concluded for the purposes of this study that sex was not a significant neurophysiological factor in music processing.

Imaging Results

We evaluated the effect of trait empathy on the processing of musical timbre in three basic conditions: (1) task > baseline (i.e., sound > silence); (2) positively valenced (normal) > negatively valenced (noisy) timbres; and (3) noisy > normal timbres. Results for these three contrasts are organized according to IRI subscales in Table 1. With trait empathy scores added as covariates to our model, we found that neural responses to the valence of timbre are differentiated by IRI subscale. PT was associated with activation in bilateral sensorimotor areas when listening to aggregated timbres (task > baseline), including SMA and anterior cingulate (ACC), primary motor cortex and primary somatosensory cortex (SI), as shown in Figure 2. FS also involved SMA activation in the task > baseline contrast, in addition to ventrolateral PFC (VLPFC). FS scores were correlated with both directions of the valence contrast: normal timbres modulated activity in left TPJ, inferior/middle frontal gyrus (IFG/MFG) and anterior insula cortex (AIC), while noisy timbres preferentially engaged medial prefrontal (MPFC/VMPFC) and temporal areas, as well as precuneus (PCUN). Both directions of valence modulated activity in IPL.


Table 1. Experiment 1 results by interpersonal reactivity index (IRI) subscale.


Figure 2. Selected activation sites correlating with trait empathy (IRI subscales) in three contrasts. Perspective taking (PT) = blue, fantasy (FS) = green, empathic concern (EC) = red. All contrasts, Z > 2.3, p < 0.01 (cluster corrected, p < 0.05).

On the emotional empathy scales, we found that in the task > baseline contrast EC modulated activity in a wide swath of bilateral motor (SMA, IPL, IFG), auditory (STG), and somatosensory (SI and SII) areas, in addition to cerebellum and AIC. Some of this sensorimotor activity corresponds to areas also implicated in PT. EC was also correlated with activation of SMA in the noisy > normal contrast, indicating a motor component in the processing of aversive sounds among listeners with higher emotional empathy. PD was not significantly correlated with BOLD signal change in any of the contrasts.

In sum, Experiment 1 demonstrated that trait empathy is correlated with increased activation of circuitry often associated with emotional contagion, including sensorimotor areas and insula, in the perception of isolated musical timbres. FS and EC also appear to be sensitive to the affective connotations of the stimuli. Timbre is arguably the most basic and quickly processed building block of music (Tervaniemi et al., 1997). Though sufficient to recruit empathy areas, however, these brief stimuli do not constitute “music” per se. In Experiment 2, we turned our focus to more naturalistic stimuli—including excerpts of music selected in advance by participants—in order to explore the effect of trait empathy on the processing of music.

Experiment 2

Methods

Subjects

Twenty UCLA undergraduates (13 female, seven male; 18–20 years old, M age = 19.1, SD = 0.72) with a range of musical backgrounds were recruited (all non-music majors; M years of musical training = 5, SD = 3.78). Subjects were ethnically diverse (seven white, five east Asian, four south Asian, two Hispanic, two black), right-handed, had normal or corrected-to-normal vision, and had no history of neuropsychiatric disorder. Ten of the subjects also participated in Experiment 1. To ensure that only individuals with strong musical preferences enrolled in the study, we specified in recruitment materials that interested individuals must regularly experience “intense positive and negative emotions when listening to music.” Subjects were paid $50 for their participation. The experiment was approved by the UCLA IRB.

Stimuli

Stimuli consisted of sixteen 16-s excerpts of recorded music, half of which were individually selected in pre-scan meetings with each participant. Because musical preference (i.e., liking or disliking) and familiarity have been shown to modulate neural response (Blood et al., 1999; Pereira et al., 2011), we decided to subdivide stimuli into four categories: familiar liked (FL), familiar disliked (FD), unfamiliar liked (UL) and unfamiliar disliked (UD). For FL excerpts, subjects brought us four songs they “love” and, for FD, four songs they “hate.” During the meeting and over follow-up communications with subjects, we collaboratively defined the “best” (or “worst”) part of each song for use in the scanner, which typically corresponded to the chorus, the beginning of the first verse, or the introduction. Prior to the scan, all excerpts were approved by the subjects as an accurate reflection of their musical “loves” and “hates.”

The other eight stimuli were selected by the researchers, in consultation with a popular music scholar at UCLA, to match the two categories of self-selected music with UL and UD excerpts. Selections were based on three general criteria: (1) they should roughly match the stylistic and generic features of the familiar songs; (2) they should take into account additional comments relating to musical tastes and affective orientations made by subjects during the meeting; and (3) they should be relatively obscure to typical undergraduate non-music majors, so that the in-scanner hearing represents subjects’ first exposure to the song (for a complete list of stimuli used in the experiment, see Supplementary Material S2 Table 1).

All audio files were trimmed to representative 16-s excerpts with 500-ms amplitude ramps on either end of the signal. Loudness was equalized manually. Control conditions were 16 s of silence (eight times during the MRI run) and a 16-s clip of pink noise (eight times).
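
A sketch of this excerpt preparation is shown below; the ramp shape was not specified in the text, so a linear ramp is assumed, and the file name is hypothetical.

```python
import numpy as np
import soundfile as sf  # one possible I/O choice; any WAV reader would do

# Trim to a 16-s excerpt and apply 500-ms amplitude ramps at both ends.
# "excerpt_FL_01.wav" is a hypothetical file; linear ramps are an assumption.
y, sr = sf.read("excerpt_FL_01.wav")
if y.ndim > 1:
    y = y.mean(axis=1)          # mix down to mono for simplicity
y = y[: int(16 * sr)]           # 16-s excerpt

ramp = np.linspace(0.0, 1.0, int(0.5 * sr))
y[: ramp.size] *= ramp          # 500-ms fade in
y[-ramp.size:] *= ramp[::-1]    # 500-ms fade out

sf.write("excerpt_FL_01_ramped.wav", y, sr)
```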

Procedure

The MRI procedure was similar to Experiment 1. Subjects (N = 20) were instructed to passively listen to the randomized stimuli while being scanned. We employed a block design in which each stimulus was presented once, with 16-s baseline silence or noise between each musical excerpt. The full scan took approximately 9 min, following which 19 subjects completed the IRI questionnaire in a quiet room. Sixteen of the participants also rated their preference for the self- and researcher-selected excerpts using a 0–100 horizontal numbered bipolar scale (“strongly dislike-strongly like”).

MRI data acquisition, preprocessing, and statistics were identical to Experiment 1. Contrasts included: (1) each category > baseline silence and noise (e.g., FL > silence/noise); (2) inter-categorical contrasts (e.g., FD > FL); (3) cross-categorical contrasts (e.g., FL > UL); (4) aggregate contrasts (e.g., L > D); and (5) interactions. As in Experiment 1, all images were thresholded at Z > 2.3 (p < 0.01), corrected for multiple comparisons across the whole brain at p < 0.05. IRI scores were added as covariates in four separate analyses, and the region of activation from the task > baseline contrast was used to mask the subsequent contrasts, so that only regions that were also task-positive by that criterion are shown.

Behavioral Results

We first examined differences between IRI scales and preference ratings in the four stimulus conditions. A test for normality of distribution (Shapiro-Wilk) resulted in the transformation of data for two IRI scales and two preference ratings (Templeton, 2011). Preference ratings for one subject were omitted as an outlier based on a criterion of ±2 SDs from the mean. As in Experiment 1, a repeated-measures ANOVA revealed a significant difference between IRI subscales, F(3,54) = 28.33, p < 0.0001, ηp² = 0.61, and as anticipated, in a post hoc test (Bonferroni) PD was found to differ significantly from the others (all p < 0.05). The four subscales registered acceptable internal consistency (M Cronbach’s α = 0.65), and were not significantly correlated after correcting for multiple comparisons (False Discovery Rate method; see Benjamini and Hochberg, 1995). To confirm main effects of the four familiarity/valence conditions on mean preference ratings, we ran another repeated-measures ANOVA, which revealed a large main effect, F(3,45) = 152.93, p < 0.0001, ηp² = 0.91. Significance of all pairwise comparisons was confirmed at p < 0.0001 (Bonferroni; see Supplementary Material S2 Table 2 for descriptive statistics of Experiment 2 behavioral data).

PT, FS and PD were not strongly associated with the preference ratings, but, as shown in Figure 3, EC showed strong correlations with preference for FL music, r(14) = 0.70, p < 0.01; FD, r(14) = −0.64, p = 0.02; liked music (i.e., both familiar and unfamiliar), r(14) = 0.57, p = 0.03; and unfamiliar music (both liked and disliked), r(14) = 0.56, p = 0.03, significance levels adjusted for multiple comparisons (False Discovery Rate). The correlation between empathy and liking of unfamiliar music suggests that empathic people in our sample were more likely to be affectively open-minded to new music, even excerpts they disliked.
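
These correlations and their FDR adjustment are straightforward to reproduce; the sketch below uses placeholder data with n = 16, matching the reported degrees of freedom (r(14)).

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

# Placeholder data standing in for the real ratings (n = 16 raters);
# this sketch shows only the correlation + Benjamini-Hochberg procedure.
rng = np.random.default_rng(0)
ec = rng.normal(21, 4, 16)                    # EC scores (placeholder)
ratings = {cond: rng.uniform(0, 100, 16)      # preference ratings (placeholder)
           for cond in ["FL", "FD", "liked", "unfamiliar"]}

rs, ps = zip(*(pearsonr(ec, ratings[cond]) for cond in ratings))
reject, p_adj, _, _ = multipletests(ps, alpha=0.05, method="fdr_bh")

for cond, r, p in zip(ratings, rs, p_adj):
    print(f"{cond}: r = {r:.2f}, FDR-adjusted p = {p:.3f}")
```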


Figure 3. Correlations between EC and musical preference: (1) ratings for familiar liked (FL) excerpts; (2) ratings for familiar disliked (FD) music; (3) ratings for all liked excerpts (familiar and unfamiliar); and (4) ratings for all unfamiliar music (liked and disliked).

Imaging Results

In the basic group-level analysis (no empathy covariates), we found the involvement of left ventral pallidum (VP) and thalamus in musical liking (L > D). This result is consistent with previous research on the neurophysiology of musical preference, which has broadly confirmed the role of basal ganglia reward circuitry in musical pleasure (Blood et al., 1999; Salimpoor et al., 2013). In contrast, musical disliking (D > L) was accompanied by activity in right AIC, MPFC, OFC, superior temporal gyrus (STG) and amygdala/parahippocampus. AIC has been implicated in most emotional empathy studies; it has also been found to contribute to both positive (Koelsch et al., 2006) and negative reactions to musical stimuli (Wallmark et al., 2018). Amygdala and OFC are also often involved in negative affect (Phan et al., 2002), and connectivity between these two areas is indicative of emotional regulation (Banks et al., 2007). Moreover, lateralization of affective response—left with positive, right with negative—is also consistent with other studies (Hellige, 1993). Finally, unfamiliar music was associated with enhanced activation of bilateral superior frontal gyrus (SFG) compared to familiar—possibly indicating heightened attention—while familiar music involved the contribution of a dense network of areas across the whole brain, including bilateral IPL, anterior cingulate and paracingulate, premotor cortex, SMA, medial prefrontal areas, and cerebellum (Janata et al., 2007; Pereira et al., 2011). Figure 4 displays results for these four basic contrasts (see also Supplementary Material S2 Table 3).


Figure 4. Selected results from four basic contrasts. Familiar > Unfamiliar is displayed as a heat map in seven axial slices; Unfamiliar > Familiar = blue; Liked > Disliked = red; Disliked > Liked = green. All contrasts, Z > 2.3, p < 0.01 (cluster corrected, p < 0.05).

With IRI covariates added in a second-level analysis, we found neural signatures distinctive to all subscales. As shown in Table 2, PT was associated with activity in the left TPJ in response to the task. Familiar liked music covaried with PT in cerebellum, TPJ, SFG and the posterior part of the cingulate (PCC) compared to FD music; further, in comparison with UL music, FL showed increased activation in a large region of the right PFC. A significant interaction effect between liking and familiarity was observed in the right dorsolateral PFC (DLPFC). In contrast, FS exhibited significant correlations with activity in dorsal striatum and limbic areas—including caudate, putamen, thalamus, fornix, hippocampus and amygdala—as a function of musical familiarity (F > U) and liking (FL > UL). Activations for IRI covariates in selected contrasts are shown in Figure 5.


Table 2. Experiment 2 results by IRI subscale.


Figure 5. Activation sites correlating with trait empathy (IRI subscales) in selected contrasts. PT = blue, FS = green, EC = red. All contrasts, Z > 2.3, p < 0.01 (cluster corrected, p < 0.05).

EC also showed sensitivity to familiarity, which was correlated with activity in dorsomedial PFC (extending ventrally to paracingulate), IPL, DLPFC and IFG, cerebellum, visual areas (lingual gyrus and occipital pole), dorsal striatum, VMPFC, VLPFC and amygdala in the F > U contrast. This strong familiarity effect could also be seen in a number of other contrasts, including FL > FD, which showed a correlation with activity in middle temporal gyrus; FL > UL, which added activity in right posterior IFG; and the interaction between liking and familiarity. Additionally, EC revealed some rare correlations with disliked music, including the head of the caudate in the FD > UD contrast, as well as DLPFC in the UD > UL contrast. Finally, the PD scale was associated with activation in the right MFG in the FD > UD contrast. This was the only significant result for this subscale in either experiment.

Discussion

The present study demonstrates that trait empathy is correlated with neurophysiological differences in music processing. Music has long been conceived as a social stimulus (North and Hargreaves, 2008; Livingstone and Thompson, 2009; Aucouturier and Canonne, 2017). Supporting this view, our study offers novel evidence that neural circuitry involved in trait empathy is active to a greater degree in empathic individuals during perception of both simple musical tones and full musical excerpts. Individual differences in empathy are reflected in differential recruitment of core empathy networks (Fan et al., 2011) during music listening; specifically, IRI subscales were found to correlate with activity in regions associated with both emotional (e.g., sensorimotor regions, insular and cingulate cortex) and cognitive empathy (e.g., PFC, TPJ) during passive listening tasks.

Our main hypotheses were confirmed, though with an unexpected twist regarding the two putative empathy types (at least as structured by the IRI). Both experiments seem to suggest interactions between bottom-up and top-down processes (indexed in our study by both IRI scores and activity in neural systems) in empathy-modulated music listening. This is in line with recent findings in prosocial decision making studies (Christov-Moore and Iacoboni, 2016; Christov-Moore et al., 2017a,b). Stimulus type, however, seems associated with different patterns of neural systems engagement. In Experiment 1, sensorimotor areas were more frequently modulated by trait empathy in the processing of musical timbre; conversely, in Experiment 2, cognitive areas were more frequently modulated by trait empathy in the processing of (familiar) music. Together this suggests that, contrary to our initial hypothesis for Experiment 2, modulation of neural activity by empathy was driven more by stimulus type than by empathy type; that is, the emotional empathy subscale (EC) was no more selective to emotional contagion circuitry than cognitive empathy scales (PT and FS), and vice versa (the PD scale did not reveal any significant correlations with brain activity). In what follows, we interpret these results and discuss their implications.

Empathy-Modulated Sensorimotor Engagement in Timbre Processing

Using isolated 2-s instrument and vocal tones as stimuli, Experiment 1 found that the four IRI subscales modulated response to timbre. First, we found that cognitive perspective taking (PT) was correlated with activity in motor areas (SMA and primary motor cortex), SI and anterior cingulate (ACC). This finding is in line with numerous studies suggesting a role for ACC and SI in emotional empathy (Bufalari et al., 2007; Singer and Lamm, 2009); it also replicates a result of Gazzola et al. (2006), who reported a correlation of somatomotor activity and PT scores in an action sound listening task. Activity in these regions may suggest a sensorimotor simulation process whereby high-PT individuals imitate internally some aspect of the production of these sounds. This result could be explained in light of Cox’s (2016) “mimetic hypothesis,” according to which music is understood by way of covert or overt motor reenactments of sound-producing physical gestures. It is quite conceivable that people who are inclined to imagine themselves from others’ perspectives also tend to take up the physical actions implied by others’ musical sounds, whether a smooth and gentle voice, a growled saxophone, or any other musical sound reflecting human actions. It is intriguing, however, that PT was not implicated in the processing of positive or negative valence. One might assume that perspective takers possess a neural preference for “good” sounds: for example, one study reported activation of larynx control areas in the Rolandic operculum while subjects listened to pleasant music (but not unpleasant), suggesting subvocalization only to positively valenced music (Koelsch et al., 2006). Our results, however, indicate that PT is not selective to valence in these sensorimotor areas.

FS also revealed motor involvement (SMA) in the task > baseline contrast. Unlike PT, FS appeared to be sensitive to both positive and negative valence of timbres: we found activity in left TPJ and Broca’s area of the IFG associated with positively valenced timbres, and temporal, parietal and prefrontal activations associated with disliked timbres. TPJ is an important structure for theory of mind (Saxe and Kanwisher, 2003; Young et al., 2010). Together with Broca’s area—a well-studied language and voice-specific motor region (Watkins and Paus, 2004; Brown et al., 2008) that has been implicated in emotional empathy (Fan et al., 2011; Cheetham et al., 2014)—it is plausible to suggest that individuals who are prone to fantasizing may exhibit a greater tendency to attribute mental states to the virtual human agents responsible for making musical sounds, and that this attribution would be more pronounced for positively valenced stimuli (Warren et al., 2006).

As hypothesized, EC was correlated with activation in a range of areas previously implicated in empathy studies, including IPL, IFG and SMA, along with SI, STG, cerebellum and AIC (Iacoboni, 2009). It was also sensitive to negative valence: noisy timbres were processed with greater involvement from SMA in individuals with higher EC. EC is an “other-oriented” emotional scale measuring sympathy or compassion towards the misfortune of others (Batson, 1991; Davis, 1996). Since noisy, distorted qualities of vocal timbre are an index of generally high-arousal, negatively valenced affective states (Tsai et al., 2010), we theorize that individuals with higher trait EC exhibited greater motor attunement owing to the ecological urgency typically signaled by such sound events. In short, we usually deploy harsh vocal timbres when distressed or endangered (e.g., screaming or shouting), not during affectively positive or neutral low-arousal states, and high-empathy people are more likely to pick up on and simulate the affective motor implications of others in distress. Though our sensitivity to the human voice is especially acute (Belin et al., 2000), researchers have hypothesized that instrumental timbre can similarly function as a “superexpressive voice” via acoustic similarities to emotional vocal expression (Juslin and Västfjäll, 2008). Our result would seem to support this theory, as motor response appears to encode the combined effects of noisy tones, both vocal and instrumental.

It is also worth noting, as might be expected given the above, that noisy voice produced a unique signature of activation among high FS and EC participants relative to the normal vocal stimuli (Supplementary Material S1 Figure 1; Supplementary Material S1 Tables 1, 2): FS modulated processing of the noisy voice in SII and IPL, while EC was selective to noisy vocal sounds in the SMA and primary motor cortex. This result appears to be at odds with other studies of vocal affect sensitivity that report motor-mimetic selectivity for pleasant vocalizations (Warren et al., 2006; Wallmark et al., 2018). It is likely that individual differences in empathy (plus other mediating factors) predispose listeners to differing orientations towards others’ affective vocalizations, with empathic listeners more likely to “catch” the motor-affective implications of aversive sounds than low-empathy people, who might only respond to sounds they find pleasant while tuning out negatively valenced vocalizations. Cox (2016) theorizes that music can afford listeners an “invitation” for motor engagement, which they may choose to accept or decline. Seen from this perspective, it is likely that individual differences in empathy play an important role in determining how we choose to respond to music’s motor invitations.

Regarding motor engagement, across IRI subscales it is apparent that SMA is the most prominent sensorimotor area involved in empathy-modulated processing of timbre. SMA is a frequently reported yet undertheorized part of the core empathy network (Fan et al., 2011); it has also been implicated in internally generated movement and coordination of action sequences (Nguyen et al., 2014), and has been shown in a single-neuron study to possess mirror properties (Mukamel et al., 2010). Most relevant to the present study, moreover, SMA contributes to the vividness of auditory imagery, including imagery for timbre (Halpern et al., 2004; Lima et al., 2015). Halpern et al. (2004) attributed SMA activity in an auditory imagery task in part to subvocalization of timbral attributes, and the present study would seem to partially corroborate this explanation. We interpret this result as a possible instance of sensorimotor integration: SMA activity could reflect a basic propensity to link sounds with their associated actions, which are internally mirrored while listening. In accordance with this view, we would argue that people do not just passively listen to different qualities of musical timbre—they enact some of the underlying physical determinants of sound production, whether through subvocalization (Halpern et al., 2004), biography specific act-sound associations (Bangert et al., 2006; Margulis et al., 2009), or other theorized mechanisms of audio-motor coupling (Cox, 2016).

To summarize, sensorimotor areas have been implicated in many previous studies of emotional empathy, including IFG and IPL (Carr et al., 2003; Shamay-Tsoory, 2010); “pain circuit” areas in AIC and ACC (Singer et al., 2004; Shamay-Tsoory, 2010); and somatomotor regions (i.e., pre/primary motor cortices and SI/SII; Carr et al., 2003; Gazzola et al., 2006; Pfeifer et al., 2008). Interestingly, these precise regions dominated results of the Experiment 1 timbre listening task. This is true, moreover, for both emotional and cognitive scales: PT and FS, though often implicated in cognitive tasks (Banissy et al., 2012), were found in this experiment to modulate SMA, SI, primary motor cortex, IPL, AIC and IFG, well-documented motor-affective areas. We theorize that the contextual impoverishment and short duration of the timbre listening task (2-s isolated tones) may have largely precluded any genuine perspective taking or fantasizing from occurring—it is much harder to put oneself in the “shoes” of a single isolated voice or instrument, of course, than it is an affectively rich piece of actual music. However, even in the absence of conscious cognitive empathizing, which presumably would have been reflected in engagement of the cognitive empathy system, individuals with high trait PT and FS still showed selective activations of sensorimotor and affective relay circuits typically associated with emotional empathy. This could be interpreted to suggest that the two “routes” to empathy are not dissociated in music listening: although conscious PT in response to abbreviated auditory cues is unlikely, people who frequently imagine themselves in the positions of others also exhibit a tendency toward motor resonance in this basic listening task, even when musical context is missing.

Prefrontal and Reward Activation During Music Listening

Experiment 2 used 16-s excerpts of self- and experimenter-selected music to explore the effect of dispositional empathy on the processing of music in four conditions, familiar liked (FL), familiar disliked (FD), unfamiliar liked (UL), and unfamiliar disliked (UD). Participants consisted of individuals who reported regularly experiencing intense emotional reactions while listening to music. Musical liking is associated at the group level (i.e., no IRI covariates) with left basal ganglia reward areas, and disliking with activity in right AIC, primary auditory cortex and prefrontal areas (OFC and VLPFC). Musical familiarity is associated with activation across a broad region of the cortex, subcortical areas, and cerebellum, including IPL, premotor cortex and the core empathy network (Fan et al., 2011), while unfamiliarity recruits only the SFG. This robust familiarity effect is even more acute among high-empathy listeners: after adding empathy covariates to our analysis, there were no regions that demonstrated an affect-specific response after controlling for familiarity. This result is consistent with the literature in showing a large neurophysiological effect of familiarity on musical liking (Pereira et al., 2011); it appears that trait empathy, as well, modulates responses to familiar music to a greater degree than unfamiliar music.

Contrary to expectations, activation in regions primarily associated with emotional empathy (e.g., sensorimotor areas, ACC, AIC) was not a major component of empathy-modulated music processing. Instead, the most prominent activation sites for the PT and EC scales were prefrontal, including medial, lateral, and orbital portions of the cortex, as well as TPJ. These regions are involved in executive control, regulation of emotions, mentalizing, contextual appraisal, and “enactment imagination” (Goldman, 2006), and have figured prominently in many studies on the neurophysiology of cognitive empathy (Decety and Grèzes, 2006; Frith and Frith, 2006). Additionally, FS and EC results were characterized by dorsal striatum activation when participants listened to familiar music. This basal ganglia structure has been frequently reported in empathy studies but not often discussed (Fan et al., 2011); it has also long been associated with musical pleasure (Blood et al., 1999; Salimpoor et al., 2011, 2013). Our results replicate this association and suggest that empathic people experience a higher degree of reward and motivation when listening to familiar music than do lower-empathy people.

PT was associated with left TPJ in the task > baseline contrast. Activation of this region among perspective-takers is consistent with studies implicating TPJ in theory of mind (Saxe and Kanwisher, 2003) and the merging of self and other (Lawrence et al., 2006). The TPJ was joined by posterior cingulate, cerebellum and superior prefrontal areas during listening to familiar liked music (FL > FD), the former two of which were also identified in a study on the neural bases of perspective taking (Jackson et al., 2006). Interestingly, these results differ substantially from the PT correlations in Experiment 1, which were entirely sensorimotor. In the context of isolated musical sounds, PT results were interpreted as a reflection of covert imitation (or enactive perspective taking); here, in contrast, PT appears to reflect a more cognitively mediated, mental form of perspective taking, one that conceivably extends beyond action-perception coupling of musicians’ affective motor cues to encompass contextual appraisal, assessment of the affective intent embodied in the music, and other executive functions.

In contrast to the prominent TPJ and prefrontal activation associated with PT, FS results revealed activation of dorsal striatum (caudate and putamen) and limbic areas (thalamus, hippocampus and amygdala). Activation of reward and emotion centers may suggest that fantasizers also tend to exhibit heightened positive emotional reactions to familiar music. Indeed, we found a moderate correlation between FS and preference ratings for familiar liked music, r(14) = 0.52, which may tentatively corroborate this claim. Moreover, structural brain studies have found that FS is associated with increased gray matter volume in hippocampus (Cheetham et al., 2014), an important memory area, perhaps also indicating enhanced encoding of familiar liked music among fantasizers.
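
For readers unfamiliar with the reporting convention, r(14) denotes a Pearson correlation with 14 degrees of freedom (n − 2), implying n = 16 participants. A minimal sketch of the computation, using invented scores purely for illustration, might look as follows:

```python
# Illustrative only: invented FS scores and preference ratings, showing how a
# correlation reported as r(14) (i.e., n = 16, df = n - 2) is computed.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
fs = rng.normal(size=16)                                 # hypothetical IRI Fantasy (FS) scores
fl_ratings = 0.5 * fs + rng.normal(scale=0.9, size=16)   # hypothetical ratings for familiar liked music

r, p = pearsonr(fs, fl_ratings)
print(f"r({len(fs) - 2}) = {r:.2f}, p = {p:.3f}")
```

At this sample size the confidence interval around a correlation of 0.52 is wide, which is one reason the correlation is described above as only tentatively corroborating the claim.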

The contrast in activation between the two IRI cognitive empathy scales (PT and FS) is notable, and may be attributed to the different aspects of empathy they were designed to assess. PT taps the tendency to imagine oneself in other people’s shoes, whereas FS captures the tendency to imagine oneself from the perspective of fictional characters (Davis, 1980, 1996). With this distinction in mind, one could surmise that the two scales also tap different views regarding the ontology of the musical agent: in this reading, people with high trait PT are more likely to take music as a social stimulus, i.e., as if it were a real or virtual human presence (with theory of mind, goals, beliefs), while high-FS listeners are more likely to hear it as “fictional” from a social perspective, i.e., as a rewarding sensory stimulus with an attenuated grip on actual social cognition. Further research is called for to explore the differences between the two cognitive scales as reflected in music listening.

Turning finally to emotional empathy, we found that EC recruits prefrontal, reward and sensorimotor-affective areas in music listening, and is likewise quite sensitive to familiarity. In the Familiar > Unfamiliar contrast, we found activation of cerebellum, IPL, DLPFC, IFG, DMPFC, amygdala, anterior paracingulate, dorsal striatum, OFC and lingual gyrus, and a variation on this general pattern for the Familiar liked > Unfamiliar liked and interaction contrasts. Activation of bilateral IPL and IFG is consistent with mirror accounts of empathy (Shamay-Tsoory, 2010). Furthermore, the ACC, paracingulate, and areas that extend dorsally (SMA, DMPFC) have been proposed as the core of the empathy network (Fan et al., 2011): our result would seem to extend support for the primacy of this region using an experimental task that is not explicitly social in the manner of most empathy studies. Lastly, DLPFC is an important executive control area in cognitive empathy (Christov-Moore and Iacoboni, 2016), and has been implicated in emotion regulation (Ochsner et al., 2004; Quirk and Beer, 2006). Activation of this region may reflect top-down control over affective responses to familiar music, both in terms of up-regulation to liked music and down-regulation to disliked music (or possibly up-regulation to negative stimuli, as open-minded empathic listeners try to “see something positive” in the disliked music). In further research, connectivity analysis between DLPFC and limbic/reward areas may help to specify the neurophysiological mechanisms underlying empathy-modulated emotion regulation during music listening.
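
As a sketch of what such a connectivity analysis might involve (our illustration only; no such analysis was performed in this study), one could correlate denoised time series from a DLPFC seed and a striatal target region, then Fisher-transform the coefficients so they can be compared across conditions or related to IRI scores at the group level:

```python
# Hypothetical seed-to-target functional connectivity sketch (not performed in
# this study). Assumes ROI time series have already been extracted and denoised.
import numpy as np

rng = np.random.default_rng(2)
n_trs = 200                                       # hypothetical number of fMRI volumes
dlpfc = rng.normal(size=n_trs)                    # DLPFC seed time series (simulated)
striatum = 0.4 * dlpfc + rng.normal(size=n_trs)   # dorsal striatum target (simulated)

r = np.corrcoef(dlpfc, striatum)[0, 1]  # Pearson correlation as a simple connectivity index
z = np.arctanh(r)                       # Fisher z-transform for group-level statistics
print(f"connectivity r = {r:.2f}, Fisher z = {z:.2f}")
```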

In addition to motor, cingulate and prefrontal activity, we found recruitment of emotion and reward processing areas as a function of EC and musical familiarity: activation of dorsal striatum (the whole extent of the caudate nucleus, plus thalamus) may reflect increased pleasure in response to familiar music among empathic listeners. It is not surprising that the reward system would show preferential activation to familiar music (Pereira et al., 2011), as confirmed in the basic group Liked > Disliked contrast. The prevalence of basal ganglia activation in both the EC and FS results suggests that trait empathy may effectively sensitize people to the music they already know. This even appears to be the case for disliked music, which showed dorsal striatum activation (along with OFC) in the Familiar disliked > Unfamiliar disliked contrast. This could be interpreted to indicate that empathic people may experience heightened musical pleasure even when listening to music they self-select as “hating,” provided it is familiar. By contrast, no striatum activation was found for any of the unfamiliar music conditions. In concert with limbic circuitry, then, it is apparent that musical familiarity recruits a broad region of the affect-reward system in high-EC listeners.

Activation of inferior parts of the lingual gyrus and occipital lobe was another novel finding, and may also be linked to musical affect. These areas are associated with visual processing, including perception and recognition of familiar sights and emotional facial expressions (Kitada et al., 2010), as well as visual imagery (Kosslyn et al., 2001). It is reasonable to think that empathic listeners may be more prone to visual imagery while listening to familiar music. Visual imagery is an important mechanism of musical affect more generally, and a fairly reliable index of musical engagement and attention (Juslin and Västfjäll, 2008): if high-EC people are more susceptible to musical affect, as suggested by our results, they may also show a greater tendency toward visual imagery in music listening. To be clear, we did not explicitly operationalize visual imagery in this study: in the future, it would be interesting to follow up on this result by comparing visual imagery and music listening tasks using the EC scale as a covariate.

The behavioral data resonate in interesting and sometimes contradictory ways with these imaging findings. We found that EC was strongly associated with preference for liked music and unfamiliar music, and with negative responses to familiar disliked music. These results suggest that high-EC people are more responsive to the affective components of music, as reflected in the polarity of their preference responses. EC was also associated with open-mindedness to new music (i.e., higher ratings for unfamiliar music), though imaging results for this contrast did not reach significance, and this finding might appear to be contradicted by the clear familiarity effect discussed previously. We must be cautious in interpreting these findings owing to the small sample size, but the resonance between behavioral and imaging evidence is nonetheless suggestive of a role for EC in affective responsiveness to familiar music. This conclusion is broadly consistent with previous behavioral studies (Egermann and McAdams, 2013), especially regarding pleasurable responses to sad music (Garrido and Schubert, 2011; Vuoskoski et al., 2012; Eerola et al., 2016).

In sum, the present results provide complementary neural evidence that the involvement of prefrontal and limbic/basal ganglia areas in music listening covaries with individual trait differences in empathy, with sensorimotor engagement playing a smaller role. How do we account for the prominence of cognitive, prefrontal areas in music listening but not in the processing of musical timbre in isolation? It must be noted that a broad swath of the emotional empathy system was involved in the basic task > baseline contrast (used to mask all IRI covariates): in other words, it is clear that music in aggregate is processed with some level of sensorimotor, paralimbic, and limbic involvement, regardless of the empathy level of the listeners or the valence/familiarity of the music (Zatorre et al., 2007). However, our results seem to suggest that empathic people tend to be more attuned to the attribution of human agency and affective intention in the musical signal, as indicated by preferential engagement of cognitive empathy networks including PFC (MPFC and DLPFC) and TPJ (Banissy et al., 2012), as well as reward areas. In other words, what seems to best characterize the high-empathy response to musical stimuli is the tendency to take an extra cognitive step toward identification with some agentive quality of the music, over and above the work of emotional contagion mechanisms alone. Thus while patterns of neural resonance consistent with emotional contagion appear to be common to most experiences of music—and were also found among high-empathy participants in Experiment 1—activation of prefrontal cognitive empathy systems for the PT and EC scales may indicate the tendency of empathic listeners to try to “get into the heads” of composers, performers, and/or the virtual persona of the music (Levinson, 2006). This top-down process is effortful, imaginative, and self-aware, in contrast to the automatic and pre-reflective mechanisms undergirding emotional contagion. Accordingly, as suggested by Schubert (2017), the involvement of cognitive systems may not, strictly speaking, be required for affective musical response, which can largely be accounted for by emotional contagion circuitry alone; indeed, a number of studies have shown that mental imagery may be supported by sensorimotor and affective components without the contribution of prefrontal areas (Decety and Grèzes, 2006; Ogino et al., 2007). Nevertheless, the prefrontal activations observed here could betoken a more social cognitive mode of listening, a deliberative attempt on the part of listeners to project themselves into the lived experience of the musical agent. This imaginative projection is more intense, understandably, for music that empathic people already know, and it also appears to interact with musical preference.

General Implications

The present study has a number of implications for social and affective neuroscience, music psychology, and musicology. For neuroscientific empathy research, we demonstrate the involvement of the core empathy network and mirror neuron system outside of tasks that are explicitly social cognitive. Most studies use transparently social experimental tasks and stimuli to assess neural correlates of state and trait empathy; for example, viewing pictures or videos of other people (for review, see Singer and Lamm, 2009). This study demonstrates that musical sound, which is perhaps not an obvious social stimulus, can elicit neural responses consistent with theories of empathy. By doing so, this study highlights the potential value of operationalizing artistic and aesthetic experience as a window into social cognitive and affective processing, a perspective that is arguably the historical progenitor of contemporary empathy research (Lipps, 1907).

For music psychology, this research has at least three main implications. First, this study demonstrates that trait empathy may modulate the neurophysiology of music listening. Although there is mounting behavioral and psychophysiological evidence pointing to this conclusion (Miu and Vuoskoski, 2017), this is the first study to investigate the effects of empathy on the musical brain. Second, this study confirms and extends empirical claims that music cognition is inextricably linked to social cognition (Huron, 2001; North and Hargreaves, 2008). Following Schubert’s (2017) common coding model, our results suggest that aspects of affective music processing can be viewed as a specialized subprocess of general social-affective perception and cognition. This may begin to explain the neural bases for how music can function as a “virtual social agent” (Leman, 2007). Third, in demonstrating neural differences in music processing as a function of empathy, we highlight the possible significance of looking at other trait features when assessing the functional neural correlates of musical tasks and stimuli. Many neurophysiological music studies take only a few trait features into account in sampling procedures and analysis, most notably sex, age, and musical training: the latter has been well explored (e.g., Alluri et al., 2017), but other factors—such as personality and mood—are addressed far less frequently. Individual differences in music processing may relate to dispositional characteristics that can be captured by psychosocial questionnaires, indirect observational techniques, or other methods. Exploring the role of such trait variables in musical behaviors and brain processing could provide a more detailed and granular account of music cognition.

Finally, these results enrich the humanistic study of music by providing a plausible psychobiological account for the social valence of musical experience observed in diverse cultural and historical settings. As music theorist Clifton (1983) claims, “the ‘other’ need not be a person: it can be music.” In a very rough sense, this study provides empirical support for this statement: areas implicated in trait empathy and social cognition also appear to be involved in music processing, and to a significantly greater degree for individuals with high trait empathy. If music can function as something like a virtual “other,” then it might be capable of altering listeners’ views of real others, enabling it to play an ethically complex mediating role in the social discourse of music (Rahaim, 2017). Indeed, musicologists have historically documented moments of tense cultural encounter in which music played an instrumental role in helping one group to recognize the shared humanity of another (Cruz, 1999). Recent research would seem to provide behavioral ballast for this view: using an implicit association task, Vuoskoski et al. (2016) showed that listening to the music of another culture could positively modulate attitudes toward members of that culture among empathic listeners. Though we do not explicitly address in this study whether music can alter empathic brain circuits, our results suggest that attitudes toward musical sound may have behavioral and neural bases in individual differences in trait empathy.

Limitations

A few important limitations must be considered in interpreting these results. First, this study was correlational: no causal links between music processing and trait empathy can therefore be established. In the future, it would be interesting to use an empathy priming paradigm (Miu and Balteş, 2012) in an fMRI context to compare the neurophysiological correlates of trait empathy with those of primed “state” empathy in music listening; this could provide a powerful method for disentangling possible differences in processing between dispositional attributes of empathy and contextual factors (e.g., socially conditioned attitudes about a performer, mood when listening). Relatedly, this study does not address whether our results are specific to music listening: high-empathy people might engage these same areas when performing other non-musical yet not explicitly social tasks as well (e.g., viewing abstract art). Additionally, we do not explore whether there could be other mediating trait factors in music processing besides empathy and sex: personality and temperament, for instance, have been shown to modulate responses to music (Rentfrow and Gosling, 2003). Finally, this study will need to be replicated with a larger sample size, and with participants who do not self-select based on strong emotional reactions to music, in order to strengthen the statistical power and generalizability of the results (Yarkoni, 2009).
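
To give a rough sense of the sample sizes involved, the following back-of-envelope calculation (our illustration, not taken from the article) estimates the n needed to detect a correlation of r = 0.5 with 80% power at a two-tailed alpha of .05 under the Fisher z approximation—roughly double the n = 16 implied by the behavioral correlation reported above:

```python
# Back-of-envelope sample-size calculation (our illustration, not from the
# article): n needed to detect r = 0.5 with 80% power, two-tailed alpha = .05,
# via the Fisher z approximation.
import math
from scipy.stats import norm

r, alpha, power = 0.5, 0.05, 0.80
z_alpha = norm.ppf(1 - alpha / 2)       # ~1.96
z_beta = norm.ppf(power)                # ~0.84
c = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z of the target effect size
n = ((z_alpha + z_beta) / c) ** 2 + 3
print(f"required n = {math.ceil(n)}")   # prints 30 with these inputs
```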

Conclusion

In two experiments using fMRI, this article demonstrates that trait empathy modulates music processing. Replicating previous findings in the social neuroscience literature, trait empathy was associated with sensorimotor and paralimbic activation during listening to isolated musical timbres; in actual music listening, however, empathy was primarily associated with activity in prefrontal and reward areas. Empathic participants were found to be particularly sensitive to abrasive, “noisy” qualities of musical timbre, showing preferential activation of the SMA, possibly reflecting heightened motor-mimetic susceptibility to sounds signaling high-arousal, low-valence affective states. In the music listening task, empathic subjects demonstrated enhanced responsiveness to familiar music, with musical preference playing a mediating role. Taken together, these results confirm and extend recent research on the link between music and empathy, and may help bring us closer to understanding the social cognitive basis of music perception and cognition.

Author Contributions

ZW and MI conceptualized and designed the experiments and wrote the article. ZW performed the experiments. CD and ZW analyzed and visualized the data. MI and CD consulted on the article.

Funding

This work was supported by the Brain Mapping Medical Research Organization, Brain Mapping Support Foundation, Pierson-Lovelace Foundation, The Ahmanson Foundation, William M. and Linda R. Dietel Philanthropic Fund at the Northern Piedmont Community Foundation, Tamkin Foundation, Jennifer Jones-Simon Foundation, Capital Group Companies Charitable Foundation, Robson Family and Northstar Fund and a UCLA Transdisciplinary Seed Grant.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We wish to acknowledge Katy Cross, Robert Fink, Roger Kendall, and Marita Meyer for their assistance at various stages of this study. Preliminary results were presented at the 2016 International Conference on Music Perception and Cognition (ICMPC14) in San Francisco, and selected findings from Experiment 1 appear in the proceedings of that conference.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnbeh.2018.00066/full#supplementary-material

References

Aglioti, S. M., and Pazzaglia, M. (2010). Representing actions through their sound. Exp. Brain Res. 206, 141–151. doi: 10.1007/s00221-010-2344-x

Alluri, V., Toiviainen, P., Burunat, I., Kliuchko, M., Vuust, P., and Brattico, E. (2017). Connectivity patterns during music listening: evidence for action-based processing in musicians. Hum. Brain Mapp. 38, 2955–2970. doi: 10.1002/hbm.23565

Aucouturier, J.-J., and Canonne, C. (2017). Musical friends and foes: the social cognition of affiliation and control in improvised interactions. Cognition 161, 94–108. doi: 10.1016/j.cognition.2017.01.019

Avenanti, A., Minio-Paluello, I., Bufalari, I., and Aglioti, S. M. (2009). The pain of a model in the personality of an onlooker: influence of state-reactivity and personality traits on embodied empathy for pain. Neuroimage 44, 275–283. doi: 10.1016/j.neuroimage.2008.08.001

Aziz-Zadeh, L., Sheng, T., and Gheytanchi, A. (2010). Common premotor regions for the perception and production of prosody and correlations with empathy and prosodic ability. PLoS One 5:e8759. doi: 10.1371/journal.pone.0008759

Balconi, M., and Canavesio, Y. (2013). Emotional contagion and trait empathy in prosocial behavior in young people: the contribution of autonomic (facial feedback) and balanced emotional empathy scale (BEES) measures. J. Clin. Exp. Neuropsychol. 35, 41–48. doi: 10.1080/13803395.2012.742492

Bangert, M., Peschel, T., Schlaug, G., Rotte, M., Drescher, D., Hinrichs, H., et al. (2006). Shared networks for auditory and motor processing in professional pianists: evidence from fMRI conjunction. Neuroimage 30, 917–926. doi: 10.1016/j.neuroimage.2005.10.044

Banissy, M. J., Kanai, R., Walsh, V., and Rees, G. (2012). Inter-individual differences in empathy are reflected in human brain structure. Neuroimage 62, 2034–2039. doi: 10.1016/j.neuroimage.2012.05.081

Banks, S. J., Eddy, K. T., Angstadt, M., Nathan, P. J., and Phan, K. L. (2007). Amygdala-frontal connectivity during emotion regulation. Soc. Cogn. Affect. Neurosci. 2, 303–312. doi: 10.1093/scan/nsm029

Batson, C. D. (1991). The Altruism Question: Toward a Social-Psychological Answer. Hillsdale, NJ: Erlbaum.

Batson, C. D. (2009). “These things called empathy: eight related but distinct phenomena,” in The Social Neuroscience of Empathy, eds J. Decety and W. Ickes (Cambridge, MA: MIT Press), 3–16.

Beckmann, C., Jenkinson, M., and Smith, S. M. (2003). General multi-level linear modelling for group analysis in fMRI. Neuroimage 20, 1052–1063. doi: 10.1016/s1053-8119(03)00435-x

Belin, P., Zatorre, R. J., Lafaille, P., Ahad, P., and Pike, B. (2000). Voice-selective areas in the human auditory cortex. Nature 403, 309–312. doi: 10.1038/35002078

Benjamini, Y., and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 57, 289–300.

Blood, A. J., Zatorre, R. J., Bermudez, P., and Evans, A. C. (1999). Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nat. Neurosci. 2, 382–387. doi: 10.1038/7299

Brown, S., Ngan, E., and Liotti, M. (2008). A larynx area in the human motor cortex. Cereb. Cortex 18, 837–845. doi: 10.1093/cercor/bhm131

Bufalari, I., Aprile, T., Avenanti, A., Di Russo, F., and Aglioti, S. M. (2007). Empathy for pain and touch in the human somatosensory cortex. Cereb. Cortex 17, 2553–2561. doi: 10.1093/cercor/bhl161

Carr, L., Iacoboni, M., Dubeau, M.-C., Mazziotta, J. C., and Lenzi, G. L. (2003). Neural mechanisms of empathy in humans: a relay from neural systems for imitation to limbic systems. Proc. Natl. Acad. Sci. U S A 100, 5497–5502. doi: 10.1073/pnas.0935845100

Chakrabarti, B., Bullmore, E., and Baron-Cohen, S. (2006). Empathizing with basic emotions: common and discrete neural substrates. Soc. Neurosci. 1, 364–384. doi: 10.1080/17470910601041317

Cheetham, M., Hänggi, J., and Jancke, L. (2014). Identifying with fictive characters: structural brain correlates of the personality trait ‘fantasy’. Soc. Cogn. Affect. Neurosci. 9, 1836–1844. doi: 10.1093/scan/nst179

Christov-Moore, L., Conway, P., and Iacoboni, M. (2017a). Deontological dilemma response tendencies and sensorimotor representations of harm to others. Front. Integr. Neurosci. 11:34. doi: 10.3389/fnint.2017.00034

Christov-Moore, L., Sugiyama, T., Grigaityte, K., and Iacoboni, M. (2017b). Increasing generosity by disrupting prefrontal cortex. Soc. Neurosci. 12, 174–181. doi: 10.1080/17470919.2016.1154105

Christov-Moore, L., and Iacoboni, M. (2016). Self-other resonance, its control and prosocial inclinations: brain-behavior relationships. Hum. Brain Mapp. 37, 1544–1558. doi: 10.1002/hbm.23119

Clarke, E. F., DeNora, T., and Vuoskoski, J. K. (2015). Music, empathy and cultural understanding. Phys. Life Rev. 15, 61–88. doi: 10.1016/j.plrev.2015.09.001

Clifton, T. (1983). Music As Heard: A Study in Applied Phenomenology. New Haven, CT: Yale University Press.

Coplan, A., and Goldie, P. (Eds). (2011). Empathy: Philosophical and Psychological Perspectives. New York, NY: Oxford University Press.

Cox, A. (2016). Music and Embodied Cognition: Listening, Moving, Feeling, and Thinking. Bloomington: Indiana University Press.

Cross, I. (2001). Music, cognition, culture, and evolution. Ann. N Y Acad. Sci. 930, 28–42. doi: 10.1111/j.1749-6632.2001.tb05723.x

Cruz, J. (1999). Culture on The Margins: The Black Spiritual and The Rise of American Cultural Interpretation. Princeton, NJ: Princeton University Press.

Davies, S. (2011). “Infectious music: music-listener emotional contagion,” in Empathy: Philosophical and Psychological Perspectives, eds A. Coplan and P. Goldie (Oxford: Oxford University Press), 134–148.

Davis, M. H. (1980). A multidimensional approach to individual differences in empathy. JSAS Catalog Sel. Doc. Psychol. 10:85.

Davis, M. H. (1996). Empathy: A Social Psychological Approach. Madison, WI: Westview.

de Vignemont, F., and Singer, T. (2006). The empathic brain: how, when and why? Trends Cogn. Sci. 10, 435–441. doi: 10.1016/j.tics.2006.08.008

Decety, J., and Grèzes, J. (2006). The power of simulation: imagining one’s own and other’s behavior. Brain Res. 1079, 4–14. doi: 10.1016/j.brainres.2005.12.115

Decety, J., and Ickes, W. (Eds). (2009). The Social Neuroscience of Empathy. Cambridge, MA: MIT Press.

Decety, J., and Lamm, C. (2006). Human empathy through the lens of social neuroscience. ScientificWorldJournal 6, 1146–1163. doi: 10.1100/tsw.2006.221

Derntl, B., Finkelmeyer, A., Eickhoff, S., Kellermann, T., Falkenberg, D. I., Schneider, F., et al. (2010). Multidimensional assessment of empathic abilities: neural correlates and gender differences. Psychoneuroendocrinology 35, 67–82. doi: 10.1016/j.psyneuen.2009.10.006

Eerola, T., Vuoskoski, J. K., and Kautiainen, H. (2016). Being moved by unfamiliar sad music is associated with high empathy. Front. Psychol. 7:1176. doi: 10.3389/fpsyg.2016.01176

Egermann, H., and McAdams, S. (2013). Empathy and emotional contagion as a link between recognized and felt emotions in music listening. Music Percept. 31, 139–156. doi: 10.1525/mp.2013.31.2.139

Eisenberg, N., Shea, C. L., Carlo, G., and Knight, G. P. (1991). “Empathy-related responding and cognition: a ‘chicken and the egg’ dilemma,” in Handbook of Moral Behavior and Development: Vol. 2. Research, ed. W. M. Kurtines (Hillsdale, NJ: Erlbaum), 63–88.

Fan, Y., Duncan, N. W., de Greck, M., and Northoff, G. (2011). Is there a core neural network in empathy? An fMRI based quantitative meta-analysis. Neurosci. Biobehav. Rev. 35, 903–911. doi: 10.1016/j.neubiorev.2010.10.009

Filkowski, M. M., Olsen, R. M., Duda, B., Wanger, T. J., and Sabatinelli, D. (2017). Sex differences in emotional perception: meta analysis of divergent activation. Neuroimage 147, 925–933. doi: 10.1016/j.neuroimage.2016.12.016

Forman, S. D., Cohen, J. D., Fitzgerald, M., Eddy, W. F., Mintun, M. A., and Noll, D. C. (1995). Improved assessment of significant activation in functional magnetic resonance imaging (fMRI): use of a cluster-size threshold. Magn. Reson. Med. 33, 636–647. doi: 10.1002/mrm.1910330508

Freedberg, D., and Gallese, V. (2007). Motion, emotion and empathy in esthetic experience. Trends Cogn. Sci. 11, 197–203. doi: 10.1016/j.tics.2007.02.003

Friston, K. J., Worsley, K. J., Frackowiak, R. S., Mazziotta, J. C., and Evans, A. C. (1994). Assessing the significance of focal activations using their spatial extent. Hum. Brain Mapp. 1, 210–220. doi: 10.1002/hbm.460010306

Frith, C. D., and Frith, U. (2006). The neural basis of mentalizing. Neuron 50, 531–534. doi: 10.1016/j.neuron.2006.05.001

Gallese, V. (2003). The roots of empathy: the shared manifold hypothesis and the neural basis for intersubjectivity. Psychopathology 36, 171–180. doi: 10.1159/000072786

Garrido, S., and Schubert, E. (2011). Individual differences in the enjoyment of negative emotion in music: a literature review and experiment. Music Percept. 28, 279–296. doi: 10.1525/mp.2011.28.3.279

Gazzola, V., Aziz-Zadeh, L., and Keysers, C. (2006). Empathy and the somatotopic auditory mirror system in humans. Curr. Biol. 16, 1824–1829. doi: 10.1016/j.cub.2006.07.072

Goldman, A. I. (2006). Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading. New York, NY: Oxford University Press.

Goldman, A. I. (2011). “Two routes to empathy: insights from cognitive neuroscience,” in Empathy: Philosophical and Psychological Perspectives, eds A. Coplan and P. Goldie (New York, NY: Oxford University Press), 31–44.

Halpern, A. R., Zatorre, R. J., Bouffard, M., and Johnson, J. A. (2004). Behavioral and neural correlates of perceived and imagined musical timbre. Neuropsychologia 42, 1281–1292. doi: 10.1016/j.neuropsychologia.2003.12.017

Hatfield, E., Cacioppo, J., and Rapson, R. L. (1994). Emotional Contagion. New York, NY: Cambridge University Press.

Hellige, J. B. (1993). Hemispheric Asymmetry: What’s Right and What’s Left. Cambridge, MA: Harvard University Press.

Huron, D. (2001). Is music an evolutionary adaptation? Ann. N Y Acad. Sci. 930, 43–61. doi: 10.1111/j.1749-6632.2001.tb05724.x

Iacoboni, M. (2009). Imitation, empathy, and mirror neurons. Annu. Rev. Psychol. 60, 653–670. doi: 10.1146/annurev.psych.60.110707.163604

Iacoboni, M., Woods, R. P., Brass, M., Bekkering, H., Mazziotta, J. C., and Rizzolatti, G. (1999). Cortical mechanisms of human imitation. Science 286, 2526–2528. doi: 10.1126/science.286.5449.2526

Ishida, H., Nakajima, K., Inase, M., and Murata, A. (2009). Shared mapping of own and others’ bodies in visuotactile bimodal area of monkey parietal cortex. J. Cogn. Neurosci. 22, 83–96. doi: 10.1162/jocn.2009.21185

Jabbi, M., Swart, M., and Keysers, C. (2007). Empathy for positive and negative emotions in the gustatory cortex. Neuroimage 34, 1744–1753. doi: 10.1016/j.neuroimage.2006.10.032

Jackson, P. L., Brunet, E., Meltzoff, A. N., and Decety, J. (2006). Empathy examined through the neural mechanisms involved in imagining how I feel versus how you feel pain. Neuropsychologia 44, 752–761. doi: 10.1016/j.neuropsychologia.2005.07.015

Jackson, P. L., Meltzoff, A. N., and Decety, J. (2005). How do we perceive the pain of others? A window into the neural processes involved in empathy. Neuroimage 24, 771–779. doi: 10.1016/j.neuroimage.2004.09.006

Janata, P., Tomic, S. T., and Rakowski, S. K. (2007). Characterisation of music-evoked autobiographical memories. Memory 15, 845–860. doi: 10.1080/09658210701734593

Jenkinson, M., Bannister, P., Brady, J. M., and Smith, S. M. (2002). Improved optimisation for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17, 825–841. doi: 10.1016/s1053-8119(02)91132-8

Juslin, P. N., and Västfjäll, D. (2008). Emotional responses to music: the need to consider underlying mechanisms. Behav. Brain Sci. 31, 559–575; discussion 575–621. doi: 10.1017/s0140525x08005293

Kaplan, J. T., and Iacoboni, M. (2006). Getting a grip on other minds: mirror neurons, intention understanding, and cognitive empathy. Soc. Neurosci. 1, 175–183. doi: 10.1080/17470910600985605

Kawakami, A., and Katahira, K. (2015). Influence of trait empathy on the emotion evoked by sad music and on the preference for it. Front. Psychol. 6:1541. doi: 10.3389/fpsyg.2015.01541

Kitada, R., Johnsrude, I. S., Kochiyama, T., and Lederman, S. J. (2010). Brain networks involved in haptic and visual identification of facial expressions of emotion: an fMRI study. Neuroimage 49, 1677–1689. doi: 10.1016/j.neuroimage.2009.09.014

Koelsch, S., Fritz, T., Cramon, D. Y. V., Müller, K., and Friederici, A. D. (2006). Investigating emotion with music: an fMRI study. Hum. Brain Mapp. 27, 239–250. doi: 10.1002/hbm.20180

Kosslyn, S. M., Ganis, G., and Thompson, W. L. (2001). Neural foundations of imagery. Nat. Rev. Neurosci. 2, 635–642. doi: 10.1038/35090055

Kreutz, G., Schubert, E., and Mitchell, L. A. (2008). Cognitive styles of music listening. Music Percept. 26, 57–73. doi: 10.1525/mp.2008.26.1.57

Lamm, C., Decety, J., and Singer, T. (2011). Meta-analytic evidence for common and distinct neural networks associated with directly experienced pain and empathy for pain. Neuroimage 54, 2492–2502. doi: 10.1016/j.neuroimage.2010.10.014

Lawrence, E. J., Shaw, P., Giampietro, V. P., Surguladze, S., Brammer, M. J., and David, A. S. (2006). The role of ‘shared representations’ in social perception and empathy: an fMRI study. Neuroimage 29, 1173–1184. doi: 10.1016/j.neuroimage.2005.09.001

Leman, M. (2007). Embodied Music Cognition and Mediation Technology. Cambridge, MA: MIT Press.

Levinson, J. (2006). “Musical expressiveness as hearability-as-expression,” in Contemporary Debates in Aesthetics and the Philosophy of Art, ed. M. Kieran (Oxford: Blackwell), 192–206.

Lima, C. F., Lavan, N., Evans, S., Agnew, Z., Halpern, A. R., Shanmugalingam, P., et al. (2015). Feel the noise: relating individual differences in auditory imagery to the structure and function of sensorimotor systems. Cereb. Cortex 25, 4638–4650. doi: 10.1093/cercor/bhv134

Lipps, T. (1907). Ästhetik. Berlin: B.G. Teubner.

Litvack-Miller, W., McDougall, D., and Romney, D. M. (1997). The structure of empathy during middle childhood and its relationship to prosocial behavior. Genet. Soc. Gen. Psychol. Monogr. 123, 303–324.

Livingstone, S. R., and Thompson, W. F. (2009). The emergence of music from the theory of mind. Music. Sci. 13, 83–115. doi: 10.1177/1029864909013002061

Margulis, E. H., Mlsna, L. M., Uppunda, A. K., Parrish, T. B., and Wong, P. C. M. (2009). Selective neurophysiologic responses to music in instrumentalists with different listening biographies. Hum. Brain Mapp. 30, 267–275. doi: 10.1002/hbm.20503

Mazziotta, J., Toga, A., Evans, A., Fox, P., Lancaster, J., Zilles, K., et al. (2001). A probabilistic atlas and reference system for the human brain: international consortium for brain mapping (ICBM). Philos. Trans. R. Soc. Lond. B Biol. Sci. 356, 1293–1322. doi: 10.1098/rstb.2001.0915

Mehrabian, A., Young, A. L., and Sato, S. (1988). Emotional empathy and associated individual differences. Curr. Psychol. 7, 221–240. doi: 10.1007/bf02686670

Miu, A. C., and Balteş, F. R. (2012). Empathy manipulation impacts music-induced emotions: a psychophysiological study on opera. PLoS One 7:e30618. doi: 10.1371/journal.pone.0030618

Miu, A. C., and Vuoskoski, J. K. (2017). “The social side of music listening: empathy and contagion in music-induced emotions,” in Music and Empathy, eds E. King and C. Waddington (London: Routledge), 124–138.

Molnar-Szakacs, I., and Overy, K. (2006). Music and mirror neurons: from motion to ‘e’motion. Soc. Cogn. Affect. Neurosci. 1, 235–241. doi: 10.1093/scan/nsl029

Mukamel, R., Ekstrom, A. D., Kaplan, J., Iacoboni, M., and Fried, I. (2010). Single-neuron responses in humans during execution and observation of actions. Curr. Biol. 20, 750–756. doi: 10.1016/j.cub.2010.02.045

Nguyen, V. T., Breakspear, M., and Cunnington, R. (2014). Reciprocal interactions of the SMA and cingulate cortex sustain premovement activity for voluntary actions. J. Neurosci. 34, 16397–16407. doi: 10.1523/JNEUROSCI.2571-14.2014

North, A. C., and Hargreaves, D. J. (2008). The Social and Applied Psychology of Music. Oxford: Oxford University Press.

Ochsner, K. N., Ray, R. D., Cooper, J. C., Robertson, E. R., Chopra, S., Gabrieli, J. D. E., et al. (2004). For better or for worse: neural systems supporting the cognitive down- and up-regulation of negative emotion. Neuroimage 23, 483–499. doi: 10.1016/j.neuroimage.2004.06.030

Ogino, Y., Nemoto, H., Inui, K., Saito, S., Kakigi, R., and Goto, F. (2007). Inner experience of pain: imagination of pain while viewing images showing painful events forms subjective pain representation in human brain. Cereb. Cortex 17, 1139–1146. doi: 10.1093/cercor/bhl023

Overy, K., and Molnar-Szakacs, I. (2009). Being together in time: musical experience and the mirror neuron system. Music Percept. 26, 489–504. doi: 10.1525/mp.2009.26.5.489

Pereira, C. S., Teixeira, J., Figueiredo, P., Xavier, J., Castro, S. L., and Brattico, E. (2011). Music and emotions in the brain: familiarity matters. PLoS One 6:e27241. doi: 10.1371/journal.pone.0027241

Pfeifer, J. H., Iacoboni, M., Mazziota, J. C., and Dapretto, M. (2008). Mirroring others’ emotions relates to empathy and interpersonal competence in children. Neuroimage 39, 2076–2085. doi: 10.1016/j.neuroimage.2007.10.032

Phan, K. L., Wager, T., Taylor, S. F., and Liberzon, I. (2002). Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI. Neuroimage 16, 331–348. doi: 10.1006/nimg.2002.1087

Preston, S. D., and de Waal, F. B. M. (2002). Empathy: its ultimate and proximate bases. Behav. Brain Sci. 25, 1–20; discussion 20–71. doi: 10.1017/s0140525x02000018

Quirk, G. J., and Beer, J. S. (2006). Prefrontal involvement in the regulation of emotion: convergence of rat and human studies. Curr. Opin. Neurobiol. 16, 723–727. doi: 10.1016/j.conb.2006.07.004

Rahaim, M. (2017). “Otherwise than participation: unity and alterity in musical encounters,” in Music and Empathy, eds C. Waddington and E. King (London: Routledge), 175–193.

Rentfrow, P. J., and Gosling, S. D. (2003). The do re mi’s of everyday life: the structure and personality correlates of music preferences. J. Pers. Soc. Psychol. 84, 1236–1256. doi: 10.1037/0022-3514.84.6.1236

Salimpoor, V. N., Benovoy, M., Larcher, K., Dagher, A., and Zatorre, R. J. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nat. Neurosci. 14, 257–262. doi: 10.1038/nn.2726

Salimpoor, V. N., van den Bosch, I., Kovacevic, N., McIntosh, A. R., Dagher, A., and Zatorre, R. J. (2013). Interactions between the nucleus accumbens and auditory cortices predict music reward value. Science 340, 216–219. doi: 10.1126/science.1231059

Saxe, R., and Kanwisher, N. (2003). People thinking about thinking people: the role of the temporo-parietal junction in “theory of mind”. Neuroimage 19, 1835–1842. doi: 10.1016/S1053-8119(03)00230-1

Scherer, K. R., and Zentner, M. R. (2001). “Emotional effects of music: production rules,” in Music and Emotion: Theory and Research, eds P. N. Juslin and J. A. Sloboda (New York, NY: Oxford University Press), 361–392.

Schubert, E. (2017). “Musical identity and individual differences in empathy,” in Handbook of Musical Identities, eds R. Macdonald, D. J. Hargreaves and D. Miell (Oxford: Oxford University Press), 322–344.

Shamay-Tsoory, S. G. (2010). The neural bases for empathy. Neuroscientist 17, 18–24. doi: 10.1177/1073858410379268

Shepherd, S. V., Klein, J. T., Deaner, R. O., and Platt, M. L. (2009). Mirroring of attention by neurons in macaque parietal cortex. Proc. Natl. Acad. Sci. U S A 106, 9489–9494. doi: 10.1073/pnas.0900419106

Singer, T., and Lamm, C. (2009). The social neuroscience of empathy. Ann. N Y Acad. Sci. 1156, 81–96. doi: 10.1111/j.1749-6632.2009.04418.x

Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R. J., and Frith, C. D. (2004). Empathy for pain involves the affective but not sensory components of pain. Science 303, 1157–1162. doi: 10.1126/science.1093535

Taruffi, L., Allen, R., Downing, J., and Heaton, P. (2017). Individual differences in music-perceived emotions. Music Percept. 34, 253–266. doi: 10.1525/mp.2017.34.3.253

Templeton, G. F. (2011). A two-step approach for transforming continuous variables to normal: implications and recommendations for IS research. Commun. Assoc. Inf. Syst. 28, 41–58. Available online at: http://aisel.aisnet.org/cais/vol28/iss1/4

Tervaniemi, M., Winkler, I., and Näätänen, R. (1997). Pre-attentive categorization of sounds by timbre as revealed by event-related potentials. Neuroreport 8, 2571–2574. doi: 10.1097/00001756-199707280-00030

Tkach, D., Reimer, J., and Hatsopoulos, N. G. (2007). Congruent activity during action and action observation in motor cortex. J. Neurosci. 27, 13241–13250. doi: 10.1523/JNEUROSCI.2895-07.2007

Trost, W., Frühholz, S., Cochrane, T., Cojan, Y., and Vuilleumier, P. (2015). Temporal dynamics of musical emotions examined through intersubject synchrony of brain activity. Soc. Cogn. Affect. Neurosci. 10, 1705–1721. doi: 10.1093/scan/nsv060

Tsai, C.-G., Wang, L.-C., Wang, S.-F., Shau, Y.-W., and Hsiao, T.-Y. (2010). Aggressiveness of the growl-like timbre: acoustic characteristics, musical implications, and biomechanical mechanisms. Music Percept. 27, 209–221. doi: 10.1525/mp.2010.27.3.209

Vuoskoski, J. K., and Eerola, T. (2011). Measuring music-induced emotion: a comparison of emotion models, personality biases, and intensity of experiences. Music. Sci. 15, 159–173. doi: 10.1177/102986491101500203

Vuoskoski, J. K., Clarke, E. F., and DeNora, T. (2016). Music listening evokes implicit affiliation. Psychol. Music 45, 584–599. doi: 10.1177/0305735616680289

Vuoskoski, J. K., Thompson, B., Mcilwain, D., and Eerola, T. (2012). Who enjoys listening to sad music and why? Music Percept. 29, 311–317. doi: 10.1525/mp.2012.29.3.311

Wallmark, Z., Iacoboni, M., Deblieck, C., and Kendall, R. A. (2018). Embodied listening and timbre: perceptual, acoustical and neural correlates. Music Percept. 35, 332–363. doi: 10.1525/mp.2018.35.3.332

Warren, J. E., Sauter, D. A., Eisner, F., Wiland, J., Dresner, M. A., Wise, R. J. S., et al. (2006). Positive emotions preferentially engage an auditory-motor ‘mirror’ system. J. Neurosci. 26, 13067–13075. doi: 10.1523/JNEUROSCI.3907-06.2006

Watkins, K., and Paus, T. (2004). Modulation of motor excitability during speech perception: the role of Broca’s area. J. Cogn. Neurosci. 16, 978–987. doi: 10.1162/0898929041502616

Watt, R. J., and Ash, R. L. (1998). A psychological investigation of meaning in music. Music. Sci. 11, 33–53. doi: 10.1177/102986499800200103

Wicker, B., Keysers, C., Plailly, J., Royet, J.-P., Gallese, V., and Rizzolatti, G. (2003). Both of us disgusted in my insula: the common neural basis of seeing and feeling disgust. Neuron 40, 655–664. doi: 10.1016/S0896-6273(03)00679-2

Wöllner, C. (2012). Is empathy related to the perception of emotional expression in music? A multimodal time-series analysis. Psychol. Aesthet. Creat. Arts 6, 214–223. doi: 10.1037/a0027392

Yarkoni, T. (2009). Big correlations in little studies: inflated fMRI correlations reflect low statistical power—commentary on Vul et al. (2009). Perspect. Psychol. Sci. 4, 294–298. doi: 10.1111/j.1745-6924.2009.01127.x

Young, L., Dodell-Feder, D., and Saxe, R. (2010). What gets the attention of the temporo-parietal junction? an fMRI investigation of attention and theory of mind. Neuropsychologia 48, 2658–2664. doi: 10.1016/j.neuropsychologia.2010.05.012

Zatorre, R. J., Chen, J. L., and Penhune, V. B. (2007). When the brain plays music: auditory-motor interactions in music perception and production. Nat. Rev. Neurosci. 8, 547–558. doi: 10.1038/nrn2152

Keywords: empathy, music cognition, social neuroscience, affective neuroscience, fMRI

Citation: Wallmark Z, Deblieck C and Iacoboni M (2018) Neurophysiological Effects of Trait Empathy in Music Listening. Front. Behav. Neurosci. 12:66. doi: 10.3389/fnbeh.2018.00066

Received: 04 December 2017; Accepted: 21 March 2018;
Published: 06 April 2018.

Edited by:

Robin W. Wilkins, University of North Carolina at Greensboro, United States

Reviewed by:

Martin Lotze, University of Greifswald, Germany
Claudio Lucchiari, Università degli Studi di Milano, Italy

Copyright © 2018 Wallmark, Deblieck and Iacoboni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zachary Wallmark, zwallmark@smu.edu
