Original Research Article
Emotional Empathy and Facial Mimicry for Static and Dynamic Facial Expressions of Fear and Disgust
- 1Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- 2Department of Experimental Psychology, Faculty of Psychology, Institute of Cognitive and Behavioural Neuroscience, University of Social Sciences and Humanities, Warsaw, Poland
Facial mimicry is the tendency to imitate the emotional facial expressions of others. Increasing evidence suggests that the perception of dynamic displays leads to enhanced facial mimicry, especially for happiness and anger. However, little is known about the impact of dynamic stimuli on facial mimicry for fear and disgust. To investigate this issue, facial EMG responses were recorded from the corrugator supercilii, levator labii, and lateral frontalis muscles while participants viewed static (photos) and dynamic (videos) emotional facial expressions. Moreover, we tested whether emotional empathy modulated facial mimicry of these expressions. In accordance with our predictions, the highly empathic group responded with larger activity in the corrugator supercilii and levator labii muscles. Moreover, dynamic compared with static facial expressions of fear elicited enhanced mimicry in the high-empathic group in the frontalis and corrugator supercilii muscles. In the low-empathic group, facial reactions did not differ between fear and disgust for either dynamic or static facial expressions. We conclude that highly empathic subjects are more sensitive in their facial reactions to facial expressions of fear and disgust than their low-empathic counterparts. Our data confirm that personal characteristics, i.e., empathic traits, as well as the modality of the presented stimuli, modulate the strength of facial mimicry. In addition, EMG activity of the levator labii and frontalis muscles may be a useful index of empathic responses to fear and disgust.
The term facial mimicry (FM) describes the automatic (Dimberg and Thunberg, 1998) and unintentional imitation of emotional expressions in human faces. An increasing number of studies have argued that FM may depend on several factors (for review see Hess and Fischer, 2013; Seibt et al., 2015), including the type of task the participant is engaged in (Korb et al., 2010; Murata et al., 2016), properties of the stimulus (dynamic vs. static presentation) (Weyers et al., 2006; Sato et al., 2008; Rymarczyk et al., 2011, 2016), and personal characteristics of the perceiver (e.g., empathic traits) (Sonnby-Borgström et al., 2003; Dimberg et al., 2011; Balconi and Canavesio, 2016). Recently, it has been shown that hormone levels, e.g., following administration of oxytocin (Korb et al., 2016), as well as cultural norms (Wood et al., 2016), influence FM.
Dynamic facial expressions resemble those that occur in everyday life, so they constitute a more powerful medium for emotional communication than static expressions, which are nevertheless the stimuli most often used in EMG studies with passive-viewing paradigms (Lundquist and Dimberg, 1995; Dimberg et al., 2000). There is considerable evidence that dynamic information is beneficial for various aspects of face processing, e.g., emotion recognition or judgments of intensity and arousal (for review see Krumhuber et al., 2013). Moreover, some studies have reported stronger emotion-specific responses to dynamic as opposed to static expressions, mainly in the zygomaticus major muscle (ZM) (Weyers et al., 2006; Rymarczyk et al., 2011) and the corrugator supercilii muscle (CS) (Sato et al., 2008) for happiness and anger, respectively; however, the available data are not consistent (for review see Seibt et al., 2015). This may be associated with the different methodologies used, e.g., the different kinds of stimuli used across studies. Most published studies used some kind of artificial stimuli, e.g., dynamic avatars (Weyers et al., 2006) or image morphing to generate videos of faces changing from neutral to emotional expressions (Sato et al., 2008; Rymarczyk et al., 2011). In reference to recent work (Reinl and Bartels, 2015), it could be argued that such stimuli do not contain the natural temporal asymmetry typical of authentic emotional facial expressions. These authors reported that “deviations from the natural timeline of expressions lead to a reduction of perceived emotional intensity as well convincingness, and to an increase of perceived artificialness of the dynamic facial expression” (Reinl and Bartels, 2015).
In our previous EMG study (Rymarczyk et al., 2016), we used authentic stimuli, i.e., videos showing the emotional facial expressions of actors, and found that subjects responded with stronger EMG activity in the ZM and orbicularis oculi (OO) for dynamic compared to static displays of happiness; we concluded that the subjects experienced positive emotions. In line with this, neuroimaging data (Trautmann et al., 2009; Arsalidou et al., 2011; Kessler et al., 2011) have revealed that the perception of dynamic compared to static stimuli engages not only motor-related regions (e.g., inferior frontal gyrus) (Carr et al., 2003) but also brain regions associated with emotion (e.g., amygdala, insula). These regions are also considered part of the extended mirror neuron system (MNS) (van der Gaag et al., 2007; Likowski et al., 2012), a neuronal network linked to empathy (Jabbi and Keysers, 2008; Decety, 2010a; Decety et al., 2014).
There is ongoing debate over whether facial mimicry and emotional empathy are associated phenomena (Hatfield et al., 1992; McIntosh, 2006). Some investigators have proposed that facial muscle activity provides proprioceptive information, and that facial expressions can influence internal emotional experiences (Hess and Fischer, 2014). Conversely, it has been suggested that the emotional state of the observer may influence the degree of mimicry, such that observed expressions congruent with the perceiver’s emotional state are more quickly and easily mimicked (e.g., Niedenthal et al., 2001). Furthermore, it has been shown that emotional empathy, i.e., the process by which perceiving another’s emotions generates the same emotional state in the perceiver (e.g., de Waal, 2008; Jankowiak-Siuda et al., 2015), is related to the magnitude of facial muscle activity (e.g., Sonnby-Borgstrom, 2002; Sonnby-Borgström et al., 2003; Dimberg et al., 2011; Balconi and Canavesio, 2013; Balconi et al., 2014). For example, using static prototypical facial expressions of happiness and anger, Dimberg et al. (2011) reported that high-empathic subjects responded with greater CS activity to angry than to happy faces and with larger ZM activity to happy than to angry faces. No differences in facial muscle activity between emotional expressions were found in the low-empathic group. The authors concluded that highly empathic people are particularly responsive to facial expressions. Recently, Balconi and Canavesio (2016) confirmed that empathic traits assessed through questionnaires modulate FM. These authors showed that highly empathic subjects were facially more responsive to happiness than subjects with low empathic traits, as demonstrated by increased activity in the ZM and decreased activity in the CS. Moreover, they found that highly empathic participants showed generally increased CS responses to negative emotions, i.e., anger and fear, compared with happy and neutral faces.
Based on these findings, it is reasonable to assume that the ability to react to the emotional expressions of other people constitutes an important aspect of emotional empathy.
Furthermore, many EMG studies support the phenomenon of facial mimicry; however, most have tested mimicry using presentations of happy and angry faces. There is some evidence for specific facial muscle response patterns for other emotions, i.e., fear and disgust, although the evidence is relatively weak (Hess and Fischer, 2014). A number of studies have characterized ‘fear mimicry’ by increased activity of the CS (Lundquist and Dimberg, 1995; Magnee et al., 2007; Magnée et al., 2007; Balconi et al., 2011; van der Schalk et al., 2011; Balconi and Canavesio, 2013). However, the CS response does not appear to be specific to fear, since frowning has also been associated with angry (Dimberg and Petterson, 2000; Sato et al., 2008), sad (Lundquist and Dimberg, 1995; Likowski et al., 2008; Weyers et al., 2009), and disgusted faces (Lundquist and Dimberg, 1995; Hess and Blairy, 2001). Recently, the activity of the CS muscle has been associated with all six discrete emotions (anger, disgust, fear, happiness, sadness, and surprise), both when participants watched facial expressions and when they were specifically instructed to infer a target’s emotional state (Murata et al., 2016). In contrast to the CS, activity of the lateral frontalis (LF) muscle, which draws the eyebrows up, has been indexed as typical of fear mimicry. Moreover, little is known about FM for disgust. It appears that, apart from the CS (Balconi and Canavesio, 2013) and OO (Wolf et al., 2005), the levator labii (LL), which creates wrinkles on both sides of the nose, serves as an index of “disgust mimicry” (Vrana, 1993; Lundquist and Dimberg, 1995; Cacioppo et al., 2007). The activity of the LL during the mimicry of disgusted facial expressions has been reported in only a few studies (for review see Seibt et al., 2015).
The present study has two main aims. Firstly, we assessed whether there is emotion-specific facial mimicry for fearful and disgusted facial expressions. Secondly, we examined whether the modality of the stimuli (static vs. dynamic) and emotional empathy modulate the strength of FM. Facial EMG responses were measured from three muscles, the CS, LL, and lateral frontalis (LF), while the participants passively viewed static and dynamic displays. We played videos presenting emotional facial expressions of actors. Actors were chosen because of their proficiency in expressing emotional signals (Ekman and Rosenberg, 2005). Based on earlier EMG findings showing that the CS activates during the perception of various negative emotions (e.g., Murata et al., 2016), we did not expect the activity of this muscle to differentiate between fear and disgust displays. However, we assumed that emotion-specific activity would occur, i.e., LF activity for fear and LL activity for disgust. Regarding the modality of the stimuli, we hypothesized that the perception of dynamic compared to static displays would lead to enhanced FM in all the examined muscles, in particular increased activity of the LF for fear and the LL for disgust. In light of published studies regarding a link between empathy and facial mimicry, we expected that high-scoring compared to low-scoring empathic subjects would show stronger facial muscle responses, especially for dynamic stimuli. This study is an original attempt to test whether stimulus modality, together with empathic traits, modulates facial mimicry for fearful and disgusted facial expressions.
Materials and Methods
Thirty-two healthy individuals (14 females; mean age = 24.2 ± 3.7 years) participated in this study. The subjects had normal or corrected-to-normal eyesight and none of them reported neurological diseases. The study was conducted in accordance with guidelines for ethical research and approved by the Ethics Committee at the University of Social Sciences and Humanities. An informed consent form was signed by each participant after the experimental procedures had been clearly explained.
We used four videos and four static pictures illustrating facial expressions of disgust and fear. The creation and emotional rating of the stimuli were described in our previous study (Rymarczyk et al., 2016). Clips of two actresses and two actors were used in the experimental procedure. Static pictures depicted the same individuals as the dynamic stimuli. Each stimulus clip presented a human face (shown from the front), starting with a view of the neutral, relaxed face of the model (no emotional expression visible). Dynamic stimulus presentation lasted 2 s and ended with the peak expression of a single emotion as the last frame; the peak was reached at approximately 1 s and remained visible for another second. Conversely, static stimuli consisted of a single frame of the peak expression and lasted 2 s. Stimuli were 880 pixels high and 720 pixels wide. Emotional characteristics of the stimuli are provided in Table 1.
TABLE 1. Summary statistics of emotional intensity ratings for each emotion label for each dynamic facial expression stimulus.
The participants were tested individually, sitting in front of a 19-inch LCD screen in a sound-attenuated room. To disguise the real purpose of the study, we informed each participant that sweat gland activity would be measured while they watched actors selected for commercials by an external marketing company. Participants signed a written consent form and EMG electrodes were attached. Then, to enhance the comfort of the subjects, we asked the participants to complete a dummy questionnaire and verbally encouraged them to behave naturally.
Consistent with the methodology of Dimberg (1982), randomized blocks of eight stimuli were presented. Participants were asked to passively view stimuli on a gray background in the center of the screen. In each block, participants saw either fear or disgust expressions, and all eight stimuli were either static or dynamic. In other words, four kinds of blocks were created (disgust static, disgust dynamic, fear static, and fear dynamic). Each display started with a white fixation cross, 80 pixels in diameter, appearing for 2 s and accompanied by a sound (the standard Windows reminder, ding.wav). Inter-stimulus intervals with a blank gray screen lasted 8.75–11.25 s. Within each block, randomized stimuli of two opposite-sex pairs of each trial type were presented. No facial expression from the same actor was shown consecutively, and within each block each stimulus was repeated once. In summary, each stimulus was shown four times, for a total of 16 presentations within each condition.
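The within-block randomization constraint described above (no facial expression from the same actor shown consecutively) can be sketched as a simple rejection-sampling shuffle. This is an illustration only; the field names, function name, and block composition below are our assumptions, not taken from the original experiment scripts.

```python
import random

def shuffle_no_repeated_actor(stimuli, max_tries=1000):
    """Return a random order of `stimuli` (dicts with an 'actor' key)
    in which no two consecutive items share the same actor."""
    for _ in range(max_tries):
        order = random.sample(stimuli, len(stimuli))
        if all(a["actor"] != b["actor"] for a, b in zip(order, order[1:])):
            return order
    raise RuntimeError("no valid order found; constraint may be unsatisfiable")

# One hypothetical block of eight stimuli: two female and two male actors,
# each shown twice, all of the same emotion and modality.
block = [{"actor": a, "emotion": "fear", "modality": "dynamic"}
         for a in ["F1", "F2", "M1", "M2"] * 2]
order = shuffle_no_repeated_actor(block)
```

Rejection sampling is adequate here because, with four actors in eight slots, most random permutations already satisfy the constraint.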
After the recording session, the participants completed a questionnaire assessing empathy: the Questionnaire Measure of Emotional Empathy (QMEE) developed by Mehrabian and Epstein (1972). The QMEE contains 33 items rated on a 9-point scale from -4 (very strong disagreement) to +4 (very strong agreement). The authors defined empathy as “a vicarious emotional response to the perceived emotional experiences of others” (Mehrabian and Epstein, 1972, p. 1). We used a Polish translation of the QMEE that had been recommended for this type of research (Rembowski, 1989). Finally, participants completed a demographics questionnaire and were informed of the real purpose of the study.
Experimental events were controlled using Presentation® software (version 14.6) running on an IBM computer under Microsoft Windows 7. Stimuli were displayed on a 19-inch LCD monitor (NEC MultiSync LCD 1990 FX; 1280 × 1024 pixel resolution; 32-bit color; 75 Hz refresh rate) at a viewing distance of approximately 65 cm.
EMG Recording and Analysis
Data were recorded using Ag/AgCl electrodes with a diameter of 4 mm filled with electrode paste. The electrodes were positioned in pairs over three muscles (the CS, LL, and LF) on the left side of the face (Fridlund and Cacioppo, 1986). A reference electrode, 10 mm in diameter, was attached to the forehead. Before the electrodes were attached, the skin was cleaned with alcohol and a thin coating of electrode paste was applied. This procedure was repeated until electrode impedance was reduced to 5 kΩ or less. The EMG signals were recorded using a BrainAmp amplifier (Brain Products) and BrainVision Recorder. The hardware low-pass filtered the signal at 560 Hz. Finally, the data were digitized using a 24-bit A/D converter at a sampling rate of 2 kHz and stored on a computer running MS Windows XP for offline analysis.
In BrainVision Analyzer 2, the data were re-referenced to bipolar measures and filtered with 30 Hz high-pass, 500 Hz low-pass, and 50 Hz notch filters. After rectification and integration over 125 ms, the signal was resampled to 10 Hz. Artifacts in the data were detected in two ways. Firstly, when activity of a single muscle exceeded 8 μV at baseline (during the fixation cross), the trial was classified as an artifact and excluded from further analysis. All remaining trials were blind-coded and visually checked for artifacts. Trials were then baseline corrected, such that the EMG response was measured as the difference between the averaged signal during stimulus presentation (2 s) and the baseline period (2 s). Finally, the signal was averaged for each condition for each participant and imported into SPSS 21 for statistical analysis.
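The rectification, integration, resampling, and baseline-correction steps can be sketched as follows. This is a minimal NumPy illustration under our own assumptions; the band-pass and notch filtering performed in BrainVision Analyzer are omitted, and all function names are ours, not from the original pipeline.

```python
import numpy as np

def preprocess_emg(raw, fs=2000, integrate_ms=125, out_fs=10):
    """Rectify, integrate over a 125 ms moving window, resample to 10 Hz."""
    rectified = np.abs(raw)                       # full-wave rectification
    win = int(fs * integrate_ms / 1000)           # 125 ms -> 250 samples at 2 kHz
    integrated = np.convolve(rectified, np.ones(win) / win, mode="same")
    return integrated[::fs // out_fs]             # keep every 200th sample

def baseline_corrected_response(sig_10hz, baseline_s=2, stim_s=2, out_fs=10):
    """Mean stimulus-window activity minus mean baseline activity (microvolts)."""
    n_base, n_stim = baseline_s * out_fs, stim_s * out_fs
    return sig_10hz[n_base:n_base + n_stim].mean() - sig_10hz[:n_base].mean()

# Toy trial: 2 s of ~1 uV baseline followed by 2 s of ~3 uV "response".
trial = np.concatenate([np.full(4000, 1.0), np.full(4000, 3.0)])
response = baseline_corrected_response(preprocess_emg(trial))
```

On the toy trial, the recovered baseline-corrected response is close to the 2 μV step, with small deviations at the window edges from the moving-average smoothing.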
To test differences in EMG responses, mixed-design ANOVAs with two within-subjects factors (emotion: disgust, fear; stimulus modality: static, dynamic) and one between-subjects factor (emotional empathy: high empathy (HE), low empathy (LE)) were used. The between-subjects factor was created by a median split of the subjects' QMEE scores. Separate ANOVAs were calculated for each muscle. Results are reported with a Bonferroni correction for multiple comparisons. To confirm that EMG activity changed from baseline and that facial mimicry occurred, the EMG data were tested for a difference from zero (baseline) using one-sample, two-tailed t-tests.
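The median split and the one-sample baseline test can be sketched with the Python standard library. The actual analysis was run in SPSS; the helper names and the example data below are illustrative assumptions, not the study's data.

```python
import statistics
from math import sqrt

def median_split(scores):
    """Split subject indices into high/low groups at the empirical median."""
    med = statistics.median(scores)
    high = [i for i, s in enumerate(scores) if s > med]
    low = [i for i, s in enumerate(scores) if s <= med]
    return high, low

def one_sample_t(responses, mu=0.0):
    """t statistic for testing baseline-corrected EMG change against zero;
    compare |t| with the two-tailed critical value of t(n - 1)."""
    n = len(responses)
    sd = statistics.stdev(responses)       # sample SD, n - 1 denominator
    return (statistics.mean(responses) - mu) / (sd / sqrt(n))

# Illustrative QMEE scores and baseline-corrected EMG changes (in microvolts):
high, low = median_split([63, 20, 70, 15, 60, 25])
t = one_sample_t([0.50, 0.60, 0.40, 0.55, 0.45])
```

Note that with an even number of subjects, ties at the median all fall into the low group under this rule; other tie-breaking conventions are possible.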
Results

Subjects were divided into HE and LE groups by the empirically established median score on the QMEE questionnaire. The QMEE scores of the two groups differed significantly [t(30) = 7.660, p < 0.001; MHE = 63.44, SEHE = 4.16; MLE = 20.50, SELE = 3.75].
For the CS muscle, the ANOVA showed a significant main effect of emotional empathy group [F(1,30) = 9.440, p = 0.004, η2 = 0.239]: HE subjects (M = 0.500, SE = 0.101) reacted with stronger EMG activity than LE subjects (M = 0.060, SE = 0.101). Significant interactions of emotion × modality [F(1,30) = 4.353, p = 0.046, η2 = 0.127] and emotion × modality × emotional empathy group [F(1,30) = 4.978, p = 0.033, η2 = 0.142] were found. The latter interaction showed that (see Figure 1; for statistics see Table 2; Supplementary Table S1): (1) HE compared with LE subjects reacted with a stronger CS response to dynamic and static disgust as well as to dynamic and static fear facial expressions; (2) HE subjects reacted with a stronger CS response to static disgust than to static fear; (3) HE subjects reacted with higher EMG activity to dynamic than to static fear expressions (trend effect); (4) HE subjects reacted with higher EMG activity to static than to dynamic disgust expressions (trend effect).
FIGURE 1. Mean (±SE) EMG activity changes and corresponding statistics for the corrugator supercilii during presentation conditions, moderated by empathy group. Asterisks with lines beneath indicate significant differences between conditions (simple effects) in EMG responses: +p < 0.1, ∗p < 0.05, ∗∗p < 0.01. Asterisks preceded by “b” indicate significant differences from baseline EMG responses: b∗p < 0.05.
TABLE 2. Summary and test statistics of EMG activity changes for the corrugator supercilii during presentation conditions, moderated by empathy group.
No significant main effects of emotion [F(1,30) = 1.348, p = 0.255, η2 = 0.043] or modality [F(1,30) = 0.006, p = 0.937, η2 = 0.000] were found, nor were the following interactions significant: emotion × emotional empathy group [F(1,30) = 0.619, p = 0.438, η2 = 0.020], modality × emotional empathy group [F(1,30) = 0.028, p = 0.869, η2 = 0.001].
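As a consistency check, the partial eta squared values reported here can be recovered from the F statistic and its degrees of freedom via a standard identity (not stated in the original):

```latex
\eta_p^2 \;=\; \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
        \;=\; \frac{F \cdot df_1}{F \cdot df_1 + df_2}
```

For the empathy main effect on the CS, for instance, 9.440 × 1 / (9.440 × 1 + 30) ≈ 0.239, matching the reported value.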
For the LL muscle, the ANOVA showed significant main effects of emotional empathy group [F(1,30) = 6.255, p = 0.018, η2 = 0.173] and emotion [F(1,30) = 17.405, p < 0.001, η2 = 0.367], and a significant interaction of emotion × emotional empathy group [F(1,30) = 8.061, p = 0.008, η2 = 0.212]. The main effect of emotional empathy group showed that HE subjects (M = 0.302, SE = 0.060) reacted with stronger LL activity than LE subjects (M = 0.088, SE = 0.060). The main effect of emotion revealed that subjects responded with higher LL activity to disgust than to fear. The emotional empathy group × emotion interaction (see Figure 2; for statistics see Table 3; Supplementary Table S2) indicated that: (1) the LL reaction to disgusted facial expressions was higher in HE than in LE subjects; (2) in the HE group, LL activity was higher for disgust than for fear expressions.
FIGURE 2. Mean (±SE) EMG activity changes and corresponding statistics for the levator labii in pooled disgust and fear conditions, moderated by empathy group. Asterisks with lines beneath indicate significant differences between conditions (simple effects) in EMG responses: ∗∗p < 0.01, ∗∗∗p < 0.001. Asterisks preceded by “b” indicate significant differences from baseline EMG responses: b∗p < 0.05, b∗∗∗p < 0.001.
TABLE 3. Summary and test statistics of EMG activity changes for the levator labii in pooled disgust and fear conditions, moderated by empathy group.
No significant main effect of modality [F(1,30) = 0.397, p = 0.533, η2 = 0.013] was found, nor were the following interactions significant: modality × emotional empathy group [F(1,30) = 0.949, p = 0.338, η2 = 0.031], emotion × modality [F(1,30) = 0.012, p = 0.912, η2 = 0.000], and emotion × modality × emotional empathy group [F(1,30) = 0.016, p = 0.900, η2 = 0.001].
For the LF muscle, the ANOVA showed a significant main effect of emotion [F(1,30) = 10.395, p = 0.003, η2 = 0.257], and significant interactions of emotion × emotional empathy group [F(1,30) = 7.805, p = 0.009, η2 = 0.206], modality × emotional empathy group [F(1,30) = 5.098, p = 0.031, η2 = 0.145], and emotion × modality × emotional empathy group [F(1,30) = 5.211, p = 0.030, η2 = 0.148].
The main effect of emotion showed that subjects reacted to fear with stronger LF activity than to disgust. The emotional empathy group × emotion interaction showed: (1) a higher LF reaction in the HE group to fearful than to disgusted facial expressions; (2) HE subjects reacted to fear expressions with higher LF activity than LE subjects. The emotional empathy group × modality interaction indicated a higher LF reaction in the HE group to dynamic than to static facial expressions. The emotional empathy group × emotion × modality interaction showed (see Figure 3; for statistics see Table 4; Supplementary Table S3): (1) HE compared with LE subjects reacted with a stronger LF response to dynamic fear; (2) HE subjects reacted with a stronger LF response to dynamic fear than to dynamic disgust, and with stronger LF activity to static fear than to static disgust (trend effect); (3) HE subjects reacted with higher EMG activity to dynamic than to static fear expressions.
FIGURE 3. Mean (±SE) EMG activity changes and corresponding statistics for the lateral frontalis during presentation conditions, moderated by empathy group. Asterisks with lines beneath indicate significant differences between conditions (simple effects) in EMG responses: +p < 0.1, ∗p < 0.05. Asterisks preceded by “b” indicate significant differences from baseline EMG responses: b+p < 0.1, b∗p < 0.05.
TABLE 4. Summary and test statistics of EMG activity changes for the lateral frontalis during presentation conditions, moderated by empathy group.
Trend effects were observed for the modality factor [F(1,30) = 3.438, p = 0.074, η2 = 0.103] as well as for the emotion × modality interaction [F(1,30) = 3.734, p = 0.063, η2 = 0.111]. The trend-level main effect of modality indicated that subjects responded with higher LF activity to dynamic than to static facial expressions. The emotion × modality interaction showed: (1) higher LF activity for dynamic than for static facial expressions of fear; (2) LF activity for dynamic fear was higher than the EMG reaction for dynamic disgust; (3) LF activity for static fear was higher than the EMG reaction for static disgust.
No significant main effect of emotional empathy group was observed [F(1,30) = 1.076, p = 0.308, η2 = 0.035].
Discussion

The present study had two aims. First, we assessed whether facial mimicry occurs for the emotional expressions of fear and disgust, i.e., we tested for emotion-specific activity of the LF muscle for fear presentations and of the LL muscle for disgust presentations. As hypothesized, we showed that fear presentations induced activity in the LF muscle, while perception of disgust produced facial activity in the LL muscle; moreover, both emotions induced activity of the CS muscle. As noted in the introduction, the pattern of increased CS muscle activity for both fear and disgust may indicate that contraction of this muscle is associated with negative emotional valence. Besides being associated with anger, fear, disgust, and surprise (Murata et al., 2016), contraction of this muscle has been observed during disapproval (Cannon et al., 2011) and mental effort (Neumann and Strack, 2000). Thus, the activity of the CS could be a general index of global negative affect (Larsen et al., 2003). More importantly, our results demonstrate some emotion-specific patterns of EMG activity, i.e., LF muscle activity for fear and LL muscle activity for disgust. A possible interpretation of fear mimicry is as an emotional process indicating fear elicited by a social threat. For example, it has been shown (Moody et al., 2007) that after experiencing fear (watching fear-inducing film clips), subjects presented fearful expressions, as measured by increased frontalis activity. However, to date, it remains to be shown conclusively that activity of the frontalis muscle is a valid indicator of fearful expression. With respect to facial mimicry of disgust, our findings are in line with previous EMG studies demonstrating contraction of the LL muscle during observation of disgust (Vrana, 1993; Lundquist and Dimberg, 1995). Recently, Hühnel et al. (2014) showed increased activity of the LL muscle using dynamic facial expressions of disgust; however, this response was observed only in the older, not the younger, age group.
To conclude, our results suggest that individuals not only mimic positive and negative emotions by smiling and frowning, respectively, but also mimic discrete emotions such as fear and disgust. This supports the theory that facial mimicry is an automatic, innate, reflex-like mechanism that is activated in response to emotional states.
Our next goal was to investigate whether stimulus modality and empathic traits are associated with the magnitude of facial muscle activity during mimicry of fear and disgust. As hypothesized, the HE group reacted with larger CS activity than the LE group in all presented conditions. Moreover, in the HE group, the change in CS activity was greater in response to dynamic than to static fear stimuli. The same pattern, i.e., a stronger response to dynamic stimuli, was observed in the LF muscle for fear expressions in the HE group. In contrast, LE group responses in the CS and LF muscles did not differ between static and dynamic fear and disgust stimuli. In the HE group, the change in LL activity was greater in response to disgust than to fear stimuli, regardless of stimulus modality (dynamic vs. static). In the LE group, LL activity did not differ between fear and disgust stimuli.
The results concerning empathic traits agree with previous EMG studies in which highly empathic subjects showed greater mimicry of emotional expressions of happiness and anger (Sonnby-Borgstrom, 2002; Sonnby-Borgström et al., 2003; Dimberg et al., 2011). Recently, it has been shown that in highly empathic individuals, large-amplitude EMG responses of the CS muscle were associated not only with facial expressions of anger but also with fear (Balconi and Canavesio, 2016) and disgust (Balconi and Canavesio, 2013). The authors concluded that facial EMG measures may function as a biological marker for the processes associated with sharing emotion. Our results are broadly in line with the hypothesis (MacDonald, 2003; Dimberg et al., 2011) that automatic mimicry may be a component of emotional empathy.
A recent series of studies examining empathy (Balconi and Canavesio, 2013, 2016) has shown a direct relationship between facial EMG responses and the activity of the prefrontal cortex. Consistently, many neuroimaging studies investigating empathy report that people with higher empathic dispositions show higher activation of empathy-related brain structures such as the anterior insula (Hein and Singer, 2008), inferior frontal gyrus (Saarela et al., 2006), amygdala (Decety, 2010b), and prefrontal areas (Rameson et al., 2012). Furthermore, it has been shown (Balconi et al., 2011) that temporary inhibition of the medial prefrontal cortex (MPFC) by repetitive transcranial magnetic stimulation (rTMS) impairs facial mimicry of angry and fearful faces in the ZM and CS muscles. On the other hand, excitatory high-frequency rTMS of the MPFC enhances mimicry of facial expressions in the CS and ZM muscles during an empathic, emotional task (Balconi and Canavesio, 2013). Recently, Korb et al. (2015) found that inhibition (rTMS) of both the right primary motor cortex (M1) and the right primary somatosensory cortex (S1), considered part of the MNS (for review see Pineda, 2008), also led to reduced facial mimicry. Together, these data suggest that the increased mimicry of facial expressions in highly empathic individuals is mediated by greater activation of empathy-related neural networks.
In our study, EMG responses to facial expressions of fear and disgust did not differ in the LE group. A similar finding was reported by Dimberg et al. (2011) for expressions of happiness and anger. It remains an open question whether the lack of differentiated EMG activity reflects an inability in this group both to mimic and to react emotionally to facial stimuli. Some explanation comes from a recent study in which BOLD and facial EMG were simultaneously measured in an MRI scanner (Likowski et al., 2012). It was shown that congruent facial reactions recorded from the CS and ZM during passive perception of static happy, sad, and angry facial expressions corresponded to activity in prominent parts of the MNS (i.e., the inferior frontal gyrus) as well as in areas responsible for emotional processing (i.e., the insula). Thus, the authors suggested that facial mimicry simultaneously involves not only motor but also affective neural systems. Recently, Wood et al. (2016) proposed that automatic mimicry reflects an underlying “sensorimotor simulation” that may support understanding the emotions of others. The authors suggested that processing facial expressions in others activates the motor as well as somatosensory neuronal processes involved in producing the facial expression. Moreover, this sensorimotor simulation may elicit an associated emotional state, resulting in accurate understanding of emotion in others. Furthermore, it seems that automatic mimicry does not always occur, e.g., when the subject is not motivated to engage in understanding the other person (Carr et al., 2014). Therefore, it could be suggested that low-empathy subjects, who showed weaker facial mimicry, neither imitate facial expressions nor share the emotions of others. On the other hand, highly empathic individuals may be more likely to imitate and show facial mimicry because they ‘feel’ the emotions of others.
In line with neuroimaging studies examining the perception of facial emotional expressions (van der Gaag et al., 2007; Kessler et al., 2011), it may be assumed that the stronger facial muscle activity in response to dynamic vs. static stimuli reflects stronger activation of sensorimotor and emotion-related brain structures in highly empathic subjects. Future studies, such as ones simultaneously measuring BOLD and facial EMG in an MRI scanner with high- and low-empathic subjects, are warranted to address this issue.
In this study, dynamic stimuli led to enhanced FM in the HE group only, in particular for expressions of fear in the CS and LF muscles. Contrary to our assumption, dynamic compared to static disgust displays did not lead to enhanced facial muscle responses in any of the muscles. Moreover, we found that HE subjects showed a stronger CS response to static compared to dynamic disgust displays. This finding is not straightforward to interpret, because disgust, like fear, conveys information that potentially affects survival (Rozin and Haidt, 2013), so the dynamic modality could be an important factor favoring the avoidance of danger. On the other hand, it has been suggested that fear and disgust often involve divergent mechanisms at the physiological level (Krusemark and Li, 2011). Fear tends to activate sympathetic pathways, prompting the fight-or-flight response, while disgust activates parasympathetic responses, reducing heart rate, blood pressure, and respiration (Ekman et al., 1983). Accordingly, Susskind et al. (2008) reported that subjects show enhanced sensory acquisition (e.g., faster eye movements, greater inspiratory air velocity) when expressing fear, and the opposite pattern when expressing disgust. Importantly, the two emotions are represented by different neural networks. It has been shown that fear is associated with activation of brain structures involved in the automatic detection of evolutionarily relevant threats, mainly the amygdala (van der Zwaag et al., 2012), while disgust increases activity in, among other structures, the insula, which is connected to the sensory domain, i.e., the sensation of bad taste (Nieuwenhuys, 2012). Based on the aforementioned studies, it could be debated whether the dynamic modality of stimuli plays a different role in the perception of fear and disgust. It is possible that dynamic fear expressions, in particular, convey more complex cues important for avoiding threats.
Consistent with this assumption, Hoffmann et al. (2013) found that fear was recognized more accurately from dynamic than from static stimuli, whereas the modality factor did not improve recognition of disgust. Some authors have proposed that certain expressions rely more on motion representation than others (Cunningham and Wallraven, 2009; Fiorentini and Viviani, 2011). To sum up, the stronger mimicry we observed in the CS to static compared to dynamic disgust in the HE group may result from an interaction of two factors: highly empathic people are more prone to mimic facial emotions, and/or properties of the stimuli themselves matter, i.e., static images of disgust may be more mimicry-engaging than dynamic ones. Further studies are warranted to evaluate the role of stimulus modality and empathy traits in facial mimicry for disgust.
Our findings partially confirm the influence of dynamic facial expressions on facial mimicry, i.e., there was increased mimicry for dynamic compared to static fear expressions, especially in the HE group. These results are broadly consistent with the notion that the benefits of dynamic stimuli arise from the motion representation itself, i.e., the unfolding of the emotion can convey intention more strongly and enrich the emotional expression compared to static displays (Ambadar et al., 2005). This explanation is in line with neuroimaging findings showing that the perception of dynamic facial displays engages brain regions sensitive to the processing of emotional stimuli (Kilts et al., 2003), the signaling of intentions (Gallagher et al., 2000), as well as the MNS, e.g., the inferior frontal gyrus (e.g., LaBar et al., 2003; Sato et al., 2004; van der Gaag et al., 2007; Trautmann et al., 2009; Kessler et al., 2011). Based on these findings, we propose that highly empathic subjects, because of their personal characteristics, are particularly sensitive and responsive to the dynamic facial emotional expressions of others.
In summary, our results highlight the importance of stimulus modality and subjects’ empathy traits in the mimicry of facial expressions of biologically relevant emotions, i.e., fear and disgust. Together with other findings, our data confirm an emotion-specific response pattern of the LF for fearful expressions (Lundquist and Dimberg, 1995) and of the LL for facial expressions of disgust (Vrana, 1993). Consistent with our prediction, there was no emotion-specific effect for the CS, indicating that activity of this muscle is generally related to negatively valenced stimuli (Bradley et al., 2001). Our results further show that EMG recordings of the LL and LF provide useful measures of empathic emotional responses. Future studies in natural settings are warranted to elucidate the mutual links between emotional empathy and FM.
Conceived and designed the experiments: KR and ŁŻ. Performed the experiments: KR and ŁŻ. Analyzed the data: KR and ŁŻ. Contributed materials: KR and ŁŻ. Wrote the paper: KR, ŁŻ, KJ-S, and IS.
This study was supported by grant no. 2011/03/B/HS6/05161 from the Polish National Science Centre, awarded to KR.
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/article/10.3389/fpsyg.2016.01853/full#supplementary-material
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Ambadar, Z., Schooler, J. W., and Cohn, J. F. (2005). Deciphering the enigmatic face: the importance of facial dynamics in interpreting subtle facial expressions. Psychol. Sci. 16, 403–410. doi: 10.1111/j.0956-7976.2005.01548.x
Balconi, M., Bortolotti, A., and Gonzaga, L. (2011). Emotional face recognition, EMG response, and medial prefrontal activity in empathic behaviour. Neurosci. Res. 71, 251–259. doi: 10.1016/j.neures.2011.07.1833
Balconi, M., and Canavesio, Y. (2013). High-frequency rTMS improves facial mimicry and detection responses in an empathic emotional task. Neuroscience 236, 12–20. doi: 10.1016/j.neuroscience.2012.12.059
Balconi, M., and Canavesio, Y. (2016). Empathy, approach attitude, and rTMS on left DLPFC affect emotional face recognition and facial feedback (EMG). J. Psychophysiol. 30, 17–28. doi: 10.1027/0269-8803/a000150
Balconi, M., Vanutelli, M. E., and Finocchiaro, R. (2014). Multilevel analysis of facial expressions of emotion and script: self-report (arousal and valence) and psychophysiological correlates. Behav. Brain Funct. 10, 32. doi: 10.1186/1744-9081-10-32
Bradley, M. M., Codispoti, M., Cuthbert, B. N., and Lang, P. J. (2001). Emotion and motivation I: defensive and appetitive reactions in picture processing. Emotion 1, 276–298. doi: 10.1037/1528-3542.1.3.276
Cacioppo, J. T., Tassinary, L. G., and Berntson, G. G. (2007). “Psychophysiological science: interdisciplinary approaches to classic questions about the mind,” in Handbook of Psychophysiology, eds J. T. Cacioppo, L. G. Tassinary, and G. Berntson (Cambridge: Cambridge University Press), 1–17. doi: 10.1017/CBO9780511546396
Cannon, P. R., Schnall, S., and White, M. (2011). Transgressions and expressions: affective facial muscle activity predicts moral judgments. Soc. Psychol. Personal. Sci. 2, 325–331. doi: 10.1177/1948550610390525
Carr, E. W., Winkielman, P., and Oveis, C. (2014). Transforming the mirror: power fundamentally changes facial responding to emotional expressions. J. Exp. Psychol. Gen. 143, 997–1003. doi: 10.1037/a0034972
Carr, L., Iacoboni, M., Dubeau, M.-C., Mazziotta, J. C., and Lenzi, G. L. (2003). Neural mechanisms of empathy in humans: a relay from neural systems for imitation to limbic areas. Proc. Natl. Acad. Sci. 100, 5497–5502. doi: 10.1073/pnas.0935845100
Cunningham, D. W., and Wallraven, C. (2009). “The interaction between motion and form in expression recognition,” in Proceedings of the 6th Symposium on Applied Perception in Graphics and Visualization (Chania), 41–44. doi: 10.1145/1620993.1621002
Ekman, P., and Rosenberg, E. L. (2005). What the Face Reveals. Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford: Oxford University Press, doi: 10.1093/acprof:oso/9780195179644.001.0001
Gallagher, H., Happé, F., Brunswick, N., Fletcher, P., Frith, U., and Frith, C. (2000). Reading the mind in cartoons and stories: an fMRI study of “theory of mind” in verbal and nonverbal tasks. Neuropsychologia 38, 11–21. doi: 10.1016/S0028-3932(99)00053-6
Hess, U., and Blairy, S. (2001). Facial mimicry and emotional contagion to dynamic emotional facial expressions and their influence on decoding accuracy. Int. J. Psychophysiol. 40, 129–141. doi: 10.1016/S0167-8760(00)00161-6
Hoffmann, H., Traue, H. C., Limbrecht-Ecklundt, K., Walter, S., and Kessler, H. (2013). Static and dynamic presentation of emotions in different facial areas: fear and surprise show influences of temporal and spatial properties. Psychology 4, 663–668. doi: 10.4236/psych.2013.48094
Hühnel, I., Fölster, M., Werheid, K., and Hess, U. (2014). Empathic reactions of younger and older adults: no age related decline in affective responding. J. Exp. Soc. Psychol. 50, 136–143. doi: 10.1016/j.jesp.2013.09.011
Jankowiak-Siuda, K., Rymarczyk, K., Żurawski, Ł., Jednoróg, K., and Marchewka, A. (2015). Physical attractiveness and sex as modulatory factors of empathic brain responses to pain. Front. Behav. Neurosci. 9:236. doi: 10.3389/fnbeh.2015.00236
Kessler, H., Doyen-Waldecker, C., Hofer, C., Hoffmann, H., Traue, H. C., and Abler, B. (2011). Neural correlates of the perception of dynamic versus static facial expressions of emotion. Psychosoc. Med. 8, 8. doi: 10.3205/psm000072
Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., and Hoffman, J. M. (2003). Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. Neuroimage 18, 156–168. doi: 10.1006/nimg.2002.1323
Korb, S., Grandjean, D., and Scherer, K. R. (2010). Timing and voluntary suppression of facial mimicry to smiling faces in a Go/NoGo task—An EMG study. Biol. Psychol. 85, 347–349. doi: 10.1016/j.biopsycho.2010.07.012
Korb, S., Malsert, J., Rochas, V., Rihs, T. A., Rieger, S. W., Schwab, S., et al. (2015). Gender differences in the neural network of facial mimicry of smiles–An rTMS study. Cortex 70, 101–114. doi: 10.1016/j.cortex.2015.06.025
Korb, S., Malsert, J., Strathearn, L., Vuilleumier, P., and Niedenthal, P. (2016). Sniff and mimic–Intranasal oxytocin increases facial mimicry in a sample of men. Horm. Behav. 84, 64–74. doi: 10.1016/j.yhbeh.2016.06.003
Krusemark, E. A., and Li, W. (2011). Do all threats work the same way? Divergent effects of fear and disgust on sensory perception and attention. J. Neurosci. 31, 3429–3434. doi: 10.1523/JNEUROSCI.4394-10.2011
LaBar, K. S., Crupain, M. J., Voyvodic, J. T., and McCarthy, G. (2003). Dynamic perception of facial affect and identity in the human brain. Cereb. Cortex 13, 1023–1033. doi: 10.1093/cercor/13.10.1023
Larsen, J. T., Norris, C. J., and Cacioppo, J. T. (2003). Effects of positive and negative affect on electromyographic activity over zygomaticus major and corrugator supercilii. Psychophysiology 40, 776–785. doi: 10.1111/1469-8986.00078
Likowski, K. U., Mühlberger, A., Gerdes, A. B. M., Wieser, M. J., Pauli, P., and Weyers, P. (2012). Facial mimicry and the mirror neuron system: simultaneous acquisition of facial electromyography and functional magnetic resonance imaging. Front. Hum. Neurosci. 6:214. doi: 10.3389/fnhum.2012.00214
Magnée, M. J. C. M., de Gelder, B., van Engeland, H., and Kemner, C. (2007). Facial electromyographic responses to emotional information from faces and voices in individuals with pervasive developmental disorder. J. Child Psychol. Psychiatry 48, 1122–1130. doi: 10.1111/j.1469-7610.2007.01779.x
Magnée, M. J. C. M., Stekelenburg, J. J., Kemner, C., and de Gelder, B. (2007). Similar facial electromyographic responses to faces, voices, and body expressions. Neuroreport 18, 369–372. doi: 10.1097/WNR.0b013e32801776e6
Moody, E. J., McIntosh, D. N., Mann, L. J., and Weisser, K. R. (2007). More than mere mimicry? The influence of emotion on rapid facial reactions to faces. Emotion 7, 447–457. doi: 10.1037/1528-3542.7.2.447
Murata, A., Saito, H., Schug, J., Ogawa, K., and Kameda, T. (2016). Spontaneous facial mimicry is enhanced by the goal of inferring emotional states: evidence for moderation of “automatic” mimicry by higher cognitive processes. PLoS ONE 11:e0153128. doi: 10.1371/journal.pone.0153128
Neumann, R., and Strack, F. (2000). Approach and avoidance: the influence of proprioceptive and exteroceptive cues on encoding of affective information. J. Pers. Soc. Psychol. 79, 39–48. doi: 10.1037/0022-3514.79.1.39
Niedenthal, P. M., Brauer, M., Halberstadt, J. B., and Innes-Ker, ÅH. (2001). When did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression. Cogn. Emot. 15, 853–864. doi: 10.1080/02699930143000194
Pineda, J. A. (2008). Sensorimotor cortex as a critical component of an “extended” mirror neuron system: does it solve the development, correspondence, and control problems in mirroring? Behav. Brain Funct. 4, 47. doi: 10.1186/1744-9081-4-47
Rameson, L. T., Morelli, S. A., and Lieberman, M. D. (2012). The neural correlates of empathy: experience, automaticity, and prosocial behavior. J. Cogn. Neurosci. 24, 235–245. doi: 10.1162/jocn_a_00130
Rymarczyk, K., Biele, C., Grabowska, A., and Majczynski, H. (2011). EMG activity in response to static and dynamic facial expressions. Int. J. Psychophysiol. 79, 330–333. doi: 10.1016/j.ijpsycho.2010.11.001
Rymarczyk, K., Żurawski, Ł., Jankowiak-Siuda, K., and Szatkowska, I. (2016). Do dynamic compared to static facial expressions of happiness and anger reveal enhanced facial mimicry? PLoS ONE 11:e0158534. doi: 10.1371/journal.pone.0158534
Saarela, M. V., Hlushchuk, Y., Williams, A. C. D. C., Schurmann, M., Kalso, E., and Hari, R. (2006). The compassionate brain: humans detect intensity of pain from another’s face. Cereb. Cortex 17, 230–237. doi: 10.1093/cercor/bhj141
Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., and Matsumura, M. (2004). Enhanced neural activity in response to dynamic facial expressions of emotion: an fMRI study. Cogn. Brain Res. 20, 81–91. doi: 10.1016/j.cogbrainres.2004.01.008
Sonnby-Borgström, M., Jönsson, P., and Svensson, O. (2003). Emotional empathy as related to mimicry reactions at different levels of information processing. J. Nonverbal Behav. 27, 3–23. doi: 10.1023/A:1023608506243
Trautmann, S. A., Fehr, T., and Herrmann, M. (2009). Emotions in motion: dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Res. 1284, 100–115. doi: 10.1016/j.brainres.2009.05.075
van der Schalk, J., Fischer, A., Doosje, B., Wigboldus, D., Hawk, S., Rotteveel, M., et al. (2011). Convergent and divergent responses to emotional displays of ingroup and outgroup. Emotion 11, 286–298. doi: 10.1037/a0022582
van der Zwaag, W., Da Costa, S. E., Zürcher, N. R., Adams, R. B., and Hadjikhani, N. (2012). A 7 tesla fMRI study of amygdala responses to fearful faces. Brain Topogr. 25, 125–128. doi: 10.1007/s10548-012-0219-0
Weyers, P., Mühlberger, A., Hefele, C., and Pauli, P. (2006). Electromyographic responses to static and dynamic avatar emotional facial expressions. Psychophysiology 43, 450–453. doi: 10.1111/j.1469-8986.2006.00451.x
Weyers, P., Mühlberger, A., Kund, A., Hess, U., and Pauli, P. (2009). Modulation of facial reactions to avatar emotional faces by nonconscious competition priming. Psychophysiology 46, 328–335. doi: 10.1111/j.1469-8986.2008.00771.x
Wolf, K., Mass, R., Ingenbleek, T., Kiefer, F., Naber, D., and Wiedemann, K. (2005). The facial pattern of disgust, appetence, excited joy and relaxed joy: an improved facial EMG study. Scand. J. Psychol. 46, 403–409. doi: 10.1111/j.1467-9450.2005.00471.x
Keywords: facial mimicry, empathy, fear, disgust, static, dynamic, facial expressions
Citation: Rymarczyk K, Żurawski Ł, Jankowiak-Siuda K and Szatkowska I (2016) Emotional Empathy and Facial Mimicry for Static and Dynamic Facial Expressions of Fear and Disgust. Front. Psychol. 7:1853. doi: 10.3389/fpsyg.2016.01853
Received: 03 August 2016; Accepted: 09 November 2016;
Published: 23 November 2016.
Edited by: Maurizio Codispoti, University of Bologna, Italy
Reviewed by: Sebastian Korb, University of Vienna, Austria
Phoebe E. Bailey, Western Sydney University, Australia
Copyright © 2016 Rymarczyk, Żurawski, Jankowiak-Siuda and Szatkowska. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.