Mini Review Article

Front. Psychol., 13 March 2015 | https://doi.org/10.3389/fpsyg.2015.00255

Independence of face identity and expression processing: exploring the role of motion

  • 1School of Psychological Sciences, University of Manchester, Manchester, UK
  • 2School of Social Sciences, Business and Law, Teesside University, Middlesbrough, UK

According to the classic Bruce and Young (1986) model of face recognition, identity and emotional expression information from the face are processed in parallel and independently. Since this functional model was published, a growing body of research has challenged this viewpoint, instead supporting an interdependence view. In addition, neural models of face processing emphasize differences in the processing of changeable and invariant aspects of faces. This article provides a critical appraisal of this literature and discusses the role of motion in both expression and identity recognition, and the intertwined nature of identity, expression and motion processing. We conclude by discussing recent advances in this area and research questions that still need to be addressed.

Introduction

A controversial issue in studies of face processing is whether facial identity and emotion are processed independently or interactively (see Posamentier and Abdi, 2003; Calder and Young, 2005). Early functional models of face recognition, like the Bruce and Young (1986) model, suggest that facial identity and emotional expression are processed in parallel and independently. However, there is evidence to support both the independence and interdependence of identity and expression processing.

Independence between Identity and Expression Processing

Support for the independent parallel route viewpoint comes from different sources. Firstly, neuropsychological studies show double dissociations whereby some patients show impaired recognition of face identity (prosopagnosia) but not emotional expression, or vice versa (e.g., Kurucz and Feldmar, 1979; Bruyer et al., 1983; Tranel et al., 1988). Whilst these results are compelling, they may be biased by methodological difficulties (unusual methods of scoring, absence of control data; Calder and Young, 2005) or patients may adopt atypical strategies (see Adolphs et al., 2005).

Secondly, studies with non-impaired participants also provide some support for independence. For example, Young et al. (1986) found no difference in reaction times when making expression-matching decisions to familiar and unfamiliar faces. Additionally, Strauss and Moscovitch (1981) found that while face identity and expression perception both show a left visual field superiority, they could be differentiated in terms of overall processing time. Furthermore, Etcoff (1984) found evidence for independence using the Garner (1974) selective attention paradigm (but see later work outlined in the Interdependence section).

Thirdly, studies using non-human primates have suggested that different cortical cell populations are sensitive to facial identity and facial expression (e.g., Perrett et al., 1984; Hasselmo et al., 1989; Hadj-Bouziane et al., 2008). This suggestion has also been supported in human studies using positron emission tomography (Sergent et al., 1994) and fMRI (Haxby et al., 2000; Winston et al., 2004). These findings are consistent with, though not conclusive evidence for, the idea of independent facial identity and expression processing.

Interdependence between Identity and Expression Processing

Despite substantial evidence supporting the existence of dissociable systems, a growing number of studies suggest that the processing of facial identity and emotional expression is interdependent (see Fitousi and Wenger, 2013 for a review). To fully understand the dependence or independence of information processing during a given task, it is first important to know which information that task requires. For example, resolving the different tasks of identity and expression categorization (using the same stimulus) requires different face information (e.g., Morrison and Schyns, 2001; Schyns et al., 2002). Before considering this issue, we outline more classic research on interdependence.

Schweinberger and Soukup (1998) suggested that an asymmetric relationship exists between identity processing and expression processing. Using the Garner (1974) selective attention paradigm, they found that the speed of identity classification judgments is unaffected by irrelevant variations in expression, but that the reverse does not hold (also see Schweinberger et al., 1999; Goshen-Gottstein and Ganel, 2000; Baudouin et al., 2002; Wang et al., 2013). In addition, earlier work does not take into account the possibility that an interaction between dimensions could arise at the level of decision processes rather than perceptual representations. Multidimensional signal detection analysis can be used to explore this issue (Fitousi and Wenger, 2013). Using this technique, Soto et al. (2015) found that the perception of emotional expressions was not affected by changes in identity, but the perception of identity was affected by changes in emotional expression. Thus, over and above any decisional interactions present in the data, emotional expression and identity also interacted perceptually.

Interestingly, a “smiling” effect has been found whereby happy expressions influence identity judgements. Specifically, seeing smiling faces has been found to aid the recognition and/or encoding of identity (Kottoor, 1989) and the naming of famous faces (Gallegos and Tranel, 2005). Kaufmann and Schweinberger (2004) demonstrated that famous faces were recognized more quickly when displaying moderately positive expressions, relative to more intensely happy or angry faces. Later work found reduced judgements of face familiarity for faces with negative expressions, compared with faces bearing neutral or positive expressions (Lander and Metcalfe, 2007). These results support the notion of interdependence between expression and identity processing.

More recent studies using an adaptation methodology further support this viewpoint. Emotion aftereffects for individual expressions (when one of the target expressions matches the adapting face) are modulated by identity, with aftereffects in the same-identity condition larger than in the different-identity condition (Campbell and Burke, 2009; Vida and Mondloch, 2009). These results were taken as evidence for visual representations of facial expression that are both independent of and dependent on identity (Fox and Barton, 2007; Ellamil et al., 2008; Pell and Richards, 2013).

Finally, computational work has also supported a possible overlap between representations of identity and expression (see Calder et al., 2001; Calder and Young, 2005), and imaging studies have found overlapping activation patterns during identity and facial expression recognition tasks (e.g., LaBar et al., 2003; Ganel et al., 2005). These converging results suggest an effect of facial expression on identity recognition, and are at odds with the original Bruce and Young (1986) model, which proposes that changes in facial expression should not influence identity recognition.

Changeable and Invariant Aspects of Faces

Newer models of face perception refer to neural processing. Haxby et al. (2000) propose two functionally and neurologically distinct pathways for face analysis: a lateral pathway that preferentially responds to changeable aspects of faces (including expressions) and a ventral pathway that preferentially responds to invariant aspects of faces (identity). Visuo-perceptual representations of changeable facial aspects, including expressions, are thought to be mediated by the superior temporal sulcus, while visuo-perceptual representations of invariant characteristics of a face, such as those underlying the recognition of identity, are coded by the lateral fusiform gyrus (Haxby et al., 2000). Here, as in the Bruce and Young (1986) functional account, independence is proposed between the processing of identity and expression, but the distinction drawn between changeable (expression) and invariant (identity) aspects of face processing is anatomical rather than functional.

While it is clear that facial expression processing can impact identity processing, almost all previous work has utilized static images as stimuli. Since faces are normally seen in motion, we argue that this approach is limiting. To demonstrate this issue, we first outline research looking at the impact of motion on face identity and expression processing, before assessing the intertwined nature of identity, expression and motion processing. Indeed, a familiar person’s characteristic facial expressions (for example, their wry smile) aid recognition of their identity, just as the unique structure of an individual’s face influences the way their emotions are expressed. Here, we note that facial expressions contain static and dynamic components. Similarly, when recognizing identity, a dynamic clip also contains static and dynamic components. Importantly, the dynamic components involved in expression and identity processing may be intrinsically linked and may involve the same information. We conclude by reviewing the questions that remain to be answered in this research area.

Movement and the Recognition of Identity

Much previous research has assumed that only invariant aspects of the face provide identity-relevant information. However, a substantial body of research has demonstrated that changeable aspects of a face also contribute to identity recognition, an effect referred to as the “motion advantage” (e.g., Schiff et al., 1986; Knight and Johnston, 1997; Pike et al., 1997; Lander et al., 1999; O’Toole et al., 2002; Lander and Davies, 2007). A face can produce rigid or non-rigid motion. During rigid facial movements the face maintains its three-dimensional form while the whole head changes its relative position and/or orientation. During non-rigid motion, individual parts of the face move in relation to one another, for example during speech or expressions. Both types of motion are posited to be independent of identity processing in the Bruce and Young (1986) account, yet seeing a face move facilitates the encoding and recognition of facial identity (e.g., Hill and Johnston, 2001; Knappmeyer et al., 2003; Pilz et al., 2006). More specifically, non-rigid facial movement supports faster and more accurate face matching (Thornton and Kourtzi, 2002) and better learning of unfamiliar faces (Lander and Bruce, 2003; Butcher et al., 2011; for rigid motion, see Pike et al., 1997), and aids accurate identification of degraded familiar faces (Knight and Johnston, 1997; Lander et al., 2001).

Several theories have explained why movement facilitates identity recognition (O’Toole et al., 2002). Firstly, movement may allow people to build a better three-dimensional representation of the face and head via structure-from-motion processes (representation enhancement hypothesis); secondly, people may learn the characteristic motion patterns of the face and head of a person (supplemental information hypothesis); thirdly, the social cues carried in movement (emotional expressions, speech) may attract attention to the identity specific areas of the face, facilitating identity processing (social signals hypothesis).

Although findings of a movement advantage are robust, several studies have found that movement is primarily useful when static face recognition is impaired in some way (e.g., negation, Knight and Johnston, 1997; blurring, Lander et al., 2001). Interestingly, recent research has also demonstrated that developmental prosopagnosics are able to match, recognize and learn moving faces better than static ones (Steede et al., 2007; Longmore and Tree, 2013; Bennetts et al., 2015). Taken together, these findings suggest that changeable aspects of a face can constitute a useful supplementary cue for face recognition, particularly when recognition is impaired by degradation of stimuli or by perceiver impairment (also see Xiao et al., 2014).

Movement and the Recognition of Expression

As with identity research, past research on facial expression processing has typically utilized static facial images. However, expressions are changeable and dynamic in nature. Ordinarily, people view dynamic facial expressions that change rapidly over time, rather than static images of an expression “apex.” It is known that we are extremely sensitive to subtle dynamic cues (Edwards, 1998) and to changes in natural facial dynamics (Dobs et al., 2014). Furthermore, dynamic aspects of facial movement (e.g., speed of onset/offset) are useful when distinguishing genuine from posed expressions (Hess and Kleck, 1990), and differences between expressions are often reflected in their temporal dynamic properties (Ekman et al., 1985). Jack et al. (2014) propose that there are four basic emotional expressions, perceptually segmented across time. Furthermore, dynamic facial expressions are known to be recognized more accurately (Trautmann et al., 2009) and more quickly (Recio et al., 2011; but see Fiorentini and Viviani, 2011) than static expressions (see Krumhuber et al., 2013 for a review).

Further experimental evidence for the importance of dynamic information during expression recognition has been found in point-light experiments (Matsuzaki and Sato, 2008), experiments using subtle expressions (Ambadar et al., 2005) and those that impose time pressure (Zhongqing et al., 2014). Interestingly, Kamachi et al. (2001) found that the dynamic characteristics of the observed motion affected how well different morphed expressions could be recognized: sadness was most accurately identified from slow sequences, happiness and surprise from fast sequences, and anger from medium-speed sequences. Dynamic characteristics may also be important in the “angry superiority effect” (Ceccarini and Caudek, 2013). Pollick et al. (2003) found that changing the duration of an expression affected ratings of emotional intensity, with shorter durations tending to receive lower intensity ratings (also see Bould and Morris, 2008). Finally, Gill et al. (2014) showed that dynamic facial expressions override social judgements based on static face morphology.

In early work, Humphreys et al. (1993) reported the case of a prosopagnosic patient who could make expression judgements from moving (but not static) displays, consistent with the idea of dissociable static and dynamic expression processing. Trautmann et al. (2009) used fMRI to examine the neural networks involved in the perception of static and dynamic facial expressions. Dynamic faces elicited enhanced emotion-specific activation in the parahippocampal gyrus (including the amygdala), fusiform gyrus, superior temporal gyrus, inferior frontal gyrus, and occipital and orbitofrontal cortex. Post hoc ratings of the dynamic stimuli revealed better recognizability in comparison with the static stimuli (but see Trautmann-Lengsfeld et al., 2013).

Concluding Comments and Future Directions

Thus, the literature reviewed demonstrates that expression processing can impact face identification, and that movement more broadly influences both face identification and expression recognition. It seems plausible that this is because facial motion concurrently contains both identity-specific and expression information which, on an everyday basis, are processed simultaneously. Indeed, understanding the emotional facial expressions of others and being able to identify those individuals are both important for daily social functioning. Typically a face moves in a complex manner, combining rigid rotational and non-rigid movements (O’Toole et al., 2002). However, most studies investigating the role of motion in identity recognition utilize relatively unspecified speaking and expressive movements. Future research should systematically investigate the effect of different types of motion on both identity and expression recognition. In addition, it is difficult to separate out the impact of motion and expression, as even seeing a static facial expression may activate the brain areas associated with producing that action ourselves. This notion is concordant with research showing that the “classical” mirror neuron system (premotor and parietal areas), limbic regions, and the somatosensory system become spontaneously active both during the monitoring of facial expressions and during the production of similar facial expressions (van der Gaag et al., 2007). van der Gaag et al. (2007) used only moving stimuli, so it remains unclear whether similar mirror neuron activation is evident when the perceiver sees only the consequence of an expressive action (e.g., a smiling action) in the form of a static expression (e.g., a smile). It is interesting to consider what additional questions remain in this rapidly progressing research area.

Firstly, given the importance of motion for the recognition of both identity and expressions, we need to determine whether neural models like that of Haxby et al. (2000) can account for the importance of motion when recognizing identity. This question is the focus of neuroimaging work that aims to determine the neural activity involved in processing moving and static faces (see Fox et al., 2009; Schultz and Pilz, 2009; Ichikawa et al., 2010; Pitcher et al., 2011a; Schultz et al., 2013). Indeed, recent research by Pitcher et al. (2014) suggests that the dynamic and static components of a face are processed via dissociable cortical pathways. Pitcher et al. (2014) revealed a double dissociation between the responses to moving and static faces: theta-burst transcranial magnetic stimulation (TBS) delivered over the right occipital face area (OFA) reduced the response to static but not moving faces in the right posterior STS (rpSTS), while TBS delivered over the rpSTS itself reduced the response to dynamic but not static faces. Interestingly, they found that these dissociable pathways originate early in the visual cortex, not in the OFA, a finding that opposes prevailing models of face perception (Haxby et al., 2000; Calder and Young, 2005; Pitcher et al., 2011b) and indicates that we may need to reconsider how faces are cortically represented.

A second issue concerns whether motion mediates the relationship between identity and expression processing. Stoesz and Jakobson (2013) used a speeded Garner task and found a difference between static and moving stimuli: there was no support for independence with static faces, but when the faces were moving, participants’ identity and expression judgments were unaffected by variations in the irrelevant dimension, supporting independence for moving faces. Moreover, using similar methods, Rigby et al. (2013) found that dynamic facial information reduced the interference between upright facial identity and emotion processing. These findings indicate that static facial identity and emotional information may interfere with one another, whereas moving faces seem to promote the separation of facial identity and emotion expression processing. Future experimental work needs to investigate the role of motion in mediating the independence of identity and expression processing, by directly comparing independence across different methodologies with both static and moving stimuli.

A third issue links to the fact that, in order to fully understand dependence or independence during a given task, it is first necessary to know which information that task requires. It is known that different visual categorization tasks (e.g., face identity, expression or gender) are sensitive to distinct visual characteristics of the same image (Schyns et al., 2002). For example, research suggests that a central band of spatial frequencies is particularly useful for identifying faces (e.g., Fiorentini et al., 1983; Parker and Costen, 1999). Specific methods (e.g., bubbles; Schyns et al., 2002) have been used to isolate the information required for identity and expression recognition. Whilst some of the diagnostic cues required to identify an expression and a face may be distinct, others, like facial motion, may overlap. Similar methodologies should be adopted in the future to isolate which aspects of facial motion are diagnostic of face identity and expression.

A further issue concerns additional distinctions that can be made regarding the type of motion shown by a face. Facial movements may or may not involve expressions, and expressional movements may carry significant emotional content or little affective content. Future work may be able to uncouple the impact of expressional and non-expressional movement on the processing of facial identity. Furthermore, current findings may be modulated by other factors such as gender (see Herlitz and Lovén, 2013) or race (e.g., Hugenberg et al., 2007), and these should also be explored to gain a more representative understanding of the question (e.g., Henrich et al., 2010). For example, cultural specificities in static (Elfenbein and Ambady, 2003; Marsh et al., 2003) or dynamic facial expressions (Jack et al., 2012) may produce different patterns of information independence across cultures. Lastly, future neuroscience investigations are needed to probe the distinct neural activity associated with moving face processing, focusing on expressional and non-expressional movements. These lines of enquiry will be important as they address how expression and identity processing are intertwined, and how motion mediates this relationship.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Adolphs, R., Gosselin, F., Buchanan, T. W., Tranel, D., Schyns, P., and Damasio, A. R. (2005). A mechanism for impaired fear recognition after amygdala damage. Nature 433, 68–72. doi: 10.1038/nature03086

Ambadar, Z., Schooler, J. W., and Cohn, J. F. (2005). Deciphering the enigmatic face—the importance of facial dynamics in interpreting subtle facial expressions. Psychol. Sci. 16, 403–410. doi: 10.1111/j.0956-7976.2005.01548.x

Baudouin, J. Y., Martin, F., Tiberghien, G., Verlut, I., and Franck, N. (2002). Selective attention to facial emotion and identity in schizophrenia. Neuropsychologia 40, 503–511. doi: 10.1016/S0028-3932(01)00114-2

Bennetts, R. J., Butcher, N., Lander, K., and Bate, S. (2015). Movement cues aid face recognition in developmental prosopagnosia. Neuropsychology (in press).

Bould, E., and Morris, N. (2008). Role of motion signals in recognizing subtle expressions of emotion. Br. J. Psychol. 99, 167–189.

Bruce, V., and Young, A. (1986). Understanding face recognition. Br. J. Psychol. 77, 305–327. doi: 10.1111/j.2044-8295.1986.tb02199.x

Bruyer, R., Laterre, C., Seron, X., Feyereisen, P., Strypstein, E., Pierrard, E., et al. (1983). A case of prosopagnosia with some preserved covert remembrance of similar faces. Brain Cogn. 2, 257–284. doi: 10.1016/0278-2626(83)90014-3

Butcher, N., Lander, K., Fang, H., and Costen, N. (2011). The effect of motion at encoding and retrieval for same and other race face recognition. Br. J. Psychol. 102, 931–942. doi: 10.1111/j.2044-8295.2011.02060.x

Calder, A. J., Burton, A. M., Miller, P., Young, A. W., and Akamatsu, S. (2001). A principal component analysis of facial expressions. Vision Res. 41, 1179–1208. doi: 10.1016/S0042-6989(01)00002-5

Calder, A. J., and Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nat. Rev. Neurosci. 6, 641–651. doi: 10.1038/nrn1724

Campbell, J., and Burke, D. (2009). Evidence that identity-dependent and identity-independent neural populations are recruited in the perception of five basic emotional facial expressions. Vision Res. 49, 1532–1540. doi: 10.1016/j.visres.2009.03.009

Ceccarini, F., and Caudek, C. (2013). Angry superiority effect: the importance of dynamic emotional facial expressions. Vis. Cogn. 21, 498–540. doi: 10.1080/13506285.2013.807901

Dobs, K., Bulthoff, I., Breidt, M., Vuong, Q. C., Curio, C., and Schultz, J. (2014). Quantifying human sensitivity to spatio-temporal information in dynamic faces. Vision Res. 100, 78–87. doi: 10.1016/j.visres.2014.04.009

Edwards, K. (1998). The face of time: temporal cues in facial expression of emotion. Psychol. Sci. 9, 270–276. doi: 10.1111/1467-9280.00054

Ekman, P., Friesen, W. V., and Simons, R. C. (1985). Is the startle reaction an emotion? J. Pers. Soc. Psychol. 49, 1416–1426. doi: 10.1037/0022-3514.49.5.1416

Elfenbein, H. A., and Ambady, N. (2003). Universals and cultural differences in recognizing emotions. Curr. Dir. Psychol Sci. 12, 159–164. doi: 10.1111/1467-8721.01252

Ellamil, M., Susskind, J. M., and Anderson, A. K. (2008). Examinations of identity invariance in facial expression adaptation. Cogn. Affect. Behav. Neurosci. 8, 273–281. doi: 10.3758/CABN.8.3.273

Etcoff, N. L. (1984). Selective attention to facial identity and facial emotion. Neuropsychologia 22, 281–295. doi: 10.1016/0028-3932(84)90075-7

Fiorentini, A., Maffei, L., and Sandini, G. (1983). The role of high spatial frequencies in face perception. Perception 12, 195–201. doi: 10.1068/p120195

Fiorentini, C., and Viviani, P. (2011). Is there a dynamic advantage for facial expressions? J. Vis. 11:17. doi: 10.1167/11.3.17

Fitousi, D., and Wenger, M. J. (2013). Variants in independence in the perception of facial identity and expression. J. Exp. Psychol. Hum. Percept. Perform. 39, 133–155. doi: 10.1037/a0028001

Fox, C. J., and Barton, J. J. S. (2007). What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Res. 1127, 80–89. doi: 10.1016/j.brainres.2006.09.104

Fox, C. J., Iaria, G., and Barton, J. J. S. (2009). Defining the face processing network: optimization of the functional localizer in fMRI. Hum. Brain Mapp. 30, 1637–1651. doi: 10.1002/hbm.20630

Gallegos, D. R., and Tranel, D. (2005). Positive facial affect facilitates the identification of famous faces. Brain Lang. 93, 338–348. doi: 10.1016/j.bandl.2004.11.001

Ganel, T., Valyear, K. F., Goshen-Gottstein, Y., and Goodale, M. A. (2005). The involvement of the “fusiform face area” in processing facial expression. Neuropsychologia 43, 1645–1654. doi: 10.1016/j.neuropsychologia.2005.01.012

Garner, W. R. (1974). The Processing of Information and Structure. Potomac, MD: Lawrence Erlbaum.

Gill, D., Garrod, O., Jack, R., and Schyns, P. (2014). Facial movements strategically camouflage involuntary social signals of face morphology. Psychol. Sci. 25, 1079–1086. doi: 10.1177/0956797614522274

Goshen-Gottstein, Y., and Ganel, T. (2000). Repetition priming for familiar and unfamiliar faces in a sex-judgment task: evidence for a common route for the processing of sex and identity. J. Exp. Psychol. Learn. Mem. Cogn. 26, 1198–1214. doi: 10.1037/0278-7393.26.5.1198

Hadj-Bouziane, F., Bell, A. H., Knusten, T. A., Ungerleider, L. G., and Tootell, R. B. (2008). Perception of emotional expressions is independent of face selectivity in monkey inferior temporal cortex. Proc. Natl. Acad. Sci. U.S.A. 105, 5591–5596. doi: 10.1073/pnas.0800489105

Hasselmo, M. E., Rolls, E. T., and Baylis, G. C. (1989). The role of expression and identity in the face-selective responses of neurons in the visual cortex of the monkey. Behav. Brain Res. 32, 203–218. doi: 10.1016/S0166-4328(89)80054-3

Haxby, J. V., Hoffman, E. A., and Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends Cogn. Sci. 4, 223–233. doi: 10.1016/S1364-6613(00)01482-0

Henrich, J., Heine, S. J., and Norenzayan, A. (2010). The weirdest people in the world? Behav. Brain Sci. 33, 61–135. doi: 10.1017/S0140525X0999152X

Herlitz, A., and Lovén, J. (2013). Sex differences and the own-gender bias in face recognition: a meta-analytic review. Vis. Cogn. 21, 1306–1336. doi: 10.1080/13506285.2013.823140

Hess, U., and Kleck, R. E. (1990). Differentiating emotion elicited and deliberate emotional facial expressions. Eur. J. Soc. Psychol. 20, 369–385. doi: 10.1002/ejsp.2420200502

Hill, H., and Johnston, A. (2001). Categorizing sex and identity from the biological motion of faces. Curr. Biol. 11, 880–885. doi: 10.1016/S0960-9822(01)00243-3

Hugenberg, K., Miller J., and Claypool, H. M. (2007). Categorization and individuation in the cross-race recognition deficit: toward a solution to an insidious problem. J. Exp. Soc. Psychol. 43, 334–340. doi: 10.1016/j.jesp.2006.02.010

Humphreys, G. W., Donnelly, N., and Riddoch, M. J. (1993). Expression is computed separately from facial identity, and it is computed separately for moving and static faces: neuropsychological evidence. Neuropsychologia 31, 173–181. doi: 10.1016/0028-3932(93)90045-2

Ichikawa, H., Kanazawa, S., Yamaguchi, M. K., and Kakigi, R. (2010). Infant brain activity while viewing facial movement of point-light displays as measured by near-infrared spectroscopy (NIRS). Neurosci. Lett. 482, 90–94. doi: 10.1016/j.neulet.2010.06.086

Jack, R. E., Garrod, O. G. B., and Schyns, P. (2014). Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time. Curr. Biol. 24, 187–192. doi: 10.1016/j.cub.2013.11.064

Jack, R. E., Garrod, O. G. B., Yu, H., Caldara, R., and Schyns, P. (2012). Facial expressions of emotion are not culturally universal. Proc. Natl. Acad. Sci. U.S.A. 109, 7241–7244. doi: 10.1073/pnas.1200155109

Kamachi, M., Bruce, V., Mukaida, S., Gyoba, J., Yoshikawa, S., and Akamatsu, S. (2001). Dynamic properties influence the perception of facial expressions. Perception 30, 875–887. doi: 10.1068/p3131

Kaufmann, J. M., and Schweinberger, S. R. (2004). Expression influences the recognition of familiar faces. Perception 33, 399–408. doi: 10.1068/p5083

Knappmeyer, B., Thornton, I., and Bülthoff, H. (2003). The use of facial motion and facial form during the processing of identity. Vision Res. 43, 1921–1936. doi: 10.1016/S0042-6989(03)00236-0

Knight, B., and Johnston, A. (1997). The role of movement in face recognition. Vis. Cogn. 4, 265–273. doi: 10.1080/713756764

Kottoor, T. M. (1989). Recognition of faces by adults. Psychol. Stud. 34, 102–105.

Krumhuber, E. G., Kappas, A., and Manstead, A. S. R. (2013). Effects of dynamic aspects of facial expressions: a review. Emot. Rev. 5, 41–46. doi: 10.1177/1754073912451349

Kurucz, J., and Feldmar, G. (1979). Prosopo-affective agnosia as a symptom of cerebral organic disease. J. Am. Geriatr. Soc. 27, 225–230.

LaBar, K. S., Crupain, M. J., Voyvodic, J. T., and McCarthy, G. (2003). Dynamic perception of facial affect and identity in the human brain. Cereb. Cortex 13, 1023–1033. doi: 10.1093/cercor/13.10.1023

Lander, K., and Bruce, V. (2003). The role of motion in learning new faces. Vis. Cogn. 10, 897–912. doi: 10.1080/13506280344000149

Lander, K., Bruce, V., and Hill, H. (2001). Evaluating the effectiveness of pixelation and blurring on masking the identity of familiar faces. Appl. Cogn. Psychol. 15, 101–116. doi: 10.1002/1099-0720(200101/02)15:1<101::AID-ACP697>3.0.CO;2-7

Lander, K., Christie, F., and Bruce, V. (1999). The role of movement in the recognition of famous faces. Mem. Cogn. 27, 974–985. doi: 10.3758/BF03201228

Lander, K., and Davies, R. (2007). Exploring the role of characteristic motion when learning new faces. Q. J. Exp. Psychol. 60, 519–526. doi: 10.1080/17470210601117559

Lander, K., and Metcalfe, S. (2007). The influence of positive and negative facial expressions on face familiarity. Memory 15, 63–69. doi: 10.1080/09658210601108732

Longmore, C., and Tree, J. (2013). Motion as a cue to face recognition: evidence from congenital prosopagnosia. Neuropsychologia 51, 864–875. doi: 10.1016/j.neuropsychologia.2013.01.022

Marsh, A. A., Elfenbein, H. A., and Ambady, N. (2003). Nonverbal “accents”: cultural differences in facial expressions of emotion. Psychol. Sci. 14, 373–376. doi: 10.1111/1467-9280.24461

Matsuzaki, N., and Sato, T. (2008). The perception of facial expression from two-frame apparent motion. Perception 37, 1560. doi: 10.1068/p5769

Morrison, D. J., and Schyns, P. (2001). Usage of spatial scales for the categorization of faces, objects, and scenes. Psychon. Bull. Rev. 8, 454–469. doi: 10.3758/BF03196180

O’Toole, A. J., Roark, D. A., and Abdi, H. (2002). Recognizing moving faces: a psychological and neural synthesis. Trends Cogn. Sci. 6, 261–266. doi: 10.1016/S1364-6613(02)01908-3

Parker, D., and Costen, N. (1999). One extreme or the other, or perhaps the Golden Mean: issues of spatial resolution in face processing. Curr. Psychol. 18, 118–127. doi: 10.1007/s12144-999-1021-3

Pell, P. J., and Richards, A. (2013). Overlapping facial expression representations are identity-dependent. Vision Res. 79, 1–7. doi: 10.1016/j.visres.2012.12.009

Perrett, D. I., Smith, P. A. J., Potter, D. D., Mistlin, A. J., Head, A. S., Milner, A. D., et al. (1984). Neurons responsive to faces in the temporal cortex: studies of functional organization, sensitivity to identity and relation to perception. Hum. Neurobiol. 3, 197–208.

Pike, G. E., Kemp, R. I., Towell, N. A., and Phillips, K. C. (1997). Recognizing moving faces: the relative contribution of motion and perspective view information. Vis. Cogn. 4, 409–437. doi: 10.1080/713756769

Pilz, K. S., Thornton, I. M., and Bülthoff, H. H. (2006). A search advantage for faces learned in motion. Exp. Brain Res. 171, 436–447. doi: 10.1007/s00221-005-0283-8

Pitcher, D., Dilks, D. D., Saxe, R. R., Triantafyllou, C., and Kanwisher, N. (2011a). Differential selectivity for dynamic versus static information in face-selective cortical regions. Neuroimage 56, 2356–2363. doi: 10.1016/j.neuroimage.2011.03.067

Pitcher, D., Duchaine, B., and Walsh, V. (2014). Combined TMS and fMRI reveals dissociable cortical pathways for dynamic and static face perception. Curr. Biol. 24, 2066–2070. doi: 10.1016/j.cub.2014.07.060

Pitcher, D., Walsh, V., and Duchaine, B. (2011b). The role of the occipital face area in the cortical face perception network. Exp. Brain Res. 209, 481–493. doi: 10.1007/s00221-011-2579-1

Pollick, F. E., Hill, H., Calder, A., and Paterson, H. (2003). Recognising facial expression from spatially and temporally modified movements. Perception 32, 813–826. doi: 10.1068/p3319

Posamentier, M. T., and Abdi, H. (2003). Processing faces and facial expressions. Neuropsychol. Rev. 13, 113–143. doi: 10.1023/A:1025519712569

Recio, G., Sommer, W., and Schacht, A. (2011). Electrophysiological correlates of perceiving and evaluating static and dynamic facial emotional expressions. Brain Res. 1376, 66–75. doi: 10.1016/j.brainres.2010.12.041

Rigby, S., Stoesz, B., and Jakobson, L. (2013). How dynamic facial cues, stimulus orientation and processing biases influence identity and expression interference. J. Vis. 13, 413–418. doi: 10.1167/13.9.413

Schiff, W., Banka, L., and Galdi, G. D. (1986). Recognizing people seen in events via dynamic “mug shots”. Am. J. Psychol. 99, 219–231. doi: 10.2307/1422276

Schultz, J., Brockhaus, M., Bülthoff, H. H., and Pilz, K. S. (2013). What the human brain likes about facial motion. Cereb. Cortex 23, 1167–1178. doi: 10.1093/cercor/bhs106

Schultz, J., and Pilz, K. S. (2009). Natural facial motion enhances cortical responses to faces. Exp. Brain Res. 194, 465–475. doi: 10.1007/s00221-009-1721-9

Schweinberger, S. R., Burton, A. M., and Kelly, S. W. (1999). Asymmetric dependencies in perceiving identity and emotion: experiments with morphed faces. Percept. Psychophys. 61, 1102–1115. doi: 10.3758/BF03207617

Schweinberger, S. R., and Soukup, G. R. (1998). Asymmetric relationships among perceptions of facial identity, emotion, and facial speech. J. Exp. Psychol. Hum. Percept. Perform. 24, 1748–1765. doi: 10.1037/0096-1523.24.6.1748

Schyns, P. G., Bonnar, L., and Gosselin, F. (2002). Show me the features! Understanding recognition from the use of visual information. Psychol. Sci. 13, 402–409. doi: 10.1111/1467-9280.00472

Sergent, J., Ohta, S., MacDonald, B., and Zuck, E. (1994). Segregated processing of facial identity and emotion in the human brain: a PET study. Vis. Cogn. 1, 349–369. doi: 10.1080/13506289408402305

Soto, F. A., Vucovich, L., Musgrave, R., and Ashby, F. G. (2015). General recognition theory with individual differences: a new method for examining perceptual and decisional interactions with an application to face perception. Psychon. Bull. Rev. 22, 88–111. doi: 10.3758/s13423-014-0661-y

Steede, L., Tree, J., and Hole, G. (2007). Dissociating mechanisms involved in accessing identity by dynamic and static cues. Vis. Cogn. 15, 116–119.

Stoesz, B. M., and Jakobson, L. S. (2013). A sex difference in interference between identity and expression judgments with static but not dynamic faces. J. Vis. 13, 1–14. doi: 10.1167/13.5.26

Strauss, E., and Moscovitch, M. (1981). Perception of facial expressions. Brain Lang. 13, 308–332. doi: 10.1016/0093-934X(81)90098-5

Thornton, I. M., and Kourtzi, Z. (2002). A matching advantage for dynamic human faces. Perception 31, 113–132. doi: 10.1068/p3300

Tranel, D., Damasio, A. R., and Damasio, H. (1988). Intact recognition of facial expression, gender, and age in patients with impaired recognition of face identity. Neurology 38, 690–696. doi: 10.1212/WNL.38.5.690

Trautmann-Lengsfeld, S. A., Domínguez-Borràs, J., Escera, C., Herrmann, M., and Fehr, T. (2013). The perception of dynamic and static facial expressions of happiness and disgust investigated by ERPs and fMRI constrained source analysis. PLoS ONE 8:e66997. doi: 10.1371/journal.pone.0066997

Trautmann, S. A., Fehr, T., and Herrmann, M. (2009). Emotions in motion: dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Res. 1284, 100–115. doi: 10.1016/j.brainres.2009.05.075

van der Gaag, C., Minderaa, R. B., and Keysers, C. (2007). Facial expressions: what the mirror neuron system can and cannot tell us. Soc. Neurosci. 2, 179–222. doi: 10.1080/17470910701376878

Vida, M. A., and Mondloch, C. J. (2009). Children’s representations of facial expression and identity: identity-contingent expression aftereffects. J. Exp. Child Psychol. 104, 326–345. doi: 10.1016/j.jecp.2009.06.003

Wang, Y., Fu, X., Johnston, R. A., and Yan, Z. (2013). Discriminability effect on Garner interference: evidence from recognition of facial identity and expression. Front. Psychol. Emot. Sci. 4:943. doi: 10.3389/fpsyg.2013.00943

Winston, J. S., Henson, R. N. A., Fine-Goulden, M. R., and Dolan, R. J. (2004). fMRI-adaptation reveals dissociable neural representations of identity and expression in face perception. J. Neurophysiol. 92, 1830–1839. doi: 10.1152/jn.00155.2004

Xiao, N. G., Perrotta, S., Quinn, P. C., Wang, Z., Sun, Y. H. P., and Lee, K. (2014). On the facilitative effects of face motion on face recognition and its development. Front. Psychol. Emot. Sci. 5:633. doi: 10.3389/fpsyg.2014.00633

Young, A. W., McWeeny, K. H., Hay, D. C., and Ellis, A. W. (1986). Matching familiar and unfamiliar faces on identity and expression. Psychol. Res. 48, 63–68. doi: 10.1007/BF00309318

Zhongqing, J., Wenhui, L., Recio, G., Ying, L., Wenbo, L., Doufei, Z., et al. (2014). Pressure inhibits dynamic advantage in the classification of facial expressions of emotion. PLoS ONE 9:e100162. doi: 10.1371/journal.pone.0100162

Keywords: face identity, facial expression, independence, interdependence, motion

Citation: Lander K and Butcher N (2015) Independence of face identity and expression processing: exploring the role of motion. Front. Psychol. 6:255. doi: 10.3389/fpsyg.2015.00255

Received: 29 October 2014; Accepted: 20 February 2015;
Published: 13 March 2015.

Edited by:

Chang H. Liu, Bournemouth University, UK

Reviewed by:

Rachael E. Jack, University of Glasgow, UK
Fabian A. Soto, University of California, Santa Barbara, USA

Copyright © 2015 Lander and Butcher. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Karen Lander, School of Psychological Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK karen.lander@manchester.ac.uk