MINI REVIEW article

Front. Psychol., 15 July 2025

Sec. Perception Science

Volume 16 - 2025 | https://doi.org/10.3389/fpsyg.2025.1645218

This article is part of the Research Topic: Processing of Face and Other Animacy Cues in the Brain.

Something in the way they move: characteristics of identity present in faces, voices, body movements, and actions

Karen Lander1* and Rachel Bennetts2

  • 1Division of Psychology, Communication and Human Neuroscience, University of Manchester, Manchester, United Kingdom
  • 2Department of Psychology, College of Health, Medicine, and Life Sciences, Brunel University of London, Uxbridge, United Kingdom

The recognition of familiar individuals relies not only on static features of the person but also on dynamic characteristics unique to each person’s movements. This mini review synthesizes current research on the role of motion in identity recognition, examining how characteristic dynamic cues from the face, voice, and body may contribute to perceivers’ ability to recognize familiar individuals. We highlight corresponding dynamic covariances that may be present across different aspects of an individual’s motion, such as those linking facial and vocal motion. We evaluate the extent to which dynamic patterns might form a coherent ‘dynamic fingerprint.’ Finally, we consider how variability, distinctiveness, and perceiver-related factors (e.g., individual differences and neural mechanisms) shape the recognition of identity through motion. We outline open questions and propose new directions for understanding the integration of dynamic information in person perception.

1 Introduction

The way a person moves reflects their underlying anatomy and changes in the positions of their bones and muscles (Mileva and Burton, 2018; Vick et al., 2007). Yovel and O'Toole (2016) argue that ‘motion acts as the key element for binding together faces, bodies, and voices into a coherent representation of a person that supports recognition’ (p. 383). Indeed, seeing a person move may provide a general ‘form-from-motion’ advantage for recognition, by providing additional views of the person as well as enhanced structural information about the viewed individual (Johansson, 1973). A complementary idea is that, for familiar people, the idiosyncrasies of their observed motion contribute to identity recognition. For example, individuals may have a characteristic smile or way of shaking their head that serves as a cue to identity. This may also be true of idiosyncrasies present in other aspects of their biological motion, for example gait or gestures (Loula et al., 2005).

2 Dynamic characteristics of identity from face and voice

Faces move in rigid and non-rigid ways (Lander et al., 1999). During rigid motion, the face moves as a single object, for example during head nodding and shaking. In contrast, non-rigid motion is deformable or elastic (Xiao et al., 2013), with parts of the face moving in relation to one another, for example when expressing emotion or talking. Individuals vary in how much, and in what way, they move their faces, and this can influence how they are perceived by others. Indeed, more facially expressive participants are rated as being more likeable, agreeable, and successful (Cavanagh et al., 2024). Conversely, people who do not move their faces very much are often perceived as uninterested; see, for example, work on the impact of reduced facial expression in Parkinson's disease (Tickle-Degnen et al., 2011).

Research has shown that seeing a face move leads to better face matching (speaking, Bennetts et al., 2013; expressions, Thornton and Kourtzi, 2002), learning of unfamiliar faces (speaking, Butcher et al., 2011; non-rigid and rigid, Lander and Bruce, 2003; rigid, Pike et al., 1997) and identification of familiar faces (speaking, Butcher and Lander, 2017; non-rigid, Lander et al., 2001). The ‘movement advantage’ may be particularly pronounced when viewing conditions are difficult (Lander et al., 2001) or there is reduced recognition by the observer due to impairment (Bennetts et al., 2015; Longmore and Tree, 2013) or age (Otsuka et al., 2009; Xiao et al., 2014). Dynamic cues may be used flexibly when static cues are insufficient for identification, with facial form and motion information optimally integrated to support recognition (Dobs et al., 2017).

Seminal work by O'Toole et al. (2002) formalizes, for faces, the distinction between a general advantage of motion and one linked to the characteristics of the observed motion. Indeed, the ‘representation enhancement hypothesis’ (O'Toole et al., 2002; O'Toole and Roark, 2011) suggests that seeing a face move aids recognition by facilitating perception of the three-dimensional face structure. Here, there is thought to be a generic benefit (over any advantage of multiple static images) of seeing a face move, one that is useful both when learning a face and when recognizing it (Pike et al., 1997; Butcher et al., 2011). The ‘supplemental information hypothesis’ (O'Toole et al., 2002) proposes that we represent characteristic facial motions of individual faces, in addition to the invariant structure of the face. These characteristic facial motions are referred to as ‘dynamic identity signatures’ (Simhi and Yovel, 2020) and are typically found in characteristic expressions, manner of speaking (Dobs et al., 2016) or ways of looking (Peterson et al., 2025). This theory is supported by studies that manipulate the temporal characteristics of the observed facial motion by slowing, speeding, or reversing clips (e.g., Lander and Bruce, 2000; Lander et al., 2006). These manipulations disrupt the characteristic patterns of movement and reduce the movement advantage for familiar faces. Such characteristic information may be inherent to ‘dynamic’ representations (Freyd, 1987) or be stored alongside a static-based representation.
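
To make the logic of these temporal manipulations concrete, the sketch below treats a face-motion recording as a time series of landmark coordinates and shows how reversing or time-stretching such a trajectory preserves the individual frames while disrupting their characteristic timing. This is an illustrative toy example in Python; the function names and the toy trajectory are ours, not the stimulus pipeline used in the cited studies.

```python
import numpy as np

def time_stretch(trajectory: np.ndarray, factor: float) -> np.ndarray:
    """Resample a (frames x features) motion trajectory in time.

    factor > 1 slows the motion (more frames), factor < 1 speeds it up.
    Uses linear interpolation along the time axis.
    """
    n_frames, n_features = trajectory.shape
    old_t = np.linspace(0.0, 1.0, n_frames)
    new_t = np.linspace(0.0, 1.0, max(int(round(n_frames * factor)), 2))
    return np.column_stack(
        [np.interp(new_t, old_t, trajectory[:, i]) for i in range(n_features)]
    )

def reverse(trajectory: np.ndarray) -> np.ndarray:
    """Play the same frames backwards: the spatial content of each frame is
    unchanged, but the characteristic temporal ordering is destroyed."""
    return trajectory[::-1].copy()

# Toy example: a smoothed random 'facial movement' (100 frames, 2 landmark coordinates).
rng = np.random.default_rng(0)
movement = np.cumsum(rng.normal(size=(100, 2)), axis=0)
slowed = time_stretch(movement, factor=2.0)   # half speed
speeded = time_stretch(movement, factor=0.5)  # double speed
backwards = reverse(movement)                 # reversed playback
```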

Importantly, dynamic facial signatures are thought to be learnt over time, providing a reliable cue to identity for familiar faces, and one that is increasingly useful the more familiar the face is. Accordingly, Butcher and Lander (2017) found that the magnitude of the motion advantage observed for an individual face correlated with how familiar that face was (but see Bennetts et al., 2013). Further, more distinctive facial movement patterns were associated with a greater movement advantage in familiar faces (Lander and Chuang, 2005). Here, distinctive refers to movement characteristics that differ from average or typical movements – they are unique, unusual, or idiosyncratic to an individual. This finding supports the idea that dynamic facial signatures are more relevant for familiar than unfamiliar face recognition.

Interestingly, dynamic characteristics in the way a face moves may also be present in the way a person sounds (Kamachi et al., 2003; Lander et al., 2007; Munhall and Buchan, 2004). Kamachi et al. (2003) found that participants could match unfamiliar faces to voices (or voices to faces) above chance and that matching performance was best with dynamic face stimuli (but see Lavan et al., 2021, who found chance-level dynamic face-voice matching). Face to voice matching tasks demonstrate that dynamic covariances of identity are present in the movement of faces and voices. Similar to visual-only dynamic identity signatures, these identity covariances are likely based on relative timing information: reversing or transforming speech in a non-linear manner disrupts cross-modal matching performance (Lachs and Pisoni, 2004a,b).

3 Dynamic characteristics of identity from body movement and actions

Body motion is a pivotal factor in human perception and the recognition of identity (Troje, 2002). Perceivers use body motion to help categorize others’ social identities, and these categorizations may carry important consequences, for example for mate selection (Lick et al., 2013) and prejudice (Johnson et al., 2007). Broadly, non-rigid body motion can be categorized into: (i) biological motion, which refers to the natural movements of people, like gait or gestures, and (ii) motions associated with specific purposeful activities, like drinking or sports-type actions (Dittrich, 1993).

One of the most studied aspects of body motion in identity recognition is gait: an individual’s unique pattern of walking (Whittle, 2007), which possesses measurable properties that remain consistent over time, are observable from a distance, and are difficult to camouflage (Zhang et al., 2011). These individualized parameters include stride length, step frequency, limb movement, posture, and rhythm, and they may be used as biometric markers for identity verification or identification, especially within automated security settings (Bastos and Tavares, 2025).
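
As a rough illustration of how such parameters could be quantified, the sketch below derives two simple descriptors (step frequency and mean stride length) from horizontal ankle trajectories. It is a minimal toy example under strong assumptions (a side-on view, forward progression along x, clean keypoint tracks); real gait biometrics, such as those reviewed by Bastos and Tavares (2025), use far richer models.

```python
import numpy as np

def gait_features(left_ankle_x: np.ndarray, right_ankle_x: np.ndarray,
                  fps: float) -> dict:
    """Rough gait descriptors from horizontal ankle positions (in metres).

    A step is counted each time the leading foot changes, i.e. at each sign
    change of the inter-ankle distance; cadence and mean stride length then
    follow from the step count and the total forward displacement.
    """
    inter_ankle = left_ankle_x - right_ankle_x
    step_frames = np.where(np.diff(np.sign(inter_ankle)) != 0)[0]
    n_steps = len(step_frames)
    duration_s = len(left_ankle_x) / fps
    forward_distance = max(left_ankle_x[-1], right_ankle_x[-1]) - \
        min(left_ankle_x[0], right_ankle_x[0])
    n_strides = max(n_steps // 2, 1)  # one stride = two successive steps
    return {
        "step_frequency_hz": n_steps / duration_s,
        "mean_stride_length_m": forward_distance / n_strides,
    }
```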

Early work on the recognition of identity from gait used point-light displays (PLDs), in which ‘lights’ are placed on key areas of the body and all other visual cues are removed. When static, the image appears to be a collection of spots, but when it moves, the form of the body becomes apparent. Cutting and Kozlowski (1977) showed that participants were able to correctly identify an individual walker from six friends 38% of the time (also see Troje et al., 2005). Loula et al. (2005) asked participants to make forced-choice decisions about whether a PLD was displaying themselves, a friend, or a stranger. Self-recognition was best (69% correct), with friend recognition also significantly above chance. Interestingly, the greatest advantage for accurate self-recognition came from more expressive movements like dancing and boxing.

Further work on the role of gait in identity recognition has used impoverished ‘natural’ image sequences. For example, Stevenage et al. (1999) found that participants were able to use gait to distinguish between six individuals. Additionally, Baragchizadeh et al. (2020) found that participants were able to make identity matching decisions, above chance, for unfamiliar people performing the same action (e.g., both walking) or different actions (e.g., walking and boxing). Further, Simhi and Yovel (2017) asked participants to study people in motion and recognize them from dynamic or multi-static images. Results suggested that dynamic identity signatures may contribute to person recognition, but only for familiar people previously seen in motion. Finally, Simhi and Yovel (2020) used a virtual reality recognition memory task, with participants learning dynamic identities at study. At test, identities were shown either moving or as a series of static images; dynamic identities with distinctive gaits were recognized more accurately, and from a greater distance away, than less distinctive walkers. No such effect was found in the multi-static condition, highlighting the importance of dynamic gait to person recognition.

Beyond gait, other elements of body motion, such as hand gestures, posture shifts, and head movements, may also contribute to identity recognition (Pilz and Thornton, 2016). Hand gestures facilitate communication for both the speaker and listener (Wagner et al., 2014). They are known to be idiosyncratic, influenced by cultural and personal habits (Gawne, 2025), making them distinguishable between individuals (see Gillespie et al., 2014). Further, exaggeration of body actions may be particularly important for identification of an individual (Hill and Pollick, 2000). It seems likely that when viewing bodies in motion we are able to use characteristic motion signatures to help identify the individual shown. To summarize, research has established a beneficial role of motion when recognizing familiar people from body movements and actions, centered around idiosyncratic patterns of movement that aid identification.

4 Considerations and future directions

We have outlined the sources of evidence that support the idea of characteristic motion patterns that are useful in the recognition of identity. Such characteristics seem to be present in the movement of our faces, voices, bodies, and actions. Several issues for consideration remain.

First, we need to understand more clearly what exactly we mean by ‘characteristic motion patterns’. Here, it is not clear whether ‘characteristic’ is synonymous with ‘distinctive’ – in other words, whether characteristic movement patterns need to be unusual or unique to the individual in some way to support recognition. One way to better understand the extent to which the two are related is to examine how variation in the distinctiveness of movement patterns affects the movement advantage. Some people naturally move more distinctively than others, which may moderate the size of any motion advantage (Lander and Chuang, 2005), supporting a possible effect of natural between-person variability in distinctiveness. Other studies have examined whether the movement advantage is affected by manipulating distinctiveness artificially. As with spatially-based distinctiveness (Valentine, 1988), we can also manipulate the distinctiveness of observed motion by caricaturing motion relative to a ‘norm’. Furl et al. (2022) used a face space account (Valentine, 1991) in which the axes of the multi-dimensional space reflect spatiotemporal dimensions such as speed, displacement, and relative timing. In this work, spatiotemporal caricatures of unfamiliar faces had a minimal effect on identity processing, regardless of whether they were presented at learning or test. In contrast, Hill and Pollick (2000) found a benefit of caricatures for the recognition of body motion. They trained participants to recognize individuals’ arm movements, and then tested them on temporally exaggerated movements made by the same actors. Recognition levels were higher for increasing levels of exaggeration, suggesting that time-based cues were important for identification. Further studies with familiar people and matched methodologies are required to compare the role of distinctiveness of movement cues in face and body identification. The current disparate findings raise the possibility that movement cues might be integrated into identity judgements differently for faces and bodies – at least, when they are unfamiliar.
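
The logic of a spatiotemporal caricature can be expressed very compactly: a movement trajectory is pushed away from (or pulled towards) a norm trajectory along every dimension at once. The sketch below is a minimal illustration of that idea, assuming time-aligned trajectories of equal length; it is not the stimulus-generation procedure of Furl et al. (2022) or Hill and Pollick (2000).

```python
import numpy as np

def motion_caricature(trajectory: np.ndarray, norm: np.ndarray, k: float) -> np.ndarray:
    """Exaggerate (k > 1) or flatten (0 < k < 1) a motion trajectory relative
    to a norm (average) trajectory of the same shape (frames x features).

    k = 1 returns the original movement; k = 0 returns the norm itself.
    """
    if trajectory.shape != norm.shape:
        raise ValueError("trajectory and norm must be time-aligned and the same shape")
    return norm + k * (trajectory - norm)
```

The same equation also captures the ‘flattened’ and ‘exaggerated’ natural variation discussed below: values of k below 1 correspond to movements closer to the average, and values above 1 to more distinctive renditions of the same underlying pattern.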

Second, we should also consider whether there are common dynamic characteristics found across different aspects of a person that are identity specific – a dynamic fingerprint, if you like, that acts as a cue to identity. Research has generally supported a link between visible face motion and the audible sound of the voice, although it is important to note that some people look and sound more similar than others (Smith et al., 2016). But what about other possible links between person-specific motions? At the most basic level, for example, does a person who has particularly pronounced facial movements also have a similarly pronounced style of body movement? Future work needs to examine whether such commonalities in motion exist – and if they do, what they look like – and how they might be used to create a dynamic fingerprint that aids identification of a person. Future work may also explore between-person variability in the usefulness of dynamic signatures for identification. As reviewed above, there is preliminary evidence that some between-person variability in movement characteristics affects the extent to which they benefit recognition (Lander and Chuang, 2005), but there is little research on other factors that might make some people easier to recognize than others (or, conversely, that lead us to perceive their motion as very similar). Here, multivariate time-series modelling of the dynamic parameters of the whole person may facilitate intra- and inter-subject comparison of dynamic movement patterns (Joo et al., 2018). Understanding the relative reliability and usefulness of dynamic information from the face, body, and voice when making identity judgements – and how this relates to their actual use in identification scenarios – could inform human- and computer-based person identification. Crucially, research investigating the integration of different cues (e.g., face and body) needs to focus specifically on moving stimuli: previous work has shown that people allocate attention to faces and bodies differently when they are static (attention primarily to the face) and dynamic (attention to both the face and body) (O’Toole et al., 2011).
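
One simple way to compare dynamic movement patterns within and between people, tolerant of differences in overall speed, is dynamic time warping. The sketch below is an illustrative, unoptimised implementation for multivariate motion time series; it is not the modelling approach of Joo et al. (2018), and a serious analysis would use dedicated time-series tooling.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two motion time series,
    each of shape (frames x features). Smaller values indicate more
    similar movement patterns, allowing for local speed differences.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])  # best of the three warping moves
    return float(cost[n, m])

# In principle, two recordings of the same person should tend to yield a
# smaller distance than recordings of two different people, given comparable data.
```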

Further, in order for dynamic fingerprints to be useful for recognition, we might expect them to be relatively stable across time and context. However, as well as being more useful for some people than for others, the movement of a person might also vary between different viewing instances of that same person. On some occasions a person might move in their typical way, whereas on other occasions they may not. For example, they may be tired, flattening the characteristics of their observed motion. Alternatively, people might naturally exaggerate their movements, either intentionally (e.g., overenunciating speech) or unintentionally (e.g., intense emotional expressions). Surprisingly little work has addressed how this natural variation affects characteristic motion patterns: for example, whether it enhances or hinders the usefulness of movement cues for identification, or whether there are certain dynamic cues that remain consistently available across situations. In the domain of emotion recognition, the increased physical movements associated with higher emotional intensity improve emotion recognition performance (e.g., Hess et al., 1997), but to date there has been no research directly examining the effects of natural variations (exaggerations or reductions) of movement on identification.

Third, if we accept the idea of dynamic fingerprints, then we need to consider where such information is integrated in the brain. Early neural models of person perception drew a distinction between the processing of invariant and changeable aspects of a person (Haxby et al., 2000). Invariant features like identity were thought to be processed in the occipital and fusiform face areas (OFA and FFA) and the fusiform and extrastriate body areas (FBA and EBA), whereas the processing of changeable aspects of a person (like eye gaze, expressions, etc.) was linked to the posterior superior temporal sulcus (pSTS; O'Toole et al., 2002; Yovel and O'Toole, 2016). Importantly, research has shown that the pSTS responds more strongly to dynamic than static faces, while the FFA and OFA show similar responses to static and dynamic faces (Pitcher et al., 2011; Bernstein et al., 2018). The pSTS is also strongly activated in response to biological motion and to human voices and audiovisual speech (Deen et al., 2015), making it likely that this area plays a key role in the processing and integration of dynamic fingerprints (Yovel and O'Toole, 2016). However, the pSTS is likely only one part of a broader network involved in dynamic person representations. Preliminary evidence (with emotional facial expressions) suggests that spatiotemporal facial cues may be represented throughout the face-selective and motion-selective networks in the brain, in a spatiotemporal version of ‘face space’ (Furl et al., 2020). Other work has found a relationship between biological motion perception and activation in both the pSTS and the ventral premotor cortex (Gilaie-Dotan et al., 2013). Further work examining other forms of facial, biological, and cross-modal dynamic information (Küçük et al., 2024) is needed to confirm the regions, interactions, and mechanisms involved in processing whole-person dynamic cues.

Finally, research needs to take individual differences between perceivers into account when considering the usefulness of dynamic fingerprints for identification. It is well established that some people are better at static face identification than others (Wilmer, 2017); likewise, there is individual variation in biological motion perception (Miller and Saygin, 2013) and in the movement advantage for face recognition (Butcher and Lander, 2017). However, the extent and consistency of individual differences in the movement advantage have not yet been systematically examined. Interestingly, individuals with prosopagnosia – a severe deficit in face recognition – still show a movement advantage for faces (Bennetts et al., 2015; Longmore and Tree, 2013; Steede et al., 2007). This supports the idea, discussed above, that movement cues might act as a complementary source of information when static cues are less reliable. Super-recognizers, who show exceptional face recognition ability (Russell et al., 2009), also show a movement advantage for famous face recognition (Davis et al., 2016). These findings suggest that the ability to extract and use static cues to identity does not align directly with the ability to extract and use facial movement as a cue to identity (notably, there is also no relationship between static face recognition and identification of biological motion in bodies; Noyes et al., 2018). Nor can the movement advantage be straightforwardly linked to underpinning visual processing strategies: recent work found no association between the movement advantage for famous face recognition and differences in eye movements to static and dynamic faces (Butcher et al., 2025). It may be that other factors, such as sensitivity to biological motion or other spatiotemporal information, predict individual differences in this skill. Research into these factors is needed, applying not only to faces but also to the recognition of identity from other cues, such as gait and body movement. The development of reliable and consistent measures of individual differences in identifying dynamic signatures may be particularly important in applied contexts, where it may be useful to screen for individuals who excel at specific recognition-based tasks (e.g., identifying known suspects on poor-quality video footage; Bate et al., 2021).

5 Conclusion

This mini review explores how characteristic motion patterns contribute to recognizing familiar people. Movement provides structural and identity-specific cues that enhance recognition, especially under challenging conditions or when static information is limited. Research shows that individuals have dynamic identity signatures, which are learned over time and aid recognition. These cues may be consistent across the face, body, and voice, forming a ‘dynamic fingerprint.’ However, more research is needed to clarify the importance of distinctiveness, how stable these motion cues are between and within people, how they are processed in the brain, and how individual differences between perceivers affect their use.

Author contributions

KL: Writing – review & editing, Writing – original draft. RB: Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that no Gen AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Baragchizadeh, A., Jesudasen, P. R., and O'Toole, A. J. (2020). Identification of unfamiliar people from point-light biological motion: a perceptual reevaluation. Vis. Cogn. 28, 513–522. doi: 10.1080/13506285.2020.1834039

Bastos, D. R. M., and Tavares, J. M. R. S. (2025). Advances in gait-based identification: A systematic review of deep learning models leveraging computer vision techniques. Switzerland: Springer. Available online at: https://link.springer.com/book/10.1007/978-3-031-89560-9

Bate, S., Portch, E., and Mestry, N. (2021). When two fields collide: identifying “super-recognisers” for neuropsychological and forensic face recognition research. Q. J. Exp. Psychol. 74, 2154–2164. doi: 10.1177/17470218211027695

Bennetts, R. J., Butcher, N., Lander, K., Udale, R., and Bate, S. (2015). Movement cues aid face recognition in developmental prosopagnosia. Neuropsychology 29, 855–860. doi: 10.1037/neu0000187

Bennetts, R. J., Kim, J., Burke, D., Brooks, K. R., Lucey, S., Saragih, J., et al. (2013). The movement advantage in famous and unfamiliar faces: a comparison of point-light displays and shape-normalised avatar stimuli. Perception 42, 950–970. doi: 10.1068/p7446

Bernstein, M., Erez, Y., Blank, I., and Yovel, G. (2018). An integrated neural framework for dynamic and static face processing. Sci. Rep. 8:7036. doi: 10.1038/s41598-018-25405-9

Butcher, N., Bennetts, R. J., Sexton, L., Barbanta, A., and Lander, K. (2025). Eye movement differences when recognising and learning moving and static faces. Q. J. Exp. Psychol. 78, 744–765. doi: 10.1177/17470218241252145

Butcher, N., and Lander, K. (2017). Exploring the motion advantage: evaluating the contribution of familiarity and differences in facial motion. Q. J. Exp. Psychol. 70, 919–929. doi: 10.1080/17470218.2016.1138974

Butcher, N., Lander, K., Fang, H., and Costen, N. (2011). The effect of motion at encoding and retrieval for same- and other-race face recognition. Br. J. Psychol. 102, 931–942. doi: 10.1111/j.2044-8295.2011.02060.x

Cavanagh, K., Whitehouse, H., and Waller, B. M. (2024). Being facially expressive is socially advantageous. Sci. Rep. 14:62902. doi: 10.1038/s41598-024-62902-6

Cutting, J. E., and Kozlowski, L. T. (1977). Recognizing friends by their walk: gait perception without familiarity cues. Bull. Psychon. Soc. 9, 353–356. doi: 10.3758/BF03337021

Davis, J. P., Lander, K., Evans, R., and Jansari, A. (2016). Use-inspired basic research on individual differences in face identification: implications for criminal investigation and security. Front. Psychol. 7:219. doi: 10.3389/fpsyg.2016.00219

Deen, B., Koldewyn, K., Kanwisher, N., and Saxe, R. (2015). Functional organization of social perception and cognition in the superior temporal sulcus. Cereb. Cortex 25, 4596–4609. doi: 10.1093/cercor/bhv111

Dittrich, W. H. (1993). Action categories and the perception of biological motion. Perception 22, 15–22. doi: 10.1068/p220015

Dobs, K., Bülthoff, I., and Schultz, J. (2016). Identity information content depends on the type of facial movement. Sci. Rep. 6:34301. doi: 10.1038/srep34301

Dobs, K., Ma, W. J., and Reddy, L. (2017). Near-optimal integration of facial form and motion. Sci. Rep. 7:11002. doi: 10.1038/s41598-017-10885-y

Freyd, J. J. (1987). Dynamic mental representations. Psychol. Rev. 94, 427–438. doi: 10.1037/0033-295X.94.4.427

Furl, N., Begum, F., Ferrarese, F. P., Jans, S., Woolley, C., and Sulik, J. (2022). Caricatured facial movements enhance perception of emotional facial expressions. Perception 51, 313–343. doi: 10.1177/0301006622108645

Furl, N., Begum, F., Sulik, J., Ferrarese, F. P., Jans, S., and Woolley, C. (2020). Face space representations of movement. NeuroImage 212:116676. doi: 10.1016/j.neuroimage.2020.116676

Gawne, L. (2025). “Gesture across cultures and languages” in Gesture: A Slim Guide (Oxford, UK: Oxford Academic).

Gilaie-Dotan, S., Kanai, R., Bahrami, B., Rees, G., and Saygin, A. P. (2013). Neuroanatomical correlates of biological motion detection. Neuropsychologia 51, 457–463. doi: 10.1016/j.neuropsychologia.2012.11.027

Gillespie, M., James, A. N., Federmeier, K. D., and Watson, D. G. (2014). Verbal working memory predicts co-speech gesture: evidence from individual differences. Cognition 132, 174–180. doi: 10.1016/j.cognition.2014.03.012

Haxby, J. V., Hoffman, E. A., and Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends Cogn. Sci. 4, 223–233. doi: 10.1016/s1364-6613(00)01482-0

Hess, U., Blairy, S., and Kleck, R. E. (1997). The intensity of emotional facial expressions and decoding accuracy. J. Nonverb. Behav. 21, 241–257. doi: 10.1023/A:1024952730333

Hill, H., and Pollick, F. E. (2000). Exaggerating temporal differences enhances recognition of individuals from point light displays. Psychol. Sci. 11, 223–228. doi: 10.1111/1467-9280.00245

Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Percept. Psychophys. 14, 201–211. doi: 10.3758/BF03212378

Johnson, K. L., Gill, S., Reichman, V., and Tassinary, L. G. (2007). Swagger, sway, and sexuality: judging sexual orientation from body motion and morphology. J. Pers. Soc. Psychol. 93, 321–334. doi: 10.1037/0022-3514.93.3.321

Joo, H., Simon, T., and Sheikh, Y. (2018). Total capture: a 3D deformation model for tracking faces, hands, and bodies. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018), 8320–8329.

Kamachi, M., Hill, H., Lander, K., and Vatikiotis-Bateson, E. (2003). 'Putting the face to the voice': matching identity across modality. Curr. Biol. 13, 1709–1714. doi: 10.1016/j.cub.2003.09.005

Küçük, E., Foxwell, M., Kaiser, D., and Pitcher, D. (2024). Moving and static faces, bodies, objects, and scenes are differentially represented across the three visual pathways. J. Cogn. Neurosci. 36, 2639–2651. doi: 10.1162/jocn_a_02139

Lachs, L., and Pisoni, D. B. (2004a). Cross-modal source information and spoken word recognition. J. Exp. Psychol. Hum. Percept. Perform. 30, 378–396. doi: 10.1037/0096-1523.30.2.378

Lachs, L., and Pisoni, D. B. (2004b). Crossmodal source identification in speech perception. Ecol. Psychol. 16, 159–187. doi: 10.1207/s15326969eco1603_1

Lander, K., and Bruce, V. (2000). Recognizing famous faces: exploring the benefits of facial motion. Ecol. Psychol. 12, 259–272. doi: 10.1207/S15326969ECO1204_01

Lander, K., and Bruce, V. (2003). The role of motion in learning new faces. Vis. Cogn. 10, 897–912. doi: 10.1080/13506280344000149

Lander, K., Bruce, V., and Hill, H. (2001). Evaluating the effectiveness of pixelation and blurring on masking the identity of familiar faces. Appl. Cogn. Psychol. 15, 101–116. doi: 10.1002/1099-0720(200101/02)15:1<101::AID-ACP697>3.0.CO;2-7

Lander, K., Christie, F., and Bruce, V. (1999). The role of movement in the recognition of famous faces. Mem. Cogn. 27, 974–985. doi: 10.3758/BF03201228

Lander, K., and Chuang, L. (2005). Why are moving faces easier to recognize? Vis. Cogn. 12, 429–442. doi: 10.1080/13506280444000382

Lander, K., Chuang, L., and Wickham, L. (2006). Recognizing face identity from natural and morphed smiles. Q. J. Exp. Psychol. 59, 801–808. doi: 10.1080/17470210600576136

Lander, K., Hill, H., Kamachi, M., and Vatikiotis-Bateson, E. (2007). It's not what you say but the way you say it: matching faces and voices. J. Exp. Psychol. Hum. Percept. Perform. 33, 905–914. doi: 10.1037/0096-1523.33.4.905

Lavan, N., Smith, H., Jiang, L., and McGettigan, C. (2021). Explaining face-voice matching decisions: the contribution of mouth movements, stimulus effects and response biases. Atten. Percept. Psychophys. 83, 2205–2216. doi: 10.3758/s13414-021-02290-5

Lick, D. J., Johnson, K. L., and Gill, S. V. (2013). Deliberate changes to gendered body motion influence basic social perceptions. Soc. Cogn. 31, 656–671. doi: 10.1521/soco.2013.31.6.656

Longmore, C., and Tree, J. (2013). Motion as a cue to face recognition: evidence from congenital prosopagnosia. Neuropsychologia 51, 864–875. doi: 10.1016/j.neuropsychologia.2013.01.022

Loula, F., Prasad, S., Harber, K., and Shiffrar, M. (2005). Recognizing people from their movement. J. Exp. Psychol. Hum. Percept. Perform. 31, 210–220. doi: 10.1037/0096-1523.31.1.210

Mileva, M., and Burton, A. M. (2018). Smiles in face matching: idiosyncratic information revealed through a smile improves unfamiliar face matching performance. Br. J. Psychol. 109, 799–811. doi: 10.1111/bjop.12318

Miller, L. E., and Saygin, A. P. (2013). Individual differences in the perception of biological motion: links to social cognition and motor imagery. Cognition 128, 140–148. doi: 10.1016/j.cognition.2013.03.013

Munhall, K. G., and Buchan, J. N. (2004). Something in the way she moves. Trends Cogn. Sci. 8, 51–53. doi: 10.1016/j.tics.2003.12.009

Noyes, E., Hill, M. Q., and O’Toole, A. J. (2018). Face recognition ability does not predict person identification performance: using individual data in the interpretation of group results. Cogn. Res. Princ. Implic. 3, 1–13. doi: 10.1186/s41235-018-0117-4

O’Toole, A. J., Phillips, P. J., Weimer, S., Roark, D. A., Ayyad, J., Barwick, R., et al. (2011). Recognizing people from dynamic and static faces and bodies: dissecting identity with a fusion approach. Vis. Res. 51, 74–83. doi: 10.1016/j.visres.2010.09.035

O'Toole, A., and Roark, D. (2011). “Memory for moving faces: the interplay of two recognition systems” in Dynamic faces: Insights from experiments and computation. eds. C. Curio, H. H. Bülthoff, and M. A. Giese (Cambridge, MA: MIT Press), 15–29.

O'Toole, A. J., Roark, D. A., and Abdi, H. (2002). Recognizing moving faces: a psychological and neural synthesis. Trends Cogn. Sci. 6, 261–266. doi: 10.1016/S1364-6613(02)01908-3

Otsuka, Y., Konishi, Y., Kanazawa, S., Yamaguchi, M. K., Abdi, H., and O'Toole, A. J. (2009). Recognition of moving and static faces by young infants. Child Dev. 80, 1259–1271. doi: 10.1111/j.1467-8624.2009.01330.x

Peterson, L., Clifford, C., and Palmer, C. (2025). The role of (observed) gaze behaviour in identity recognition. Cognition 263:106222. doi: 10.1016/j.cognition.2025.106222

Pike, G. E., Kemp, R. I., Towell, N. A., and Phillips, K. C. (1997). Recognizing moving faces: the relative contribution of motion and perspective view information. Vis. Cogn. 4, 409–438. doi: 10.1080/713756769

Pilz, K. S., and Thornton, I. M. (2016). Idiosyncratic body motion influences person recognition. Vis. Cogn. 25, 539–549. doi: 10.1080/13506285.2016.1232327

Pitcher, D., Walsh, V., and Duchaine, B. (2011). The role of the occipital face area in the cortical face perception network. Exp. Brain Res. 209, 481–493. doi: 10.1007/s00221-011-2579-1

Russell, R., Duchaine, B., and Nakayama, K. (2009). Super-recognizers: people with extraordinary face recognition ability. Psychon. Bull. Rev. 16, 252–257. doi: 10.3758/PBR.16.2.252

Simhi, N., and Yovel, G. (2017). The role of familiarization in dynamic person recognition. Vis. Cogn. 25, 550–562. doi: 10.1080/13506285.2017.1307298

Simhi, N., and Yovel, G. (2020). Can we recognize people based on their body alone? The roles of body motion and whole person context. Vis. Res. 176, 91–99. doi: 10.1016/j.visres.2020.07.012

Smith, H. M., Dunn, A. K., Baguley, T., and Stacey, P. C. (2016). Matching novel face and voice identity using static and dynamic facial images. Atten. Percept. Psychophys. 78, 868–879. doi: 10.3758/s13414-015-1045-8

Steede, L. L., Tree, J. J., and Hole, G. J. (2007). I can’t recognize your face but I can recognize its movement. Cogn. Neuropsychol. 24, 451–466. doi: 10.1080/02643290701381879

Stevenage, S. V., Nixon, M. S., and Vince, K. (1999). Visual analysis of gait as a cue to identity. Appl. Cogn. Psychol. 13, 513–526. doi: 10.1002/(SICI)1099-0720(199912)13:6<513::AID-ACP616>3.0.CO;2-8

Thornton, I. M., and Kourtzi, Z. (2002). A matching advantage for dynamic human faces. Perception 31, 113–132. doi: 10.1068/p3300

Tickle-Degnen, L., Zebrowitz, L. A., and Ma, H. I. (2011). Culture, gender and health care stigma: practitioners' response to facial masking experienced by people with Parkinson's disease. Soc. Sci. Med. 73, 95–102. doi: 10.1016/j.socscimed.2011.05.008

Troje, N. F. (2002). Decomposing biological motion: a framework for analysis and synthesis of human gait patterns. J. Vis. 2:2. doi: 10.1167/2.5.2

Troje, N. F., Westhoff, C., and Lavrov, M. (2005). Person identification from biological motion: effects of structural and kinematic cues. Percept. Psychophys. 67, 667–675. doi: 10.3758/BF03193523

Valentine, T. (1988). Upside-down faces: a review of the effect of inversion upon face recognition. Br. J. Psychol. 79, 471–491. doi: 10.1111/j.2044-8295.1988.tb02747.x

Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. Q. J. Exp. Psychol. A. 43, 161–204. doi: 10.1080/14640749108400966

Vick, S. J., Waller, B. M., Parr, L. A., Smith Pasqualini, M. C., and Bard, K. A. (2007). A cross-species comparison of facial morphology and movement in humans and chimpanzees using the facial action coding system (FACS). J. Nonverbal Behav. 31, 1–20. doi: 10.1007/s10919-006-0017-z

Wagner, P., Malisz, Z., and Kopp, S. (2014). Gesture and speech in interaction: an overview. Speech Comm. 57, 209–232. doi: 10.1016/j.specom.2013.09.008

Whittle, M. W. (2007). Gait analysis: An introduction. 4th Edn. Butterworth-Heinemann.

Wilmer, J. B. (2017). Individual differences in face recognition: a decade of discovery. Curr. Dir. Psychol. Sci. 26, 225–230. doi: 10.1177/0963721417710693

Xiao, N. G., Perrotta, S., Quinn, P. C., Wang, Z., Sun, Y. H. P., and Lee, K. (2014). On the facilitative effects of face motion on face recognition and its development. Front. Psychol. 5:633. doi: 10.3389/fpsyg.2014.00633

Xiao, N. G., Quinn, P. C., Ge, L., and Lee, K. (2013). Elastic facial movement influences part-based but not holistic processing. J. Exp. Psychol. Hum. Percept. Perform. 39, 1457–1467. doi: 10.1037/a0031631

Yovel, G., and O'Toole, A. J. (2016). Recognizing people in motion. Trends Cogn. Sci. 20, 383–395. doi: 10.1016/j.tics.2016.02.005

Zhang, Z., Hu, M., and Wang, Y. (2011). “A survey of advances in biometric gait recognition” in Biometric recognition. CCBR 2011. eds. Z. Sun, J. Lai, X. Chen, and T. Tan, Lecture Notes in Computer Science, vol. 7098 (Berlin, Heidelberg: Springer).

Keywords: person identification, biological motion, dynamic characteristics, face, voice, gait, body movements

Citation: Lander K and Bennetts R (2025) Something in the way they move: characteristics of identity present in faces, voices, body movements, and actions. Front. Psychol. 16:1645218. doi: 10.3389/fpsyg.2025.1645218

Received: 11 June 2025; Accepted: 30 June 2025;
Published: 15 July 2025.

Edited by:

Marina A. Pavlova, University Hospital Tübingen, Germany

Reviewed by:

Harriet M. J. Smith, Nottingham Trent University, United Kingdom

Copyright © 2025 Lander and Bennetts. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Karen Lander, karen.lander@manchester.ac.uk
