Gesture Use and Processing: A Review on Individual Differences in Cognitive Resources
- Department of Psychology, Koç University, Istanbul, Turkey
Speakers use spontaneous hand gestures as they speak and think. These gestures serve many functions for speakers who produce them as well as for listeners who observe them. To date, studies in the gesture literature have mostly focused on group comparisons or on external sources of variation to examine when people use, process, and benefit from using and observing gestures. However, there are also internal sources of variation in gesture use and processing. People differ in how frequently they use gestures, how salient their gestures are, for what purposes they produce gestures, and how much they benefit from using and seeing gestures during comprehension and learning, depending on their cognitive dispositions. This review addresses how individual differences in different cognitive skills relate to how people employ gestures in production and comprehension across different ages (from infancy through adulthood to healthy aging) from a functionalist perspective. We conclude that speakers and listeners can use gestures as a compensatory tool during communication and thinking, a use that interacts with individuals’ cognitive dispositions.
Human language occurs in a face-to-face interactional setting with the exchange of multiple multimodal cues such as eye-gaze, lip movements, body posture, and hand gestures. In this review paper, we focus on one of these multimodal cues: iconic hand gestures (henceforth, gestures) that represent objects, events, and actions. Speakers produce an abundance of gestures as they speak or think. These gestures serve many functions for speakers who produce them and for listeners who observe them (Goldin-Meadow et al., 2001; McNeill, 2005; Özyürek, 2014; Kita et al., 2017; Novack and Goldin-Meadow, 2017; Dargue et al., 2019). Although gesture and speech express meaning in a coordinated and integrated manner, gesturing is not mandatory for communication and, hence, varies across situations and individuals (Kita and Özyürek, 2003; Kendon, 2004; McNeill, 2005; Streeck, 2009). Speakers differ in how frequently they use gestures, how salient their gestures are, and how much they benefit from using gestures during encoding and learning. Listeners, in turn, differ in how much they attend to the speaker’s gestures and benefit from observing gestures during comprehension and learning. The current paper discusses individual differences in gesture use and processing.
There is individual variation in all human traits. People exhibit individual differences in cognitive abilities such as working memory (WM) capacity, attention, speech production, and processing as well as language acquisition (e.g., Daneman and Green, 1986; Just and Carpenter, 1992; Bates et al., 1995; Kane and Engle, 2002; Broadway and Engle, 2011; Huettig and Janse, 2016; Kidd et al., 2018). Current theories in cognitive science have not fully exploited the existence, or the causes, of these individual differences for scientific gain (Underwood, 1975; Vogel and Awh, 2008). Most of the earlier studies in the gesture literature disregarded the variation among individuals and focused on group comparisons based on age (e.g., Feyereisen and Havard, 1999; Colletta et al., 2010; Austin and Sweller, 2014; Özer et al., 2017), sex (e.g., Özçalışkan and Goldin-Meadow, 2010), neuropsychological impairments (e.g., Cleary et al., 2011; Göksun et al., 2013b, 2015; Akbıyık et al., 2018; Akhavan et al., 2018; Hilverman et al., 2018; Özer et al., 2019; see Clough and Duff, 2020 for a review), culture, and the native status of the speakers and the listeners (i.e., bilinguals vs. monolinguals; e.g., Goldin-Meadow and Saltzman, 2000; Mayberry and Nicoladis, 2000; Pika et al., 2006; Kita, 2009; Nicoladis et al., 2009; Gullberg, 2010; Smithson et al., 2011; Kim and Lausberg, 2018; Azar et al., 2019, 2020) to understand how the human multimodal language faculty operates at a general level. Gesture theories and current experimental practices in the gesture literature have mostly downplayed the significance of individual differences and treated them as error variance. These studies create the illusory and incorrect assumption that gesturing, and the cognitive and communicative benefits of using and seeing gestures, are invariant across people.
However, using and observing gestures show not only across-group but also within-group variation (e.g., Hostetter and Alibali, 2007, 2011; Chu et al., 2014; Wu and Coulson, 2014a,b; Dargue et al., 2019; Özer et al., 2019; Özer and Göksun, 2020). What drives this variation?
There are external and internal sources of variation in gesture use and processing. The external sources of variation include speech content (e.g., spatial vs. non-spatial topics; Rauscher et al., 1996; Feyereisen and Havard, 1999; Lausberg and Kita, 2003; Alibali, 2005; Hostetter, 2011), communicative context (e.g., the visibility between interlocutors, communicative intention, and audience design; Alibali et al., 2001; Trujillo et al., 2018; Schubotz et al., 2019), and task difficulty or cognitive load (e.g., complex spatial tasks such as mental rotation; Wesp et al., 2001; Kita and Davies, 2009). There are also internal sources of variation; even under the same external circumstances, people can behave differently. Insights into which mechanisms contribute to these individual differences have only recently begun to emerge (e.g., Hostetter and Alibali, 2007, 2011; Chu et al., 2014; Wu and Coulson, 2014a,b; Dargue et al., 2019; Aldugom et al., 2020; Kartalkanat and Göksun, 2020; Özer and Göksun, 2020).
Individual differences in personality characteristics, age, and cognitive and perceptual skills contribute to variation among individuals in gesture use and processing (e.g., Vanetti and Allen, 1988; Cohen and Borsoi, 1996; Hostetter and Alibali, 2007, 2011; Wartenburger et al., 2010; Hostetter and Potthoff, 2012; Marstaller and Burianová, 2013; Göksun et al., 2013a; Chu et al., 2014; Gillespie et al., 2014; Wu and Coulson, 2014a,b; Pouw et al., 2016; Austin and Sweller, 2017, 2018; Eielts et al., 2018; Galati et al., 2018; Dargue and Sweller, 2020; Kartalkanat and Göksun, 2020; Özer and Göksun, 2020). However, most of the research on individual differences has focused on gesture production, particularly on the cognitive correlates of variation in spontaneous gesture use and on how much people benefit from using gestures during problem-solving and encoding of information. Research on individual variation in how listeners attend to speakers’ gestures and benefit from observing gestures for comprehension and learning is limited (Wu and Coulson, 2014a,b; Aldugom et al., 2020; Özer and Göksun, 2020).
In the current review paper, we discuss individual differences in (1) gesture use: how frequently speakers use gestures during spontaneous speech and how much they benefit from using gestures during task solving and learning and (2) gesture processing: how listeners attend to and process speakers’ gestures and how much they benefit from observing speakers’ gestures for online comprehension or subsequent learning. We specifically focus on individual differences in cognitive and perceptual abilities (see Hostetter and Potthoff, 2012 for personality characteristics). This review has three highlights: (1) we attempt to provide a more complete picture of individual differences in gesture by bridging the production (i.e., using gestures) and comprehension (i.e., seeing gestures) fields. (2) We adopt a functionalist approach to discuss possible cognitive correlates of gesture use and processing. Functionalist gesture theories (as opposed to mechanistic approaches such as McNeill, 1992, 2005; Hostetter and Alibali, 2008) discuss why speakers use gestures and what functions gestures serve for speakers and listeners during communication and thinking (e.g., Kita and Özyürek, 2003; Pouw et al., 2014; Cook and Fenn, 2017; Kita et al., 2017; Novack and Goldin-Meadow, 2017). Theories asserting for what purposes speakers and listeners employ gestures might inform us about the possible cognitive correlates of individual differences in gesture use and processing. (3) We also take a life-span developmental perspective, covering how gesture use and processing differ with changing cognitive skills throughout the developmental trajectory (from childhood through adulthood to healthy aging).
The literature on how different populations across ages use and process gestures during communication and learning is quite rich. Note that the current paper is not a comprehensive review of this general literature; instead, we specifically focus on studies investigating individual differences in these processes. We first review the functions of gestures during communication and learning (section Functions of Gestures During Communication and Learning). Then, we address evidence on individual differences in gesture use (section Individual Differences in Gesture Production) and gesture processing (section Individual Differences in Gesture Processing) for children, young adults, and elderly adults. Last, we summarize the current state of the field and discuss areas that are open to further investigation (section Conclusion and Future Directions).
Functions of Gestures During Communication and Learning
Several theories suggest how and why gestures occur during communication and thinking. Mechanistic theories mostly propose how gestures arise during communication and thinking (e.g., McNeill, 1992, 2005; Hostetter and Alibali, 2008, 2018). Functionalist theories, on the other hand, try to explain why we use gestures and the functions that gestures serve during communication and thinking, both for the speaker and the listener (e.g., Goldin-Meadow et al., 2001; Kita and Özyürek, 2003; Pouw et al., 2014; Cook and Fenn, 2017; Kita et al., 2017; Novack and Goldin-Meadow, 2017). The approach in this review will be from functionalist perspectives as they can give insight into which mechanisms might contribute to individual differences in gesture use and processing.
Gestures have several functions during communication and thinking. First, gestures affect communication between interlocutors. Speakers and listeners employ gestures for communicative purposes. Speakers produce gestures to communicate information, and listeners, in turn, benefit from these gestures to comprehend the to-be-communicated message (e.g., Beattie and Shovelton, 1999; Alibali et al., 2001; Holler and Stevens, 2007; Hostetter, 2011; Goldin-Meadow and Alibali, 2013). Speakers use gestures as an alternative channel of expression. Hence, both speakers and listeners employ gestures more under communicative challenges stemming from cognitive dispositions, such as when a speaker is not fully linguistically competent (e.g., bilinguals talking in their non-native language; Gullberg, 2010; Smithson et al., 2011) or has hearing impairments (Obermeier et al., 2012). The communicative function of gestures suggests that speakers and listeners with low communicative capacity (e.g., low linguistic proficiency, low semantic fluency, or non-native status) might employ and benefit from gestures more.
Second, gestures affect speakers’ and listeners’ cognitive processes. Gestures help activate, maintain, manipulate, and package visual, spatial, and motoric information for speaking and thinking (Kita et al., 2017). Gestures reduce cognitive load by keeping spatial-motoric information active in WM (Goldin-Meadow et al., 2001; Wesp et al., 2001; Morsella and Krauss, 2004; Ping and Goldin-Meadow, 2010; Cook et al., 2012; Marstaller and Burianová, 2013) and by projecting internal representations onto an external space (e.g., Pouw et al., 2014). Producing gestures provides external visual feedback that can be used to maintain or retrieve task-related visual-spatial information and, hence, reduces cognitive load. Considering this, we expect that people might use gestures as a compensatory tool to manage their cognitive load. For example, people with lower visual-spatial cognitive capacity (e.g., lower visual-spatial WM capacity, lower general spatial skills assessed by mental rotation, and lower fluid intelligence assessed by Raven’s Matrices) might use gestures more frequently to compensate for their limited resources when talking and thinking, especially about spatial information (e.g., Trafton et al., 2006; Göksun et al., 2013a; Chu et al., 2014; Galati et al., 2018). In a similar vein, speakers’ gestures provide a stable visual representation for observers (i.e., listeners) and help listeners during comprehension and learning. People with lower cognitive resources might be in greater need of external aids and thus benefit more from seeing gestures (e.g., de Nooijer et al., 2013; Wu and Coulson, 2014a; Özer and Göksun, 2020).
Functionalist gesture theories assert that gestures help convey information during communication and manage cognitive load during speaking, thinking, and learning (e.g., Kita et al., 2017; Novack and Goldin-Meadow, 2017). This suggests that gesture use and processing are sensitive to the cognitive dispositions of the speakers and the listeners. People might employ gestures to manage and compensate for their limited cognitive resources.
Mechanistic gesture theories, on the other hand, emphasize how people employ gestures. As opposed to functionalist theories, one of the first and most influential mechanistic accounts of gesture production (the Growth Point Theory; McNeill, 1992, 2005; McNeill and Duncan, 2000) posits that gestures do not compensate for thinking and speaking. According to this account, gesture and speech originate from a single representational system, in which an utterance contains both linguistic and imagistic structures that cannot be separated. Speech stems from propositional linguistic representations, whereas gestures stem from non-propositional imagistic representations and reflect visual, spatial, and motoric thinking (McNeill, 1992, 2005; Krauss et al., 2000). This account suggests that gestures are manifestations of the imagistic component of thought. Although mechanistic accounts would not deny the role gestures play in managing cognitive processes, they emphasize how people employ gestures rather than why they gesture.
In the following sections, we review evidence regarding how individual differences in cognitive domains relate to gesture use and processing from a functionalist account, mainly considering the gesture-as-a-compensation-tool view. That is, following the functionalist approach, we illustrate the functions of gestures for speakers and listeners who use their cognitive resources differently. Gestures might not be used as a compensatory tool in every situation across different groups (e.g., So et al., 2009; Chui, 2011; de Ruiter et al., 2012); yet, the current state of the field supports the beneficial role of gestures for communication, thinking, and learning (e.g., Goldin-Meadow et al., 2001; Kita et al., 2017; Novack and Goldin-Meadow, 2017).
Individual Differences in Gesture Production
People of all ages show variation in how frequently they use gestures, how salient their gestures are, and what types of gestures they use during spontaneous speech (e.g., Feyereisen and Havard, 1999; Richmond et al., 2003; Priesters and Mittelberg, 2013; Chu et al., 2014; Nagels et al., 2015; Schmalenbach et al., 2017; Arslan and Göksun, in press). People also differ in how much they benefit from using gestures during speaking, encoding, and subsequent learning (e.g., Goldin-Meadow et al., 2001; Ping and Goldin-Meadow, 2010; Galati et al., 2018). To date, studies have mostly focused on two possible cognitive correlates: visual-spatial vs. verbal cognitive resources. We discuss how individual differences in visual-spatial and verbal cognitive capacities relate to gesture production in children, adults, and elderly adults.
Individual Differences in Gesture Production in Children
Babies start to use pointing gestures at around 12 months of age and iconic gestures at around 3 years of age (Iverson et al., 1999; Özçalışkan and Goldin-Meadow, 2005, 2010). Gestures pave the way for the transition from the prelinguistic to the linguistic period, and gestures become increasingly intertwined with speech as children grow older (e.g., Capirci et al., 2005; Capirci and Volterra, 2008; Liszkowski et al., 2008). Özçalışkan and Goldin-Meadow (2005) analyzed children’s gestures at 14, 18, and 22 months of age while the children were interacting spontaneously with their mothers. They showed that children used more gestures as they got older. Moreover, there was a developmental shift toward the use of more supplementary gestures (e.g., saying “ride” and pointing at the bike) as opposed to reinforcing gestures (e.g., saying “bike” and pointing at the bike) by older children. Yet, there was no difference in the quality or quantity of maternal input across development, suggesting that changes in children’s gestural behavior might reflect developmental changes in children’s own cognitive processes. Thus, individual differences in several cognitive processes might lead to variation in how and to what extent children use gestures in spontaneous speech. Children, even as early as 14 months of age, show individual variation in whether they use iconic gestures and how frequently they use them (e.g., Iverson et al., 1999; Özçalışkan and Goldin-Meadow, 2005). What drives these very early individual differences in gesture use? To date, research on the gesturing behavior of young children has mostly focused on how individual differences in early gesture use predict later language development (e.g., Rowe and Goldin-Meadow, 2009; Demir et al., 2015).
Studies examining the precursors of these variations, on the other hand, have primarily focused on how parental language input (speech and gesture) relates to children’s spontaneous gesture production (e.g., Iverson et al., 2008; Rowe et al., 2008; Tamis-LeMonda et al., 2012). It is unknown which cognitive and perceptual abilities of these young children drive early individual differences in gesture production. The early socio-cognitive precursors of gesture use in infancy remain an open area for further investigation.
How do children use gestures at later ages, such as during the preschool and school-age years? Compared to young adults, children have not yet fully developed verbal skills; thus, they might use gestures more during speaking, as gestures provide an alternative channel of expression (e.g., Melinger and Levelt, 2004) and help facilitate speaking (Krauss et al., 2000). Indeed, studies report that preschool-aged children benefit more from gestures than older children and adults, especially when using complex language (e.g., Church et al., 2000; Austin and Sweller, 2014). Moreover, children in transitional stages (i.e., children who have the conceptual knowledge but not yet the skills to verbalize that knowledge) used more gestures to convey ideas than children who had the necessary verbal resources to convey the same idea linguistically (e.g., Church and Goldin-Meadow, 1986; Perry et al., 1992). These gestures (so-called gesture-speech mismatches) expressed non-redundant information that was not found in the accompanying speech. Children (ages 5–10) used more non-redundant speech-gesture combinations, both at the clause and word levels, than adults (Alibali et al., 2009). This is also evident in the expression of other linguistically challenging categories such as causal or spatial relations (e.g., Göksun et al., 2010; Austin and Sweller, 2018; Calero et al., 2019; Karadöller et al., 2019). Children used more gestures to convey additional information when they could not verbalize instruments of causal events (Göksun et al., 2010) or spatial relations such as left-right (Karadöller et al., 2019). For example, ambiguous spatial terms such as “here” can be complemented by gestures to specify the spatial relation (Karadöller et al., 2019). Multimodal discourse continues to develop during the school-age years.
There is a developmental shift toward the use of more gestures per clause: in narrative production tasks, 10-year-old children and adults produce a higher number of gestures per clause than 6-year-olds (e.g., Colletta et al., 2010; Alamillo et al., 2013).
Developmental studies suggest that children might use gestures as an alternative channel of expression to compensate for their limited linguistic proficiency (e.g., younger vs. older children or children vs. adults; Church et al., 2000; Alibali et al., 2009; Colletta et al., 2010). This is in line with bilingualism research showing that bilingual children speaking in their L2 used more gestures than monolinguals (e.g., Smithson et al., 2011; Wermelinger et al., 2020). Moreover, research on clinical populations with communication and language delays suggests that although there are delays in gesture production in the first 2 years, some children might use gesture to compensate for communication and language difficulties at preschool and school ages (Özçalışkan et al., 2013; LeBarton and Iverson, 2017). Children with language impairments (LI) used gestures at a higher rate and produced greater proportions of gestures that added unique information to the accompanying speech compared to typically developing (TD) peers, suggesting that children with LI employ gestures as an alternative channel of expression in the face of language difficulties (Evans et al., 2001; Blake et al., 2008; Iverson and Braddock, 2011; Mainela-Arnold et al., 2011, 2014).
Similar to children with LI, children with Down syndrome (DS) used more gesture-only expressions and expressed information uniquely in their gestures compared to TD children, compensating for spoken language delays (Stefanini et al., 2007; Dimitrova et al., 2016; Özçalışkan et al., 2017). Children with Williams syndrome (WS) also used more iconic gestures in a picture naming task than TD children, alleviating their word-finding difficulties (Bello et al., 2004). Yet, not all children with language delays benefit from gestures as a compensatory tool. Children with autism spectrum disorder (ASD) exhibit delays in gesture production that are apparent in both frequency and complexity (Colgan et al., 2006; Rozga et al., 2011; Watson et al., 2013; Dimitrova et al., 2016; Özçalışkan et al., 2016, 2017). Research shows that children with ASD used gestures to initiate and sustain joint attention and to compensate for speech limitations by supplementing speech to a lesser degree than TD peers, leading to negative consequences for learning and social interaction opportunities (Sowden et al., 2013; Watson et al., 2013; Mastrogiuseppe et al., 2015). Impairments in gesture production are more pronounced in ASD than in other developmental delays such as DS (Mastrogiuseppe et al., 2015), LI (Stone et al., 1997), and general intellectual delay (Mundy et al., 1990) and, thus, are considered to be a central component of problems in social interactions and delays in social development in ASD. Moreover, language delays not only affect children’s gesture production but also the gestural input they receive from their caregivers, resulting in cascading consequences for language development.
Research suggests that children’s language level affects caregivers’ gestures, to a greater extent when a child’s language skills are limited (Iverson et al., 2006; Talbott et al., 2015; Dimitrova et al., 2016; Özçalışkan et al., 2017, 2018). For example, mothers of undiagnosed infants at high risk for ASD gestured more frequently than mothers of low-risk infants (Talbott et al., 2015). The evidence on the compensatory use of gestures by children with language delays indicates that gesture is a tool that should be harnessed to support learning, especially for child clinical populations (LeBarton and Iverson, 2017). Gesture can also serve as an early diagnostic tool to foresee persistent language delay, especially for children with unilateral brain lesions (Sauer et al., 2010; Özçalışkan et al., 2013). Although these studies suggest a link between early spoken language abilities and gesture production in children, direct evidence on how individual differences in early receptive and expressive language skills relate to spontaneous gesture use in children with and without language delays is quite limited (Kartalkanat and Göksun, 2020; Wermelinger et al., 2020).
A growing body of literature shows that using gestures benefits children’s subsequent memory and learning (e.g., Alibali and DiRusso, 1999; Wakefield et al., 2018). Do all children benefit similarly from using gestures? Post et al. (2013) showed that children who simultaneously produced and observed gestures when learning grammatical rules performed worse than children who only observed gestures. However, the adverse effects of gesturing on learning were only visible for children with lower verbal skills, suggesting that producing and observing gestures simultaneously might be too cognitively demanding, especially for children with lower verbal resources (Kalyuga, 2007). Nevertheless, it should be noted that this study tested the effects of using gestures on learning under high cognitive load. There is no direct evidence on how verbal skills relate to how much children benefit from using gestures under average cognitive load (e.g., without observing gestures simultaneously).
Developmental studies have mostly compared different age groups (e.g., children vs. adults or younger vs. older children; e.g., Colletta et al., 2010), bilinguals vs. monolinguals (e.g., Mayberry and Nicoladis, 2000; Smithson et al., 2011), and clinical vs. non-clinical groups (e.g., Bello et al., 2004; Dimitrova et al., 2016; LeBarton and Iverson, 2017). These studies suggest that children use gestures as a compensatory tool and that individual differences in verbal skills play a role in how much children use and benefit from gestures during learning. Moreover, visual-spatial skills follow a protracted development, and children show individual variation in visual-spatial abilities (Newcombe et al., 2013). Given that gestures are visual-spatial entities and help activate, maintain, and manipulate visual-spatial information (Kita et al., 2017), individual differences in visual-spatial abilities during childhood might affect how much children use gestures and benefit from using gestures for learning. However, there is no direct evidence on how individual differences in verbal and visual-spatial skills relate to children’s gesture use, which calls for future research.
Individual Differences in Gesture Production in Young Adults
Most of the research on individual differences in gesture production has focused on young adults. Studies showed that young adults with lower cognitive capacities used more spontaneous gestures and benefited more from using gestures (e.g., Chu et al., 2014; Gillespie et al., 2014; but see Hostetter and Alibali, 2007), supporting the functionalist accounts (Goldin-Meadow et al., 2001; Marstaller and Burianová, 2013; Kita et al., 2017) and gesture’s role as a compensation tool. Visual-spatial cognitive capacity is related to how much speakers employ gestures during speaking and thinking. People with lower visual and spatial WM capacities, mental rotation skills, and spatial conceptualization abilities (Kita and Davies, 2009) used more gestures than high-spatial-ability individuals when explaining abstract phrases or social dilemmas (Chu et al., 2014). In a spatial gesture elicitation task, Göksun et al. (2013a) asked young adults to describe how they solved mental rotation problems and found that people with lower spatial abilities (lower mental rotation scores) used more gestures than people with higher scores. However, low- and high-spatial-ability individuals differed not only in the frequency of gestures but also in the type of gestures they used. People with low spatial ability used more static gestures depicting objects (i.e., cubes or whole objects), whereas high-spatial-ability individuals used more dynamic gestures to express motion, such as rotation or direction, or static gestures referring to object pieces (e.g., the bottom part of the L shape).
This finding is in line with a previous study showing that although lower- and higher-fluid-intelligence individuals (as measured by Raven’s matrices) used an equal number of gestures when describing how to solve geometric analogies, people with higher fluid intelligence used more gestures to express motion than people with lower fluid intelligence (Wartenburger et al., 2010; Sassenberg et al., 2011).
Verbal cognitive capacity is another predictor of how and to what extent speakers use gestures (e.g., Baxter et al., 1968; Hostetter and Alibali, 2007, 2011; Nagpal et al., 2011; Smithson and Nicoladis, 2013; Gillespie et al., 2014; cf. Frick-Horbury, 2002; Chu et al., 2014). Young adults with lower verbal abilities, such as lower verbal WM capacity, vocabulary size, and semantic fluency (i.e., phonological and lexical retrieval ability), used more gestures during spontaneous speech than individuals with higher verbal abilities (e.g., Hostetter and Alibali, 2007, 2011; Smithson and Nicoladis, 2013; Gillespie et al., 2014; but see Chu et al., 2014). These findings converge with bilingualism research showing that bilinguals used more gestures when talking in their L2 than in their L1 and more than monolinguals (e.g., Gullberg, 1998; Nagpal et al., 2011). Verbal WM also predicted gesture frequency similarly in bilinguals and monolinguals (Smithson and Nicoladis, 2013).
Is there an interaction between verbal and spatial skills in gesture use? Hostetter and Alibali (2007) showed a quadratic relationship between verbal resources and spontaneous gesture use. People with the lowest and highest verbal skills (i.e., phonemic fluency) gestured more than people with average verbal skills when they were retelling a cartoon story and describing how to wrap a package. Moreover, low verbal/high visual-spatial individuals produced the largest number of gestures and used more non-redundant gestures (Vanetti and Allen, 1988; Hostetter and Alibali, 2007, 2011). This might suggest that gestures are more helpful when speakers have spatial information in the non-propositional format in mind but are unable to lexicalize or to encode verbally (e.g., Graham and Heywood, 1975; Krauss and Hadar, 1999).
Young adults also show individual variation in how much they benefit from using gestures during task solving or subsequent memory and learning (e.g., Marstaller and Burianová, 2013). Young adults use many gestures when encoding information, which facilitates their subsequent memory and learning, especially for visual and spatial information (e.g., Chu and Kita, 2011; So et al., 2015). However, using gestures is especially beneficial for people with lower cognitive capacity (e.g., Marstaller and Burianová, 2013; Pouw et al., 2016; Galati et al., 2018). People who used gestures when trying to learn new routes had better memory in a subsequent navigation task; however, this was only evident for people with lower spatial perspective-taking ability (Galati et al., 2018). Moreover, gesturing benefited problem solving under higher cognitive load (e.g., in a dual-task paradigm; Marstaller and Burianová, 2013) and when internal cognitive resources were taxed or limited (e.g., Pouw et al., 2016).
Individual differences in verbal and visual-spatial skills affect how much young adult speakers use and benefit from producing gestures during speaking and problem-solving. In line with the gesture-as-a-compensation-tool account, speakers employ gestures to compensate for lower verbal and spatial cognitive resources. However, we should be cautious about the generalizability of these findings, as different cognitive measures and gesture elicitation tasks (e.g., spatial vs. non-spatial, abstract) might yield different results. Further research is needed to replicate these conclusions across contexts.
Individual Differences in Gesture Production in Healthy Aging
Evidence on spontaneous gesture use in healthy aging is minimal. Most of the research compared young and elderly adults and showed that spontaneous gesture production and gesture imitation are impaired in aged populations (e.g., Cohen and Borsoi, 1996; Dimeck et al., 1998; Feyereisen and Havard, 1999). Elderly adults used fewer representational gestures than young adults, whereas overall gesture frequency and the use of non-representational gestures (e.g., beat or conduit gestures) were comparable across the two groups (Cohen and Borsoi, 1996; Glosser et al., 1998; Feyereisen and Havard, 1999; Arslan and Göksun, in press; but cf. Özer et al., 2017; Schubotz et al., 2019). This might be due to declining visual-spatial cognitive resources in aging. For example, mental imagery declines with aging (e.g., Dror and Kosslyn, 1994; Copeland and Radvansky, 2007; Andersen and Ni, 2008), and, indeed, individual differences in mental imagery, but not spatial WM capacity, were associated with how frequently young and elderly individuals used spontaneous gestures, particularly in a spatial address description task (Arslan and Göksun, in press). Elderly individuals were also impaired in designing their multimodal utterances for their addressees (i.e., audience design; Schubotz et al., 2019). When narrating comic cartoons, young adults used fewer gestures when they knew that their addressee had also watched the cartoon than when their addressee had not seen it. Elderly adults, however, used an equal number of gestures in both cases.
We might expect declining visual-spatial skills in aging to lead to higher gesture use by older adults than by younger ones. However, gestures might serve as a compensatory tool for managing cognitive load only when the person has the necessary, intact resources. Most of the studies comparing younger vs. older adults tested individuals older than 60 years of age (e.g., Cohen and Borsoi, 1996), and it is unknown whether visual-spatial skills are severely impaired in this age group. Little is also known about which declining cognitive abilities in healthy aging lead to age-related impairments in gesture production (but see Arslan and Göksun, in press). This area is open to investigation: future research should examine which declining cognitive resources lead to impaired gesturing in aging, whether the effects of aging on gesturing are similar for everyone, and which cognitive resources might protect against the decline of gesture production. More research is also needed to examine whether elderly individuals benefit from using gestures as young adults and children do, or whether producing gestures imposes an extra cognitive burden on their already limited cognitive resources.
We have mainly focused on gesture use in healthy aging; yet the line of research on how people with neurodegenerative disorders use gestures is informative as well (e.g., Cleary et al., 2011; Rousseaux et al., 2012; Klooster et al., 2015; Akhavan et al., 2018; Özer et al., 2019). People with neurodegenerative disorders such as Alzheimer’s disease, primary progressive aphasia, and Parkinson’s disease are natural targets for studying gesture in aged populations because the prevalence rates of these diseases consistently increase with age (e.g., Jorm et al., 1987; Brayne et al., 2006). For example, Klooster et al. (2015) showed that the beneficial effects of using and observing gestures on new learning in a Tower of Hanoi paradigm were absent in elderly patients with intact declarative memory but impaired procedural memory as a consequence of Parkinson’s disease. This suggests that the procedural memory system supports the ability of gestures to drive new learning. Thus, the decline of different memory systems in different neurodegenerative disorders might lead to variation in how elderly adults benefit from gestures during learning. Future studies should test the cognitive correlates of impaired gesture use in different neuropsychological groups.
Individual Differences in Gesture Processing
Listeners are sensitive to speakers’ gestures and benefit from observing these gestures during online language comprehension, encoding, and subsequent memory and learning (Holler et al., 2009; Kelly et al., 2010; Hostetter, 2011; Dargue et al., 2019). The facilitative effects of observing gestures are evidenced in both children (e.g., Cook et al., 2008; Austin and Sweller, 2014, 2017; Macoun and Sweller, 2016; Vogt and Kauschke, 2017; Holler et al., 2018; Aussems and Kita, 2019; Dargue and Sweller, 2020; Kartalkanat and Göksun, 2020) and young adults (e.g., Beattie and Shovelton, 1999; Roth, 2001; Holle and Gunter, 2007; Kelly et al., 2008; Hostetter, 2011; Rueckert et al., 2017; Dargue and Sweller, 2020). Research on individual differences in how listeners attend to and process speakers’ gestures, and how much they benefit from observing gestures during comprehension and learning, is quite limited, especially compared to the literature on individual differences in gesture production (e.g., Post et al., 2013; Wu and Coulson, 2014a,b; Yeo and Tzeng, 2019; Özer and Göksun, 2020). In the next subsections, we review evidence on individual differences in gesture processing and its effects on comprehension and learning in children, young adults, and elderly adults. Then, we discuss several possible cognitive mechanisms that might yield individual differences in gesture processing, suggesting new avenues for future research.
Individual Differences in Gesture Processing in Children
Electrophysiological studies showed that children start to process iconic gestures as semantic entities, like words, at around 18 months of age (Sheehan et al., 2007). Behaviorally, they start to comprehend iconic gestures representing entities at around 3 years of age (Stanfield et al., 2014) and iconic gestures representing events at around 4 years of age (Glasser et al., 2018). Studies showed that 3-year-olds could not integrate speech and gesture, whereas 5-year-olds and adults could (e.g., Sekine and Kita, 2015; Sekine et al., 2015). Moreover, children from 6 years of age onward integrate speech and gesture in an online fashion comparable to adults (Dick et al., 2012; Sekine et al., in press). Demir-Lira et al. (2018) showed that children’s gesture-speech integration recruits the same neural network as adults’. Yet, this was true only for children who were able to successfully integrate speech and gesture behaviorally. What, then, drives these individual differences in early gesture-speech integration ability?
Gesture-speech integration requires a global developmental shift, and the precursors of gesture comprehension and gesture-speech integration are unknown. Gestures are visual-spatial entities, and their processing and interpretation require visual-spatial cognitive resources (e.g., Kelly and Goldsmith, 2004). Children with lower visual-spatial skills might thus have difficulty in processing and comprehending gestures compared to children with higher visual-spatial skills. The global development of executive attention and general WM capacity, on the other hand, might play a role in gesture-speech integration. For example, children with lower overall WM capacity might have difficulty maintaining and integrating two different kinds of information simultaneously, especially in offline integration tasks (e.g., Demir-Lira et al., 2018). Cognitive predictors of individual differences in children’s gesture comprehension and gesture-speech integration abilities require further attention.
What about individual differences in the beneficial effects of observing gestures for subsequent learning? Not all children benefit from visual aids such as diagrammatic illustrations when learning math (e.g., Cooper et al., 2017). Indeed, observing gestures does not assist all children’s comprehension of narratives or learning of new skills (e.g., Church et al., 2004; van Wermeskerken et al., 2016; Yeo and Tzeng, 2019; Bohn et al., 2020; Kartalkanat and Göksun, 2020). Kartalkanat and Göksun (2020) found a positive relationship between verbal skills and the beneficial effects of observing gestures: preschoolers with higher expressive language ability benefited more from observing iconic gestures when encoding spatial events. Bohn et al. (2020) found that children benefited from observing gestures when learning novel skills (e.g., how to open a novel apparatus) as they became older. On the other hand, Demir et al. (2014) showed that children with pre- and perinatal unilateral brain injury (BI) who had difficulty in narrative production benefited more from observing gestures when retelling narratives compared to typically developing (TD) children. Moreover, children with specific language impairment (SLI) benefited more from observing gestures compared to TD children and used the same gestures they observed when retelling the inferred meaning of the spoken messages (Kirk et al., 2011). The contradictory findings on the relation between verbal abilities and the beneficial effects of observing gestures in children with language impairments vs. children with intact language abilities pose a challenge. We might expect TD children with lower verbal abilities to benefit more from observing gestures, as predicted by the gesture-as-a-compensation-tool account; however, young children’s limited verbal resources might already be consumed by processing speech, leaving few resources to process and benefit from external visual cues (i.e., gestures; Kalyuga, 2007).
Again, gestures might help children manage cognitive load when they have fully developed verbal abilities. However, children with language impairments might employ gestures to compensate for their already-impaired spoken language abilities.
Individual differences in verbal (e.g., digit span task; Kartalkanat and Göksun, 2020), visual (e.g., visual patterns task; van Wermeskerken et al., 2016), and general WM capacity (e.g., operation span task; Yeo and Tzeng, 2019) did not predict how much children benefited from observing gestures for learning. However, there was hardly any variance in WM capacity in most of these studies (e.g., van Wermeskerken et al., 2016), which might obscure otherwise possible effects of different WM capacities on the benefits of observing gestures in children. Additionally, how general spatial skills (e.g., mental rotation and mental imagery) relate to how much children benefit from observing gestures needs to be investigated in future research.
Individual Differences in Gesture Processing in Young Adults
Young adults also differ in how they process spontaneous co-speech gestures. Processing gestures requires visual, spatial, and motoric cognitive resources (e.g., Kelly and Goldsmith, 2004; Wu and Coulson, 2014a). We would thus expect people with higher visual-spatial abilities to process and comprehend gestures better. Indeed, Wu and Coulson (2014a) found that people with higher spatial WM (but not verbal WM) were better at processing co-speech gestures, as they were more sensitive to speech-gesture mismatches (i.e., high-spatial individuals were affected more negatively when gesture and speech expressed incongruent information). Moreover, people with larger spans for retaining and manipulating bodily configurations (i.e., a motor movement span task assessing the ability to retain body-centric motor information) comprehended gestures better (Wu and Coulson, 2014b). In a recent study, we asked how visual-spatial vs. verbal WM capacity relates to processing concurrent visual (i.e., gesture) and verbal (i.e., speech) information in a mismatch paradigm initially used by Kelly and colleagues in 2011 (Özer and Göksun, 2020). We demonstrated that listeners showed differential sensitivity in processing concurrent gestural vs. spoken information. Although gesture-speech mismatches hindered overall comprehension, how listeners were affected by mismatches in different modalities (gesture vs. speech mismatches) depended on their visual-spatial vs. verbal cognitive dispositions. Observing mismatching visual information (i.e., gesture) imposes an additional visual-spatial cognitive load, and people with higher spatial abilities were better at maintaining and processing two different, mismatching pieces of visual information due to their higher capacity. As a result, these individuals performed better when gestures expressed mismatching information compared to people with lower spatial abilities.
People with higher verbal abilities, on the other hand, performed better when speech expressed mismatching information compared to people with lower verbal abilities. These findings suggest that visual-spatial cognitive resources are critical for gesture processing and that observing mismatching gestures increases visual-spatial cognitive load (e.g., Kelly and Goldsmith, 2004; Hostetter et al., 2018). Processing mismatching information in the visual modality would thus be less demanding for people with larger visual-spatial cognitive resources.
What about individual differences in how much listeners benefit from observing gestures? Earlier studies are limited in suggesting how listeners integrate visual information with speech and use gestures to encode information, either for online language comprehension or for subsequent learning. Research on how learners benefit from different multimedia materials (visual vs. verbal representations) might give us insight into this matter (Ausburn and Ausburn, 1978; Kirby et al., 1988; Koć-Januchta et al., 2017; Kiat and Belli, 2018; but see Kirschner, 2017). Individuals show variation in how they benefit from visual vs. verbal information (Kirby et al., 1988; Riding et al., 1995; Kozhevnikov et al., 2002; Mendelson and Thorson, 2004; Meneghetti et al., 2014; Alfred and Kraemer, 2017). For example, learners vary in how they fixate on text vs. pictures when learning from multimedia resources (Koć-Januchta et al., 2017), and students with higher spatial abilities benefited more from the presence of 3D models when learning cell biology than students with lower spatial abilities (Huk, 2006). This suggests that listeners’ cognitive dispositions might be related to how much they benefit from observing gestures vs. hearing speech. A very recent study directly tested how different WM capacities related to how much young adults benefited from observing gestures (Aldugom et al., 2020). Undergraduate students with higher visual WM capacity (i.e., visual patterns task) benefited more from observing gestures during math learning, whereas verbal (i.e., sentence span task) and motoric (i.e., movement span task; Wu and Coulson, 2014b) WM capacities did not predict the beneficial effects of observing gestures (Aldugom et al., 2020). Although it is well-established in the literature that gestures facilitate listeners’ comprehension and learning (see Özyürek, 2014 for review), evidence suggests that this is not a monolithic process.
It is also possible that observing gestures does not always facilitate comprehension and learning. For example, observing gestures hurt the learning of phonetic distinctions at the syllable level within a word for English-speaking adults learning vowel length contrasts in Japanese (Kelly et al., 2014). However, as with children (Kartalkanat and Göksun, 2020), learners’ level of second-language proficiency, another cognitive resource to be examined, might play a role in benefiting from gestures. Future studies should investigate the cognitive precursors of individual differences in the beneficial effects of observing gestures across different learning contexts (e.g., spatial vs. non-spatial) and different stages of language processing (e.g., phonological vs. semantic; Kelly et al., 2014).
Individual Differences in Gesture Processing in Healthy Aging
Few studies have examined how elderly individuals process gestures and benefit from observing them (e.g., Thompson, 1995; Ska and Croisile, 1998; Montepare et al., 1999; Thompson and Guzman, 1999; Cocks et al., 2011). Elderly individuals are impaired in comprehending pantomimes and emotional gestures compared to young individuals (Ska and Croisile, 1998; Montepare et al., 1999). Moreover, elderly adults are impaired in integrating speech and gesture compared to young adults (Cocks et al., 2011). However, they performed equally well when the two cues were presented in isolation, suggesting that gesture-speech integration is impaired while the ability to process gestures themselves is preserved. Indeed, elderly adults mostly relied on visible speech and did not benefit from observing gestures when recalling sentences (Thompson, 1995).
Although young adults benefited from visual aids (i.e., visible speech and gestures) under challenging listening conditions (i.e., a dichotic shadowing task), older adults did not (Thompson and Guzman, 1999). The difference in the effects of observing gestures between younger and older adults might be related to declining cognitive abilities in aging, particularly WM capacity, as WM is required to maintain and manipulate different kinds of information. However, this has not been addressed directly.
Previous research suggests that elderly adults have difficulty integrating visual (i.e., gesture) and verbal (i.e., speech) information compared to younger adults. This might be due to a decline in global cognitive skills such as executive attention and general WM capacity, yet it has not been directly tested. Future studies should compare younger and older adults on several cognitive measures to understand the cognitive architecture behind impaired gesture processing and gesture-speech integration in healthy aging.
Conclusion and Future Directions
Speakers use gestures as they speak and think, and listeners, in turn, are sensitive to speakers’ gestures. Gestures (both their use by speakers and their observation by listeners) have beneficial effects on language comprehension, problem-solving, encoding, and subsequent learning. Studies to date have mostly focused on the role of external factors (e.g., speech content and communicative context) in gestural behavior to answer when we use and benefit from gestures. However, it is also essential to ask who uses and benefits from gestures, and for which purposes. Research on the cognitive precursors of these individual differences in gesture use and processing has only just started to emerge. Examining individual differences in gesture use and processing will help us uncover the cognitive architecture behind these processes and inform gesture research based on group data. Accounts that explain how and why gestures are employed should integrate individual differences research to give a full picture of when, why, and for whom gestures exhibit their supposed roles. This line of research is also informative for the development of educational programs incorporating the use of gestures by learners or teachers: instructional programs should be tailored to the cognitive dispositions and needs of learners for optimal learning outcomes.
Most of the research on individual differences in the gesture literature has examined gesture production in young adults. Studies on gesture use in children and elderly adults have focused on group comparisons (i.e., comparing children at different ages, children vs. adults, younger vs. older adults, and clinical vs. non-clinical groups). Moreover, research on individual differences in gesture processing is limited compared to the production literature. In the current review, we (1) combined two lines of research, using gestures and observing gestures, and (2) discussed the possible cognitive precursors of gesture use and processing in different age groups. We also highlighted the functions of producing and seeing gestures with regard to their compensatory roles in speaking and thinking.
Gestures provide an alternative expression channel and help speakers and listeners communicate (e.g., Alibali et al., 2001; Hostetter, 2011). Gestures also decrease speakers’ and listeners’ cognitive load by helping them activate, maintain, and manipulate visual-spatial information (e.g., Kita et al., 2017; Novack and Goldin-Meadow, 2017). In other words, gestures help people manage cognitive load and are used as a compensatory tool. Listeners’ and speakers’ cognitive dispositions interact with this compensatory role of gestures, leading to individual differences in how much people benefit from using and seeing gestures for speaking, comprehension, task solving, and learning. As the gesture-as-a-compensation-tool account would argue, children and adults with lower cognitive resources use gestures, and benefit from using them, more to manage cognitive load compared to people with higher cognitive resources (e.g., Church et al., 2000; Göksun et al., 2010; Marstaller and Burianová, 2013; Austin and Sweller, 2014; Chu et al., 2014; Gillespie et al., 2014; Galati et al., 2018). However, we suggest that gestures do not replace impaired cognitive abilities; instead, gestures help manage cognitive load when cognitive resources are intact and might not compensate for already-impaired abilities. For example, people with aphasia use more gestures to compensate for impaired speech, but only when they have intact conceptual knowledge of what they express (e.g., Göksun et al., 2013b, 2015). In a similar vein, the decrease in gesture production in healthy aging might be due to impaired visual-spatial abilities such as mental imagery (e.g., Cohen and Borsoi, 1996; Arslan and Göksun, in press). There is also evidence of individual differences in gesture processing. Processing and comprehending gestures require visual-spatial cognitive resources (e.g., Kelly and Goldsmith, 2004; Hostetter et al., 2018).
People with higher visual-spatial skills (or older children compared to younger children) process gestures better than people with lower visual-spatial skills (e.g., Wu and Coulson, 2014a,b; Özer and Göksun, 2020). In line with the gesture-as-a-compensation-tool account, we would expect people with lower cognitive resources (especially visual-spatial ones) to benefit more from observing external visual cues (i.e., gestures; but see Aldugom et al., 2020). However, research on how visual-spatial abilities relate to how much listeners benefit from observing gestures is inconclusive and calls for further investigation.
Although group-comparison studies are informative, future work should address within-group variation more, especially in children and elderly adults. How different cognitive skills are associated with gesture production and processing should be tested directly across different conditions. Different cognitive measures, gesture elicitation tasks, and learning contexts might yield different results, and these should be incorporated to give a full picture of for whom and when gestures are helpful. For example, the relationship between visual-spatial abilities and how frequently speakers use gestures, and how much they benefit from using and observing gestures, depends on the content of the information to be communicated or learned (Lausberg and Kita, 2003; Hostetter and Alibali, 2007; Chu et al., 2014; Arslan and Göksun, in press). The role of visual-spatial abilities in gesture use and processing might be more pronounced for spatial than for non-spatial speech (e.g., Alibali, 2005; Arslan and Göksun, in press). Future work should also investigate how internal sources of variation (e.g., individual differences in several abilities) interact with external sources of variation (e.g., speech content and task difficulty).
One area open for future investigation is the cognitive predictors of gesture processing; that is, how listeners attend to, process, and benefit from observing gestures. Studies on how visual, verbal, and motoric WM capacities are linked to individuals’ processing of concurrent gesture vs. speech have employed mismatch paradigms (Wu and Coulson, 2014a,b; Özer and Göksun, 2020). However, gesture mismatches are rare in natural communication, and we should investigate how different cognitive abilities relate to gesture processing in more ecologically valid paradigms. It is also unknown whether there are individual differences in visual attention to gestures. Gestures are visual articulators and are subject to visual processing. Although earlier research found that gestures can be processed peripherally and do not require direct visual attention (e.g., Gullberg and Holmqvist, 1999, 2006; Gullberg and Kita, 2009), recent evidence suggests that several factors, such as the comprehensibility of speech and the native/non-native status of the listener, might modulate how listeners allocate overt visual attention to gestures (e.g., Drijvers et al., 2019). Future studies should address whether people with different visual-spatial vs. verbal abilities show differential overt visual attention to gestures and how this relates to individual differences in gesture processing (Wakefield et al., 2018). Beyond attending to and processing gestures, very little is known about whether and how individuals benefit from observing gestures during online language comprehension and learning across different learning contexts (Aldugom et al., 2020). We are currently investigating how visual-spatial skills relate to how much listeners benefit from observing gestures when comprehending spatial relations between objects.
All of the studies reviewed above tested individual differences behaviorally. Electrophysiological and neuroimaging studies investigate the neural architecture of gesture use and processing (e.g., Kelly et al., 2004; Wu and Coulson, 2005; Willems et al., 2009). We might observe individual differences in neural data that are otherwise not observable behaviorally (e.g., Demir-Lira et al., 2018). Future work should examine individual differences in the recruitment of different neural networks when using and observing gestures, and how these neural differences relate to behavioral performance after considering individuals’ cognitive skills.
The current review focused only on how individual differences in cognitive skills (mostly verbal and visual-spatial skills) relate to gesture use and processing. However, individual differences in other domains, such as personality (Hostetter and Potthoff, 2012) and other cognitive and perceptual skills such as selective attention, auditory processing, and the speed of multisensory processing, might also affect how people employ gestures and should be tested (e.g., Schmalenbach et al., 2017). Moreover, it is also important to study the relation between gesture production and processing. It is unknown how individual differences in spontaneous gesture use predict how people attend to and benefit from observing gestures, or vice versa (Wakefield et al., 2013). Gesture processing might be affected by the extent to which people themselves use gestures, and future studies should address the production-perception cycle and the mechanisms behind it.
In sum, gesture use and processing are not monolithic processes and show individual variation. Speakers and listeners can use gestures as a compensation tool during communication and thinking, one that interacts with individuals’ cognitive dispositions.
DÖ and TG conceived of the presented idea. DÖ drafted the manuscript. TG revised the manuscript critically for important intellectual content. Both the authors contributed to the article and approved the submitted version.
This work was supported in part by a TÜBA-GEBİP 2018 award (Turkish Academy of Sciences Outstanding Young Scientist Award) and a James S. McDonnell Foundation Scholar Award (Grant no. 220020510) to TG.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The reviewer declared a shared collaboration in a society with one of the authors, TG, at the time of review.
We would like to thank all members of the Language & Cognition Lab at Koç University for thought-provoking discussions and invaluable contributions to various stages of this article.
Akbıyık, S., Karaduman, A., Göksun, T., and Chatterjee, A. (2018). The relationship between co-speech gesture production and macrolinguistic discourse abilities in people with focal brain injury. Neuropsychologia 117, 440–453. doi: 10.1016/j.neuropsychologia.2018.06.025
Alamillo, A. R., Colletta, J. M., and Guidetti, M. (2013). Gesture and language in narratives and explanations: the effects of age and communicative activity on late multimodal discourse development. J. Child Lang. 40, 511–538. doi: 10.1017/S0305000912000062
Aldugom, M., Fenn, K., and Cook, S. W. (2020). Gesture during math instruction specifically benefits learners with high visuospatial working memory capacity. Cogn. Res. Princ. Implic. 5:27. doi: 10.1186/s41235-020-00215-8
Alfred, K. L., and Kraemer, D. J. (2017). Verbal and visual cognition: individual differences in the lab, in the brain, and in the classroom. Dev. Neuropsychol. 42, 507–520. doi: 10.1080/87565641.2017.1401075
Alibali, M. W., Evans, J. L., Hostetter, A. B., Ryan, K., and Mainela-Arnold, E. (2009). Gesture-speech integration in narrative: are children less redundant than adults? Gesture 9, 290–311. doi: 10.1075/gest.9.3.02ali
Alibali, M. W., Heath, D. C., and Myers, H. J. (2001). Effects of visibility between speaker and listener on gesture production: some gestures are meant to be seen. J. Mem. Lang. 44, 169–188. doi: 10.1006/jmla.2000.2752
Austin, E. E., and Sweller, N. (2017). Getting to the elephants: gesture and preschoolers’ comprehension of route direction information. J. Exp. Child Psychol. 163, 1–14. doi: 10.1016/j.jecp.2017.05.016
Azar, Z., Backus, A., and Özyürek, A. (2019). General-and language-specific factors influence reference tracking in speech and gesture in discourse. Discourse Process. 56, 553–574. doi: 10.1080/0163853X.2018.1519368
Azar, Z., Backus, A., and Özyürek, A. (2020). Language contact does not drive gesture transfer: heritage speakers maintain language specific gesture patterns in each language. Biling.: Lang. Cogn. 23, 414–428. doi: 10.1017/S136672891900018X
Bates, E., Dale, P., and Thal, D. (1995). “Individual differences and their implications for theories of language development” in Handbook of child language. eds. P. Fletcher and B. MacWhinney (Oxford: Blackwell).
Bello, A., Capirci, O., and Volterra, V. (2004). Lexical production in children with Williams syndrome: spontaneous use of gesture in a naming task. Neuropsychologia 42, 201–213. doi: 10.1016/s0028-3932(03)00172-6
Blake, J., Myszczyszyn, D., Jokel, A., and Bebiroglu, N. (2008). Gestures accompanying speech in specifically language-impaired children and their timing with speech. First Lang. 28, 237–253. doi: 10.1177/0142723707087583
Bohn, M., Kordt, C., Braun, M., Call, J., and Tomasello, M. (2020). Learning novel skills from iconic gestures: a developmental and evolutionary perspective. Psychol. Sci. 31, 873–880. doi: 10.1177/0956797620921519
Brayne, C., Gao, L., Dewey, M., and Matthews, F. E. Ageing Study Investigators (2006). Dementia before death in ageing societies—the promise of prevention and the reality. PLoS Med. 3:e397. doi: 10.1371/journal.pmed.0030397
Calero, C. I., Shalom, D. E., Spelke, E. S., and Sigman, M. (2019). Language, gesture, and judgment: children’s paths to abstract geometry. J. Exp. Child Psychol. 177, 70–85. doi: 10.1016/j.jecp.2018.07.015
Chu, M., Meyer, A., Foulkes, L., and Kita, S. (2014). Individual differences in frequency and saliency of speech-accompanying gestures: the role of cognitive abilities and empathy. J. Exp. Psychol. Gen. 143, 694–709. doi: 10.1037/a0033861
Church, R. B., Ayman-Nolley, S., and Mahootian, S. (2004). The role of gesture in bilingual education: does gesture enhance learning? Int. J. Biling. Educ. Biling. 7, 303–319. doi: 10.1080/13670050408667815
Church, R. B., Kelly, S. D., and Lynch, K. (2000). Immediate memory for mismatched speech and representational gesture across development. J. Nonverbal Behav. 24, 151–174. doi: 10.1023/A:1006610013873
Cleary, R. A., Poliakoff, E., Galpin, A., Dick, J. P., and Holler, J. (2011). An investigation of co-speech gesture production during action description in Parkinson’s disease. Parkinsonism Relat. Disord. 17, 753–756. doi: 10.1016/j.parkreldis.2011.08.001
Clough, S., and Duff, M. C. (2020). The role of gesture in communication and cognition: implications for understanding and treating neurogenic communication disorders. Front. Hum. Neurosci. 14:323. doi: 10.3389/fnhum.2020.00323
Colgan, S. E., Lanter, E., McComish, C., Watson, L. R., Crais, E. R., and Baranek, G. T. (2006). Analysis of social interaction gestures in infants with autism. Child Neuropsychol. 12, 307–319. doi: 10.1080/09297040600701360
Colletta, J. M., Pellenq, C., and Guidetti, M. (2010). Age-related changes in co-speech gesture and narrative: evidence from French children and adults. Speech Comm. 52, 565–576. doi: 10.1016/j.specom.2010.02.009
Cook, S. W., and Fenn, K. M. (2017). “The function of gesture in learning and memory” in Why gesture? How the hands function in speaking, thinking and communicating. eds. R. B. Church, M. W. Alibali, and S. D. Kelly (Amsterdam: John Benjamins Publishing Company), 129–153.
Cook, S. W., Yip, T. K., and Goldin-Meadow, S. (2012). Gestures, but not meaningless movements, lighten working memory load when explaining math. Lang. Cogn. Process. 27, 594–610. doi: 10.1080/01690965.2011.567074
Cooper, J. L., Sidney, P. G., and Alibali, M. W. (2017). Who benefits from diagrams and illustrations in math problems? Ability and attitudes matter. Appl. Cogn. Psychol. 32, 24–38. doi: 10.1002/acp.3371
de Nooijer, J. A., van Gog, T., Paas, F., and Zwaan, R. A. (2013). Effects of imitating gestures during encoding or during retrieval of novel verbs on children’s test performance. Acta Psychol. 144, 173–179. doi: 10.1016/j.actpsy.2013.05.013
de Ruiter, J. P., Bangerter, A., and Dings, P. (2012). The interplay between gesture and speech in the production of referring expressions: investigating the tradeoff hypothesis. Top. Cogn. Sci. 4, 232–248. doi: 10.1111/j.1756-8765.2012.01183.x
Demir, Ö. E., Fisher, J. A., Goldin-Meadow, S., and Levine, S. C. (2014). Narrative processing in typically developing children and children with early unilateral brain injury: seeing gesture matters. Dev. Psychol. 50, 815–828. doi: 10.1037/a0034322
Demir, Ö. E., Levine, S. C., and Goldin-Meadow, S. (2015). A tale of two hands: children’s early gesture use in narrative production predicts later narrative structure in speech. J. Child Lang. 42, 662–681. doi: 10.1017/S0305000914000415
Demir-Lira, Ö. E., Asaridou, S. S., Raja Beharelle, A., Holt, A. E., Goldin-Meadow, S., and Small, S. L. (2018). Functional neuroanatomy of gesture-speech integration in children varies with individual differences in gesture processing. Dev. Sci. 21:e12648. doi: 10.1111/desc.12648
Dimitrova, N., Özçalışkan, Ş., and Adamson, L. B. (2016). Parents’ translations of child gesture facilitate word learning in children with autism, Down syndrome and typical development. J. Autism Dev. Disord. 46, 221–231. doi: 10.1007/s10803-015-2566-7
Drijvers, L., Vaitonytė, J., and Özyürek, A. (2019). Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension. Cogn. Sci. 43:e12789. doi: 10.1111/cogs.12789
Eielts, C., Pouw, W., Ouwehand, K., van Gog, T., Zwaan, R. A., and Paas, F. (2018). Co-thought gesturing supports more complex problem solving in subjects with lower visual working-memory capacity. Psychol. Res. 84, 502–513. doi: 10.1007/s00426-018-1065-9
Evans, J. L., Alibali, M. W., and McNeil, N. M. (2001). Divergence of verbal expression and embodied knowledge: evidence from speech and gesture in children with specific language impairment. Lang. Cogn. Process. 16, 309–331. doi: 10.1080/01690960042000049
Galati, A., Weisberg, S. M., Newcombe, N. S., and Avraamides, M. N. (2018). When gestures show us the way: co-thought gestures selectively facilitate navigation and spatial memory. Spat. Cogn. Comput. 18, 1–30. doi: 10.1080/13875868.2017.1332064
Gillespie, M., James, A. N., Federmeier, K. D., and Watson, D. G. (2014). Verbal working memory predicts co-speech gesture: evidence from individual differences. Cognition 132, 174–180. doi: 10.1016/j.cognition.2014.03.012
Glasser, M. L., Williamson, R. A., and Özçalışkan, Ş. (2018). Do children understand iconic gestures about events as early as iconic gestures about entities? J. Psycholinguist. Res. 47, 741–754. doi: 10.1007/s10936-017-9550-7
Göksun, T., Lehet, M., Malykhina, K., and Chatterjee, A. (2013b). Naming and gesturing spatial relations: evidence from focal brain-injured individuals. Neuropsychologia 51, 1518–1527. doi: 10.1016/j.neuropsychologia.2013.05.006
Goldin-Meadow, S., and Saltzman, J. (2000). The cultural bounds of maternal accommodation: how Chinese and American mothers communicate with deaf and hearing children. Psychol. Sci. 11, 307–314. doi: 10.1111/1467-9280.00261
Gullberg, M., and Holmqvist, K. (2006). What speakers do and what addressees look at: visual attention to gestures in human interaction live and on video. Pragmat. Cogn. 14, 53–82. doi: 10.1075/pc.14.1.05gul
Holler, J., Kendrick, K. H., and Levinson, S. C. (2018). Processing language in face-to-face conversation: questions with gestures get faster responses. Psychon. Bull. Rev. 25, 1900–1908. doi: 10.3758/s13423-017-1363-z
Holler, J., Shovelton, H., and Beattie, G. (2009). Do iconic hand gestures really contribute to the communication of semantic information in a face-to-face context? J. Nonverbal Behav. 33, 73–88. doi: 10.1007/s10919-008-0063-9
Hostetter, A. B., Murch, S. H., Rothschild, L., and Gillard, C. S. (2018). Does seeing gesture lighten or increase the load? Effects of processing gesture on verbal and visuospatial cognitive load. Gesture 17, 268–290. doi: 10.1075/gest.17017.hos
Huettig, F., and Janse, E. (2016). Individual differences in working memory and processing speed predict anticipatory spoken language processing in the visual world. Lang. Cogn. Neurosci. 31, 80–93. doi: 10.1080/23273798.2015.1047459
Iverson, J. M., and Braddock, B. A. (2011). Gesture and motor skill in relation to language in children with language impairment. J. Speech Lang. Hear. Res. 54, 72–86. doi: 10.1044/1092-4388(2010/08-0197)
Iverson, J. M., Capirci, O., Volterra, V., and Goldin-Meadow, S. (2008). Learning to talk in a gesture-rich world: early communication in Italian vs. American children. First Lang. 28, 164–181. doi: 10.1177/0142723707087736
Iverson, J. M., Longobardi, E., Spampinato, K., and Caselli, M. C. (2006). Gesture and speech in maternal input to children with Down’s syndrome. Int. J. Lang. Commun. Disord. 41, 235–251. doi: 10.1080/13682820500312151
Jorm, A. F., Korten, A. E., and Henderson, A. S. (1987). The prevalence of dementia: a quantitative integration of the literature. Acta Psychiatr. Scand. 76, 465–479. doi: 10.1111/j.1600-0447.1987.tb02906.x
Kane, M. J., and Engle, R. W. (2002). The role of prefrontal cortex in working-memory capacity, executive attention, and general fluid intelligence: an individual-differences perspective. Psychon. Bull. Rev. 9, 637–671. doi: 10.3758/bf03196323
Karadöller, D. Z., Ünal, E., Sumer, B., Göksun, T., Özer, D., and Özyürek, A. (2019). “Children but not adults use both speech and gesture to produce informative expressions of left-right relations” in the 44th Annual Boston University Conference on Language Development (BUCLD 44); October 7-10, 2019.
Kartalkanat, H., and Göksun, T. (2020). The effects of observing different gestures during storytelling on the recall of path and event information in 5-year-olds and adults. J. Exp. Child Psychol. 189:104725. doi: 10.1016/j.jecp.2019.104725
Kelly, S. D., Hirata, Y., Manansala, M., and Huang, J. (2014). Exploring the role of hand gestures in learning novel phoneme contrasts and vocabulary in a second language. Front. Psychol. 5:673. doi: 10.3389/fpsyg.2014.00673
Kelly, S. D., Manning, S. M., and Rodak, S. (2008). Gesture gives a hand to language and learning: perspectives from cognitive neuroscience, developmental psychology, and education. Lang Ling Compass 2, 569–588. doi: 10.1111/j.1749-818X.2008.00067.x
Kiat, J. E., and Belli, R. F. (2018). The role of individual differences in visual/verbal information processing preferences in visual/verbal source monitoring. J. Cogn. Psychol. 30, 701–709. doi: 10.1080/20445911.2018.1509865
Kim, Z. H., and Lausberg, H. (2018). Koreans and Germans: cultural differences in hand movement behaviour and gestural repertoire. J. Intercult. Commun. Res. 47, 439–453. doi: 10.1080/17475759.2018.1475296
Kirk, E., Pine, K. J., and Ryder, N. (2011). I hear what you say but I see what you mean: the role of gestures in children’s pragmatic comprehension. Lang. Cogn. Process. 26, 149–170. doi: 10.1080/01690961003752348
Kita, S., and Özyürek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal: evidence for an interface representation of spatial thinking and speaking. J. Mem. Lang. 48, 16–32. doi: 10.1016/S0749-596X(02)00505-3
Klooster, N. B., Cook, S. W., Uc, E. Y., and Duff, M. C. (2015). Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension. Front. Hum. Neurosci. 8:1054. doi: 10.3389/fnhum.2014.01054
Koć-Januchta, M., Höffler, T., Thoma, G. B., Prechtl, H., and Leutner, D. (2017). Visualizers versus verbalizers: effects of cognitive style on learning with texts and pictures—an eye-tracking study. Comput. Hum. Behav. 68, 170–179. doi: 10.1016/j.chb.2016.11.028
Lausberg, H., and Kita, S. (2003). The content of the message influences the hand choice in co-speech gestures and in gesturing without speaking. Brain Lang. 86, 57–69. doi: 10.1016/s0093-934x(02)00534-5
LeBarton, E. S., and Iverson, J. M. (2017). “Gesture’s role in learning interactions” in Why gesture? How the hands function in speaking, thinking and communicating. eds. R. B. Church, M. W. Alibali, and S. D. Kelly (Amsterdam: John Benjamins Publishing), 331–351.
Liszkowski, U., Carpenter, M., and Tomasello, M. (2008). Twelve-month-olds communicate helpfully and appropriately for knowledgeable and ignorant partners. Cognition 108, 732–739. doi: 10.1016/j.cognition.2008.06.013
Mainela-Arnold, E., Alibali, M. W., Hostetter, A. B., and Evans, J. L. (2014). Gesture-speech integration in children with specific language impairment. Int. J. Lang. Commun. Disord. 49, 761–770. doi: 10.1111/1460-6984.12115
Mainela-Arnold, E., Alibali, M. W., Ryan, K., and Evans, J. L. (2011). Knowledge of mathematical equivalence in children with specific language impairment: insights from gesture and speech. Lang. Speech Hear. Serv. Sch. 42, 18–30. doi: 10.1044/0161-1461(2010/09-0070)
Mastrogiuseppe, M., Capirci, O., Cuva, S., and Venuti, P. (2015). Gestural communication in children with autism spectrum disorders during mother-child interaction. Autism 19, 469–481. doi: 10.1177/1362361314528390
Meneghetti, C., Labate, E., Grassano, M., Ronconi, L., and Pazzaglia, F. (2014). The role of visuospatial and verbal abilities, styles and strategies in predicting visuospatial description accuracy. Learn. Individ. Differ. 36, 117–123. doi: 10.1016/j.lindif.2014.10.019
Nagels, A., Kircher, T., Steines, M., Grosvald, M., and Straube, B. (2015). A brief self-rating scale for the assessment of individual differences in gesture perception and production. Learn. Individ. Differ. 39, 73–80. doi: 10.1016/j.lindif.2015.03.008
Newcombe, N. S., Uttal, D. H., and Sauter, M. (2013). “Spatial development” in Oxford library of psychology. The Oxford handbook of developmental psychology (Vol. 1): Body and mind. ed. P. D. Zelazo (New York, NY: Oxford University Press), 564–590.
Obermeier, C., Dolk, T., and Gunter, T. C. (2012). The benefit of gestures during communication: evidence from hearing and hearing-impaired individuals. Cortex 48, 857–870. doi: 10.1016/j.cortex.2011.02.007
Özçalışkan, Ş., Adamson, L. B., and Dimitrova, N. (2016). Early deictic but not other gestures predict later vocabulary in both typical development and autism. Autism 20, 754–763. doi: 10.1177/1362361315605921
Özçalışkan, Ş., Adamson, L. B., Dimitrova, N., and Baumann, S. (2017). Early gesture provides a helping hand to spoken vocabulary development for children with autism, Down syndrome, and typical development. J. Cogn. Dev. 18, 325–337. doi: 10.1080/15248372.2017.1329735
Özçalışkan, Ş., Adamson, L. B., Dimitrova, N., and Baumann, S. (2018). Do parents model gestures differently when children’s gestures differ? J. Autism Dev. Disord. 48, 1492–1507. doi: 10.1007/s10803-017-3411-y
Özçalışkan, Ş., Levine, S. C., and Goldin-Meadow, S. (2013). Gesturing with an injured brain: how gesture helps children with early brain injury learn linguistic constructions. J. Child Lang. 40:69. doi: 10.1017/S0305000912000220
Özer, D., and Göksun, T. (2020). Visual-spatial and verbal abilities differentially affect processing of gestural vs. spoken expressions. Lang. Cogn. Neurosci. 35, 896–914. doi: 10.1080/23273798.2019.1703016
Özer, D., Göksun, T., and Chatterjee, A. (2019). Differential roles of gestures on spatial language in neurotypical elderly adults and individuals with focal brain injury. Cogn. Neuropsychol. 36, 282–299. doi: 10.1080/02643294.2019.1618255
Özer, D., Tansan, M., Özer, E. E., Malykhina, K., Chatterjee, A., and Göksun, T. (2017). “The effects of gesture restriction on spatial language in young and elderly adults” in Proceedings of the 38th Annual Conference of the Cognitive Science Society. eds. G. Gunzelmann, A. Howes, T. Tenbrink and E. Davelaar. July 26-29, 2017; (Austin, TX: Cognitive Science Society), 1471–1476.
Pika, S., Nicoladis, E., and Marentette, P. F. (2006). A cross-cultural study on the use of gestures: evidence for cross-linguistic transfer? Biling.: Lang. Cogn. 9, 319–327. doi: 10.1017/S1366728906002665
Post, L. S., Van Gog, T., Paas, F., and Zwaan, R. A. (2013). Effects of simultaneously observing and making gestures while studying grammar animations on cognitive load and learning. Comput. Hum. Behav. 29, 1450–1455. doi: 10.1016/j.chb.2013.01.005
Pouw, W. T., De Nooijer, J. A., Van Gog, T., Zwaan, R. A., and Paas, F. (2014). Toward a more embedded/extended perspective on the cognitive function of gestures. Front. Psychol. 5:359. doi: 10.3389/fpsyg.2014.00359
Pouw, W. T., Mavilidi, M. F., van Gog, T., and Paas, F. (2016). Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity. Cogn. Process. 17, 269–277. doi: 10.1007/s10339-016-0757-6
Priesters, M. A., and Mittelberg, I. (2013). “Individual differences in speakers’ gesture spaces: multi-angle views from a motion-capture study” in Proceedings of the Tilburg Gesture Research Meeting (TiGeR); June 19-21, 2013; 19–21.
Richmond, V. P., McCroskey, J. C., and Johnson, A. D. (2003). Development of the nonverbal immediacy scale (NIS): measures of self- and other-perceived nonverbal immediacy. Commun. Q. 51, 504–517. doi: 10.1080/01463370309370170
Rousseaux, M., Rénier, J., Anicet, L., Pasquier, F., and Mackowiak-Cordoliani, M. A. (2012). Gesture comprehension, knowledge and production in Alzheimer’s disease. Eur. J. Neurol. 19, 1037–1044. doi: 10.1111/j.1468-1331.2012.03674.x
Rozga, A., Hutman, T., Young, G. S., Rogers, S. J., Ozonoff, S., Dapretto, M., et al. (2011). Behavioral profiles of affected and unaffected siblings of children with autism: contribution of measures of mother-infant interaction and nonverbal communication. J. Autism Dev. Disord. 41, 287–301. doi: 10.1007/s10803-010-1051-6
Sassenberg, U., Foth, M., Wartenburger, I., and van der Meer, E. (2011). Show your hands—are you really clever? Reasoning, gesture production, and intelligence. Linguist. 49, 105–134. doi: 10.1515/ling.2011.003
Sauer, E., Levine, S. C., and Goldin-Meadow, S. (2010). Early gesture predicts language delay in children with pre- or perinatal brain lesions. Child Dev. 81, 528–539. doi: 10.1111/j.1467-8624.2009.01413.x
Schmalenbach, S. B., Billino, J., Kircher, T., van Kemenade, B. M., and Straube, B. (2017). Links between gestures and multisensory processing: individual differences suggest a compensation mechanism. Front. Psychol. 8:1828. doi: 10.3389/fpsyg.2017.01828
Schubotz, L., Özyürek, A., and Holler, J. (2019). Age-related differences in multimodal recipient design: younger, but not older adults, adapt speech and co-speech gestures to common ground. Lang. Cogn. Neurosci. 34, 254–271. doi: 10.1080/23273798.2018.1527377
Sekine, K., Schoechl, C., Mulder, K., Holler, J., Kelly, S., Furman, R., et al. (in press). Evidence for children’s online integration of simultaneous information from speech and iconic gestures: an ERP study. Lang. Cogn. Neurosci. 1–12.
Sekine, K., Sowden, H., and Kita, S. (2015). The development of the ability to semantically integrate information in speech and iconic gesture in comprehension. Cogn. Sci. 39, 1855–1880. doi: 10.1111/cogs.12221
So, W. C., Shum, P. L. C., and Wong, M. K. Y. (2015). Gesture is more effective than spatial language in encoding spatial information. Q. J. Exp. Psychol. 68, 2384–2401. doi: 10.1080/17470218.2015.1015431
Sowden, H., Clegg, J., and Perkins, M. (2013). The development of co-speech gesture in the communication of children with autism spectrum disorders. Clin. Linguist. Phon. 27, 922–939. doi: 10.3109/02699206.2013.818715
Stanfield, C., Williamson, R., and Özçalışkan, Ş. (2014). How early do children understand gesture-speech combinations with iconic gestures? J. Child Lang. 41, 462–471. doi: 10.1017/S0305000913000019
Stefanini, S., Caselli, M. C., and Volterra, V. (2007). Spoken and gestural production in a naming task by young children with Down syndrome. Brain Lang. 101, 208–221. doi: 10.1016/j.bandl.2007.01.005
Stone, W. L., Ousley, O. Y., Yoder, P. J., Hogan, K. L., and Hepburn, S. L. (1997). Nonverbal communication in two- and three-year-old children with autism. J. Autism Dev. Disord. 27, 677–696. doi: 10.1023/a:1025854816091
Talbott, M. R., Nelson, C. A., and Tager-Flusberg, H. (2015). Maternal gesture use and language development in infant siblings of children with autism spectrum disorder. J. Autism Dev. Disord. 45, 4–14. doi: 10.1007/s10803-013-1820-0
Tamis-LeMonda, C. S., Song, L., Leavell, A. S., Kahana-Kalman, R., and Yoshikawa, H. (2012). Ethnic differences in mother-infant language and gestural communications are associated with specific skills in infants. Dev. Sci. 15, 384–397. doi: 10.1111/j.1467-7687.2012.01136.x
Thompson, L. A., and Guzman, F. A. (1999). Some limits on encoding visible speech and gestures using a dichotic shadowing task. J. Gerontol. Ser. B Psychol. Sci. Soc. Sci. 54, P347–P349. doi: 10.1093/geronb/54b.6.p347
Trafton, J. G., Trickett, S. B., Stitzlein, C. A., Saner, L., Schunn, C. D., and Kirschenbaum, S. S. (2006). The relationship between spatial transformations and iconic gestures. Spat. Cogn. Comput. 6, 1–29. doi: 10.1207/s15427633scc0601_1
Trujillo, J. P., Simanova, I., Bekkering, H., and Özyürek, A. (2018). Communicative intent modulates production and comprehension of actions and gestures: a Kinect study. Cognition 180, 38–51. doi: 10.1016/j.cognition.2018.04.003
van Wermeskerken, M., Fijan, N., Eielts, C., and Pouw, W. T. (2016). Observation of depictive versus tracing gestures selectively aids verbal versus visual-spatial learning in primary school children. Appl. Cogn. Psychol. 30, 806–814. doi: 10.1002/acp.3256
Vanetti, E. J., and Allen, G. L. (1988). Communicating environmental knowledge: the impact of verbal and spatial abilities on the production and comprehension of route directions. Environ. Behav. 20, 667–682.
Vogel, E. K., and Awh, E. (2008). How to exploit diversity for scientific gain: using individual differences to constrain cognitive theory. Curr. Dir. Psychol. Sci. 17, 171–176. doi: 10.1111/j.1467-8721.2008.00569.x
Vogt, S., and Kauschke, C. (2017). Observing iconic gestures enhances word learning in typically developing children and children with specific language impairment. J. Child Lang. 44, 1458–1484. doi: 10.1017/S0305000916000647
Wakefield, E., Novack, M. A., Congdon, E. L., Franconeri, S., and Goldin-Meadow, S. (2018). Gesture helps learners learn, but not merely by guiding their visual attention. Dev. Sci. 21:e12664. doi: 10.1111/desc.12664
Wartenburger, I., Kühn, E., Sassenberg, U., Foth, M., Franz, E. A., and van der Meer, E. (2010). On the relationship between fluid intelligence, gesture production, and brain structure. Intelligence 38, 193–201. doi: 10.1016/j.intell.2009.11.001
Watson, L. R., Crais, E. R., Baranek, G. T., Dykstra, J. R., and Wilson, K. P. (2013). Communicative gesture use in infants with and without autism: a retrospective home video study. Am. J. Speech Lang. Pathol. 22, 25–39. doi: 10.1044/1058-0360(2012/11-0145)
Wermelinger, S., Gampe, A., Helbling, N., and Daum, M. M. (2020). Do you understand what I want to tell you? Early sensitivity in bilinguals’ iconic gesture perception and production. Dev. Sci. 23:e12943. doi: 10.1111/desc.12943
Willems, R. M., Özyürek, A., and Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. NeuroImage 47, 1992–2004. doi: 10.1016/j.neuroimage.2009.05.066
Keywords: individual differences, gesture production, gesture processing, cognitive resources, functions of gestures
Citation: Özer D and Göksun T (2020) Gesture Use and Processing: A Review on Individual Differences in Cognitive Resources. Front. Psychol. 11:573555. doi: 10.3389/fpsyg.2020.573555
Edited by: Naomi Sweller, Macquarie University, Australia
Reviewed by: Ferdinand Binkofski, RWTH Aachen University, Germany
Spencer D. Kelly, Colgate University, United States
Copyright © 2020 Özer and Göksun. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Demet Özer, email@example.com