Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
Linguistic manual gestures are the basis of sign languages used by deaf individuals. Working memory and language processing are intimately connected, and thus when language is gesture-based it is important to understand the related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. Above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings relating to cross-modal plasticity in deaf individuals, and it suggests that in linguistic gesture-based working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants as well as other clinical populations.
Working Memory and Language
Working memory is the ability to simultaneously store and process information (Daneman and Carpenter, 1980; Baddeley, 2000; Ma et al., 2014) and as such forms the foundation of higher cognition including thinking and learning. Working memory provides a platform for language processing by keeping information in mind and integrating it with new information during discourse processing as well as a platform for language learning (Baddeley et al., 1998), i.e., the establishment of new linguistic representations. The storage and processing limits of working memory may constrain language processing when it takes place under challenging conditions, i.e., when the incoming language signal is degraded and therefore cannot be readily matched to existing representations (Rönnberg, 2003; Rönnberg et al., 2008, 2013). For most people, language is primarily speech-based. However, for individuals with reduced hearing ability, gesture-based languages, i.e., sign languages, provide an alternative means of communication that bypasses the defective auditory channel. The communicative importance of sign language gives the study of working memory for manual gestures applied significance. However, it is also of theoretical interest, as gestural and vocal communication seem to share common origins (Corballis, 2003; Aboitiz and García, 2009; Aboitiz, 2012) and their comparison can provide insight into the architecture of working memory and its language modality specificity.
Working Memory for Sign and Speech
The comparison of working memory for sign language to working memory for speech has demonstrated similar capacity across language modalities (Boutla et al., 2004; Andin et al., 2013) and similar lifespan trajectories (Rudner et al., 2010). However, when processing demands are low and maintenance demands high, sign capacity is more similar to visuospatial capacity (5 ± 2) than speech-based verbal capacity (7 ± 2, Boutla et al., 2004; Andin et al., 2013). There are also language-modality specific differences in the neural networks supporting working memory for sign and speech. Specifically, working memory for sign generates more activation compared to working memory for speech in superior parietal regions associated with visuospatial processing and temporo-occipital regions associated with object recognition (for a review see Rudner et al., 2009). This net activation for sign language may reflect sign specific sensorimotor mechanisms and modes of representation (Emmorey et al., 2014) or language modality-specific executive strategies employed during working-memory tasks (Bavelier et al., 2008). Specifically, net superior parietal activation may reflect generation and storage of a virtual spatial array (Rönnberg et al., 2004, 2008; Rudner et al., 2009) in, or close to, a neural region identified as a capacity-limited store for representation of the visual scene (Todd and Marois, 2004, see Rudner, 2015 for discussion). Further, a recent animal study has shown that, in deafness, visuospatial working memory is dependent on parietal cortex (Wong et al., 2017). Rather than focusing on the comparison of working memory for sign to working memory for speech, the current review targets work investigating working memory for manual gestures that may or may not be familiar and/or lexicalized in a sign language.
Linguistic and Non-linguistic Manual Gestures
In sign languages, use of gestures is formalized in lexicon and grammar but also in manual alphabets. The latter represent the orthography of written languages and are used in the context of sign languages for finger-spelling names or concepts that have not acquired their own sign. Deaf individuals who lack formal language input often use conventionalized gestures to communicate. These are known as homesigns (Spaepen et al., 2013). Manual gestures can function as emblems (Ekman and Friesen, 1969), e.g., “thumbs up,” or provide emphasis in the context of spoken language (Chieffi and Ricci, 2005). They facilitate discourse comprehension (Wu and Coulson, 2014) and production (Morsella and Krauss, 2004; Gillespie et al., 2014) by relieving pressure on working memory (Wesp et al., 2001; Morsella and Krauss, 2004; Ping and Goldin-Meadow, 2010; Chu et al., 2014), especially for individuals with low working memory capacity (Marstaller and Burianová, 2013). Simple manual actions, however, may be used without semantic intent. Importantly, manual actions can be encoded, stored and rehearsed in working memory, irrespective of whether they have the status of manual gestures or signs (Wilson and Fox, 2007; Rudner, 2015), although they do not seem to have the same role as gestures in supporting speech processing (Cook et al., 2012). This means that, by carefully selecting manual actions, gestures and signs, linguistic working memory in the visual domain can be systematically investigated.
Semantics and Phonology
All languages have a lexicon; the lexicons of sign languages consist of signs, manual actions that by definition are associated with meaning. Furthermore, within any given language-specific lexicon, individual items are consistent with the phonology of that language: this also applies to sign languages. Whereas in speech-based languages phonology can be defined either in terms of the patterning of sounds adopted within that language, or the articulatory gestures involved in producing those sounds, in signed languages an articulatory definition is adopted. Sign language phonology is defined as the patterning of formational aspects of individual signs including shape, movement and location in relation to the body (Stokoe, 1960; Sutton-Spence and Woll, 1999). In other words, handshape, movement and location are all contrastive elements that constitute the sublexical components of lexical signs. This means that any lexical sign not only has a specific meaning (or set of meanings) but also a specific phonological composition that distinguishes it from all other signs in the lexicon. However, for a person who is not familiar with any sign language, a manual gesture may not signify anything at all, and further, its sublexical composition may not comply with any known set of rules. Thus, for the non-signer, any given sign may lack both semantics and phonology. This state of affairs makes manual gestures an excellent tool for investigating the effects of semantics and phonology on working memory.
Working Memory for Manual Gestures
Measuring Working Memory for Gestures
Although non-signers can encode manual gestures in memory (Wilson and Fox, 2007; Rudner, 2015), they are less successful than signers at imitating them (Holmer et al., 2016b) and thus at a disadvantage when it comes to recall of encoded signs (Rudner, 2015; Rudner et al., 2016b). The n-back paradigm (Cohen et al., 1994; Owen et al., 2005) avoids the confounding effects of suboptimal imitation ability on the production demands of traditional serial recall paradigms, and can be used to investigate working memory for both linguistic and non-linguistic manual gestures (for examples see Table 1). In the n-back paradigm, memoranda are presented serially and the participant is asked to match each item as it occurs to the stimulus that was presented n steps back in the series, see Figure 1. Typically, n = 2 is considered to generate moderate working memory load (Smith and Jonides, 1997), requiring the maintenance of two items and their order along with the simultaneous processing demands of matching each new stimulus to the first item maintained in the storage buffer, and then updating the buffer. Updating the buffer involves suppressing or deleting the now obsolete first item, moving the original second item up to first place, and adding the new item in second place. Working memory load can be manipulated in the n-back paradigm by adjusting n: n = 1 is considered a low memory load and n = 3 a high memory load (Smith and Jonides, 1997).
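The matching-and-updating logic described above can be sketched in code. The snippet below is a minimal illustration only, not the procedure of any cited study (which used video stimuli and human responses); the function name and the use of simple letter labels as stimuli are hypothetical.

```python
from collections import deque

def nback_targets(stimuli, n):
    """For each serially presented item, report whether it matches
    the item presented n steps back in the series."""
    buffer = deque(maxlen=n)  # holds the last n items, oldest first
    targets = []
    for item in stimuli:
        if len(buffer) == n:
            # match the new stimulus against the item n steps back
            targets.append(item == buffer[0])
        else:
            targets.append(False)  # too early in the series to match
        # update: the oldest item drops out, the new item enters
        buffer.append(item)
    return targets

# Example 2-back series: the third and fourth items are targets.
print(nback_targets(list("ABAB"), 2))  # -> [False, False, True, True]
```

Raising n lengthens the buffer, which is how load is manipulated: at n = 3 three items and their order must be maintained while each new stimulus is matched and the buffer updated.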
TABLE 1. Overview of studies using the n-back paradigm to investigate working memory for linguistic and non-linguistic manual gestures.
FIGURE 1. Schematic representation of the n-back working memory task with examples of 1-, 2- and 3-back matches. Each square represents a visual stimulus that may be for example a video-recorded sign or a picture of an object. The pattern in each square represents a given characteristic of the stimulus which may be the stimulus as a whole, a surface feature of the stimulus such as sign handshape (if the stimulus is a video-recorded sign) or an inferred feature of the stimulus such as the handshape of the sign gloss of a depicted object.
Pre-existing Semantic Representation Increases Working Memory Capacity Across the Language Modalities of Sign and Speech
Working memory capacity for words is greater than capacity for pseudowords (Hulme et al., 1991). Recently, it has been shown that for British Sign Language (BSL) users this effect generalizes to lexical signs, demonstrating that the positive effect of pre-existing semantic representation on working memory capacity generalizes to sign language (Rudner et al., 2016b, see Table 1). In particular, BSL users, both deaf and hearing, scored higher than British non-signers on an n-back task based on video-recorded manual gestures (see Figure 2). More critically, hearing BSL signers scored better when items were lexicalized in BSL than when they were not. This applied across memory loads (n = 1–3) and irrespective of whether the non-British signs were lexicalized in another mutually unintelligible sign language, Swedish Sign Language (SSL), made-up non-signs or non-linguistic manual actions. A similar pattern was found for British deaf signers, although here the difference in performance on the n-back task between BSL and SSL was only significant when working memory load was high (n = 3).
FIGURE 2. Working memory performance with British Sign Language (BSL) and Swedish Sign Language (SSL) stimuli by British Deaf Signers (DS), Hearing British Signers (HS) and Hearing British non-signers. The y-axis shows arcsin transformed d′ collapsed across load. D′ is based on hits adjusted for false alarms in accordance with signal detection theory, and the arcsin transformation was used because of near ceiling performance at n = 1. Reprinted with the publisher’s permission from Rudner et al. (2016b).
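The scoring described in the caption can be illustrated as follows. This is a generic signal-detection sketch under standard assumptions (a log-linear correction for extreme rates, and the arcsine transform of a proportion as the usual variance-stabilizing device near ceiling); it is not the exact analysis pipeline of Rudner et al. (2016b), and the function names are hypothetical.

```python
import math
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate); the log-linear (+0.5)
    correction keeps rates of 0 or 1 finite near ceiling."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

def arcsine_transform(p):
    """Variance-stabilizing transform of a proportion p in [0, 1],
    commonly applied when scores approach ceiling."""
    return 2 * math.asin(math.sqrt(p))
```

For example, a participant with 9 hits, 1 miss, 1 false alarm and 9 correct rejections obtains a d′ of about 2.2, while chance-level responding yields a d′ of 0.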
No Evidence That Pre-existing Phonological Representation Supports Working Memory for Manual Gestures
Not only pre-existing semantic representation but also pre-existing phonological representation enhances working memory for words (Gathercole et al., 1999). However, review of the literature reveals no evidence that this effect generalizes to sign language. Thus, the effect of pre-existing phonological representation on working memory for words appears to be language-modality specific.
The potential effect of pre-existing phonological representation on working memory for signs was investigated by Rudner et al. (2016b, see Table 1) in two different ways. In the first place, it was tested whether BSL signers had higher scores than non-signers on the n-back task when the stimuli had accessible phonology, and in the second place, whether the signers performed better when stimuli had accessible phonology than when they did not. Items with accessible phonology were SSL signs: real, natural signs lexicalized in SSL but not in BSL, and thus lacking meaning for the British signing participants.
The phonological repertoires of BSL and SSL are highly similar (Rudner et al., 2016b) and thus even though the semantics of the SSL items was not available to the signing participants, their phonological structure was. SSL was contrasted with artificially constructed phonologically illegal non-signs that were eligible for lexicalization in neither BSL nor SSL. The deaf signers who participated in the study did indeed score significantly better than the hearing non-signers on the n-back task with SSL, in line with our prediction of modality generality, but a similar (although non-predicted) performance discrepancy was also found with non-signs. This suggested that pre-existing phonological representation was not the true cause of the effect. Further, there was no difference in performance with either SSL or non-signs between the two hearing groups. Because hearing signers are just as likely to benefit from access to sign phonology as deaf signers, it seems unlikely that the working memory advantage of the deaf signers over hearing non-signers with SSL was caused by pre-existing phonological representation.
A sharper test of the potential effect of pre-existing phonological representation on working memory for manual actions, however, was the within-group comparison of n-back scores for SSL and non-signs for the two signing groups. We aimed to isolate the effect of pre-existing phonological representation by comparing n-back scores with SSL, an unfamiliar but phonologically accessible sign language, with n-back scores for non-signs that were deliberately created to contravene the phonological constraints of BSL. However, the phonology of sign language carries semantic information in a way that the phonology of spoken language does not. For example, signs may be iconic, i.e., have visual similarity to the objects they represent, e.g., the sign for aeroplane depicts wings and upward movement (BSL example from Thompson et al., 2012). This means that the signs of even an unfamiliar sign language may carry semantic information. Although the SSL signs were selected to be semantically opaque to the British participants, it is possible that the phonological features of the SSL signs did provide some semantic cues that could be deployed mnemonically during the n-back task. This made for a conservative comparison between n-back scores with BSL and SSL, but at the same time, it made the comparison of SSL to non-signs rather liberal. Even so, there was no difference in performance between these two stimulus types for the signing groups and for the non-signing group, to whom iconic features would also be available, there was even a tendency toward an advantage of non-signs over SSL. Thus, Rudner et al. (2016b) found little support for the notion that pre-existing phonological representation supports working memory processing.
Deaf Signers Have Greater Working Memory Capacity for Sign-Based Gestures Than Hearing Non-signers
Signers are experts in using their own language and thus it is hardly surprising that their expert knowledge gives them better performance than non-signers on a sign-based working memory task (Ericsson and Kintsch, 1995). As I have argued, part of that benefit derives from the pre-existence of semantic representations (Rudner et al., 2016b). However, above and beyond that benefit there seems to be an additional advantage for deaf signers that does not pertain specifically to pre-existing phonological representation and is not apparent for hearing signers. Deaf individuals are highly reliant on visual information for perception and communication. Therefore, it is not surprising that they develop special skills in the visual domain. Low level visual processing does not seem to be enhanced in deaf individuals, but for visual skills with a greater cognitive component, such as visual attention, congenitally deaf individuals do show some advantage that is associated with neural plasticity (for a review see Bavelier et al., 2006 and discussion Rudner et al., 2009).
When sensory cortex is not recruited in its typical mode during development, cross-modal plasticity takes place (Merabet and Pascual-Leone, 2010). This applies to both visual cortex in the occipital lobe (Kupers and Ptito, 2014) and auditory cortex in the temporal lobe (Nishimura et al., 1999; Finney et al., 2001; Cardin et al., 2013) in humans. Deaf humans recruit right auditory cortex more than hearing individuals during observation of dynamic visual but not linguistic stimuli (Finney et al., 2001) and the superior temporal cortex bilaterally during observation of signs (Nishimura et al., 1999). Cardin et al. (2013) dissociated perceptual and cognitive effects, showing that while right superior temporal cortex reorganizes to process non-linguistic dynamic visual stimuli irrespective of linguistic content, the left superior temporal cortex is only sensitive to dynamic visual stimuli with linguistic content (Cardin et al., 2013). Animal studies have shown that the regional localization of cross-modally reorganized functions can be very specific: congenitally deaf cats have better orientation abilities in the visual periphery than hearing cats, but this benefit is abolished when regions of the temporal lobe are deactivated by localized cooling. In particular, deactivation of posterior auditory cortex selectively eliminated their superior visual localization abilities, whereas deactivation of the dorsal auditory cortex eliminated their superior visual motion detection (Lomber et al., 2010). It is likely that the localization of visually based linguistic and cognitive functions reorganized in the auditory cortex of congenitally deaf humans is just as specific, and that with the right techniques it will be possible to localize these functions.
Cross-Modal Plasticity in Temporal Cortex Supports Working Memory
There is accumulating evidence that the superior temporal cortex is engaged in working memory processing in deaf signers in a manner that is not observed in hearing individuals (Cardin et al., 2017, see Table 1; Ding et al., 2015). In particular, British deaf signers showed activation of the bilateral posterior superior temporal cortex during a 2-back working memory task, irrespective of whether it was based on BSL signs or moving nonsense objects (Cardin et al., 2017). This extended the work of Ding et al. (2015), whose participants had early deafness but diverse language experience, by demonstrating that recruitment of superior temporal cortex in congenitally deaf individuals still takes place when language skills are well established and is thus not simply caused by poorly established language skills (MacSweeney and Cardin, 2015). Further, in the study by Cardin et al. (2017), the deaf compared to hearing participants showed increased resting state connectivity between frontal regions and the superior temporal cortex, and this finding was replicated by Ding et al. (2016) with deaf Chinese participants. These findings show that congenital deafness leads to reorganization of working memory networks. This extends previous findings showing that differences in working memory networks for sign and speech are influenced not only by language modality but also auditory deprivation.
The absence of activation differences between linguistic and non-linguistic working memory in the study by Cardin et al. (2017) confirms the suggestion of Ding et al. (2015) that the functional significance of the reorganized networks is related to visuospatial working memory rather than working memory for sign language as such. Nonetheless, these findings mean that the superior temporal cortex of congenitally deaf individuals reorganizes not only for perceptual processing of visual stimuli but also for their cognitive processing, such as working memory. This perceptual and cognitive reorganization may be related to the performance advantage of deaf signers over hearing non-signers on visual working memory tasks (Wilson et al., 1997; Geraci et al., 2008; MacSweeney and Cardin, 2015; Rudner et al., 2016b; Cardin et al., 2017).
The Role of Phonology in Working Memory for Manual Gestures
It also needs to be considered why pre-existing phonological representation does not give signers a working memory advantage, at least not in an n-back task (Rudner et al., 2016b). The phonological composition of to-be-remembered speech-based items influences processing (Baddeley, 2000). In particular, phonological similarity between items decreases working memory performance. This effect is well-attested for speech-based items and there is also evidence that phonological similarity among American Sign Language (ASL) signs decreases short-term memory performance (Wilson and Emmorey, 1997). However, there is to my knowledge no evidence of such an effect for BSL (for discussion see Andin et al., 2013). Thus, although there is evidence that sign-based phonological similarity influences working memory processing, this may not generalize across all sign languages, including BSL. Further, there is evidence that phonological information may be suppressed during the n-back task when it is not explicitly required for task solution (Sweet et al., 2008; Rudner et al., 2016b). Such an effect may be enhanced in sign language as it has been pointed out that phonological information may be heavier in signed than spoken language (Geraci et al., 2008; Gozzi et al., 2010; Marshall et al., 2011) and thus there may be more incentive to ignore it if it is task irrelevant, particularly if a semantic route to task solution is effective.
Although there is apparently no evidence of an effect of pre-existing phonological representation on working memory for unfamiliar signs in deaf British signers, there is evidence of an effect of phonological representation on phoneme monitoring in this population (Cardin et al., 2016). Using video-recorded BSL, SSL and non-sign stimuli similar to those used in the working memory study by Rudner et al. (2016b), Cardin et al. (2016) showed greater bilateral activation of an acknowledged phonological processing region, namely the supramarginal gyrus, for lexical signs compared to non-signs in deaf signers, i.e., in participants with pre-existing phonological representations. Supramarginal gyrus activation for the signers did not differ with the phonological parameters that were targeted in the task (handshape and location) and was thus phonology specific rather than task specific. This means that it is unlikely that the absence of an effect of pre-existing phonological representation on n-back score was due to an inability to access the phonological information contained in the stimuli. Indeed, the ability to explicitly access phonological representations of sign language has been demonstrated across sign languages not only in phoneme monitoring tasks (Gutierrez et al., 2012; Grosvald et al., 2012), but also in phonological similarity judgment tasks (MacSweeney et al., 2008b; Andin et al., 2014; Holmer et al., 2016b) and for SSL in a working memory context (Rudner et al., 2013, see Table 1). Instead, the likely explanation is that when the lexical signs were maintained in working memory, the phonological information associated with them was suppressed because it was irrelevant to task solution (Sweet et al., 2008) and may have increased working memory load (Marshall et al., 2011).
Intriguingly, the study by Cardin et al. (2016) showed no difference in neural activation between BSL and SSL for any of the groups. In other words, there was no significant effect of pre-existing semantic representation on phoneme monitoring (cf. Petitto et al., 2000; Grosvald et al., 2012). This indicates that the significant effect of pre-existing semantic representation on n-back performance (Rudner et al., 2016b) is likely reserved for the context of the working memory task in which semantic encoding, when possible, reduced task demands.
Working Memory for Non-linguistic Manual Actions
Although the combination of deafness and sign language experience conferred a working memory advantage during processing of familiar and unfamiliar signs as well as non-signs, it did not generalize to an advantage in working memory processing of non-linguistic manual actions consisting of ball-catching events (Rudner et al., 2016b). These ball-catching events were generated by asking the model who recorded the signs and non-signs to catch a small ball that was thrown toward him. Critically, the manual actions that were generated in this manner were elicited in a bottom–up rather than top–down fashion. The purpose of this was to eliminate intentionality from the actions. Performance on the n-back task was poorer with non-linguistic manual actions than with any of the other stimulus types. However, there was a significant effect of working memory load between each level of n (Rudner et al., 2016b) and a separate study showed an effect of formational similarity on n-back performance (Rudner, 2015, see Table 1). One perceptual difference between the non-linguistic manual actions and the signs and non-signs was the reduced motoric diversity displayed by the model. In particular, the handshape used to catch the ball was similar in all instances and although the ball was thrown to different segments of the space around the model, the movements he made to catch the ball were stereotypical even if they differed in trajectory. Thus, the poorer performance by all groups with non-linguistic manual actions compared to non-signs and signs may be due to too little motoric diversity to distinguish separate items (cf. Sehyr et al., 2017). This notion is supported by the effect of formational similarity on n-back performance with non-linguistic manual actions, in which the degree of motoric diversity significantly influenced performance (Rudner, 2015).
Working Memory Load – Effect Across Materials and Groups
Working memory load is increased when more items are maintained for the same amount of processing. This is achieved using the n-back paradigm by increasing the magnitude of n. An effect of working memory load has been observed for all types of manual gestures under consideration here: familiar signs, unfamiliar signs, non-signs and non-linguistic manual actions (Rudner, 2015; Rudner et al., 2015, 2016b). Interactions with load can be informative of the way in which different types of information are stored in working memory. In particular, a significant interaction between load and gesture type for deaf signers showed that for this group, the effect of pre-existing semantic representation was only apparent when working memory load was high (Rudner et al., 2016b). This was in contrast to hearing signers, who showed an effect of pre-existing semantic representation across memory loads (Rudner et al., 2016b). Further, in the same study, another significant interaction between load and gesture type showed that although there was an effect of load for non-linguistic manual actions, in line with previous work (Rudner, 2015), this effect was smaller than that for non-signs (Rudner et al., 2016b).
Speech-Based Recoding of Familiar Signs by Bimodal Bilinguals
The difference in the effect of semantic representation across memory loads between deaf and hearing signers (Rudner et al., 2016b) suggested that there were differences in working memory processing across the two signing groups. There is evidence that working memory encoding and maintenance are more efficient for words than signs for bimodal bilinguals (Hall and Bavelier, 2011). Thus, it is likely that to maximize task performance the hearing signers in the study by Rudner et al. (2016b) encoded and maintained the familiar signs as words. On the other hand, the deaf signers, who did not have such ready access to the speech modality, most likely encoded and maintained the lexical signs in the visual language modality in which they were presented. Recently, it has been shown that deaf signers with good reading skills recode fingerspelled words as speech-based phonology during a working memory task (Sehyr et al., 2017). It is likely that the representations resulting from recoding by deaf signers are more fragile and susceptible to working memory load than those of the bimodal bilinguals, although this remains to be tested. Further, homesigns (gestures used by deaf individuals who lack conventional linguistic input) seem to be processed in the working memory of the homesigners who use them in much the same way, generally speaking, as words or lexicalized signs (Spaepen et al., 2013). Further investigation of working memory for homesigns could increase our understanding of the relation between working memory and language learning in the absence of a formal language system.
A common challenge to language understanding is the degradation of the incoming language signal that takes place in noisy conditions. This phenomenon has been widely researched in the speech modality (Rönnberg et al., 2013). However, it is not only acoustic noise that interferes with the speech signal; visual noise also interferes with speech perception (Cohen and Gordon-Salant, 2017), and the same applies to sign language perception (Pavel et al., 1987). In particular, the reduced resolution introduced by signal compression in the digital communication regularly used by sign language users may have a negative effect on communication quality (Agrafiotis et al., 2003). Indeed, visual noise in the form of reduced resolution negatively affects working memory for manual gestures, and this effect interacts with working memory load such that poor signal quality has a greater effect on n-back scores when load is higher (see Table 1 and Figure 3, Rudner et al., 2015). A similar effect has been found for working memory for spoken words using alpha power as an index of working memory load (Obleser et al., 2012). This supports the notion that the effect of signal degradation on working memory is language modality general. However, the effect of signal degradation on working memory for gestures has only just started to be investigated, and more work is needed in this area.
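The kind of resolution reduction used as a degradation manipulation can be illustrated by block-averaging, a crude stand-in for the compression applied to video stimuli; the function below is purely illustrative (not the procedure of Rudner et al., 2015) and assumes a grayscale frame stored as a list of rows of pixel intensities.

```python
def downsample(frame, factor):
    """Reduce resolution by averaging factor x factor blocks of pixels.
    Larger factors discard more spatial detail from the frame."""
    h, w = len(frame), len(frame[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            # collect the pixels of one block (clipped at the frame edge)
            block = [frame[y][x]
                     for y in range(i, min(i + factor, h))
                     for x in range(j, min(j + factor, w))]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 2x2 frame collapsed to a single averaged pixel.
print(downsample([[0, 0], [4, 4]], 2))  # -> [[2.0]]
```

Increasing the averaging factor mimics progressively coarser signal quality, the dimension along which the load-by-degradation interaction was observed.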
FIGURE 3. Mean d′ in each of the conditions of the n-back experiment. D′ is based on hits adjusted for false alarms in accordance with signal detection theory. Resolution decreases with increasing R. Reprinted from Rudner et al. (2015).
Implications for Working Memory Models
So far, this review has shown a range of working memory effects, some of which are specific to working memory for manual gestures and some of which are shared with working memory for spoken words. Most saliently, there is an effect of load on working memory for all types of manual gestures studied in this way (Spaepen et al., 2013; Rudner, 2015; Rudner et al., 2016b) and some evidence of an effect of signal degradation as well as an interaction between load and signal degradation (Rudner et al., 2015).
To this extent, effects on working memory are modality general. Although there is an effect of pre-existing semantic representation on working memory for manual gestures, the precise character of this effect is language modality specific and related to working memory load (see Figure 4). In particular, the effect of pre-existing semantic representation on working memory for manual gestures may only be apparent when load is high and the quality of representations maintained in working memory becomes particularly important (Rudner et al., 2016b). It is true that an effect of pre-existing semantic representation was shown for hearing signers with sign-based stimuli, but it is likely that familiar signs were recoded as words by these bimodal bilinguals, for whom encoding and maintenance are likely to be more efficient in the oral rather than the gestural modality (Hall and Bavelier, 2011; Rudner et al., 2016b). Further, there is a lack of strong evidence that pre-existing phonological representations are co-opted during working memory for manual gestures (Rudner et al., 2016b).
FIGURE 4. Working memory for linguistic and non-linguistic manual gestures. Manual gestures enter the corresponding loop either directly, if they are non-linguistic, or via the sign lexicon for individuals with pre-existing semantic representation. Bimodal bilinguals have the option of recoding signs as their word glosses and processing them via the word loop. Both loops are subject to negative effects of phonological/formational similarity, but while phonological familiarity aids processing in the word loop, this does not seem to be the case in the sign loop. Deafness leads to greater capacity of the sign loop, probably due to reliance on visual information and expert knowledge of sign language.
This pattern of findings is in line with flexible resource models of working memory (Ma et al., 2014). One such model is the Ease of Language Understanding model (ELU, Rönnberg, 2003; Rönnberg et al., 2008, 2013). This model explains the relationship between complex working memory and language understanding under challenging conditions. Originally (Rönnberg, 2003), ELU assumed a similar mechanism across the language modalities of sign and speech, but language modality specific aspects soon emerged (Rönnberg et al., 2008). The explicit cognitive processing that takes place when language understanding is challenged in various ways functions differently for sign and speech (Rudner and Rönnberg, 2008a,b; Rönnberg et al., 2008). On the other hand, the mechanisms underlying the implicit language understanding that takes place under optimal conditions seem to be similar across the language modalities of sign and speech (Rönnberg et al., 2000; MacSweeney et al., 2008a).
Practical Implications – Cochlear Implantation
The differences between working memory for sign and speech that become apparent when language understanding is challenging have practical implications for a new generation of bimodal bilinguals who are also cochlear implant users. An increasingly common intervention for severe to profound deafness, both congenital and acquired, the cochlear implant (CI) transfers acoustic information collected via a microphone at the scalp directly to the auditory nerve, bypassing the defective inner ear. It allows individuals with deafness acquired post-lingually to preserve communication by restoring access to sound, albeit with a substantially degraded and distorted signal. Early implantation, from only a few months of age and no later than 7 years, allows many children with congenital deafness to acquire spoken language and cognitive skills, providing they have the right support (Tobey et al., 2011), and changes the course of cross-modal plasticity caused by deafness (Kral and Sharma, 2012; Glick and Sharma, 2017).
With the advent of cochlear implantation, many profoundly deaf children attend mainstream schools where there may be little opportunity to practice and develop sign language skills. In addition, only around 5% of deaf children who could benefit from sign language communication are born into deaf families where sign language is established and commonplace. This means that many profoundly deaf children growing up today do not have the same access to sign language as their parents’ generation.
Children with CIs perform at a lower level than their normal hearing peers on a wide range of cognitive tasks (Lyxell et al., 2008; van Wieringen and Wouters, 2015). These include short-term memory measured using forward and backward digit span (Burkholder and Pisoni, 2003; Edwards et al., 2016) and visuospatial working memory (Beer et al., 2014), even though non-implanted deaf individuals have been shown to perform better than individuals with normal hearing on visuospatial working memory (Wilson et al., 1997; Geraci et al., 2008; Rudner et al., 2016a). It could be argued that standard administration of digit span (Wechsler Intelligence Scale) with oral presentation of stimuli would put individuals with the limited auditory access afforded by cochlear implants at a disadvantage. However, it seems that digit span discrepancies in children with cochlear implants are due to deficits in verbal rehearsal and serial scanning skills (Burkholder and Pisoni, 2003) rather than stimulus degradation as such (Carter et al., 2002; Burkholder-Juhasz et al., 2007). There is some evidence that children with cochlear implants in mainstream educational settings perform better cognitively than their peers in so-called total communication settings, where speech is augmented with various kinds of visual cues although not typically sign language (Pisoni and Cleary, 2003; Tobey et al., 2011; Boons et al., 2012). However, because selection of educational setting is not random, care should be taken in interpreting this finding. Indeed, it has been shown that the speech development of deaf children with cochlear implants who have deaf signing parents, and thus good access to sign language, can be comparable to that of hearing peers (Davidson et al., 2014), although those with sign support in hearing families may not always do as well (Geers et al., 2017).
Early acquisition of language is vital for cognitive development (Mayberry et al., 2002) and may be the best predictor of successful language outcome for children born deaf (Campbell et al., 2014). Not only are sign languages fully fledged natural languages, they show similar developmental milestones to spoken languages and provide a good basis for their subsequent acquisition (Mayberry et al., 2002). Experience of sign language from infancy organizes the brain for language (Rönnberg et al., 2000; MacSweeney et al., 2008a; Campbell et al., 2014; MacSweeney and Cardin, 2015), and animal studies show that reorganization of auditory cortex for visual processing does not preclude subsequent auditory processing when cochlear implantation provides access to sound in the mature brain (Land et al., 2016). Although cochlear implantation is a revolution in the treatment of deafness, it provides only partial access to the richness of the speech signal, and in noisy situations it provides only limited assistance in segregating the signal of interest. Demonstrably, it provides a basis for language acquisition and cognitive development for many deaf children, but this basis is suboptimal. As Campbell et al. (2014) pointed out in their Frontiers review, there is little evidence to suggest that encouraging sign language development in deaf children is detrimental to speech development. If sign language can provide early and better-quality cognitive representations, leading to better ability to imitate gestures and maintain them in working memory, its use should be stimulated.
Reading is a vital skill for everyone in the modern world, but especially for deaf individuals, for whom it can give access to information that may be less available through the direct communication channels of sign and speech. Good language skills lay the foundation for good reading skills, and this is true of both spoken and signed language (Holmer, 2016). A recent review by Mayer and Trezek (2017) shows that overall, studies of reading comprehension suggest that the majority of participants with cochlear implants achieved scores in the average range, although with wide variability. The language skills of deaf native signing children are likely to be more firmly established for sign language than for a spoken second language acquired via cochlear implants. There is evidence that sign language skill predicts reading ability (Hermans et al., 2008; Holmer et al., 2016a), while the predictive strength of spoken language skills in deaf children is unreliable (Mayberry et al., 2010). Further, there is a link between reading ability and precision in imitating signs in deaf children (Holmer et al., 2016b). Recently, it has been shown that training the link between sign language and the written word may have a positive effect on word reading (Holmer et al., 2017). Thus, both spoken language skill and reading skill in deaf children are associated with firmly established first language skills.
Future Directions
The investigation of working memory for manual gestures as an independent phenomenon, rather than in comparison to working memory for words, has only just begun, and future directions of interest are many and various. I will outline some of the most salient.
This review reports evidence of an effect of semantic representation on working memory for manual gestures but no effect of phonological representation. The effect of semantic representation differed for deaf and hearing signers being apparent across different memory loads for hearing signers but only apparent for deaf signers at high memory load. Based on the emerging model of working memory for linguistic and non-linguistic manual gestures, see Figure 4, future work should investigate:
(1) the load limits of working memory for manual gestures in deaf signers and how they are influenced by pre-existing semantic representation.
(2) the influence of pre-existing semantic representation on working memory for manual gestures in hearing signers when sign-based representation is mandatory.
(3) the influence of pre-existing phonological representation when phonological representation is mandatory.
(4) the neural networks underpinning exploitation of pre-existing representation during working memory for manual gestures.
This review also reports effects of load and degradation on working memory for manual gestures similar to those found for words. Future work should investigate:
(5) the modality specificity of the neural networks underpinning effects of load and degradation and their interaction.
Little work has investigated the effect of age on working memory for manual gestures. Future work should investigate:
(6) how age plays into the phenomena listed above.
I have discussed how representation and maintenance of gesture may support language development in deaf children. Future work should investigate:
(7) imitation of, and memory for, manual gestures in deaf children as well as their correlation with academic development.
Other populations with disorders of language and cognition, including but not limited to individuals with intellectual disabilities, apraxia, aphasia, or psychiatric disorders such as schizophrenia, may also benefit from using gesture as a means of representation. Thus, future work should investigate:
(8) imitation of, and memory for, manual gestures in other clinical populations.
The ability to represent and maintain manual gestures in older adults at risk of post-lingual deafness has, to my knowledge, not yet been investigated. Future work should consider:
(9) how age-related hearing loss plays into the above-mentioned phenomena.
Author Contributions
The article was devised and written by MR.
Conflict of Interest Statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Funding
This work was supported by the Swedish Research Council through the grant to the Linnaeus Centre HEAD.
Agrafiotis, D., Canagarajah, N., Bull, D. R., and Dye, M. (2003). Perceptually optimized sign language video coding based on eye tracking analysis. Electron. Lett. 39, 1703–1705. doi: 10.1049/el:20031140
Andin, J., Orfanidou, E., Cardin, V., Holmer, E., Capek, C. M., Woll, B., et al. (2013). Similar digit-based working memory in deaf signers and hearing non-signers despite digit span differences. Front. Psychol. 4:942. doi: 10.3389/fpsyg.2013.00942
Bavelier, D., Newman, A. J., Mukherjee, M., Hauser, P., Kemeny, S., Braun, A., et al. (2008). Encoding, rehearsal, and recall in signers and speakers: shared network but differential engagement. Cereb. Cortex 18, 2263–2274. doi: 10.1093/cercor/bhm248
Beer, J., Kronenberger, W. G., Castellanosa, I., Colsona, B. G., Henning, S. C., and Pisoni, D. B. (2014). Executive functioning skills in preschool-age children with cochlear implants. J. Speech Lang. Hear. Res. 57, 1521–1534. doi: 10.1044/2014_JSLHR-H-13-0054
Boons, T., Brokx, J. P., Dhooge, I., Frijns, J. H., Peeraer, L., Vermeulen, A., et al. (2012). Predictors of spoken language development following pediatric cochlear implantation. Ear Hear. 33, 617–639. doi: 10.1097/AUD.0b013e3182503e47
Burkholder, R. A., and Pisoni, D. B. (2003). Speech timing and working memory in profoundly deaf children after cochlear implantation. J. Exp. Child Psychol. 85, 63–88. doi: 10.1016/S0022-0965(03)00033-X
Burkholder-Juhasz, R. A., Levi, S. V., Dillon, C. M., and Pisoni, D. B. (2007). Nonword repetition with spectrally reduced speech: some developmental and clinical findings from pediatric cochlear implantation. J. Deaf Stud. Deaf Educ. 12, 472–485. doi: 10.1093/deafed/enm031
Campbell, R., MacSweeney, M., and Woll, B. (2014). Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success. Front. Hum. Neurosci. 8:834. doi: 10.3389/fnhum.2014.00834
Cardin, V., Orfanidou, E., Kästner, L., Rönnberg, J., Woll, B., Capek, C. M., et al. (2016). Monitoring different phonological parameters of sign language engages the same cortical language network but distinctive perceptual ones. J. Cogn. Neurosci. 28, 20–40. doi: 10.1162/jocn_a_00872
Cardin, V., Orfanidou, E., Rönnberg, J., Capek, C. M., Rudner, M., and Woll, B. (2013). Dissociating cognitive and sensory neural plasticity in human superior temporal cortex. Nat. Commun. 4:1473. doi: 10.1038/ncomms2463
Cardin, V., Rudner, M., De Oliveira, R. F., Andin, J. T., Su, M., Beese, L., et al. (2017). The organization of working memory networks is shaped by early sensory experience. Cereb. Cortex 30, 1–15. doi: 10.1093/cercor/bhx222
Carter, A. K., Dillon, C. M., and Pisoni, D. B. (2002). Imitation of nonwords by hearing impaired children with cochlear implants: suprasegmental analyses. Clin. Linguist. Phon. 16, 619–638. doi: 10.1080/02699200021000034958
Chu, M., Meyer, A., Foulkes, L., and Kita, S. (2014). Individual differences in frequency and saliency of speech-accompanying gestures: the role of cognitive abilities and empathy. J. Exp. Psychol. Gen. 143, 694–709. doi: 10.1037/a0033861
Cohen, J. D., Forman, S. D., Braver, T. S., Casey, B. J., Servan-Schreiber, D., and Noll, D. C. (1994). Activation of the prefrontal cortex in a nonspatial working memory task with functional MRI. Hum. Brain Mapp. 1, 293–304. doi: 10.1002/hbm.460010407
Cook, S. W., Yip, T. K., and Goldin-Meadow, S. (2012). Gestures, but not meaningless movements, lighten working memory load when explaining math. Lang. Cogn. Process. 27, 594–610. doi: 10.1080/01690965.2011.567074
Davidson, K., Lillo-Martin, D., and Chen Pichler, D. (2014). Spoken English language development among native signing children with cochlear implants. J. Deaf Stud. Deaf Educ. 19, 238–250. doi: 10.1093/deafed/ent045
Ding, H., Ming, D., Wan, B., Li, Q., Qin, W., and Yu, C. (2016). Enhanced spontaneous functional connectivity of the superior temporal gyrus in early deafness. Sci. Rep. 6:23239. doi: 10.1038/srep23239
Ding, H., Qin, W., Liang, M., Ming, D., Wan, B., Li, Q., et al. (2015). Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness. Brain 138(Pt 9), 2750–2765. doi: 10.1093/brain/awv165
Edwards, L., Aitkenhead, L., and Langdon, D. (2016). The contribution of short-term memory capacity to reading ability in adolescents with cochlear implants. Int. J. Pediatr. Otorhinolaryngol. 90, 37–42. doi: 10.1016/j.ijporl.2016.08.017
Emmorey, K., McCullough, S., Mehta, S., and Grabowski, T. J. (2014). How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language. Front. Psychol. 5:484. doi: 10.3389/fpsyg.2014.00484
Geers, A. E., Mitchell, C. M., Warner-Czyz, A., Wang, N. Y., Eisenberg, L. S., and CDaCI Investigative Team (2017). Early sign language exposure and cochlear implantation benefits. Pediatrics 140:e20163489. doi: 10.1542/peds.2016-3489
Geraci, C., Gozzi, M., Papagno, C., and Cecchetto, C. (2008). How grammar can cope with limited short-term memory: simultaneity and seriality in sign languages. Cognition 106, 780–804. doi: 10.1016/j.cognition.2007.04.014
Gillespie, M., James, A. N., Federmeier, K. D., and Watson, D. G. (2014). Verbal working memory predicts co-speech gesture: evidence from individual differences. Cognition 132, 174–180. doi: 10.1016/j.cognition.2014.03.012
Gozzi, M., Geraci, C., Cecchetto, C., Perugini, M., and Papagno, C. (2010). Looking for an explanation for the low sign span. Is order involved? J. Deaf Stud. Deaf Educ. 16, 101–107. doi: 10.1093/deafed/enq035
Grosvald, M., Lachaud, C., and Corina, D. (2012). Handshape monitoring: evaluation of linguistic and perceptual factors in the processing of American Sign Language. Lang. Cogn. Process. 27, 117–141. doi: 10.1080/01690965.2010.549667
Gutierrez, E., Williams, D., Grosvald, M., and Corina, D. (2012). Lexical access in American Sign Language: an ERP investigation of effects of semantics and phonology. Brain Res. 1468, 63–83. doi: 10.1016/j.brainres.2012.04.029
Hermans, D., Knoors, H., Ormel, E., and Verhoeven, L. (2008). The relationship between the reading and signing skills of deaf children in bilingual education programs. J. Deaf Stud. Deaf Educ. 13, 518–530. doi: 10.1093/deafed/enn009
Holmer, E. (2016). Signs for Developing Reading: Sign Language and Reading Development in Deaf and Hard-of-Hearing Children. Doctoral thesis, Linköping University Electronic Press, Linköping. doi: 10.3384/diss.diva-128207
Holmer, E., Heimann, M., and Rudner, M. (2016a). Evidence of an association between sign language phonological awareness and word reading in deaf and hard-of-hearing children. Res. Dev. Disabil. 48, 145–159. doi: 10.1016/j.ridd.2015.10.008
Holmer, E., Heimann, M., and Rudner, M. (2016b). Imitation, sign language skill and the developmental ease of language understanding (D-ELU) model. Front. Psychol. 7:107. doi: 10.3389/fpsyg.2016.00107
Holmer, E., Heimann, M., and Rudner, M. (2017). Computerized sign language based literacy training for deaf and hard-of-hearing children. J. Deaf Stud. Deaf Educ. 22, 404–421. doi: 10.1093/deafed/enx023
Hulme, C., Maughan, S., and Brown, G. D. A. (1991). Memory for familiar and unfamiliar words: evidence for a long-term memory contribution to short-term memory span. J. Mem. Lang. 30, 685–701. doi: 10.1016/0749-596X(91)90032-F
Land, R., Baumhoff, P., Tillein, J., Lomber, S. G., Hubka, P., and Kral, A. (2016). Cross-modal plasticity in higher-order auditory cortex of congenitally deaf cats does not limit auditory responsiveness to cochlear implants. J. Neurosci. 36, 6175–6185. doi: 10.1523/JNEUROSCI.0046-16.2016
Lomber, S. G., Meredith, M. A., and Kral, A. (2010). Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nat. Neurosci. 13, 1421–1427. doi: 10.1038/nn.2653
Lyxell, B., Sahlén, B., Wass, M., Ibertsson, T., Larsby, B., Hällgren, M., et al. (2008). Cognitive development in children with cochlear implants: relations to reading and communication. Int. J. Audiol. 47, S47–S52. doi: 10.1080/14992020802307370
MacSweeney, M., Waters, D., Brammer, M. J., Woll, B., and Goswami, U. (2008b). Phonological processing in deaf signers and the impact of age of first language acquisition. Neuroimage 40, 1369–1379. doi: 10.1016/j.neuroimage.2007.12.047
Mayberry, R. I., del Giudice, A. A., and Lieberman, A. M. (2010). Reading achievement in relation to phonological coding and awareness in deaf readers: a meta-analysis. J. Deaf Stud. Deaf Educ. 16, 164–188. doi: 10.1093/deafed/
Obleser, J., Wöstmann, M., Hellbernd, N., Wilsch, A., and Maess, B. (2012). Adverse listening conditions and memory load drive a common alpha oscillatory network. J. Neurosci. 32, 12376–12383. doi: 10.1523/JNEUROSCI.4908-11.2012
Owen, A. M., McMillan, K. M., Laird, A. R., and Bullmore, E. (2005). N-back working memory paradigm: a meta-analysis of normative functional neuroimaging studies. Hum. Brain Mapp. 25, 46–59. doi: 10.1002/hbm.20131
Pavel, M., Sperling, G., Riedl, T., and Vanderbeek, A. (1987). Limits of visual communication: the effect of signal-to-noise ratio on the intelligibility of American Sign Language. J. Opt. Soc. Am. A 4, 2355–2365. doi: 10.1364/JOSAA.4.002355
Petitto, L. A., Zatorre, R. J., Gauna, K., Nikelski, E. J., Dostie, D., and Evans, A. C. (2000). Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language. Proc. Natl. Acad. Sci. 97, 13961–13966. doi: 10.1073/pnas.97.25.13961
Pisoni, D. B., and Cleary, M. (2003). Measures of working memory span and verbal rehearsal speed in deaf children after cochlear implantation. Ear Hear. 24(Suppl. 1), 106S–120S. doi: 10.1097/01.AUD.0000051692.05140.8E
Rönnberg, J., Lunner, T., Zekveld, A., Sörqvist, P., Danielsson, H., Lyxell, B., et al. (2013). The ease of language understanding (ELU) model: theoretical, empirical, and clinical advances. Front. Syst. Neurosci. 7:31. doi: 10.3389/fnsys.2013.00031
Rönnberg, J., Rudner, M., Foo, C., and Lunner, T. (2008). Cognition counts: a working memory system for ease of language understanding (ELU). Int. J. Audiol. 47(Suppl. 2), S99–S105. doi: 10.1080/14992020802301167
Rudner, M., Davidsson, L., and Rönnberg, J. (2010). Effects of age on the temporal organization of working memory in deaf signers. Aging Neuropsychol. Cogn. 17, 360–383. doi: 10.1080/13825580903311832
Rudner, M., Fransson, P., Ingvar, M., Nyberg, L., and Rönnberg, J. (2007). Neural representation of binding lexical signs and words in the episodic buffer of working memory. Neuropsychologia 45, 2258–2276. doi: 10.1016/j.neuropsychologia.2007.02.017
Rudner, M., Karlsson, T., Gunnarsson, J., and Rönnberg, J. (2013). Levels of processing and language modality specificity in working memory. Neuropsychologia 51, 656–666. doi: 10.1016/j.neuropsychologia.2012.12.011
Rudner, M., Keidser, G., Hygge, S., and Rönnberg, J. (2016a). Better visuospatial working memory in adults who report profound deafness compared to those with normal or poor hearing: data from the UK Biobank resource. Ear Hear. 37, 620–622. doi: 10.1097/AUD.0000000000000314
Rudner, M., Orfanidou, E., Cardin, V., Capek, C. M., Woll, B., and Rönnberg, J. (2016b). Pre-existing semantic representation improves working memory performance in the visuospatial domain. Mem. Cogn. 44, 608–620. doi: 10.3758/s13421-016-0585-z
Sehyr, Z. S., Petrich, J., and Emmorey, K. (2017). Fingerspelled and printed words are recoded into a speech-based code in short-term memory. J. Deaf Stud. Deaf Educ. 22, 72–87. doi: 10.1093/deafed/enw068
Spaepen, E., Coppola, M., Flaherty, M., Spelke, E., and Goldin-Meadow, S. (2013). Generating a lexicon without a language model: Do words for number count? J. Mem. Lang. 69, 496–505. doi: 10.1016/j.jml.2013.05.004
Stokoe, W. C. (1960). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf, Studies in Linguistics: Occasional Papers (No. 8). Buffalo, NY: University of Buffalo.
Sweet, L. H., Paskavitz, J. F., Haley, A. P., Gunstad, J. J., Mulligan, R. C., Nyalakanti, P. K., et al. (2008). Imaging phonological similarity effects on verbal working memory. Neuropsychologia 46, 1114–1123. doi: 10.1016/j.neuropsychologia.2007.10.022
Thompson, R. L., Vinson, D. P., Woll, B., and Vigliocco, G. (2012). The road to language learning is iconic: evidence from British Sign Language. Psychol. Sci. 23, 1443–1448. doi: 10.1177/0956797612459763
Tobey, E. A., Geers, A. E., Sundarrajan, M., and Shin, S. (2011). Factors influencing speech production in elementary and high school-aged cochlear implant users. Ear Hear. 32, 27S–38S. doi: 10.1097/AUD.0b013e3181fa41bb
van Wieringen, A., and Wouters, J. (2015). What can we expect of normally-developing children implanted at a young age with respect to their auditory, linguistic, and cognitive skills? Hear. Res. 322, 171–179. doi: 10.1016/j.heares.2014.09.002
Wilson, M., Bettger, J. G., Niculae, I., and Klima, E. S. (1997). Modality of language shapes working memory: evidence from digit span and spatial span in ASL signers. J. Deaf Stud. Deaf Educ. 2, 150–160. doi: 10.1093/oxfordjournals.deafed.a014321
Wong, C., Pearson, K. G., and Lomber, S. G. (2017). Contributions of parietal cortex to the working memory of an obstacle acquired visually or tactilely in the locomoting cat. Cereb. Cortex 2, 1–16. doi: 10.1093/cercor/bhx186
Keywords: working memory, manual gestures, sign language, deafness, semantics, phonology, cochlear implantation
Citation: Rudner M (2018) Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application. Front. Psychol. 9:679. doi: 10.3389/fpsyg.2018.00679
Received: 07 January 2018; Accepted: 19 April 2018;
Published: 15 May 2018.
Edited by: Yi Du, Institute of Psychology (CAS), China
Reviewed by: Michael Charles Corballis, University of Auckland, New Zealand
Amira J. Zaylaa, Lebanese University, Lebanon
Benjamin Straube, Philipps University of Marburg, Germany
Copyright © 2018 Rudner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Mary Rudner, email@example.com