Event Abstract

The effect of verbal working memory load in speech/gesture integration processing

  • 1 University of Mons, Belgium
  • 2 University of Hull, United Kingdom

Co-speech gestures are characterized by a formal relationship between hand movements and the verbal units accompanying them [1]. Although they retain some meaning in the absence of context, they rely on that context to be understood in conversation [2]. Several studies agree on an impact of co-speech gestures on language comprehension [3, 4, 5, 6, 7]. A study among aphasic patients showed improved comprehension following the presentation of congruent co-speech gestures compared to incongruent ones [8]. Furthermore, co-speech gestures were perceived and processed by brain regions linked to semantic information [7]: the observed gestures modulated neural activation, suggesting an attempt at comprehension by the listeners. Because co-speech gestures and verbal utterances are processed online during speech, working memory (WM) is likely to be involved in their integration [9, 10].

In 2014, Wu & Coulson [11] investigated how verbal WM (VWM) and visuospatial WM capacity influence speech/gesture integration. They found better performance on a gender classification task (a task in which participants discriminate whether they hear a man's or a woman's voice while watching gestures enacted by a man or a woman; a gender-congruent condition being, for example, a man's voice heard simultaneously with a gesture enacted by a man) among participants with higher visuospatial WM capacity, but failed to show the same effect for VWM. Nevertheless, given the nature of iconic gestures (i.e., their association with verbal information), an involvement of VWM would have been expected. One possible explanation is that the VWM task (remembering 1 to 4 digits) was not demanding enough to cause interference between the tasks. To assess this lack of effect, we conducted a similar study in which the VWM task was made more difficult.

The aim of our study was to observe a reduced benefit of semantic congruency on gesture/speech integration when increasing the load on VWM. We thus expected: (1) a main effect of semantic congruency, shown by reduced reaction times (RTs) in the semantically congruent (SC) condition compared to the semantically incongruent (SI) condition; (2) a main effect of gender congruency, shown by faster RTs in the gender-congruent (GC) condition; and (3) an interaction between VWM load and semantic congruency, such that the RT difference between SI and SC would be smaller in the high-load than in the low-load condition.

Fifty-three participants (27 females; M_age = 23.75, SD = 8.76) took part in an hour-long study. They were students from the University of Hull with no reported sensory or psychiatric disorders. All participants were fluent in English and gave written informed consent, and they received £8 for taking part. The study was approved by the Ethics Committee of the University of Hull.

The study included a reading span test (RST) and a computerized main task composed of a gender classification task (GCT) as the primary task and a word span test (WST) as the secondary task. The RST [12] required participants to read aloud blocks of semantically unrelated sentences and to remember the final word of each sentence. The test was of increasing difficulty: each level contained 3 blocks of 2, 3, 4, 5, or 6 sentences, so participants had to retain 2, 3, 4, 5, or 6 words.
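As a minimal illustration (our own sketch, not the authors' materials), the RST schedule just described can be written out as follows:

```python
# Sketch of the reading span test (RST) schedule described above: five levels
# of increasing difficulty, 3 blocks per level, blocks of 2-6 sentences, and
# one final word to retain per sentence.
BLOCKS_PER_LEVEL = 3
LEVELS = [2, 3, 4, 5, 6]  # sentences (and thus words to retain) per block

for level in LEVELS:
    for block in range(1, BLOCKS_PER_LEVEL + 1):
        print(f"level {level}, block {block}: "
              f"read {level} sentences aloud, retain {level} final words")
```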
An individual's reading span was the highest level at which the final words of 2 consecutive sentences were recalled correctly. Performance on this task was used to create our grouping variable.

The main task (Fig. 1) consisted of a GCT embedded in a WST. For the GCT, stimuli consisted, on the one hand, of video recordings of 16 simple acted actions (e.g., zipping up a coat or breaking a bar) that had been used in a previous study [13]. Each action was performed by either a man or a woman. On the other hand, verbal utterances describing each action were recorded separately. Video and voice recordings were then paired, with the audio onset lagging the video onset by 200 ms, to create audiovisual stimuli that were either congruent in gender (i.e., the person performing the action and the voice heard were of the same gender) and/or in gesture (i.e., the seen action matched the verbal utterance), or incongruent in gender and/or gesture. This primary task was complemented by the secondary WST. Its stimuli consisted of 1280 English words retrieved from SUBTLEX-UK [14]. All words contained 1 or 2 syllables and were selected based on ratings of familiarity, concreteness, and imageability; the Zipf scale was used as the measure of word frequency. The words were then randomly ordered and separated into 4 groups: high-load targets and high-load distractors (4 words in each group), and low-load targets and low-load distractors (1 word in each group).

The experimental task comprised 256 trials consisting of words and gesture videos. Each trial began with either one (low load) or four (high load) written words, presented with an inter-stimulus interval (ISI) of 750 ms. Participants were asked to remember these words for later recognition. They were then presented with an audiovisual stimulus and asked to indicate whether the voice they heard was male or female by clicking the right or left mouse button (response sides were counterbalanced). If they responded incorrectly or failed to answer within 2000 ms, they received 500 ms of feedback. After the 2000 ms had elapsed, or once a response was given, participants were shown two (low load) or eight (high load) written words displayed around the center of the screen and were asked to click on the word(s) presented at the beginning of that trial, in their order of presentation. Trials were separated by an inter-trial interval (ITI) of 500, 750, or 1000 ms. All participants were asked to respond as quickly and accurately as possible.

Participants were divided into 2 groups (RST low and RST high) according to their reading span (span of 2 = group 1; span > 2 = group 2). Two participants were then excluded as outliers. We conducted a 3-way repeated-measures ANOVA (semantic(2) × load(2) × gender(2)) with RST group as a between-subjects factor. Results showed a main effect of semantic congruency (F(1,49) = 5.03, p = 0.03): RTs were lower in the semantically congruent (SC) condition (M = 619.16, SD = 15.78) than in the semantically incongruent (SI) condition (M = 625.73, SD = 15.9). A main effect of gender congruency was also found (F(1,49) = 71.12, p < 0.01): RTs were lower in the gender-congruent (GC) condition (M = 609.48, SD = 15.66) than in the gender-incongruent (GI) condition (M = 635.42, SD = 10.04). A semantic × gender × RST interaction (F(1,49) = 4.76, p = 0.03) was also found.
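To make this pipeline concrete, here is a minimal sketch (not the authors' code) of how per-participant cell means could be computed and analysed; the file name and all column and factor labels are assumptions for illustration:

```python
# Sketch of the RT analysis pipeline. 'trials.csv' and the column names
# ('participant', 'rst_group', 'semantic', 'gender', 'load', 'rt') are
# hypothetical, not the authors' actual variables.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

trials = pd.read_csv("trials.csv")  # one row per trial

# Per-participant mean RT in each cell of the 2x2x2 within-subject design.
cells = (trials
         .groupby(["participant", "rst_group", "semantic", "gender", "load"],
                  as_index=False)["rt"]
         .mean())

# statsmodels' AnovaRM supports within-subject factors only, so the
# between-subjects RST factor is illustrated by fitting the 2x2x2
# repeated-measures ANOVA separately in each group; the full mixed-design
# ANOVA reported in the abstract would require other tooling.
for group, sub in cells.groupby("rst_group"):
    fit = AnovaRM(sub, depvar="rt", subject="participant",
                  within=["semantic", "gender", "load"]).fit()
    print(f"RST group: {group}")
    print(fit.anova_table)
```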
For both groups (low RST; high RST), performance was slower in the GI condition than in the GC condition, in both the SC and SI conditions (SC-GI, low RST: M = 668.1, SD = 24.3; SC-GC, low RST: M = 651.08, SD = 23.61; SC-GI, high RST: M = 595.05, SD = 21.16; SC-GC, high RST: M = 562.45, SD = 20.57; SI-GI, low RST: M = 678.43, SD = 24.37; SI-GC, low RST: M = 650.38, SD = 23.92; SI-GI, high RST: M = 600.1, SD = 21.23; SI-GC, high RST: M = 574.01, SD = 20.84). Furthermore, a load × semantic × gender × RST interaction was found (F(1,49) = 4.029, p = 0.05). Finally, we computed the difference between SI and SC RTs in each condition to quantify the congruency advantage. Under high load in the GC condition, participants with a high RST were significantly slower (F(1,28) = 5.54, p = 0.03) in the SI condition than in the SC condition (mean difference = 8.31, SD = 3.53). This difference was not found among low-RST participants (F(1,29) = 0.98, p = 0.33).

In the absence of significant differences between SI and SC RTs as a function of WST load alone, these results suggest no overall effect of VWM load on gesture/speech integration. However, when performance on the RST is taken into account, significant group differences do emerge. Thus, although gesture/speech integration does not seem affected by the secondary WST on its own, participants with a high RST showed significantly slower RTs in the SI condition than in the SC condition under high load, whereas participants with a low RST did not appear to be disturbed. This pattern could indicate an interference effect for high-RST participants faced with semantically incongruent stimuli under high load: the WST may have engaged specific verbal WM resources in these participants, slowing their RTs in the main task. In conclusion, VWM may play a role in gesture/speech integration among participants with high RST performance, but these results need further investigation, and deeper analysis is required to better understand that role.
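The congruency-advantage computation described above can be sketched as follows (again with hypothetical file, column, and level names; the 'high'/'low' and 'congruent'/'incongruent' labels are assumptions):

```python
# Sketch of the SI - SC congruency-advantage analysis in the high-load,
# gender-congruent cell, compared across RST groups. All names hypothetical.
import pandas as pd
from scipy import stats

trials = pd.read_csv("trials.csv")
cells = (trials
         .groupby(["participant", "rst_group", "semantic", "gender", "load"],
                  as_index=False)["rt"]
         .mean())

# Keep the high-load, gender-congruent cell and compute SI - SC per participant.
hl_gc = cells.query("load == 'high' and gender == 'congruent'")
wide = (hl_gc
        .pivot_table(index=["participant", "rst_group"],
                     columns="semantic", values="rt")
        .reset_index())
wide["advantage"] = wide["incongruent"] - wide["congruent"]  # SI - SC

# Test the advantage against zero within each RST group; with one numerator
# degree of freedom, t squared equals the F values reported above.
for group, sub in wide.groupby("rst_group"):
    t, p = stats.ttest_1samp(sub["advantage"], 0.0)
    print(f"RST {group}: t = {t:.2f}, p = {p:.3f}")
```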

Figure 1. Design of the main task: a gender classification task (GCT) embedded in a word span test (WST).

References

[1] Driskell, J. E., & Radtke, P. (2003). The effect of gesture on speech production and comprehension. Human Factors, 45(3), 445-454. doi: 10.1518/hfes.45.3.445.27258
[2] Holle, H., Gunter, T. C., Rüschemeyer, S.-A., Hennenlotter, A., & Iacoboni, M. (2008). Neural correlates of the processing of co-speech gestures. NeuroImage, 39(4), 2010-2024. doi: 10.1016/j.neuroimage.2007.10.055
[3] Beattie, G., & Shovelton, H. (1999). Mapping the range of information contained in the iconic hand gestures that accompany spontaneous speech. Journal of Language and Social Psychology, 18(4), 438-462. doi: 10.1177/0261927X99018004005
[4] Wu, Y., & Coulson, S. (2015). Iconic gestures facilitate discourse comprehension in individuals with superior immediate memory for body configurations. Psychological Science, 26(11), 1717-1727. doi: 10.1177/0956797615597671
[5] Ozyurek, A., Willems, R., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi: 10.1162/jocn.2007.19.4.605
[6] Holle, H., & Gunter, T. (2007). The role of iconic gestures in speech disambiguation: ERP evidence. Journal of Cognitive Neuroscience, 19(7), 1175-1192. doi: 10.1162/jocn.2007.19.7.1175
[7] Dick, A., Goldin-Meadow, S., Hasson, U., Skipper, J., & Small, S. (2009). Co-speech gestures influence neural activity in brain regions associated with processing semantic information. Human Brain Mapping, 30(11), 3509-3526. doi: 10.1002/hbm.20774
[8] Eggenberger, N., Preisig, B., Schumacher, R., Hopfner, S., Vanbellingen, T., Nyffeler, T., Gutbrod, K., Annoni, J.-M., Bohlhalter, S., Cazzoli, D., & Müri, R. (2016). Comprehension of co-speech gestures in aphasic patients: An eye movement study. PLoS ONE, 11(1), e0146583. doi: 10.1371/journal.pone.0146583
[9] Gillespie, M., James, A., Federmeier, K., & Watson, D. (2014). Verbal working memory predicts co-speech gesture: Evidence from individual differences. Cognition, 132(2), 174-180. doi: 10.1016/j.cognition.2014.03.012
[10] De Ruiter, J.-P. (1998). Gesture and speech production (Doctoral dissertation). Max Planck Institute for Psycholinguistics.
[11] Wu, Y., & Coulson, S. (2014). Co-speech iconic gestures and visuo-spatial working memory. Acta Psychologica, 153, 39-50. doi: 10.1016/j.actpsy.2014.09.002
[12] Daneman, M., & Carpenter, P. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19(4), 450-466. doi: 10.1016/S0022-5371(80)90312-6
[13] Zhao, W., Riggs, K., Schindler, I., & Holle, H. (2018). Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. Journal of Neuroscience, 38(8), 1891-1900. doi: 10.1523/JNEUROSCI.1748-17.2017
[14] SUBTLEX-UK: http://meshugga.ugent.be/open-lexicons/interfaces/subtlex-uk/

Keywords: Co-speech iconic gesture, working memory, reaction times, verbal working memory (VWM), Gesture/speech integration

Conference: Belgian Brain Congress 2018 — Belgian Brain Council, LIEGE, Belgium, 19 Oct - 19 Oct, 2018.

Presentation Type: e-posters

Topic: NOVEL STRATEGIES FOR NEUROLOGICAL AND MENTAL DISORDERS: SCIENTIFIC BASIS AND VALUE FOR PATIENT-CENTERED CARE

Citation: Kandana Arachchige KG, Holle H, Simoes Loureiro I, Blekic W, Rossignol M and Lefebvre L (2019). The effect of verbal working memory load in speech/gesture integration processing. Front. Neurosci. Conference Abstract: Belgian Brain Congress 2018 — Belgian Brain Council. doi: 10.3389/conf.fnins.2018.95.00043

Received: 14 Aug 2018; Published Online: 17 Jan 2019.

* Correspondence: Ms. Kendra G Kandana Arachchige, University of Mons, Mons, Belgium, kendra.kandanaarachchige@umons.ac.be