Original Research Article
Viewing photos and reading nouns of natural graspable objects similarly modulate motor responses
- 1Dipartimento di Neuroscienze, Sezione di Fisiologia, Università di Parma, Parma, Italy
- 2Dipartimento di Psicologia, Università di Milano-Bicocca, Milano, Italy
- 3IRCCS Neuromed, Pozzilli, Italy
- 4Dipartimento di Scienze Mediche e Chirurgiche, Università “Magna Graecia” di Catanzaro, Germaneto, Italy
It is well known that the observation of graspable objects recruits the same motor representations involved in their actual manipulation. Recent evidence suggests that the presentation of nouns referring to graspable objects may exert similar effects. So far, however, it is not clear to what extent the modulation of the motor system during object observation overlaps with that related to noun processing. To address this issue, two behavioral experiments were carried out using a go-no go paradigm. Healthy participants were presented with photos and nouns of graspable and non-graspable natural objects; scrambled images and pseudowords derived from the original stimuli were also used. At go-signal onset (150 ms after stimulus presentation), participants had to press a key when the stimulus referred to a real object, using their right (Experiment 1) or left (Experiment 2) hand, and to refrain from responding when a scrambled image or a pseudoword was presented. Slower responses were found for both photos and nouns of graspable objects as compared to non-graspable objects, independent of the responding hand. These findings suggest that processing seen graspable objects and written nouns referring to graspable objects similarly modulates the motor system.
It is known that hand-object interactions recruit a parieto-frontal circuit subserving sensorimotor transformations in the brain of both monkeys and humans (Rizzolatti et al., 1981, 1988, 2002; Kurata and Tanji, 1986; Taira et al., 1990; Hepp-Reymond et al., 1994; Jeannerod et al., 1995; Sakata et al., 1995; Binkofski et al., 1999; Grol et al., 2007; Hecht et al., 2013). The mere observation of objects that afford manipulation also modulates the activity of the motor system. Single-unit recording studies in monkeys have shown that a set of neurons known as “canonical neurons” discharges during the presentation of graspable objects (Rizzolatti et al., 1988; Murata et al., 1997; Raos et al., 2006; Umiltà et al., 2007). In keeping with this, brain imaging studies have shown activation of fronto-parietal areas in the human brain during the observation of graspable objects (Chao and Martin, 2000; Grèzes et al., 2003a,b). The recruitment of the motor system during object observation is fine-tuned to the intrinsic features that make an object appropriate for manual action: for example, motor-evoked potentials (MEPs) recorded during the observation of a graspable object (e.g., a mug) with a broken handle differed significantly from MEPs recorded during the observation of the intact object (Buccino et al., 2009).
As far as language is concerned, the embodiment approach claims that language processing involves the activation of the same sensorimotor neural substrates recruited when one actually experiences the content of the language material (Lakoff, 1987; Glenberg, 1997; Barsalou, 1999; Pulvermueller, 2001; Gallese, 2003; Gallese and Lakoff, 2005; Zwaan and Taylor, 2006; Fischer and Zwaan, 2008; Jirak et al., 2010). In recent years, experimental evidence in favor of embodiment has grown. Much of this evidence comes from studies that used action verbs (presented individually or embedded in sentences) as stimuli (e.g., Pulvermueller et al., 2001, 2005; Hauk et al., 2004; Buccino et al., 2005; Tettamanti et al., 2005). Studies investigating the recruitment of the motor cortex during noun processing have shown a modulation of motor system activity according to the manipulability of the objects expressed by the nouns (Glover et al., 2004; Tucker and Ellis, 2004; Lindemann et al., 2006; Myung et al., 2006; Bub et al., 2008; Cattaneo et al., 2010; Gough et al., 2012). Recently, slower hand motor responses have been reported during processing of nouns referring to hand-related objects (Marino et al., 2013; see also Sato et al., 2008; Dalla Volta et al., 2009 for similar results with verbs). Summing up, the studies reviewed so far clearly show that manipulation and observation of objects, as well as processing of nouns referring to graspable objects, modulate the activity of the motor system. However, it is not clear to what extent the modulation of the motor system during object observation overlaps with that related to noun processing. For example, there is evidence that when some object features, like spatial location or orientation, are taken into account, photos depicting graspable objects and nouns referring to those same objects modulate motor responses differently (Ferri et al., 2011; Myachykov et al., 2013).
Using a go-no go paradigm, we compared motor responses given while observing photos of graspable and non-graspable natural objects with those given while reading nouns of objects from the same categories. Given evidence that tools and natural objects are differently represented in the brain and differently modulate the activity of the motor system (Boronat et al., 2005; Peeters et al., 2009; Rueschemeyer et al., 2010; Gough et al., 2012; Orban and Rizzolatti, 2012), we restricted our choice to natural objects. The experimental hypothesis was that if object and noun processing share the same neural substrates, as maintained by the embodiment approach, then objects and nouns should also exert a similar modulation of motor responses. In detail, based on previous studies using a similar paradigm (e.g., Buccino et al., 2005; Sato et al., 2008; Marino et al., 2013), we expected slower motor responses for both types of stimuli with an early go-signal (150 ms). In Experiment 1 participants responded with the right hand, while in Experiment 2 they responded with the left hand.
Forty (23 females; mean age = 22 years, 9 months) and 43 (21 females; mean age = 23 years, 6 months) undergraduate students from the University of Catanzaro took part in Experiment 1 and Experiment 2, respectively. All were right-handed according to the Edinburgh Inventory (Oldfield, 1971). None took part in both experiments. All participants were native Italian speakers, had normal or corrected-to-normal vision, and reported no history of language disorders. They were unaware of the purpose of the experiments and gave their informed consent before testing. The study was approved by the local Ethics Committee and conducted in accordance with the Declaration of Helsinki (World Medical Organization, 1996) and the procedure recommended by the Italian Association of Psychology (AIP).
Forty Italian nouns (see Table A1) referring to natural objects and 40 pseudowords, as well as 40 digital color photos (see Table A2) depicting natural objects and 40 scrambled images, were used as stimuli. Twenty nouns referred to natural graspable objects (e.g., “foglia,” “leaf”) and 20 to natural non-graspable objects (e.g., “nuvola,” “cloud”). Figure 1 shows an example of each category. Nouns in the two categories were matched for word length [average values for nouns referring to graspable and non-graspable objects: 6.35 and 5.95; F(1, 38) = 0.68, p = 0.41], syllable number [average values: 2.5 and 2.6; F(1, 38) = 0.24, p = 0.63], and written lexical frequency [average values: 3.92 and 5.05 occurrences per million in the Google search engine, F(1, 38) = 0.31, p = 0.58; average values: 6.13 and 6.95 occurrences per million in CoLFIS (Corpus e Lessico di Frequenza dell'Italiano Scritto, ~3,798,000 words; Laudanna et al., 1995), F(1, 38) = 0.08, p = 0.78; rGoogle/CoLFIS = 0.83, p < 0.0001]. Pseudowords were built by substituting one consonant and one vowel in two distinct syllables of each noun (e.g., “nipola” instead of “nuvola”). With this procedure, pseudowords contained orthographically and phonologically legal syllables for the Italian language. In addition, nouns and pseudowords were matched for word length.
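The matching procedure above amounts to running one-way ANOVAs over item properties for the two noun categories. A minimal sketch of such a check on word length, with the two-group F statistic computed by hand; the word lists here are short hypothetical examples, not the paper's actual stimulus set:

```python
def anova_f(group_a, group_b):
    """One-way ANOVA F statistic for two groups (df_between = 1)."""
    all_vals = group_a + group_b
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in (group_a, group_b)]
    # between-groups sum of squares
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip((group_a, group_b), means))
    # within-groups sum of squares (df = n_total - 2)
    ss_within = sum((x - m) ** 2
                    for g, m in zip((group_a, group_b), means)
                    for x in g)
    df_within = len(all_vals) - 2
    return ss_between / (ss_within / df_within)

# hypothetical graspable vs. non-graspable nouns, compared on word length
graspable = [len(w) for w in ["foglia", "sasso", "mela", "ramo", "pigna"]]
non_graspable = [len(w) for w in ["nuvola", "stella", "lago", "luna", "sole"]]
print(round(anova_f(graspable, non_graspable), 2))  # 0.0: perfectly matched
```

With two groups this F test is equivalent to an unpaired t-test (F = t²); the categories count as matched when F is small and the associated p value is large, as in the statistics reported above.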
Figure 1. Stimuli. Examples of stimuli presented in the two experiments. Upper row shows visual items while lower row shows verbal items. (A) Non-graspable object. (B) Graspable object. (C) Scrambled image. (D) A noun expressing a non-graspable object. (E) A noun expressing a graspable object. (F) Pseudoword.
Photos depicted 20 graspable and 20 non-graspable objects. Figure 1 shows an example of each category. The scrambled images were built by applying Adobe Illustrator distorting graphic filters (e.g., twist and zigzag) to the photos of both graspable and non-graspable objects, so as to make them unrecognizable and thus meaningless. All photos and scrambled images were 440 × 440 pixels. The nouns of the objects depicted in the photos and the 40 Italian nouns used as stimuli were matched for word length [average values for visual and verbal items: 6.45 and 6.15; F(1, 78) = 0.82, p = 0.37], syllable number [average values: 2.57 and 2.55; F(1, 78) = 0.04, p = 0.84], and written lexical frequency [Google average values: 4.98 and 4.49; F(1, 78) = 0.10, p = 0.75; CoLFIS average values: 7.74 and 6.54; F(1, 78) = 0.18, p = 0.67]. For further analyses of the stimuli, see also the Supplementary Materials. The same set of stimuli served both Experiments 1 and 2.
Experimental Design and Procedure
The experiment was carried out in a sound-attenuated room, dimly illuminated by a halogen lamp directed toward the ceiling. Participants sat comfortably in front of a PC screen (LG 22″ LCD, 1920 × 1080 pixel resolution and 60 Hz refresh rate). The eye-to-screen distance was 60 cm.
Figure 2 shows the experimental procedure. Each trial started with a black (RGB coordinates = 0, 0, 0) fixation cross displayed at the center of a gray (RGB coordinates = 178, 178, 178) background. After a variable delay, the fixation cross was replaced by a stimulus item, either a noun/pseudoword or a photo/scrambled image. On each trial, the delay was drawn from a uniform distribution between 1000 and 1500 ms in order to avoid response habituation. The verbal stimuli were written in black lowercase Courier New bold (font size = 24). Stimuli were centrally displayed and surrounded by a red (RGB coordinates = 255, 0, 0), 20-pixel-wide frame. The frame changed from red to green (RGB coordinates = 0, 255, 0) 150 ms after stimulus onset; this color change was the “go” signal for the response. Participants were instructed to respond as quickly and accurately as possible by pressing a key on a computer keyboard centered on the body midline with their right (Experiment 1) or left (Experiment 2) index finger. They had to respond when the stimulus referred to a real object and to refrain from responding when it was meaningless (go-no go paradigm). After the go signal, stimuli remained visible for 1350 ms or until the participant's response. Stimulus presentation and response time (RT) collection were controlled using the software package E-Prime 2 (Psychology Software Tools, Inc.).
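The trial timeline above can be summarized programmatically. A minimal sketch using the timing values from the text; the function and variable names are illustrative, not taken from the actual E-Prime script:

```python
import random

def trial_schedule(seed=None):
    """Compute event times (ms from trial start) for one trial."""
    rng = random.Random(seed)
    fixation_ms = rng.uniform(1000, 1500)  # jittered fixation period
    stim_onset = fixation_ms               # stimulus replaces the cross
    go_signal = stim_onset + 150           # frame turns green (go signal)
    deadline = go_signal + 1350            # response window closes
    return {"stim_onset": stim_onset,
            "go_signal": go_signal,
            "deadline": deadline}

t = trial_schedule(seed=1)
assert 1000 <= t["stim_onset"] <= 1500
```

The uniform jitter prevents participants from timing their key presses to a fixed foreperiod, which is the habituation the text refers to.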
Figure 2. Experimental procedure. The timeline relative to the verbal stimuli presentation is depicted in the left part of the figure while the timeline relative to the visual stimuli presentation is depicted in the right part. Each trial started with a fixation cross. The appearance of the green frame represented the go-signal. Stimuli remained visible until motor response was given or 1500 ms had elapsed.
The experiment consisted of one practice block and one experimental block. In the practice block, participants were presented with 16 stimuli (4 photos of graspable/non-graspable objects, 4 scrambled images, 4 nouns of graspable/non-graspable objects, and 4 pseudowords) that were not used in the experimental block. During practice, participants received feedback after a wrong response, i.e., responding to a meaningless item or failing to respond to a real one (“ERROR”), after responses given before go-signal presentation (“ANTICIPATION”), and after responses given later than 1.5 s (“YOU HAVE NOT ANSWERED”). In the experimental block, the 160 items selected as stimuli were randomly presented with the constraint that no more than three items of the same kind (verbal, visual) or referring to objects of the same category (graspable, non-graspable, meaningless) could appear on consecutive trials. No feedback was given to participants. Thus, the experiment consisted of 80 go trials (40 nouns of objects, 50% graspable and 50% non-graspable, plus 40 photos of objects, 50% graspable and 50% non-graspable), 80 no-go trials (40 pseudowords plus 40 scrambled images), and 16 practice trials, for a total of 176 trials, and lasted about 20 min. To sum up, the experiment used a 2 × 2 repeated-measures factorial design with Object Graspability (graspable, non-graspable) and Stimulus Type (nouns, photos) as within-subjects variables.
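A run-length constraint like the one above (no more than three consecutive items sharing a property) is commonly implemented by rejection sampling: reshuffle the trial list until no run exceeds the limit. A sketch with a reduced, hypothetical trial list; names and the single-property check are illustrative (the actual experiment constrained both stimulus kind and object category):

```python
import random

def constrained_shuffle(items, key, max_run=3, seed=None):
    """Reshuffle until no more than max_run consecutive items share a key.

    Rejection sampling; fast for list sizes like the 160 trials here."""
    rng = random.Random(seed)
    pool = list(items)
    while True:
        rng.shuffle(pool)
        run = 1
        for prev, cur in zip(pool, pool[1:]):
            run = run + 1 if key(prev) == key(cur) else 1
            if run > max_run:
                break  # constraint violated: reshuffle
        else:
            return pool

# hypothetical reduced trial list: (kind, category) tuples
trials = ([("photo", "graspable")] * 5 + [("photo", "non-graspable")] * 5 +
          [("noun", "graspable")] * 5 + [("noun", "non-graspable")] * 5)
order = constrained_shuffle(trials, key=lambda t: t[1], seed=0)
```

An alternative is to build the sequence incrementally, drawing only from categories that would not extend a run past the limit, but for these list sizes rejection sampling is simpler and terminates quickly.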
Figure 3 shows the result for both experiments.
Figure 3. Results. Median values of response times collected in Experiment 1 (A) and in Experiment 2 (B) as a function of Object Graspability (graspable objects vs. non-graspable objects), separately for each Stimulus Type (nouns: black columns vs. photos: white columns). Error bars represent the confidence interval at 95%. Significant differences between values are marked by asterisks.
Experiment 1. Trials with errors were excluded without replacement. Errors were not analyzed further because they were rare (<5%). One participant was excluded from the analysis because the error rate exceeded 10%. RTs below 130 ms or above 1000 ms were treated as outliers and omitted from the analysis. This cut-off was set so that no more than 0.5% of correct RTs were removed (Ulrich and Miller, 1994).
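The trimming rule and the median computation used in the next step can be sketched as follows; the RT values are hypothetical and the function names illustrative:

```python
def filter_rts(rts, low=130, high=1000):
    """Keep correct RTs (in ms) within the cut-offs used in the paper."""
    return [rt for rt in rts if low <= rt <= high]

def median(values):
    """Median of a non-empty list, averaging the two middle values."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

raw = [95, 431, 468, 502, 515, 1210]  # hypothetical RTs, ms
kept = filter_rts(raw)                # drops 95 (anticipation) and 1210
print(median(kept))                   # 485.0
```

Medians are preferred over means here because they are robust to the residual skew of RT distributions even after outlier trimming.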
Median values of the remaining RTs were calculated for each combination of Object Graspability (graspable, non-graspable) and Stimulus Type (photo, noun). These data entered a two-way repeated-measures analysis of variance (ANOVA) with Object Graspability and Stimulus Type as within-subjects factors. Post-hoc comparisons were performed using the Newman-Keuls test with an alpha level of 0.05. Partial eta squared values (η2p) are reported as an additional effect-size metric for all significant ANOVA contrasts.
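For F tests with one numerator degree of freedom, η2p can be recovered directly from the F value and the error degrees of freedom via η2p = (F · df_effect) / (F · df_effect + df_error). A short sketch checking this against the statistics reported for Experiment 1 (function name is illustrative):

```python
def partial_eta_sq(f_value, df_effect, df_error):
    """Partial eta squared from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# F(1, 38) values reported for Experiment 1
print(round(partial_eta_sq(73.90, 1, 38), 2))  # 0.66 (main effect)
print(round(partial_eta_sq(25.01, 1, 38), 2))  # 0.40 (interaction)
```

This identity follows from η2p = SS_effect / (SS_effect + SS_error) and F = (SS_effect / df_effect) / (SS_error / df_error).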
The ANOVA revealed a main effect of Object Graspability [F(1, 38) = 73.90, p < 0.0001, η2p = 0.66], reflecting slower responses to stimuli related to graspable objects (492 ms) than to stimuli related to non-graspable objects (455 ms). The interaction between Object Graspability and Stimulus Type was also significant [F(1, 38) = 25.01, p < 0.0001, η2p = 0.40] (Figure 3A). Post-hoc analysis showed that responses to nouns referring to graspable objects were slower than responses to nouns referring to non-graspable objects (477 vs. 461 ms, p < 0.02). Similarly, responses to photos of graspable objects were slower than responses to photos of non-graspable objects (507 vs. 448 ms, p < 0.0002). Moreover, responses to graspable objects were faster with nouns than with photos (477 vs. 507 ms, p < 0.0002), whereas responses to non-graspable objects were faster with photos than with nouns (448 vs. 461 ms, p < 0.04).
Experiment 2. Trials with errors and trials with outlier RTs were removed from the analysis as in Experiment 1. Four participants were excluded because their error rate exceeded 10%. Median values of correct RTs were computed and analyzed as in Experiment 1. The analysis revealed a main effect of Object Graspability [F(1, 38) = 48.50, p < 0.0001, η2p = 0.56], reflecting slower responses to stimuli related to graspable objects (510 ms) than to stimuli related to non-graspable objects (470 ms). The interaction between Object Graspability and Stimulus Type was also significant [F(1, 38) = 21.94, p < 0.0001, η2p = 0.37] (Figure 3B). Post-hoc analysis showed that responses to nouns referring to graspable objects were slower than responses to nouns referring to non-graspable objects (500 vs. 484 ms, p < 0.03). Similarly, responses to photos of graspable objects were slower than responses to photos of non-graspable objects (521 vs. 457 ms, p < 0.0002). Moreover, responses to graspable objects were faster with nouns than with photos (500 vs. 521 ms, p < 0.007), whereas responses to non-graspable objects were faster with photos than with nouns (457 vs. 484 ms, p < 0.001).
In the present study, participants gave slower motor responses when presented with natural graspable objects than with natural non-graspable objects. This was true for both nouns and photos. As for nouns, these findings are in keeping with previous data concerning verbs (Buccino et al., 2005; Boulenger et al., 2006; Sato et al., 2008; Dalla Volta et al., 2009; De Vega et al., 2013, 2014), hand-related relative to foot-related nouns (Marino et al., 2013), and adjectives (Gough et al., 2013). To perform the semantic task on nouns referring to graspable objects, participants most likely relied on the motor representations of potential hand interactions with the object expressed by the verbal label. The motor system was thus engaged in two tasks at the same time, processing language material and performing a motor response (pressing the key), and participants paid a cost, revealed by the slowing of their responses. It is worth underlining that our findings are not at odds with EEG and MEG studies (for review, see Pulvermueller et al., 2009) supporting an early recruitment of the motor system during language processing and possibly a specific role of this system in that function. Rather, they bolster this argument by showing that when the motor system is crucially involved in both a linguistic and a motor task, there is competition for resources. Moreover, our results concerning nouns are not in contrast with studies showing faster motor responses during processing of language material compatible with the direction of the movement (Glenberg and Kaschak, 2002; Kaschak and Borreggine, 2008) or the type of prehension (e.g., Tucker and Ellis, 2004) required to respond (the so-called Action Compatibility Effect, ACE). Indeed, this facilitation has been interpreted as an outcome that emerges relatively late in the time course of language processing (Taylor and Zwaan, 2008).
In fact, the modulation of the motor system during language processing may change over time, moving from an early interference (operating 100–200 ms after stimulus onset) to a later facilitation (operating beyond 200 ms from stimulus presentation). The former effect could follow from the fact that the motor system is a common neural substrate for action performance and language processing, while the latter may reflect priming triggered by the content of the language material (for a computational model, see Chersi et al., 2010).
As for photos, it is well accepted that the visual presentation of a graspable object automatically recruits motor representations of the potential actions that the object affords the observer (Gibson, 1977). We suggest that this recruitment of the motor system during photo presentation was relevant, and most likely crucial, to performing our semantic task, at least for graspable objects. As with nouns, since the motor system was involved both in solving the semantic task and in planning and implementing the motor response, participants were slower when processing graspable objects. Similar findings were reported in a recent paper (Salmon et al., 2014): the authors found slower responses to photos of graspable than of non-graspable objects during a categorization task. In the present study, this interference effect was stronger for photos than for nouns. This difference may be due to the fact that in photos the action-relevant intrinsic features of objects are immediately evident and specific (i.e., pertinent to the particular seen object), whereas nouns relate these features not to a specific object but to a prototype of the class the object belongs to, most likely represented in a decontextualized fashion. It is worth stressing that even within language material, the degree of sensorimotor specificity expressed by sentences has been shown to affect how deeply the motor system is recruited during language processing (Marino et al., 2012).
In contrast with a previous paper concerning nouns (Marino et al., 2013), where an interference effect was found only for responses given with the right hand, the present study found no difference between the two responding hands. Marino and colleagues suggested that their differential pattern of interference could be explained by the fact that only the left hemisphere is involved in both the linguistic and the motor task, the right hemisphere being involved in the motor task only. This explanation cannot account for the present results. We therefore propose that the discrepancy between the two studies may be due to the kind of stimuli used: while Marino and colleagues used only nouns referring to tools, here we used nouns referring to natural objects. It is well known that tools and natural objects are differently represented in the brain (Boronat et al., 2005; Peeters et al., 2009; Rueschemeyer et al., 2010; Gough et al., 2012; Orban and Rizzolatti, 2012) and, in particular, that a specific sector of the left inferior parietal lobule is devoted to tool use in humans. It may therefore be argued that, besides the linguistic role of the left hemisphere, the different modulation of the two hemispheres reported by Marino et al. (2013) reflects the specific role of the left hemisphere in processing tools and in praxic functions (Heilman et al., 1982; De Renzi and Lucchelli, 1988; Buxbaum and Kalénine, 2010).
Taken as a whole, our data support the view that semantic processing of visually presented graspable objects and of nouns referring to the same object category is subserved by common neural substrates crucially involving the motor system (Ganis et al., 1996; Vandenberghe et al., 1996; Van Doren et al., 2010). A similar modulation of the motor system has also been reported for visually presented actions and verbs (Aziz-Zadeh et al., 2006; Baumgaertner et al., 2007; De Vega et al., 2014). Recently, Borghi and Riggio (2009) proposed a distinction between stable and temporary affordances of objects, the former related to features like shape and size, the latter to aspects like orientation and position. One plausible explanation for the present findings is that nouns and photos modulated the motor system similarly because, given the task, only stable affordances of objects were coded. In keeping with this explanation, there is evidence that when temporary affordances, such as position or orientation, are taken into account, photos and nouns modulate the activity of the motor system differently (Ferri et al., 2011; Myachykov et al., 2013). An alternative, but not mutually exclusive, explanation may be related to the kind of stimuli used. Unlike previous studies, which in most cases employed tools (or a mixture of tools and natural objects), the present study used only natural objects. For such objects it is less clear-cut which part can elicit hand actions, and it is hard to disentangle manipulation knowledge from function knowledge (Boronat et al., 2005). Indeed, information about the position or orientation of an object may be more relevant when using a hammer than when grasping an apple.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The Supplementary Material for this article can be found online at: http://www.frontiersin.org/journal/10.3389/fnhum.2014.00968/abstract
Aziz-Zadeh, L., Wilson, S. M., Rizzolatti, G., and Iacoboni, M. (2006). Congruent embodied representations for visually presented actions and linguistic phrases describing actions. Curr. Biol. 16, 1818–1823. doi: 10.1016/j.cub.2006.07.060
Baumgaertner, A., Buccino, G., Ruediger, L., McNamara, A., and Binkofski, F. (2007). Polymodal conceptual processing of human biological actions in the left inferior frontal lobe. Eur. J. Neurosci. 25, 881–889. doi: 10.1111/j.1460-9568.2007.05346.x
Binkofski, F., Buccino, G., Posse, S., Seitz, R. J., Rizzolatti, G., and Freund, H. (1999). A fronto-parietal circuit for object manipulation in man: evidence from an fMRI-study. Eur. J. Neurosci. 11, 3276–3286. doi: 10.1046/j.1460-9568.1999.00753.x
Boronat, C. B., Buxbaum, L. J., Coslett, H. B., Tang, K., Saffran, E. M., Kimberg, D. Y., et al. (2005). Distinctions between manipulation and function knowledge of objects: evidence from functional magnetic resonance imaging. Brain Res. Cogn. Brain Res. 23, 361–373. doi: 10.1016/j.cogbrainres.2004.11.001
Boulenger, V., Roy, A. C., Paulignan, Y., Deprez, V., Jeannerod, M., and Nazir, T. A. (2006). Cross-talk between language processes and overt motor behavior in the first 200 msec of processing. J. Cogn. Neurosci. 18, 1607–1615. doi: 10.1162/jocn.2006.18.10.1607
Buccino, G., Riggio, L., Melli, G., Binkofski, F., Gallese, V., and Rizzolatti, G. (2005). Listening to action-related sentences modulates the activity of the motor system: a combined TMS and behavioral study. Brain Res. Cogn. Brain Res. 24, 355–363. doi: 10.1016/j.cogbrainres.2005.02.020
Cattaneo, Z., Devlin, J. T., Salvini, F., Vecchi, T., and Silvanto, J. (2010). The causal role of category-specific neuronal representations in the left ventral premotor cortex (PMv) in semantic processing. Neuroimage 49, 2728–2734. doi: 10.1016/j.neuroimage.2009.10.048
De Vega, M., León, I., Hernández, J. A., Valdés, M., Padrón, I., and Ferstl, E. C. (2014). Action sentences activate sensory motor regions in the brain independent of their status of reality. J. Cogn. Neurosci. 26, 1363–1376. doi: 10.1162/jocn_a_00559
De Vega, M., Moreno, V., and Castillo, D. (2013). The comprehension of action-related sentence may cause interference rather than facilitation on matching actions. Psychol. Res. 77, 20–30. doi: 10.1007/s00426-011-0356-1
Ganis, G., Kutas, M., and Sereno, M. I. (1996). The search for “common sense”: an electrophysiological study of the comprehension of words and pictures in reading. J. Cogn. Neurosci. 8, 89–106. doi: 10.1162/jocn.1996.8.2.89
Gough, P. M., Campione, G. C., and Buccino, G. (2013). Fine tuned modulation of the motor system by adjectives expressing positive and negative properties. Brain Lang. 125, 54–59. doi: 10.1016/j.bandl.2013.01.012
Gough, P. M., Riggio, L., Chersi, F., Sato, M., Fogassi, L., and Buccino, G. (2012). Nouns referring to tools and natural objects differentially modulate the motor system. Neuropsychologia 50, 19–25. doi: 10.1016/j.neuropsychologia.2011.10.017
Grèzes, J., Armony, J. L., Rowe, J., and Passingham, R. E. (2003a). Activations related to “mirror” and “canonical” neurones in the human brain: an fMRI study. Neuroimage 18, 928–937. doi: 10.1016/S1053-8119(03)00042-9
Grèzes, J., Tucker, M., Armony, J., Ellis, R., and Passingham, R. E. (2003b). Objects automatically potentiate action: an fMRI study of implicit processing. Eur. J. Neurosci. 17, 2735–2740. doi: 10.1046/j.1460-9568.2003.02695.x
Grol, M. J., Majdandžić, J., Stephan, K. E., Verhagen, L., Dijkerman, H. C., Bekkering, H., et al. (2007). Parieto-frontal connectivity during visually guided grasping. J. Neurosci. 27, 11877–11887. doi: 10.1523/jneurosci.3923-07.2007
Hecht, E. E., Murphy, L. E., Gutman, D. A., Votaw, J. R., Schuster, D. M., Preuss, T. M., et al. (2013). Differences in neural activation for object-directed grasping in chimpanzees and humans. J. Neurosci. 33, 14117–14134. doi: 10.1523/jneurosci.2172-13.2013
Hepp-Reymond, M. C., Huesler, E. J., Maier, M. A., and Qi, H. X. (1994). Force-related neuronal activity in two regions of the primate ventral premotor cortex. Can. J. Physiol. Pharmacol. 72, 571–579.
Jeannerod, M., Arbib, M. A., Rizzolatti, G., and Sakata, H. (1995). Grasping objects: the cortical mechanisms of visuomotor transformation. Trends Neurosci. 18, 314–320. doi: 10.1016/0166-2236(95)93921-J
Laudanna, A., Thornton, A. M., Brown, G., Burani, C., and Marconi, L. (1995). “Un corpus dell'italiano scritto contemporaneo dalla parte del ricevente,” in III Giornate Internazionali di Analisi Statistica dei Dati Testuali, Vol. 1, eds S. Bolasco, L. Lebart, and A. Salem (Roma: Cisu), 103–109.
Myung, J.-Y., Blumstein, S. E., and Sedivy, J. C. (2006). Playing on the typewriter, typing on the piano: manipulation of knowledge of objects. Cognition 98, 223–243. doi: 10.1016/j.cognition.2004.11.010
Peeters, R., Simone, L., Nelissen, K., Fabbri-Destro, M., Vanduffel, W., Rizzolatti, G., et al. (2009). The representation of tool use in humans and monkeys: common and uniquely human features. J. Neurosci. 29, 11523–11539. doi: 10.1523/jneurosci.2040-09.2009
Pulvermueller, F., Shtyrov, Y., and Hauk, O. (2009). Understanding in an instant: neurophysiological evidence for mechanistic language circuits in the brain. Brain Lang. 110, 81–94. doi: 10.1016/j.bandl.2008.12.001
Raos, V., Umiltà, M. A., Murata, A., Fogassi, L., and Gallese, V. (2006). Functional properties of grasping-related neurons in the ventral premotor area F5 of the macaque monkey. J. Neurophysiol. 95, 709–729. doi: 10.1152/jn.00463.2005
Rizzolatti, G., Camarda, R., Fogassi, L., Gentilucci, M., Luppino, G., and Matelli, M. (1988). Functional organization of inferior area 6 in the macaque monkey. II: area F5 and the control of distal movements. Exp. Brain Res. 71, 491–507.
Rizzolatti, G., Scandolara, C., Gentilucci, M., and Camarda, R. (1981). Response properties and behavioral modulation of “mouth” neurons of the postarcuate cortex (area 6) in macaque monkeys. Brain Res. 225, 421–424.
Rueschemeyer, S. A., van Rooij, D., Lindemann, O., Willems, R. M., and Bekkering, H. (2010). The function of words: distinct neural correlates for words denoting differently manipulable objects. J. Cogn. Neurosci. 22, 1844–1851. doi: 10.1162/jocn.2009.21310
Sakata, H., Taira, M., Murata, A., and Mine, S. (1995). Neural mechanisms of visual guidance of hand action in the parietal cortex of the monkey. Cereb. Cortex. 5, 429–438. doi: 10.1093/cercor/5.5.429
Sato, M., Mengarelli, M., Riggio, L., Gallese, V., and Buccino, G. (2008). Task related modulation of the motor system during language processing. Brain Lang. 105, 83–90. doi: 10.1016/j.bandl.2007.10.001
Tettamanti, M., Buccino, G., Saccuman, M. C., Gallese, V., Danna, M., Scifo, P., et al. (2005). Listening to action-related sentences activates fronto-parietal motor circuits. J. Cogn. Neurosci. 17, 273–281. doi: 10.1162/0898929053124965
Umiltà, M. A., Brochier, T., Spinks, R. L., and Lemon, R. N. (2007). Simultaneous recording of macaque premotor and primary motor cortex neuronal populations reveals different functional contributions to visuomotor grasp. J. Neurophysiol. 98, 488–501. doi: 10.1152/jn.01094.2006
Van Doren, L., Dupont, P., De Grauwe, S., Peeters, R., and Vandenberghe, R. (2010). The amodal system for conscious word and picture identification in the absence of a semantic task. Neuroimage 49, 3295–3307. doi: 10.1016/j.neuroimage.2009.12.005
Table A1. List of the Italian nouns used in Experiments 1 and 2, their English translation, the graspability of their referents, lexical frequency (number of occurrences per million in the Google search engine—e.g., Marino et al., 2012—and in CoLFIS—Laudanna et al., 1995), length, and syllable number.
Keywords: embodiment, language processing, canonical neurons, affordances, motor responses
Citation: Marino BFM, Sirianni M, Volta RD, Magliocco F, Silipo F, Quattrone A and Buccino G (2014) Viewing photos and reading nouns of natural graspable objects similarly modulate motor responses. Front. Hum. Neurosci. 8:968. doi: 10.3389/fnhum.2014.00968
Received: 19 September 2014; Accepted: 13 November 2014;
Published online: 04 December 2014.
Edited by: Anna M. Borghi, University of Bologna and Institute of Cognitive Sciences and Technologies, Italy
Reviewed by: Andriy Myachykov, Northumbria University, UK
Joshua P. Salmon, Cognitive Health and Recovery Research Laboratory, Canada
Copyright © 2014 Marino, Sirianni, Volta, Magliocco, Silipo, Quattrone and Buccino. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Giovanni Buccino, Dipartimento di Scienze Mediche e Chirurgiche, Università “Magna Graecia” di Catanzaro, Viale Salvatore Venuta, 88100, Germaneto, Italy e-mail: email@example.com