ORIGINAL RESEARCH article

Front. Psychol., 23 April 2014
Sec. Cognition
This article is part of the Research Topic: The cognitive and neural bases of human tool use

The left inferior parietal lobe represents stored hand-postures for object use and action prediction

  • Department of Psychology, University of Amsterdam, Amsterdam, Netherlands

Action semantics enables us to plan actions with objects and to predict others' object-directed actions as well. Previous studies have suggested that action semantics are represented in a fronto-parietal action network that has also been implicated in action observation. In the present fMRI study it was investigated how activity within this network changes as a function of the predictability of an action involving multiple objects and requiring the use of action semantics. Participants performed an action prediction task in which they were required to anticipate the use of a centrally presented object that could be moved to an associated target object (e.g., hammer—nail). The availability of actor information (i.e., presenting a hand grasping the central object) and the number of possible target objects (i.e., 0, 1, or 2 target objects) were independently manipulated, resulting in different levels of predictability. It was found that making an action prediction based on actor information resulted in increased activation in the extrastriate body area (EBA) and the fronto-parietal action observation network (AON). Predicting actions involving a target object resulted in increased activation in the bilateral inferior parietal lobe (IPL) and frontal motor areas. Within the AON, activity in the left IPL and the left premotor cortex (PMC) increased as a function of the level of action predictability. Together these findings suggest that the left IPL represents stored hand-postures that can be used for planning object-directed actions and for predicting others' actions.

Introduction

Imagine yourself sitting in a restaurant at a romantic dinner with your partner. If your partner lifted a bottle of wine, you would likely infer that he wants to pour you a glass of wine. Upon offering your glass, you expect him to pour the wine and subsequently put the bottle back in the wine cooler. You would be quite surprised if your partner poured the wine into the wine cooler instead. As this example illustrates, many of our everyday actions rely on the use of action semantic knowledge about objects, specifying what to do with and how to use objects (van Elk et al., 2013). Action semantics can be used to guide our own actions involving objects (e.g., we brush our teeth, pour coffee, or write a letter) and to predict others' object-directed actions as well (e.g., seeing someone grasping a wine bottle allows one to infer the subsequent goal of the action).

Neuropsychological studies have provided important insight into the neural organization of action semantics. For instance, studies with left-brain damaged patients have indicated that these patients exhibit strong impairments in the ability to use objects (often specifically following damage to the left inferior parietal lobe, IPL; cf. Buxbaum, 2001; Buxbaum and Saffran, 2002; Goldenberg, 2009; Osiurak et al., 2011) and that they may no longer be able to apply the correct hand posture to an object (e.g., inserting the wrong fingers in a pair of scissors; Sirigu et al., 1995). Based on these findings it has been suggested that the IPL stores the motor programs required for successful hand-object interaction and that ideomotor apraxia is characterized by an impairment in accessing manipulation knowledge about objects (i.e., knowing how to apply a correct hand posture for interacting with objects; cf. Heilman et al., 1982).

Behavioral and neuroimaging studies have underlined the importance of motor-related knowledge for successful object interaction. Several behavioral studies have shown, for instance, that the mere observation of objects automatically results in the activation of the motor programs associated with using these objects (Klatzky et al., 1989; Ellis and Tucker, 2000; Tucker and Ellis, 2001; Bub et al., 2008). For instance, participants were faster to respond to object pictures when using a grip that was congruent with the size of the object that was presented (e.g., faster responding to the presentation of car keys when making a precision grip; Ellis and Tucker, 2000). Neuroimaging studies have shown that the observation of manipulable objects and the retrieval of manipulation knowledge about objects are associated with activation in motor-related regions, such as the premotor cortex (PMC), the supplementary motor area (SMA), and the inferior parietal lobe (IPL; Chao and Martin, 2000; Okada et al., 2000; Grezes and Decety, 2002; Creem-Regehr and Lee, 2005; Noppeney et al., 2005). In single-cell studies, a strong specificity for hand shape in relation to the manipulation of objects has been found in the monkey homolog of the IPL (Sakata et al., 1995; Murata et al., 2000). Furthermore, neuroimaging studies in humans have shown that the IPL is selectively involved in the visuomotor transformations required for successfully grasping and interacting with an object (Culham et al., 2003; Grol et al., 2007; Cohen et al., 2009). Accordingly, it has been proposed that the activation in parietal areas in response to object observation reflects the automatic coding of hand-object interactions and that action semantics are stored in motor-related brain regions (Beauchamp and Martin, 2007; Barsalou, 2008; van Elk et al., 2013).

As the example with the wine bottle illustrates, in addition to using semantic knowledge for guiding our own actions, we use action semantics to predict others' actions as well (van Elk et al., 2008; Springer and Prinz, 2010). Over the last decade, many studies have shown that the observation of others' actions recruits the action observation network (AON), consisting of the PMC, the superior parietal lobe (SPL) and IPL, the inferior frontal gyrus (IFG), and the extrastriate body area (EBA) (see Caspers et al., 2010, for a meta-analysis of studies on action observation). Activity in the AON increases as a function of the familiarity of the action (Calvo-Merino et al., 2005; Vingerhoets, 2008; Cross et al., 2009), indicating an important role for action experience in shaping the associations between executed and observed movements (Heyes, 2010). It has also been shown that the AON is more strongly activated for the observation of object-directed actions compared to intransitive actions (Buccino et al., 2001; Koski et al., 2002; Aziz-Zadeh et al., 2006; Caspers et al., 2010). For instance, single-cell studies in monkeys have shown that neurons in the ventral PMC selectively responded to object-directed actions, even when the final phase of the action was occluded (Umilta et al., 2001). Furthermore, it has been found that neurons in the parietal lobe and PMC responded differentially depending on the final outcome of the action (Fogassi et al., 2005; Umilta et al., 2008). In an fMRI study in humans it has been found that activation in the AON in response to the observation of grasping actions varies as a function of the to-be-performed goal (Iacoboni et al., 2005). Based on these findings it has been suggested that within the AON actions are represented primarily in terms of the goal or outcome of the observed action (Iacoboni et al., 2005; van Elk et al., 2008; Newman-Norlund et al., 2010). Furthermore, it has been proposed that the AON may support action prediction by enabling observers to infer the goal of an observed action through the recruitment of mechanisms similar to those involved in planning an action oneself (Blakemore and Decety, 2001; Wilson and Knoblich, 2005; Kilner et al., 2007). According to the "predictive coding account of action observation," information about observed actions is used to minimize the prediction error at different levels in the action hierarchy, which allows one to infer the most likely goal or outcome of the action (Kilner et al., 2007). In support of this account, it has been found that motor-related areas are activated during action prediction tasks (Kilner et al., 2004; Aglioti et al., 2008) and that TMS-induced disruption of the AON impairs action prediction (Stadler et al., 2012; Avenanti et al., 2013).

However, most studies on action prediction have focused on relatively simple actions and on the role of low-level kinematic cues in action understanding and prediction (Schubotz, 2007; Stadler et al., 2012; Avenanti et al., 2013; Zimmermann et al., 2013). In contrast, in daily life we often rely on semantic knowledge about objects in order to fine-tune our predictions about others' actions. Behavioral studies have shown that action prediction is modulated as a function of contextual, kinematic, and object information (Stapel et al., 2012) and that semantic information can affect action prediction (Springer and Prinz, 2010). Action semantics may facilitate action prediction by enabling the observer to use prior information to constrain the number of possible inferences about an observed action (e.g., an object is associated with only a limited set of possible goals) and by disambiguating the observed kinematics within the context of the objects involved (e.g., grasping a wine bottle when two glasses are empty entails a different prediction than when the two glasses are full). Whereas previous studies on action observation have compared transitive to intransitive actions (Buccino et al., 2001; Koski et al., 2002; Aziz-Zadeh et al., 2006; Caspers et al., 2010), it is not known whether activation in the AON is modulated as a function of the predictability of an action based on action semantic information. For instance, observing someone grasping a full bottle of wine is more predictable in a context in which both glasses are empty, but less predictable in a context in which both glasses are full (cf. Newman-Norlund et al., 2013). Accordingly, the aim of the present fMRI study was to investigate how activation in the AON is modulated as a function of the predictability of an action involving multiple objects that requires the use of action semantics.

In this fMRI study an action prediction task was used in which participants were required to predict the subsequent use of a centrally presented object that was presented together with two flanker objects (see Figure 1 for example stimuli). By varying the number of possible target objects, the predictability of the action could be manipulated. For instance, a wine bottle presented with two unrelated distractor objects (e.g., two other bottles) resulted in an action of low predictability, whereas a wine bottle presented with a target object (e.g., a wine glass) resulted in an action of high predictability. In addition, the availability of actor information was manipulated by including trials with or without a hand grasping the central object. In this way, it could be investigated whether using semantics for predicting imagined and observed actions recruits comparable neural mechanisms. Neuroimaging studies have suggested that comparable brain areas (i.e., the IPL and the PMC) are involved in the retrieval of action semantics (van Elk et al., 2013), in motor imagery (Zacks, 2008), and in action observation (Caspers et al., 2010). However, a direct comparison between the brain areas involved in using action semantics for motor imagery and for action prediction has not been made. In line with the "predictive coding account of action observation" (Kilner et al., 2007), it was expected that the use of semantics for predicting observed actions relies on neural mechanisms similar to those involved in using semantics to guide our own (imagined) actions. Accordingly, in the present study a direct comparison was made between trials in which participants were asked to imagine planning an object-directed action and trials in which participants were asked to predict observed object-directed actions.

Figure 1. Example stimuli used in the experiment. Pictures represented a central object with 0 Target Objects/2 Distractor Objects (lower row), 1 Target Object/1 Distractor Object (middle row), or 2 Target Objects/0 Distractor Objects (upper row). Pictures were presented without an action cue (left side of figure) or with an action cue representing an actor grasping the central object at either the lower or the upper side (right side of figure). Within the "Action Cue—1 Target Object" condition the Action Cue could be congruent or incongruent with respect to the target object in the picture (see right side of figure).

Based on previous neuropsychological and neuroimaging studies, the following predictions were made. First, it was expected that the observation of an action (i.e., comparing trials with and without an action cue) should be associated with increased activation in the AON, consisting of the dorsal premotor cortex (dPMC), the SPL and IPL, the IFG, and the EBA (see Caspers et al., 2010, for a meta-analysis of studies on action observation). Second, it was expected that trials in which a target object was presented, compared to trials in which no target object was presented, would require the retrieval of stored hand-object postures, which should become apparent as stronger activation of the left IPL (Caspers et al., 2006). Third, by using a conjunction analysis it could be directly investigated whether there is an overlap between the brain areas involved in action observation and in the retrieval of action semantics for imagined actions (Kilner et al., 2007). It was expected that the use of action semantics for motor imagery and action observation should converge in two core regions of the fronto-parietal motor network, notably the IPL and the PMC (Zacks, 2008; van Elk et al., 2013).

Materials and Methods

Participants

In total, 20 people (12 men; mean age = 23.0 years, SD = 2.4 years) participated in the fMRI study for a payment of €10/h, after giving written informed consent according to institutional guidelines (Ethics Committee, University of Amsterdam, The Netherlands). All participants were right-handed as assessed by self-report and had normal or corrected-to-normal vision. One participant made more than 50% errors on trials in which only one target object was presented and was excluded from all analyses.

Action Prediction Task

During the experiment participants observed pictures representing three objects positioned on a table next to each other (see Figure 1). Participants were instructed to predict whether the central object would be moved to the left, to the right or to neither side, by pressing one of three buttons on a button box with their right hand (left, middle, or right button). Participants were instructed that predictions should be based on the type of objects that were presented in the picture and/or the action information presented by the actor grasping the central object.

As stimuli I used standardized pictures (750 × 500 pixels) representing a central object with, respectively, 0, 1, or 2 target objects and 2, 1, or 0 distractor objects at either side (see Figure 1). A target object was defined as an object that would yield a meaningful action sequence in combination with the central object. A distractor object was defined as an object that was semantically related to the central object but that could not be used in a meaningful action sequence with the central object. For instance, a wine bottle can be used in combination with a wine glass to pour wine or in combination with a wine cooler to cool wine. However, a wine bottle cannot be combined in a meaningful action sequence with a beer bottle or a sports drinking bottle.

In half of all pictures an action cue was presented, representing a hand grasping the upper or lower side of the central object. Each grasp type (upper vs. lower side) was associated with using a different target object. For instance, grasping the wine bottle at the lower side affords pouring wine into a wineglass, whereas grasping the wine bottle at the upper side affords putting the wine bottle in the wine cooler. Thus, I created pictures according to a 3 (# of Target Objects: 0, 1, 2) × 2 (No Action Cue vs. Action Cue) design. I selected 10 different central objects that were associated with two different target objects and that were paired with two different distractor objects (see Table 1). Different pictures were created for all possible combinations of the location of the target objects (left vs. right), the location of the distractor objects (left vs. right), and the action cue (No Cue, Cue-Up, or Cue-Down). In the "Action Cue—1 Target Object" condition the grip type represented by the action cue could be congruent or incongruent with respect to the target object presented in the picture (see right side of Figure 1). For instance, grasping a bottle opener at the upper side would be congruent in combination with a wine bottle (i.e., affording the use of this object), but would be incongruent in combination with a beer bottle (i.e., grasping the tool in this way does not allow opening the beer bottle). In the analyses described below, trials were collapsed across congruent and incongruent conditions, because at the neural level the comparison of incongruent with congruent trials did not yield significant differences using FWE correction for multiple comparisons.
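As an illustration of the factorial structure described above, the following Python sketch enumerates the design cells; the labels and the collapsing of the two grasp cues into a single Action Cue factor level are assumptions made for illustration, not the authors' materials.

```python
from itertools import product

# Hypothetical reconstruction of the 3 (# of Target Objects) x 2 (Action Cue)
# design; the grasp direction (up/down) is kept as a stimulus attribute but
# collapsed into one "cue present" factor level, as described in the text.
N_TARGETS = (0, 1, 2)
CUE_TYPES = ("no_cue", "cue_up", "cue_down")

def enumerate_conditions():
    cells = []
    for n_targets, cue in product(N_TARGETS, CUE_TYPES):
        cue_factor = "no_cue" if cue == "no_cue" else "action_cue"
        cells.append({"n_targets": n_targets, "cue_type": cue, "cue_factor": cue_factor})
    return cells

for cell in enumerate_conditions():
    print(cell)
```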

Table 1. Central Objects, Target Objects, and Distractor Objects used in the experiment.

Participants engaged in 60 practice trials outside the scanner and 8 practice trials in the fMRI environment. During the fMRI experiment, participants completed two sessions of 160 trials that were separated by a short break (<2 min). Participants stayed inside the scanner during the break. Within each session, trials were divided into four blocks of 40 trials, with rest breaks between blocks. Trials were presented in a pseudo-randomized order, such that each session contained the same number of trials for each condition.
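A minimal sketch of the balancing constraint stated above (equal trial counts per condition within a session), assuming hypothetical condition labels and a per-condition count chosen purely for illustration:

```python
import random

# Build a shuffled trial list with the same number of trials per condition.
# Condition labels and the count per condition are illustrative assumptions.
CONDITIONS = [(cue, n_targets) for cue in ("no_cue", "cue") for n_targets in (0, 1, 2)]

def build_session(n_per_condition=20, seed=0):
    trials = [cond for cond in CONDITIONS for _ in range(n_per_condition)]
    random.Random(seed).shuffle(trials)
    return trials

session = build_session()
assert all(session.count(cond) == 20 for cond in CONDITIONS)  # balance check
```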

Each trial began with the presentation of a fixation cross, followed by the presentation of a picture representing the different objects, to which the participant responded by pressing one of three buttons on the response box. The picture was always presented for a duration of 3 s and participants were instructed to respond within this interval, before the next trial was presented. Next, a fixation cross appeared and the next trial was initiated after a jittered interval of 2.5–4.5 s. During the scanning sessions eye movements were recorded using an MR-compatible eye tracker (EyeLink 1000; SR Research Ltd., Ontario, Canada). Due to technical issues, eye movement data were not collected from two participants during the fMRI task.
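The trial timing reported above (a 3 s stimulus window followed by a 2.5–4.5 s jittered fixation interval) can be sketched as follows; the uniform jitter distribution is an assumption, since only the range is reported:

```python
import random

STIM_DURATION = 3.0          # seconds the picture stays on screen
JITTER_RANGE = (2.5, 4.5)    # jittered fixation interval in seconds

def trial_onsets(n_trials, seed=0):
    """Cumulative stimulus onset times (s) for one run."""
    rng = random.Random(seed)
    onsets, t = [], 0.0
    for _ in range(n_trials):
        onsets.append(t)
        t += STIM_DURATION + rng.uniform(*JITTER_RANGE)
    return onsets

print(trial_onsets(5))
```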

EBA Localizer Task

A functional localizer was used to localize the EBA, using a standardized stimulus set consisting of 20 pictures of human bodies and 20 pictures of chairs (http://pages.bangor.ac.uk/~pss811/page7/page7.html). These stimuli were presented using a blocked design with a presentation of 300 ms per stimulus followed by a 450 ms blank screen and with 20 stimuli per block.
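From the parameters given above (20 stimuli per block, 300 ms stimulus plus 450 ms blank), each localizer block lasts 15 s; a small sketch, with the body/chair alternation scheme assumed rather than taken from the paper:

```python
STIM_MS, BLANK_MS, STIMS_PER_BLOCK = 300, 450, 20
BLOCK_S = STIMS_PER_BLOCK * (STIM_MS + BLANK_MS) / 1000.0  # 15.0 s per block

def localizer_blocks(n_blocks=10):
    """Onsets (s) of alternating body/chair blocks presented back to back."""
    return [("body" if i % 2 == 0 else "chair", i * BLOCK_S) for i in range(n_blocks)]

print(localizer_blocks(4))
```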

Analysis of Behavioral Data

Analysis of the behavioral data focused on the error rates and reaction times (RTs) obtained during the action prediction task in the fMRI experiment for the different experimental conditions. For the analysis of the RTs, incorrect trials and trials in which the RT exceeded the subject's mean by more than two standard deviations were excluded. Behavioral data were analyzed using a repeated measures ANOVA with the factors Action Cue (No Cue vs. Cue) and # of Target Objects (0, 1, or 2 Target Objects). Effects that exceeded F-values corresponding to p-values < 0.05 were considered significant.
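A sketch of the RT trimming rule described above, written against a hypothetical trial-level data frame (the column names are assumptions):

```python
import pandas as pd

def trim_rts(trials: pd.DataFrame) -> pd.DataFrame:
    """Drop incorrect trials and trials with RT > subject mean + 2 SD."""
    correct = trials[trials["correct"]].copy()
    cutoff = correct.groupby("subject")["rt"].transform(lambda x: x.mean() + 2 * x.std())
    return correct[correct["rt"] <= cutoff]
```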

Eye Movement Data

The eye movement data were analyzed using Matlab, and the analysis focused on the time window from stimulus onset until the subject made a response. For each subject and each experimental condition (i.e., No Cue vs. Cue; 0, 1, or 2 Target Objects), the number of saccades, the amplitude of saccadic eye movements, the onset of the first saccade following stimulus onset, the number of fixations, and the number of blinks were calculated. The averaged eye movement data were analyzed using a repeated measures ANOVA with the factors Action Cue (No Cue vs. Cue) and # of Targets (0, 1, or 2 Targets). Effects that exceeded F-values corresponding to p-values < 0.05 were considered significant.
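Although the original analysis was done in Matlab, the per-condition aggregation described above can be sketched in Python as follows; the event-level column names are hypothetical:

```python
import pandas as pd

def summarize_eye_events(events: pd.DataFrame) -> pd.DataFrame:
    """Per subject and condition: saccade count, amplitude, first-saccade onset,
    fixation count, and blink count (column names are illustrative assumptions)."""
    grouped = events.groupby(["subject", "cue", "n_targets"])
    return grouped.agg(
        n_saccades=("is_saccade", "sum"),
        mean_amplitude=("saccade_amplitude", "mean"),
        first_saccade_onset=("saccade_onset", "min"),
        n_fixations=("is_fixation", "sum"),
        n_blinks=("is_blink", "sum"),
    )
```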

Image Data Acquisition

The fMRI data were acquired on a 3T scanner (Achieva, Philips) in a single scanning session consisting of two runs. During each run, 540 T2-weighted echoplanar images were acquired (repetition time [TR]/echo time [TE] = 2000/28 ms; voxel size 3 × 3 × 3 mm). Anatomical images were acquired with a T1-weighted sagittal scan of the whole brain before the functional runs (TR/TE = 8.2/3.8 ms, voxel size 1 × 1 × 1 mm). The head of each participant was carefully constrained using foam padding and subjects were instructed to move as little as possible.

Imaging Data Analysis

Statistical analyses were conducted using SPM8 software (Wellcome Department of Cognitive Neurology, London, UK). Preprocessing involved spatial realignment (Friston et al., 1995), correction for head motion and for differences in slice acquisition time, spatial normalization, and smoothing with an isotropic Gaussian kernel of 8 mm full-width at half-maximum. Anatomical normalization to MNI space was performed by co-registration of the functional images with the anatomical T1 scan (Ashburner and Friston, 1999).

First-level fMRI analyses were performed for each individual subject in the context of the General Linear Model (Friston et al., 1996). The fMRI time series for both sessions were fitted in one statistical model, with six regressors of interest and their temporal derivatives according to the six possible combinations of Action Cue (No Cue vs. Cue) and # of Target Objects (0, 1, or 2). Each trial was modeled by constructing a square-wave function with a duration corresponding to the reaction time of that trial. Regressors of no interest included incorrect and missed responses and the presentation of the fixation cross. Residual head movement-related effects were modeled by including Volterra expansions of the six rigid-body motion parameters (Lund et al., 2005). To control for potential confounding effects of eye movements, HRF-convolved metrics of eye movements (i.e., number of saccades, length of saccades, and number of eye blinks) were included as additional regressors of no interest.
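As a conceptual illustration of the trial modeling described above (and not the SPM8 pipeline itself), the following sketch builds a regressor from boxcars whose duration equals each trial's reaction time, convolved with a canonical double-gamma HRF; the onsets and RTs are invented:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0  # s, as reported above

def canonical_hrf(tr, length=32.0):
    """Double-gamma HRF sampled at the TR (peak ~6 s, late undershoot)."""
    t = np.arange(0, length, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def rt_duration_regressor(onsets, rts, n_scans, tr=TR):
    """Boxcar of RT-dependent duration per trial, convolved with the HRF."""
    box = np.zeros(n_scans)
    for onset, rt in zip(onsets, rts):
        start, stop = int(onset / tr), int(np.ceil((onset + rt) / tr))
        box[start:stop] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_scans]

regressor = rt_duration_regressor(onsets=[10.0, 40.0], rts=[1.3, 1.5], n_scans=540)
```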

After estimation, beta values were taken to the second level for a random effects analysis (Friston et al., 1999). Contrasts were thresholded at p < 0.05 using familywise error (FWE) correction for multiple comparisons at the voxel level. An anatomical representation of significant clusters was obtained by superimposing the statistical parametric maps on a standard MNI template. Brodmann areas (BAs) were assigned based on the SPM Anatomy toolbox (Eickhoff et al., 2005). Analysis focused on the main effects of Action Cue and # of Target Objects and on the overlap between Action Cue and # of Target Objects.
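At its core, the random effects step described above amounts to a voxel-wise one-sample t-test across subjects on the first-level contrast estimates; a minimal sketch with fake data (the paper used SPM8's second-level facilities):

```python
import numpy as np
from scipy import stats

def one_sample_t_map(contrast_images):
    """contrast_images: (n_subjects, n_voxels) array of first-level contrast estimates."""
    t, p = stats.ttest_1samp(contrast_images, popmean=0.0, axis=0)
    return t, p

betas = np.random.randn(19, 5000)  # 19 subjects (after exclusion) x fake voxels
t_map, p_map = one_sample_t_map(betas)
```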

Results

Behavioral Results

Table 2 presents the RTs and the error rates for the different experimental conditions. A speed-accuracy trade-off was observed, reflected in relatively more errors and faster RTs for the "Action Cue—2 Target Objects" condition compared to the "Action Cue—1 Target Object" condition. To control for this speed-accuracy trade-off, the inverse efficiency was calculated for the analysis of the behavioral data by dividing the RTs by the proportion of correct responses (Townsend and Ashby, 1978).
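The inverse efficiency score used above is simply the mean RT divided by the proportion of correct responses in the same condition; the numbers in this small example are purely illustrative:

```python
def inverse_efficiency(mean_rt_ms, prop_correct):
    """Inverse efficiency (Townsend and Ashby, 1978): RT penalized by error rate."""
    return mean_rt_ms / prop_correct

print(inverse_efficiency(1382, 0.92))  # ~1502 ms: more errors inflate the score
```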

Table 2. Error rates and reaction times according to the different experimental conditions.

As can be seen in Figure 2, response times were faster for trials in which no action cue was presented [1318 ± 52 ms (mean ± SE)] compared to trials in which an action cue was present [1382 ± 47 ms], F(1, 18) = 31.5, p < 0.001, η2 = 0.64. RTs increased with an increasing number of target objects [0 Target Objects: 1239 ± 52 ms; 1 Target Object: 1356 ± 45 ms; 2 Target Objects: 1456 ± 53 ms], F(2, 36) = 91.6, p < 0.001, η2 = 0.84. The interaction between Action Cue and # of Target Objects was not significant, F(2, 36) = 2.1, p = 0.14. There was no significant difference between trials in which the action cue was congruent (1388 ± 47 ms) or incongruent (1425 ± 41 ms) with respect to the target object; in all subsequent analyses, data were collapsed over congruent and incongruent stimuli.
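The reported effect sizes are consistent with partial eta squared computed from the F statistics and degrees of freedom, ηp² = (F × df1) / (F × df1 + df2); a quick arithmetic check (not the authors' code):

```python
def partial_eta_squared(f, df1, df2):
    return (f * df1) / (f * df1 + df2)

print(round(partial_eta_squared(31.5, 1, 18), 2))  # 0.64, effect of Action Cue
print(round(partial_eta_squared(91.6, 2, 36), 2))  # 0.84, effect of # of Target Objects
```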

Figure 2. Reaction times in the action prediction task according to the number of target objects, for conditions in which no action cue was present (dark bars) and conditions in which an action cue was present (bright bars).

Eye Movement Data

The eye movement data are presented in Table 3. A comparable statistical pattern was observed for the number of saccades, the amplitude of saccades, and the number of fixations. This pattern was reflected in (1) a main effect of Action Cue: more eye movements and fixations were made in the action cue condition than in the no action cue condition; (2) a main effect of Target Object: more eye movements and fixations were made with an increasing number of target objects; and (3) an interaction between Action Cue and Target Object: for the 0 and 1 target object conditions the number of eye movements and fixations increased when an action cue was presented, whereas for the 2 target object condition the number of eye movements and fixations did not differ depending on whether an action cue was present. The statistical results for the eye movement data are summarized in Table 4.

Table 3. Eye movement data according to the different experimental conditions.

Table 4. ANOVA results for the analysis of the eye movement data.

Effects of Action Cue

Comparing trials in which participants predicted an upcoming action based on the observation of an action cue with trials without an action cue (Action Cue > No Action Cue) revealed increased activation in the AON, consisting of the left Middle Temporal Gyrus (MTG), the right Inferior Temporal Gyrus (ITG), the IPL bilaterally, and the left dPMC (see Figure 3A and Table 5). The cluster in the MTG falls within the 30–50% probability range of BA 36 (Eickhoff et al., 2005) and overlaps with the EBA as identified by the functional localizer data (peak activation for the contrast Body > Chair at x = 48, y = −64, z = 4 and x = −45, y = −67, z = 7). The activity increases in the left IPL were found to be within the 30–80% probability range of BA 40 and extended to the left supramarginal gyrus (SMG). The right IPL cluster was found to be within the 60–100% probability range of BA 2 and extended to the right SMG. The activation in the left dPMC was found to be within the 10–40% probability range of BA 6. The reverse contrast (No Action Cue > Action Cue) did not reveal significant activations when using the FWE correction for multiple comparisons.

Figure 3. (A,B) Activation maps representing areas that showed stronger activation for trials in which an action cue was presented compared to no action cue (A) and areas that showed increased activation when a target object was presented compared to when no target object was presented (B). Activation is thresholded at p < 0.001, uncorrected, for display purposes.

Table 5. Brain regions associated with increased activity during the prediction of actions based on action cues compared to no action cues (upper part of table) and for trials with a target object compared to trials without a target object (lower part of table).

Effects of the # of Target Objects

Comparing trials in which a target object was presented with trials in which no target object was presented (2 Target Objects and 1 Target Object > 0 Target Objects) revealed activation in the IPL bilaterally, the right superior parietal lobe (SPL), the dPMC, and the left IFG (see Figure 3B and Table 5). The left IPL cluster falls within the 30–60% probability range of area hIP1 and the right IPL cluster falls within the 20–40% probability range of area hIP2 (Caspers et al., 2006). The activation in the SPL was within the 20–30% probability range of BA 7A. The activation in the dPMC was within the 0–30% probability range of BA 6. Activation in the left IFG was found to be within the 10–30% probability range of BA 45 and overlapped with the pars triangularis. No increased activation was observed for the reverse contrast (0 Target Objects > 1 Target Object and 2 Target Objects).

Overlap Between Action Cue and # of Target Objects

To investigate whether areas within the AON were differentially activated as a function of the predictability of the action, a conjunction analysis was conducted ("Action Cue > No Action Cue" and "2 Target Objects and 1 Target Object > 0 Target Objects"). As can be seen in Figure 3, activity in the left IPL and the PMC within the AON increased as a function of the presence of a target object. When applying a more lenient statistical threshold for the AON mask (p < 0.001, uncorrected), an additional cluster was observed in the right IPL (see Table 6).
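Conceptually, the conjunction reported above corresponds to the intersection of the two thresholded contrast maps (a minimum-statistic conjunction); a toy sketch with fake data, not the SPM procedure used in the paper:

```python
import numpy as np

def conjunction_mask(t_map_a, t_map_b, threshold):
    """Voxels exceeding the threshold in both contrasts."""
    return np.minimum(t_map_a, t_map_b) > threshold

t_cue = np.random.randn(10, 10, 10) + 1.0      # fake t-map: Action Cue > No Action Cue
t_target = np.random.randn(10, 10, 10) + 1.0   # fake t-map: Target > No Target
mask = conjunction_mask(t_cue, t_target, threshold=3.1)
print(int(mask.sum()), "voxels survive the conjunction")
```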

Table 6. Brain regions showing overlapping activity for the effects of Action Cue and # of Target Objects (conjunction analysis).

Effects of Action Cue Congruency

In all reported analyses, the data for the "Action Cue—1 Target Object" condition were collapsed over congruent and incongruent action cues. Directly comparing the effect of action cue congruency did not reveal significant differences in brain activation between congruent and incongruent action cues. Excluding trials in which the action cue was incongruent with respect to the target object also did not change the pattern of results reported above. These findings justify collapsing the data over congruent and incongruent action cues in the reported analyses.

Discussion

The present study investigated how action semantics facilitates the prediction of imagined and observed actions and which neural mechanisms are involved. Participants performed an action prediction task in which they were required to anticipate the use of a centrally presented object that could be moved to an associated target object. At a behavioral level it was found that action prediction was modulated as a function of the predictability of the action (i.e., the number of target objects involved) and the availability of actor information (i.e., whether a hand could be observed grasping the central object). At a neural level it was found that predicting actions that involved a target object resulted in increased activation in the bilateral IPL and frontal motor areas. The presentation of an action cue was associated with increased activation in the EBA and the fronto-parietal AON. Within the AON, activity in the left IPL and the left PMC increased as a function of the level of action predictability. These findings indicate that the retrieval of action semantics for imagined object use and for action prediction relies on comparable neural mechanisms, in line with the predictive coding framework of action observation (Kilner et al., 2007).

In this study participants were required to predict actions with objects that could be used in multiple ways and that could be associated with different action goals. It was found that RTs increased as a function of the presence of a target object, likely reflecting that more action semantic information needed to be retrieved to predict the upcoming goal of actions involving multiple objects (van Elk et al., 2012). Predicting actions involving a target object was associated with increased activation in the left IPL and in frontal motor areas. Neuroimaging studies have shown that the left IPL is selectively involved in the observation of human hand-object interactions (Johnson-Frey et al., 2005; Peeters et al., 2009, 2013; Valyear et al., 2012) and in the planning of object-directed actions (Culham et al., 2003; Valyear et al., 2007; Gallivan et al., 2013). The increased activation in the left IPL when making a prediction about an action involving a target object likely reflects a motor simulation process, in which participants imagined grasping the central object to arrive at the most likely action in the given context (Wolpert and Kawato, 1998; Buxbaum et al., 2005). This interpretation is in line with neuroimaging studies on motor imagery, indicating that activity in the IPL increases when participants are required to imagine more complex movements (de Lange et al., 2005, 2006; Zacks, 2008).

The finding that the left IPL is involved in predicting object-directed actions is in line with neuropsychological studies with apraxic patients, suggesting that the left IPL is a critical region for storing the hand postures required for interacting with objects (Heilman et al., 1982; Heilman and Rothi, 1993; Buxbaum, 2001; Buxbaum and Saffran, 2002). Recently, an alternative account of the deficits observed in tool use following damage to the left IPL has been proposed, according to which apraxic patients are primarily characterized by impairments in technical reasoning (Osiurak et al., 2009, 2011; Osiurak and Lesourd, 2014). On this account, apraxic patients have difficulties with technical reasoning about abstract physical properties of objects and specifically with identifying the technical means to achieve a specific technical end (for a similar view, i.e., the "mechanical problem solving" account, see Goldenberg, 2009). This view is supported by the finding that apraxic patients showed impaired performance on a problem-solving test involving the selection and use of novel objects (Goldenberg and Hagmann, 1998) and by the finding that impairments in the use of novel tools are often accompanied by impaired use of well-known objects (Osiurak et al., 2009; Jarry et al., 2013). The implication of the technical reasoning account is that in many cases the successful use of objects does not rely on stored semantic or motor representations, but instead requires applying mechanical or technical knowledge (i.e., knowledge about abstract mechanical principles, such as "lifting" or "screwing"; cf. Osiurak et al., 2009, 2013). This view provides an important alternative account of the available neuropsychological data and has implications for the supposed role of the left IPL in object use as well, indicating that this region may play a critical role in mechanical or technical reasoning in relation to the use of objects.

The availability of actor information was manipulated by including trials in which a hand could be observed grasping the central object and trials in which no hand was presented. The observed grasp type (i.e., whether the central object was grasped at the upper or lower side) could be used to disambiguate the upcoming action only when two target objects were presented (e.g., a wine bottle in association with a wine glass and a wine cooler). When only one or no target object was presented, the action prediction could be based solely on the objects involved (e.g., a wine bottle in association with a wine glass). Closer inspection of the behavioral data indicates that when two target objects were presented, actor information indeed facilitated the disambiguation of the upcoming action, resulting in faster RTs (and fewer eye movements) but at the expense of more errors (i.e., a speed-accuracy trade-off was observed). In contrast, when only one or no target objects were presented, participants responded faster when no action cue was presented, but they made more errors. Correcting for this speed-accuracy trade-off by using the inverse efficiency (Townsend and Ashby, 1978) indicated that response times increased when an action cue was presented, irrespective of the number of target objects. This finding indicates that participants automatically processed the actor information (even though in some cases it was irrelevant), likely because their focus of attention was initially on the central object and action cues were always centrally presented (Duncan, 1984).

The observation of an action cue was associated with increased activation in the EBA, the IPL, and the dPMC. These areas are commonly referred to as the action observation network (AON; Caspers et al., 2010), which is typically activated during the observation of others' actions. In the present study activation in the AON was observed using an action prediction task, in which participants were required to anticipate an upcoming action. The finding that the AON is involved in action prediction is in line with previous studies indicating a central role of the AON in action prediction tasks (Kilner et al., 2004; Aglioti et al., 2008; Stadler et al., 2012; Avenanti et al., 2013).

An important question is to what extent the action cue may have been perceived primarily as a hand grasping an object, or rather as a spatial cue indicating the relevant side of the object (i.e., up or down). This question has been addressed extensively in research on imitation, which features a similar discussion about the extent to which effects of observed actions are driven by the biological properties of the stimulus or rather reflect spatial compatibility effects (Heyes, 2011). Several studies indicate that spatial compatibility can be dissociated from imitative compatibility effects, suggesting a special role for the processing of observed biological stimuli (Brass et al., 2000; Catmur and Heyes, 2011). This notion is further supported by the present fMRI data, indicating that the observation of an action cue resulted not only in activation of brain areas associated with the processing of spatial information (i.e., the superior parietal lobe and the dPMC; Crammond and Kalaska, 1994; Iacoboni et al., 1996; Koski et al., 2005), but also in activation of brain areas involved in the perception of biological stimuli, such as the EBA (Chan et al., 2004; Downing et al., 2006).

The activation of the AON in response to an action cue may be partly driven by whether a hand was visible in the stimuli (Downing et al., 2001), resulting in the automatic activation of the corresponding motor programs used for grasping objects (Buccino et al., 2001; Brass and Heyes, 2005). Furthermore, in the present study static images depicting a human hand were used as stimuli rather than dynamic stimuli depicting biological motion. By using static images it was ensured that participants would predict the upcoming action solely based on the objects presented in the picture and the initial grasping location of the hand, rather than on the dynamic cues associated with hand movements. Previous studies on action observation have shown that the observation of static action images results in reliable activation of the AON (Johnson-Frey et al., 2003; de Lange et al., 2008), and in the present study the AON was also found activated for pictures representing a hand compared to no hand. It could be argued that the use of static rather than dynamic images may have induced a process of motor imagery, in which the participant imagines completing the observed action. Previous studies have indicated that motor imagery activates brain regions similar to those observed in action observation, such as the IPL and the PMC (Zacks, 2008; Caruana et al., 2014), and the stronger activation of these areas in the present study may be partly related to a more complex motor imagery process (de Lange et al., 2005, 2006; Zacks, 2008). This suggestion is also supported by the reaction time data, indicating that participants responded more slowly when they were presented with an action cue, likely because the integration of an observed action cue required additional processing time. However, it should be noted that in the present study participants were always required to predict actions, either by imagining the use of visually presented objects, or by imagining how an actor would use the objects presented. Thus, the underlying process of action prediction may be functionally equivalent for trials in which an action cue was presented and trials in which no action cue was presented, such that participants always relied on an internal forward model to infer the most likely outcome of the action (Wolpert and Flanagan, 2001). When no action cue was presented, participants may have directly engaged in a process of motor imagery (Zacks, 2008; Caruana et al., 2014), whereas in the case of an action cue the observed action first needed to be matched onto one's motor repertoire, as implied by the AON literature (Kilner et al., 2007).

Interestingly, it was found that activation within the AON varied as a function of the presence of a target object and the predictability of the action. That is, activity in the left IPL and the left PMC increased when a target object was presented, indicating that these regions support the use of semantic information for understanding and predicting observed actions. The overlap in activation in the left IPL and the left PMC for the independent effects of target objects and action cue may indicate that upcoming actions are predicted either through a process of motor imagery (when no action cue is presented) or by matching the observed action to stored hand postures for object use (when an action cue is presented). The finding that the activation of the AON is modulated not only as a function of the low-level kinematic features of the observed action, but also by the involvement of semantics for action, is in line with the view that the AON represents higher-level aspects of observed actions as well, such as the correctness or meaningfulness of an action (Koelewijn et al., 2008; Newman-Norlund et al., 2010, 2013; Stapel et al., 2010). The present study extends these findings by indicating a stronger involvement of the AON for unpredictable actions that require the use of action semantics. Furthermore, the finding that similar areas are involved in using semantics for imagined actions and in action observation is in line with the "predictive coding account of action observation" (Kilner et al., 2007), according to which predicting others' actions relies on neural mechanisms similar to those involved in planning our own actions. In sum, the present study indicates that the left IPL and PMC represent stored hand-postures that can be used for planning object-directed actions and for predicting others' actions.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This research was supported by a VENI grant no. 016.135.135 from the Netherlands Organization for Scientific Research (NWO).

References

Aglioti, S. M., Cesari, P., Romani, M., and Urgesi, C. (2008). Action anticipation and motor resonance in elite basketball players. Nat. Neurosci. 11, 1109–1116. doi: 10.1038/nn.2182

Ashburner, J., and Friston, K. J. (1999). Nonlinear spatial normalization using basis functions. Hum. Brain Mapp. 7, 254–266. doi: 10.1002/(SICI)1097-0193(1999)7:4%3C254::AID-HBM4%3E3.3.CO;2-7

Avenanti, A., Annella, L., Candidi, M., Urgesi, C., and Aglioti, S. M. (2013). Compensatory plasticity in the action observation network: virtual lesions of sts enhance anticipatory simulation of seen actions. Cereb. Cortex 23, 570–580. doi: 10.1093/cercor/bhs040

Aziz-Zadeh, L., Koski, L., Zaidel, E., Mazziotta, J., and Iacoboni, M. (2006). Lateralization of the human mirror neuron system. J. Neurosci. 26, 2964–2970. doi: 10.1523/JNEUROSCI.2921-05.2006

Barsalou, L. W. (2008). Grounded cognition. Annu. Rev. Psychol. 59, 617–645. doi: 10.1146/annurev.psych.59.103006.093639

Beauchamp, M. S., and Martin, A. (2007). Grounding object concepts in perception and action: evidence from fMRI studies of tools. Cortex 43, 461–468. doi: 10.1016/S0010-9452(08)70470-2

Blakemore, S. J., and Decety, J. (2001). From the perception of action to the understanding of intention. Nat. Rev. Neurosci. 2, 561–567. doi: 10.1038/35086023

Brass, M., Bekkering, H., Wohlschlager, A., and Prinz, W. (2000). Compatibility between observed and executed finger movements: comparing symbolic, spatial, and imitative cues. Brain Cogn. 44, 124–143. doi: 10.1006/brcg.2000.1225

Brass, M., and Heyes, C. (2005). Imitation: is cognitive neuroscience solving the correspondence problem? Trends Cogn. Sci. 9, 489–495. doi: 10.1016/j.tics.2005.08.007

Bub, D. N., Masson, M. E. J., and Cree, G. S. (2008). Evocation of functional and volumetric gestural knowledge by objects and words. Cognition 106, 27–58. doi: 10.1016/j.cognition.2006.12.010

Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. Eur. J. Neurosci. 13, 400–404. doi: 10.1111/j.1460-9568.2001.01385.x

Buxbaum, L. J. (2001). Ideomotor apraxia: a call to action. Neurocase 7, 445–458. doi: 10.1093/neucas/7.6.445

Buxbaum, L. J., Johnson-Frey, S. H., and Bartlett-Williams, M. (2005). Deficient internal models for planning hand-object interactions in apraxia. Neuropsychologia 43, 917–929. doi: 10.1016/j.neuropsychologia.2004.09.006

Buxbaum, L. J., and Saffran, E. M. (2002). Knowledge of object manipulation and object function: dissociations in apraxic and nonapraxic subjects. Brain Lang. 82, 179–199. doi: 10.1016/S0093-934X(02)00014-7

Calvo-Merino, B., Glaser, D. E., Grezes, J., Passingham, R. E., and Haggard, P. (2005). Action observation and acquired motor skills: an FMRI study with expert dancers. Cereb. Cortex 15, 1243–1249. doi: 10.1093/cercor/bhi007

Caruana, F., Sartori, I., Lo Russo, G., and Avanzini, P. (2014). Sequencing biological and physical events affects specific frequency bands within the human premotor cortex: an intracerebral EEG study. PLoS ONE 9:e86384. doi: 10.1371/journal.pone.0086384

Caspers, S., Geyer, S., Schleicher, A., Mohlberg, H., Amunts, K., and Zilles, K. (2006). The human inferior parietal cortex: Cytoarchitectonic parcellation and interindividual variability. Neuroimage 33, 430–448. doi: 10.1016/j.neuroimage.2006.06.054

Caspers, S., Zilles, K., Laird, A. R., and Eickhoff, S. B. (2010). ALE meta-analysis of action observation and imitation in the human brain. Neuroimage 50, 1148–1167. doi: 10.1016/j.neuroimage.2009.12.112

Catmur, C., and Heyes, C. (2011). Time course analyses confirm independence of imitative and spatial compatibility. J. Exp. Psychol. Hum. Percept. Perform. 37, 409–421. doi: 10.1037/a0019325

Chan, A. W., Peelen, M. V., and Downing, P. E. (2004). The effect of viewpoint on body representation in the extrastriate body area. Neuroreport 15, 2407–2410. doi: 10.1097/00001756-200410250-00021

Chao, L. L., and Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. Neuroimage 12, 478–484. doi: 10.1006/nimg.2000.0635

Cohen, N. R., Cross, E. S., Tunik, E., Grafton, S. T., and Culham, J. C. (2009). Ventral and dorsal stream contributions to the online control of immediate and delayed grasping: a TMS approach. Neuropsychologia 47, 1553–1562. doi: 10.1016/j.neuropsychologia.2008.12.034

Crammond, D. J., and Kalaska, J. F. (1994). Modulation of preparatory neuronal-activity in dorsal premotor cortex due to stimulus-response compatibility. J. Neurophysiol. 71, 1281–1284.

Creem-Regehr, S. H., and Lee, J. N. (2005). Neural representations of graspable objects: are tools special? Brain Res. Cogn. Brain Res. 22, 457–469. doi: 10.1016/j.cogbrainres.2004.10.006

Cross, E. S., Kraemer, D. J. M., Hamilton, A. F. D., Kelley, W. M., and Grafton, S. T. (2009). Sensitivity of the action observation network to physical and observational learning. Cereb. Cortex 19, 315–326. doi: 10.1093/cercor/bhn083

Culham, J. C., Danckert, S. L., DeSouza, J. F., Gati, J. S., Menon, R. S., and Goodale, M. A. (2003). Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas. Exp. Brain Res. 153, 180–189. doi: 10.1007/s00221-003-1591-5

de Lange, F. P., Hagoort, P., and Toni, I. (2005). Neural topography and content of movement representations. J. Cogn. Neurosci. 17, 97–112. doi: 10.1162/0898929052880039

de Lange, F. P., Helmich, R. C., and Toni, I. (2006). Posture influences motor imagery: an fMRI study. Neuroimage 33, 609–617. doi: 10.1016/j.neuroimage.2006.07.017

de Lange, F. P., Spronk, M., Willems, R. M., Toni, I., and Bekkering, H. (2008). Complementary systems for understanding action intentions. Curr. Biol. 18, 454–457. doi: 10.1016/j.cub.2008.02.057

Downing, P. E., Jiang, Y., Shuman, M., and Kanwisher, N. (2001). A cortical area selective for visual processing of the human body. Science 293, 2470–2473. doi: 10.1126/science.1063414

Downing, P. E., Peelen, M. V., Wiggett, A. J., and Tew, B. D. (2006). The role of the extrastriate body area in action perception. Soc. Neurosci. 1, 52–62. doi: 10.1080/17470910600668854

Duncan, J. (1984). Selective attention and the organization of visual information. J. Exp. Psychol. Gen. 113, 501–517. doi: 10.1037/0096-3445.113.4.501

Eickhoff, S. B., Stephan, K. E., Mohlberg, H., Grefkes, C., Fink, G. R., Amunts, K., et al. (2005). A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25, 1325–1335. doi: 10.1016/j.neuroimage.2004.12.034

Ellis, R., and Tucker, M. (2000). Micro-affordance: the potentiation of components of action by seen objects. Br. J. Psychol. 91, 451–471. doi: 10.1348/000712600161934

Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., and Rizzolatti, G. (2005). Parietal lobe: from action organization to intention understanding. Science 308, 662–667. doi: 10.1126/science.1106138

Friston, K. J., Holmes, A. P., Poline, J. B., Grasby, P. J., Williams, S. C., Frackowiak, R. S., et al. (1995). Analysis of fMRI time-series revisited. Neuroimage 2, 45–53. doi: 10.1006/nimg.1995.1007

Friston, K. J., Holmes, A., Poline, J. B., Price, C. J., and Frith, C. D. (1996). Detecting activations in PET and fMRI: levels of inference and power. Neuroimage 4, 223–235. doi: 10.1006/nimg.1996.0074

Friston, K. J., Holmes, A. P., and Worsley, K. J. (1999). How many subjects constitute a study? Neuroimage 10, 1–5. doi: 10.1006/nimg.1999.0439

Gallivan, J., McLean, A., Valyear, K., and Culham, J. (2013). Decoding the neural mechanisms of human tool use. eLife 2:e00425. doi: 10.7554/eLife.00425

Goldenberg, G. (2009). Apraxia and the parietal lobes. Neuropsychologia 47, 1449–1459. doi: 10.1016/j.neuropsychologia.2008.07.014

Goldenberg, G., and Hagmann, S. (1998). Tool use and mechanical problem solving in apraxia. Neuropsychologia 36, 581–589. doi: 10.1016/S0028-3932(97)00165-6

Grezes, J., and Decety, J. (2002). Does visual perception of object afford action? Evidence from a neuroimaging study. Neuropsychologia 40, 212–222. doi: 10.1016/S0028-3932(01)00089-6

Grol, M. J., Majdandzic, J., Stephan, K. E., Verhagen, L., Dijkerman, H. C., Bekkering, H., et al. (2007). Parieto-frontal connectivity during visually guided grasping. J. Neurosci. 27, 11877–11887. doi: 10.1523/JNEUROSCI.3923-07.2007

Heilman, K. M., Rothi, L. J., and Valenstein, E. (1982). Two forms of ideomotor apraxia. Neurology 32, 342–346. doi: 10.1212/WNL.32.4.342

Heilman, K. M., and Rothi, L. J. G. (1993). “Apraxia,” in Clinical Neuropsychology, eds K. M. Heilman and E. Valenstein (New York, NY: Oxford University Press), 141–164.

Heyes, C. (2010). Where do mirror neurons come from? Neurosci. Biobehav. Rev. 34, 575–583. doi: 10.1016/j.neubiorev.2009.11.007

Heyes, C. (2011). Automatic Imitation. Psychol. Bull. 137, 463–483. doi: 10.1037/a0022288

Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., Mazziotta, J. C., and Rizzolatti, G. (2005). Grasping the intentions of others with one's own mirror neuron system. PLoS Biol. 3:e79. doi: 10.1371/journal.pbio.0030079

Iacoboni, M., Woods, R. P., and Mazziotta, J. C. (1996). Brain-behavior relationships: evidence from practice effects in spatial stimulus-response compatibility. J. Neurophysiol. 76, 321–331.

Jarry, C., Osiurak, F., Delafuys, D., Chauvire, V., Etcharry-Bouyx, F., and Le Gall, D. (2013). Apraxia of tool use: more evidence for the technical reasoning hypothesis. Cortex 49, 2322–2333. doi: 10.1016/j.cortex.2013.02.011

Johnson-Frey, S. H., Maloof, F. R., Newman-Norlund, R., Farrer, C., Inati, S., and Grafton, S. T. (2003). Actions or hand-object interactions? Human inferior frontal cortex and action observation. Neuron 39, 1053–1058. doi: 10.1016/S0896-6273(03)00524-5

Johnson-Frey, S. H., Newman-Norlund, R., and Grafton, S. T. (2005). A distributed left hemisphere network active during planning of everyday tool use skills. Cereb. Cortex 15, 681–695. doi: 10.1093/cercor/bhh169

Kilner, J. M., Friston, K. J., and Frith, C. D. (2007). Predictive coding: an account of the mirror neuron system. Cogn. Process. 8, 159–166. doi: 10.1007/s10339-007-0170-2

Kilner, J. M., Vargas, C., Duval, S., Blakemore, S. J., and Sirigu, A. (2004). Motor activation prior to observation of a predicted movement. Nat. Neurosci. 7, 1299–1301. doi: 10.1038/nn1355

Klatzky, R. L., Pellegrino, J. W., McCloskey, B. P., and Doherty, S. (1989). Can you squeeze a tomato? The role of motor representations in semantic sensibility judgments. J. Mem. Lang. 28, 56–77. doi: 10.1016/0749-596X(89)90028-4

Koelewijn, T., van Schie, H. T., Bekkering, H., Oostenveld, R., and Jensen, O. (2008). Motor-cortical beta oscillations are modulated by correctness of observed action. Neuroimage 40, 767–775. doi: 10.1016/j.neuroimage.2007.12.018

Koski, L., Molnar-Szakacs, I., and Iacoboni, M. (2005). Exploring the contributions of premotor and parietal cortex to spatial compatibility using image-guided TMS. Neuroimage 24, 296–305. doi: 10.1016/j.neuroimage.2004.09.027

Koski, L., Wohlschlager, A., Bekkering, H., Woods, R. P., Dubeau, M. C., Mazziotta, J. C., et al. (2002). Modulation of motor and premotor activity during imitation of target-directed actions. Cereb. Cortex 12, 847–855. doi: 10.1093/cercor/12.8.847

Lund, T. E., Norgaard, M. D., Rostrup, E., Rowe, J. B., and Paulson, O. B. (2005). Motion or activity: their role in intra- and inter-subject variation in fMRI. Neuroimage 26, 960–964. doi: 10.1016/j.neuroimage.2005.02.021

Murata, A., Gallese, V., Luppino, G., Kaseda, M., and Sakata, H. (2000). Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. J. Neurophysiol. 83, 2580–2601.

Newman-Norlund, R., Bruggink, K., Cuijpers, R. H., and Bekkering, H. (2013). fMRI correlates of observing high and low-probability actions. J. Behav. Brain Sci. 3, 49–56. doi: 10.4236/jbbs.2013.31005

Newman-Norlund, R., van Schie, H. T., van Hoek, M. E. C., Cuijpers, R. H., and Bekkering, H. (2010). The role of inferior frontal and parietal areas in differentiating meaningful and meaningless object-directed actions. Brain Res. 1315, 63–74. doi: 10.1016/j.brainres.2009.11.065

Noppeney, U., Josephs, O., Kiebel, S., Friston, K. J., and Price, C. J. (2005). Action selectivity in parietal and temporal cortex. Brain Res. Cogn. Brain Res. 25, 641–649. doi: 10.1016/j.cogbrainres.2005.08.017

Okada, T., Tanaka, S., Nakai, T., Nishizawa, S., Inui, T., Sadato, N., et al. (2000). Naming of animals and tools: a functional magnetic resonance imaging study of categorical differences in the human brain areas commonly used for naming visually presented objects. Neurosci. Lett. 296, 33–36. doi: 10.1016/S0304-3940(00)01612-8

Osiurak, F., Jarry, C., Allain, P., Aubin, G., Etcharry-Bouyx, F., Richard, I., et al. (2009). Unusual use of objects after unilateral brain damage: the technical reasoning model. Cortex 45, 769–783. doi: 10.1016/j.cortex.2008.06.013

Osiurak, F., Jarry, C., and Le Gall, D. (2011). Re-examining the gesture engram hypothesis. New perspectives on apraxia of tool use. Neuropsychologia 49, 299–312. doi: 10.1016/j.neuropsychologia.2010.12.041

Osiurak, F., Jarry, C., Lesourd, M., Baumard, J., and Le Gall, D. (2013). Mechanical problem-solving strategies in left-brain damaged patients and apraxia of tool use. Neuropsychologia 51, 1964–1972. doi: 10.1016/j.neuropsychologia.2013.06.017

Osiurak, F., and Lesourd, M. (2014). What about mechanical knowledge?: Comment on “Action semantics: a unifying conceptual framework for the selective use of multimodal and modality-specific object knowledge” by van Elk, van Schie, and Bekkering. Phys. Life Rev. doi: 10.1016/j.plrev.2014.01.013. [Epub ahead of print].

Peeters, R., Rizzolatti, G., and Orban, G. A. (2013). Functional properties of the left parietal tool use region. Neuroimage 78, 83–93. doi: 10.1016/j.neuroimage.2013.04.023

Peeters, R., Simone, L., Nelissen, K., Fabbri-Destro, M., Vanduffel, W., Rizzolatti, G., et al. (2009). The representation of tool use in humans and monkeys: common and uniquely human features. J. Neurosci. 29, 11523–11539. doi: 10.1523/JNEUROSCI.2040-09.2009

Sakata, H., Taira, M., Murata, A., and Mine, S. (1995). Neural mechanisms of visual guidance of hand action in the parietal cortex of the monkey. Cereb. Cortex 5, 429–438. doi: 10.1093/cercor/5.5.429

Schubotz, R. I. (2007). Prediction of external events with our motor system: towards a new framework. Trends Cogn. Sci. 11, 211–218. doi: 10.1016/j.tics.2007.02.006

Sirigu, A., Cohen, L., Duhamel, J. R., Pillon, B., Dubois, B., and Agid, Y. (1995). A selective impairment of hand posture for object utilization in apraxia. Cortex 31, 41–55. doi: 10.1016/S0010-9452(13)80104-9

Springer, A., and Prinz, W. (2010). Action semantics modulate action prediction. Q. J. Exp. Psychol. 63, 2141–2158. doi: 10.1080/17470211003721659

Stadler, W., Ott, D. V. M., Springer, A., Schubotz, R. I., Schutz-Bosbach, S., and Prinz, W. (2012). Repetitive TMS suggests a role of the human dorsal premotor cortex in action prediction. Front. Hum. Neurosci. 6:20. doi: 10.3389/fnhum.2012.00020

Stapel, J. C., Hunnius, S., and Bekkering, H. (2012). Online prediction of others' actions: the contribution of the target object, action context and movement kinematics. Psychol. Res. 76, 434–445. doi: 10.1007/s00426-012-0423-2

Stapel, J. C., Hunnius, S., van Elk, M., and Bekkering, H. (2010). Motor activation during observation of unusual versus ordinary actions in infancy. Soc. Neurosci. 5, 451–460. doi: 10.1080/17470919.2010.490667

Townsend, J. T., and Ashby, F. G. (1978). “Methods of modeling capacity in simple processing systems,” in Cognitive Theory, Vol. 3, eds J. Castellan and F. Restle (Hillsdale, NJ: Erlbaum), 200–239.

Tucker, M., and Ellis, R. (2001). The potentiation of grasp types during visual object categorization. Vis. Cogn. 8, 769–800. doi: 10.1080/13506280042000144

Umilta, M. A., Escola, L., Intskirveli, I., Grammont, F., Rochat, M., Caruana, F., et al. (2008). When pliers become fingers in the monkey motor system. Proc. Natl. Acad. Sci. U.S.A. 105, 2209–2213. doi: 10.1073/pnas.0705985105

Umilta, M. A., Kohler, E., Gallese, V., Fogassi, L., Fadiga, L., Keysers, C., et al. (2001). I know what you are doing: a neurophysiological study. Neuron 31, 155–165. doi: 10.1016/S0896-6273(01)00337-3

Valyear, K. F., Cavina-Pratesi, C., Stiglick, A. J., and Culham, J. C. (2007). Does tool-related fMRI activity within the intraparietal sulcus reflect the plan to grasp? Neuroimage 36, T94–T108. doi: 10.1016/j.neuroimage.2007.03.031

Valyear, K. F., Gallivan, J. P., McLean, D. A., and Culham, J. C. (2012). fMRI repetition suppression for familiar but not arbitrary actions with tools. J. Neurosci. 32, 4247–4259. doi: 10.1523/JNEUROSCI.5270-11.2012

van Elk, M., van Schie, H., and Bekkering, H. (2013). Action semantics: a unifying conceptual framework for the selective use of multimodal and modality-specific object knowledge. Phys. Life Rev. doi: 10.1016/j.plrev.2013.11.005. [Epub ahead of print].

van Elk, M., van Schie, H. T., and Bekkering, H. (2008). Conceptual knowledge for understanding other's actions is organized primarily around action goals. Exp. Brain Res. 189, 99–107. doi: 10.1007/s00221-008-1408-7

van Elk, M., Viswanathan, S., van Schie, H. T., Bekkering, H., and Grafton, S. T. (2012). Pouring or chilling a bottle of wine: an fMRI study on the prospective planning of object-directed actions. Exp. Brain Res. 218, 189–200. doi: 10.1007/s00221-012-3016-9

Vingerhoets, G. (2008). Knowing about tools: neural correlates of tool familiarity and experience. Neuroimage 40, 1380–1391. doi: 10.1016/j.neuroimage.2007.12.058

Wilson, M., and Knoblich, G. (2005). The case for motor involvement in perceiving conspecifics. Psychol. Bull. 131, 460–473. doi: 10.1037/0033-2909.131.3.460

Wolpert, D. M., and Flanagan, J. R. (2001). Motor prediction. Curr. Biol. 11, R729–R732. doi: 10.1016/S0960-9822(01)00432-8

Wolpert, D. M., and Kawato, M. (1998). Multiple paired forward and inverse models for motor control. Neural Netw. 11, 1317–1329. doi: 10.1016/S0893-6080(98)00066-5

Zacks, J. M. (2008). Neuroimaging studies of mental rotation: a meta-analysis and review. J. Cogn. Neurosci. 20, 1–19. doi: 10.1162/jocn.2008.20013

Zimmermann, M., Toni, I., and de Lange, F. P. (2013). Body posture modulates action perception. J. Neurosci. 33, 5930–5938. doi: 10.1523/JNEUROSCI.5570-12.2013

Keywords: fMRI, objects, action prediction, action semantics, inferior parietal lobe

Citation: van Elk M (2014) The left inferior parietal lobe represents stored hand-postures for object use and action prediction. Front. Psychol. 5:333. doi: 10.3389/fpsyg.2014.00333

Received: 12 March 2014; Accepted: 31 March 2014;
Published online: 23 April 2014.

Edited by: François Osiurak, Université de Lyon, France

Reviewed by: Fausto Caruana, Italian Institute of Technology, Italy; François Osiurak, Université de Lyon, France

Copyright © 2014 van Elk. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Michiel van Elk, Department of Psychology, University of Amsterdam, Weesperplein 4, 1018 XA Amsterdam, Netherlands. e-mail: m.vanelk@uva.nl

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.