Research Topic

Reaching and Grasping the Multisensory Side of Dexterous Manipulation

About this Research Topic

In everyday life, we integrate inputs from multiple sensory modalities to plan and control actions aimed at exploring the surrounding environment. Through repeated interactions with the physical world, the Central Nervous System builds motor-sensory couplings that underpin our ability to predict the sensory consequences of our actions and to monitor task execution. These processes allow us, for instance, to accurately reach for an elevator button, to transfer a cellphone from one hand to the other without dropping it, and to learn to use a new tool. Inputs from visual, somatosensory, auditory, and vestibular receptors inform us about intrinsic (e.g., size) and extrinsic (e.g., location) object features, as well as about the state of our body (e.g., limb position). Motor control, in turn, organizes effective patterns of action that achieve the goal while compensating for external perturbations.

Over the last twenty years, there has been growing interest in how this multisensory-motor integration process unfolds to enable the planning and execution of reaching and grasping. While psychophysical studies have provided evidence on how multisensory inputs are optimally integrated to create a coherent percept, it remains unclear how multiple sources of sensory information are coupled with motor commands to shape execution in dexterous tasks. From a scientific standpoint, filling this gap would deepen our understanding of how humans cope with the different signal-to-noise ratios that emerge from combining sensory inputs and still perform accurate actions. From an applied point of view, this line of research would benefit rehabilitative efforts to restore or improve dexterity in individuals with sensorimotor impairments, e.g., stroke survivors. Similarly, this framework would provide insights for the design of robotic devices aimed at enhancing or performing human-like interactive actions. Within this framework, the following questions in particular remain to be investigated:

1) How is multisensory information about the to-be-reached/grasped object integrated with motor commands?
2) How is multisensory information about body and limb position integrated and used to plan and execute reach and grasp movements?
3) How are inputs from different sensory modalities integrated during object manipulation?
4) What are the neural representations of multisensory reaching, grasping, and manipulation?

This Research Topic on multisensory-motor integration aims to provide novel insights into the processes underlying reaching and grasping behaviors under multisensory conditions of object manipulation. This framework is meant to reflect the properties of naturalistic environments, where sensory cues are not isolated but optimally combined and intertwined with action. Authors are encouraged to submit papers reporting behavioral and neurophysiological investigations (from EMG to EEG and fMRI) aimed at understanding the contribution of multisensory information, and its coupling with motor commands, to reaching, grasping, and object manipulation in humans.


Keywords: multisensory integration, reaching, grasping, dexterous manipulation, vision, haptics, vestibular


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Manuscript Submission Deadline: 31 July 2020
