About this Research Topic
This Research Topic is based on outputs from the workshop 'ViTac: Integrating Vision and Touch for Multimodal and Cross-modal Perception' at the International Conference on Robotics and Automation (ICRA) 2019. However, we also welcome spontaneous submissions not associated with this workshop, provided they fit the scope of the Research Topic.
Animals interact with the world through multimodal sensing inputs, especially vision and touch in the case of humans. In contrast, artificial systems usually rely on a single sensing modality, with distinct hardware and algorithmic approaches developed for each modality, e.g. computer vision and tactile robotics. Future robots, as embodied agents, should make the best use of all available sensing modalities to interact with the environment. Over the last few years, there have been advances in fusing information from distinct modalities and in selecting between those modalities to use the most appropriate information for achieving a goal, e.g. grasping or manipulating an object. Furthermore, there has been a recent acceleration in the development of optical tactile sensors based on cameras, such as the GelSight and TacTip tactile sensors, bridging the gap between vision and tactile sensing and enabling cross-modal perception.
This Research Topic will encompass recent progress in combining vision and touch sensing, from the perspective of how touch complements vision to achieve better robot perception, exploration, learning and interaction with humans. The Research Topic aims to foster active collaboration and discussion of methods for the fusion of vision and touch, challenges in multimodal and cross-modal sensing, and the development and application of optical tactile sensors.
Topics of Interest:
• trends in combining vision and tactile sensing for robot perception
• development of optical tactile sensors (using visual cameras or optical fibres)
• integration of optical tactile sensors into robotic grippers and hands
• roles of vision and touch sensing in different object perception tasks, e.g., object recognition, localization, exploration, planning, learning and action selection
• interplay between touch sensing and vision
• bio-inspired approaches for fusion of vision and touch sensing
• psychophysics and neuroscience of combining vision and tactile sensing in humans and animals
• computational methods for processing vision and touch data in robot learning
• deep learning for optical tactile sensing and relation/interaction with deep learning for robot vision
• the use of vision and touch for safe human-robot interaction/collaboration
Keywords: Tactile Sensing, Vision and Touch, Sensor Fusion, Multimodal Perception, Cross-Modal Perception
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.