About this Research Topic
The robotics community has relied on physical simulation almost since its inception. While simulation has been widely used for education, testing, and prototyping, only recently has the community attempted to transfer behaviors learned in simulation to the real world (a process usually referred to as Sim2Real). Although there has been significant progress in this direction, most of the proposed methods still fail to reliably bridge the “reality gap” without extensive fine-tuning or learning on the real system. The situation worsens when visual or other high-dimensional sensor observations (e.g., touch) are involved, as simulating realistic sensors widens the reality gap even further. There is a clear need for Sim2Real methods that exploit simulations and prior knowledge about the world so that robots can learn with minimal real-world interaction time.
This Research Topic calls for contributions that will enable real, complex robots equipped with real-world sensors (e.g., on-board cameras, touch sensors) to learn from limited physical data when robotic simulations are available. The goal is to gather novel methods that not only transfer a single specific behavior, but also enable robots to utilize simulations to learn multiple behaviors simultaneously.
This Research Topic aims to gather methods that explore the use of recent machine learning techniques (e.g., deep learning, deep reinforcement learning) as well as more traditional approaches (e.g., model-predictive control, image feature matching, evolutionary methods), in an attempt to identify the most promising and effective directions for data-efficient learning of robot control policies when physical simulations are available and the robot has no external sensing infrastructure (e.g., no motion capture system or complex camera setup).
In particular, we seek contributions that enable real, complex robots equipped with realistic sensors (cameras, touch, etc.) to learn in a handful of trials when physical simulations are available. We expect robots to reason about the world through on-board sensors (cameras, touch, lidars, etc.) without external devices (e.g., motion capture systems), and to utilize simulations to speed up or robustify their learning. Subject areas include, but are not limited to:
• Sim2Real robot control via deep learning
• Sim2Real robot control via reinforcement learning
• Sim2Real robot control via traditional computer vision methods
• Combining simulated and real-world experience for efficient Sim2Real
• Tools for developing Sim2Real methods (e.g., simulators, visualizations, benchmarks)
• Methods for minimizing the “reality-gap”
• Usage of differentiable simulators for robot control with image/sensor inputs
• Model-based reinforcement learning with simulated priors
• Robot learning and control through learned latent dynamics models
• Reviews and/or analysis of the literature
• Evolutionary and population-based methods for Sim2Real
Keywords: Sim2Real, Robot Learning, Sensor-Based Reinforcement Learning, Data-Efficient Robot Learning
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.