Generative AI, including large language models (LLMs), has already shown promise in healthcare and social robotics. In healthcare, it is used to enhance clinical decision-making and has been shown to aid diagnostic support and even surgical operations. Reinforcement learning methods, meanwhile, have already proven successful in a broad range of robotics applications, predominantly in perception, path planning, area coverage, SLAM, and tracking controller design.
Recent advancements in generative AI and reinforcement learning (RL) have opened up exciting possibilities for the robotics domain. In particular, generative AI has shown promise in enabling robots to learn complex locomotion skills from animals, acquire manipulation abilities through pre-trained transformers, and even develop into generalist agents capable of performing multiple tasks. RL, in turn, continues to push the boundaries of dexterous manipulation, grasping of complex objects, and more robust human-machine interaction. However, both generative AI and RL face significant challenges in their practical implementation for robot autonomy: these techniques typically demand vast amounts of data, substantial computational power, and extensive training time to achieve satisfactory performance in real-world robotic tasks.
The goal of this Research Topic is to bridge the gap between the theoretical potential of generative AI and RL and their practical application in robotics. We are particularly interested in showcasing novel approaches that address the current limitations of these techniques.
This includes, but is not limited to:
1) new methods of generative AI-based robot learning;
2) new reinforcement learning methods with reduced training time and data requirements;
3) new deep learning architectures for complex tasks in uncertain environments.
By focusing on these key areas, this Research Topic aims to catalyze the development of more efficient, effective, and robust generative AI and RL methods for robotics. Our goal is to pave the way for the next generation of intelligent and autonomous robots that can seamlessly interact with and operate in the real world.
The Research Topic primarily solicits original research, but well-rounded, high-quality review articles are welcome as well.
The research areas may include, but are not limited to, the following:
• Efficient data generation techniques that reduce reliance on large datasets for generative AI-based robot learning
• Novel architectures and training strategies for generative models that better capture the complexities of robotic tasks
• New methods for transferring knowledge from generative models to real-world robots more effectively
• Techniques for sample-efficient RL, such as model-based RL or curriculum learning
• Methods for incorporating prior knowledge or expert demonstrations into RL to accelerate learning
• Novel neural network architectures that can handle the complexities of real-world robotic tasks, such as multi-modal perception and long-horizon planning
• Architectures that are robust to uncertainties and variations in the environment
• Methods for combining deep learning with other techniques, such as control theory or symbolic reasoning, to improve robot autonomy
• Sensor fusion with deep learning for camera/LiDAR perception
• LLM-based perception in resource-constrained robots
• Safe and trustworthy generative AI models for autonomous robots
• New transfer learning methods for robotic grasping and manipulation
• Novel learning techniques for adaptive motion imitation or adaptive gait imitation
• Reinforcement learning-based robotic teleoperation and haptics
• Human–robot interaction in complex environments
• Adversarial learning methods for improving robustness
Keywords: Generative AI, Large Language Models, Vision Transformers, Robotic Perception, Reinforcement Learning-Based Control
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.