In recent years, reinforcement learning (RL) has demonstrated great potential in robotic tasks such as perception, control, and autonomous decision-making, becoming a key driver of robotics and embodied intelligence. However, deploying RL policies from simulation to the real world (sim-to-real) often requires extensive hyperparameter tuning to accommodate varying tasks and dynamic environmental conditions. As robots gradually transition from controlled laboratory settings to open, dynamic real-world environments, they increasingly face challenges such as perceptual noise, environmental disturbances, and model discrepancies. These factors significantly limit the generalization and robustness of current RL methods. Improving the robustness of RL algorithms in robotic systems has therefore become a pressing challenge.
This Research Topic focuses on theoretical advances and application exploration of robust reinforcement learning across key components of robotic perception, cognition, and decision-making systems. We aim to systematically explore the key challenges and enabling techniques for building robust, autonomous agents in complex, uncertain, and dynamic environments. Covered topics include recent innovations in deploying advanced RL methods, such as adversarial training, policy regularization, domain randomization, domain adaptation, teacher-student learning, transfer learning, and continual learning, on robotic platforms. The collection also emphasizes recent developments in robust policy design and integration with large-scale robotic foundation models, such as Vision-Language-Action (VLA) models, to promote the stability, adaptability, and generalization of robots in complex real-world environments.
We welcome original research articles and comprehensive reviews in (but not limited to) the following areas:
- Robust RL methods for uncertain perception and dynamic disturbances.
- Adversarial training for robust robotic control.
- Stability optimization of multi-modal perception and cognition.
- Policy generalization and adaptation mechanisms in sim-to-real transfer.
- Cross-domain learning and multi-task transfer in robotics.
- Robustness enhancement via VLA and other large-scale models in robotic systems.
- Benchmark environments and evaluation frameworks for real-world robust RL deployment.
We warmly invite researchers from the fields of robotics, reinforcement learning, and intelligent systems to contribute and to collectively advance the development of reliable, resilient, and autonomous robotic agents capable of operating in complex and uncertain real-world scenarios.
Article types and fees
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Data Report
Editorial
FAIR² Data
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.