ORIGINAL RESEARCH article
Front. Robot. AI
Sec. Robotic Control Systems
Volume 12 - 2025 | doi: 10.3389/frobt.2025.1655171
This article is part of the Research Topic: Learning-based Advanced Solutions for Robot Autonomous Computing.
DreamerNav: Learning-Based Autonomous Navigation in Dynamic Indoor Environments Using World Models
Provisionally accepted
University College London, London, United Kingdom
Robust autonomous navigation in complex, dynamic indoor environments remains a central challenge in robotics, requiring agents to make adaptive decisions in real time under partial observability and uncertain obstacle motion. This paper presents DreamerNav, a robot-agnostic navigation framework that extends DreamerV3, a state-of-the-art world-model-based reinforcement learning algorithm, with multimodal spatial perception, hybrid global–local planning, and curriculum-based training. By formulating navigation as a Partially Observable Markov Decision Process (POMDP), the system enables agents to integrate egocentric depth images with a structured local occupancy map encoding dynamic obstacle positions, historical trajectories, points of interest, and a global A* path. A Recurrent State-Space Model (RSSM) learns stochastic and deterministic latent dynamics, supporting long-horizon prediction and collision-free path planning in cluttered, dynamic scenes. Training is carried out in high-fidelity, photorealistic simulation using NVIDIA Isaac Sim, with a curriculum that gradually increases task complexity to improve learning stability, sample efficiency, and generalization. We benchmark against NoMaD, ViNT, and A*, showing superior success rates and adaptability in dynamic environments. Real-world proof-of-concept trials on two quadrupedal robots without retraining further validate the framework's robustness and its platform independence across quadruped robots.
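To make the RSSM idea in the abstract concrete, the following is a minimal toy sketch of one Dreamer-style latent transition: a deterministic recurrent state combined with a stochastic latent, where the posterior is used when an observation is available (training) and the prior is used for imagined rollouts. All layer sizes, the single-layer tanh cells, and the random weights are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

H, Z, A, OBS = 8, 4, 2, 16  # hidden, stochastic latent, action, observation dims

# Randomly initialised weights stand in for learned parameters.
W_h = rng.normal(0, 0.1, (H, H + Z + A))       # deterministic transition
W_post = rng.normal(0, 0.1, (2 * Z, H + OBS))  # posterior mean / log-std
W_prior = rng.normal(0, 0.1, (2 * Z, H))       # prior mean / log-std

def rssm_step(h, z, a, obs=None):
    """One RSSM transition.

    With an observation, sample the posterior latent (used while training);
    without one, sample the prior (used when imagining rollouts).
    """
    h_next = np.tanh(W_h @ np.concatenate([h, z, a]))   # deterministic path
    if obs is not None:
        stats = W_post @ np.concatenate([h_next, obs])  # posterior stats
    else:
        stats = W_prior @ h_next                        # prior stats
    mean, log_std = stats[:Z], stats[Z:]
    z_next = mean + np.exp(log_std) * rng.normal(size=Z)  # reparameterised sample
    return h_next, z_next

# One observed step, then a short imagined rollout purely in latent space.
h, z = np.zeros(H), np.zeros(Z)
h, z = rssm_step(h, z, a=np.ones(A), obs=rng.normal(size=OBS))  # posterior step
for _ in range(5):                                              # prior ("dream") steps
    h, z = rssm_step(h, z, a=np.ones(A))
print(h.shape, z.shape)  # → (8,) (4,)
```

In a full world model, the imagined rollouts produced by the prior path are what let the policy plan long horizons without querying the simulator at every step.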
Keywords: Autonomous navigation, World Model Reinforcement Learning, Dynamic obstacle avoidance, Quadrupedal robots, path planning
Received: 27 Jun 2025; Accepted: 28 Aug 2025.
Copyright: © 2025 Shanks, Embley-Riches, Liu, Delfaki, Ciliberto and Kanoulas. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Stuart Shanks, University College London, London, United Kingdom
Jonathan Embley-Riches, University College London, London, United Kingdom
Jianheng Liu, University College London, London, United Kingdom
Dimitrios Kanoulas, University College London, London, United Kingdom
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.