ORIGINAL RESEARCH article
Front. Robot. AI
Sec. Robot Learning and Evolution
Coulomb Force-Guided Deep Reinforcement Learning for Effective and Explainable Robotic Motion Planning
Provisionally accepted
Affiliation: Ohio University, Athens, United States
Training mobile robots through digital twins with deep reinforcement learning (DRL) has gained increasing attention as a way to ensure efficient and safe navigation in complex environments. In this paper, we propose a novel physics-inspired DRL framework that achieves both effective and explainable motion planning. We represent the robot, destination, and obstacles as electrical charges and model their interactions using Coulomb forces. These forces are incorporated into the reward function, providing both attractive and repulsive signals to guide the robot's behavior. In addition, obstacle boundaries extracted from LiDAR segmentation are integrated as anticipatory rewards, allowing the robot to anticipate and avoid collisions from a distance. The proposed model is first trained in Gazebo simulation environments and subsequently deployed on a real TurtleBot 3 robot. Extensive experiments in both simulation and real-world scenarios demonstrate the effectiveness of the proposed framework. Results show that our method significantly reduces collisions, maintains safe distances from obstacles, and generates safer trajectories toward the destination.
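As a rough illustration of the Coulomb-force reward shaping described in the abstract, the sketch below shows how attractive and repulsive terms following Coulomb's law, F = k * q1 * q2 / r^2, could be combined into a shaping reward. This is a minimal sketch under assumed conventions, not the authors' exact formulation; the charge values, constant k, and function names are illustrative assumptions.

```python
import numpy as np

def coulomb_reward(robot_pos, goal_pos, obstacle_points,
                   q_robot=1.0, q_goal=-1.0, q_obstacle=1.0, k=1.0, eps=1e-3):
    """Illustrative Coulomb-style shaping reward (hypothetical, not the paper's exact terms).

    The robot and obstacles carry like charges (repulsion), while the goal carries
    an opposite charge (attraction). Each pairwise term follows Coulomb's law,
    F = k * q1 * q2 / r^2.
    """
    robot_pos = np.asarray(robot_pos, dtype=float)
    goal_pos = np.asarray(goal_pos, dtype=float)

    # Attractive term: opposite charges give a negative product, so negating it
    # yields a positive reward that grows as the robot approaches the goal.
    r_goal = np.linalg.norm(goal_pos - robot_pos) + eps
    attraction = -k * q_robot * q_goal / r_goal**2

    # Repulsive terms: like charges near LiDAR-segmented obstacle boundary points
    # produce penalties that grow rapidly as the robot gets close.
    repulsion = 0.0
    for p in obstacle_points:
        r_obs = np.linalg.norm(np.asarray(p, dtype=float) - robot_pos) + eps
        repulsion += k * q_robot * q_obstacle / r_obs**2

    return attraction - repulsion

# Example: robot 2 m from the goal and 0.5 m from a single obstacle point.
r = coulomb_reward(robot_pos=[0.0, 0.0], goal_pos=[2.0, 0.0],
                   obstacle_points=[[0.5, 0.0]])
```

Because the repulsive term scales with 1/r^2, penalties rise sharply near obstacle boundaries, which is how this kind of shaping can discourage the policy from approaching obstacles long before a collision occurs.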
Keywords: Coulomb force, deep reinforcement learning, Gazebo, LiDAR, motion planning, TurtleBot3
Received: 01 Sep 2025; Accepted: 15 Dec 2025.
Copyright: © 2025 Song, Bihl and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Jundong Liu
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.