%A Eppe, Manfred
%A Nguyen, Phuong D. H.
%A Wermter, Stefan
%D 2019
%J Frontiers in Robotics and AI
%G English
%K Hierarchical Architecture, planning, Robotics, neural networks, causal puzzles, reinforcement learning
%R 10.3389/frobt.2019.00123
%8 2019-November-26
%9 Original Research
%! From semantics to execution
%T From Semantics to Execution: Integrating Action Planning With Reinforcement Learning for Robotic Causal Problem-Solving
%U https://www.frontiersin.org/articles/10.3389/frobt.2019.00123
%V 6
%0 JOURNAL ARTICLE
%@ 2296-9144
%X Reinforcement learning is generally accepted as an appropriate and successful method for learning robot control. Symbolic action planning is useful for resolving causal dependencies and for breaking a causally complex problem down into a sequence of simpler high-level actions. A problem with integrating both approaches is that action planning is based on discrete high-level action and state spaces, whereas reinforcement learning is usually driven by a continuous reward function. Recent advances in model-free reinforcement learning, specifically universal value function approximators and hindsight experience replay, have focused on goal-independent methods based on sparse rewards that are given only at the end of a rollout, and only if the goal has been fully achieved. In this article, we build on these novel methods to facilitate the integration of action planning with model-free reinforcement learning. In particular, we show how reward sparsity can serve as a bridge between the high-level and low-level state and action spaces. As a result, we demonstrate that the integrated method can solve robotic tasks that involve non-trivial causal dependencies under noisy conditions, exploiting both data and knowledge.
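
The abstract's central technical device, using reward sparsity as the bridge between discrete planning and continuous control, can be illustrated with a short sketch. Below is a minimal, hypothetical Python example of a sparse goal-conditioned reward in the style of hindsight experience replay; the function name, signature, and tolerance value are illustrative assumptions, not code from the paper.

# Minimal sketch of a sparse, goal-conditioned reward as described in the
# abstract. Names and the tolerance are illustrative assumptions.
import numpy as np

def sparse_reward(achieved_goal, desired_goal, tolerance=0.05):
    """Return 0.0 only when the goal is fully achieved, else -1.0.

    Because the reward is binary, a symbolic planner can read a rollout's
    outcome as a discrete success/failure predicate for a high-level
    action, while the low-level policy still trains on continuous states.
    """
    distance = np.linalg.norm(np.asarray(achieved_goal) - np.asarray(desired_goal))
    return 0.0 if distance < tolerance else -1.0

# Usage: a high-level plan step such as "reach(goal)" counts as achieved
# iff the sparse reward at the end of the rollout is 0.
print(sparse_reward([0.1, 0.2, 0.3], [0.1, 0.2, 0.31]))  # 0.0, within tolerance
print(sparse_reward([0.0, 0.0, 0.0], [0.5, 0.0, 0.0]))   # -1.0, goal not reached

The design point is that the same binary signal is meaningful in both spaces: the reinforcement learner receives it as a (sparse) reward, and the planner can treat it as the truth value of a high-level postcondition.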