%A Kinjo,Ken
%A Uchibe,Eiji
%A Doya,Kenji
%D 2013
%J Frontiers in Neurorobotics
%G English
%K optimal control,linearly solvable Markov decision process,model-based reinforcement learning,model learning,robot navigation
%R 10.3389/fnbot.2013.00007
%N 7
%8 2013-April-05
%9 Original Research
%+ Mr Ken Kinjo,Nara Institute of Science and Technology,Graduate school of Information science,8916-5 Takayama,Ikoma,630-0192,Nara,Japan,ken-k@oist.jp
%+ Mr Ken Kinjo,Okinawa Institute of Science and Technology,1919-1 Tancha Okinawa,Onna-son, Kunigami-gun,904-0495,Okinawa,Japan,ken-k@oist.jp
%! Evaluating LMDP with model learning
%T Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task
%U https://www.frontiersin.org/article/10.3389/fnbot.2013.00007
%V 7
%0 JOURNAL ARTICLE
%@ 1662-5218
%X The linearly solvable Markov decision process (LMDP) is a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using the knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that even a crude linear approximation of the non-linear dynamics can still allow the task to be solved, albeit at a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of the two wheels, while the neck joints are controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.