
Editorial ARTICLE, Provisionally accepted

Front. Neurorobot. | doi: 10.3389/fnbot.2019.00083

Machine Learning Methods for High-Level Cognitive Capabilities in Robotics

  • 1Ritsumeikan University, Japan
  • 2Boğaziçi University, Turkey
  • 3Waseda University, Japan
  • 4Osaka University, Japan
  • 5Imperial College London, United Kingdom

Adaptive learning and the emergence of integrative cognitive systems that involve not only low-level but also high-level cognitive capabilities are crucially important in robotics [1,2,3,4,5,6]. Recent advances in machine learning methods, e.g., deep learning and hierarchical Bayesian modeling, enable us to develop cognitive systems that integrate multi-level sensory-motor and cognitive capabilities. Low-level cognitive capabilities include sensory perception, physical control, and behavioral motion generation, while high-level cognitive capabilities include logical inference, planning, and language acquisition. To create robots that can deal with uncertainty in our daily environment, it is essential to develop machine learning methods that can integrate low-level and high-level capabilities.

Following the successfully organized "Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics 2016" held at IEEE-IROS 2016 1, we organized this Research Topic. We aimed to publish original papers on state-of-the-art machine learning methods that contribute to modeling sensory-motor and cognitive capabilities in robotics. We are pleased to present nine research articles related to motor and behavior learning, concept formation, language acquisition, and cognitive architecture. In this section, we briefly introduce each paper.

First, three papers focus on action and behavior learning. Imitation learning is an important topic related to the integration of high-level and low-level cognitive capabilities because it enables a robot to acquire behavioral primitives from social interaction, including the observation of human behaviors. Nakajo et al. proposed a machine learning method for viewpoint transformation and action mapping using a neural network with an encoder-decoder, i.e., sequence-to-sequence, architecture. In imitation learning, the demonstrator and the imitator have different perspectives.
The method addresses this problem and produced successful results. Nakamura et al. proposed a new machine learning method called the Gaussian process-hidden semi-Markov model (GP-HSMM). GP-HSMM can segment continuous motion trajectories without defining a parametric model for each primitive. It combines a Gaussian process, which is a regression method based on Bayesian nonparametrics, with a hidden semi-Markov model. This method enables a robot to find motion primitives in complex human motion in an imitation learning scenario. Manipulation using the left and right arms is an essential capability for a cognitive robot. Zhang et al. proposed a neural-dynamics-based synchronous-optimization scheme for manipulators, and demonstrated that the method enables a robot to track complex paths.

Second, two papers focus on the relationship between actions and object concepts. Andries et al. proposed a formalism for defining and identifying affordance equivalence. The concept of affordance can be regarded as a relationship between an actor, an action performed by this actor, an object on which the action is performed, and the resulting effect. Learning affordances, i.e., the inter-dependency between actions and object concepts, is an important topic in this field. Taniguchi et al. proposed a new active perception method based on the multimodal hierarchical Dirichlet process, which is a hierarchical Bayesian model for multimodal object concept formation. An important aspect of the approach is that the policy for active perception is derived from the results of unsupervised learning, without any manually designed label data or reward signals.

Third, three papers are related to language acquisition and concept formation. Hagiwara et al. proposed a hierarchical spatial concept formation method based on hierarchical multimodal latent Dirichlet allocation (hMLDA).
They demonstrated that, using hMLDA, a robot could form concepts for places having a hierarchical structure, e.g., "around a table" as a part of "dining room," and became able to understand utterances indicating places in a domestic environment given by a human user. Yamada et al. described a representation learning method that enables a robot to understand not only action-related words but also logical words, e.g., "or," "and," and "not." They introduced a neural network with an encoder-decoder architecture and obtained successful and suggestive results. Taniguchi et al. proposed a new multimodal cross-situational learning method for language acquisition. A robot became able to estimate the meaning of each word in relation to the modality via which the word is grounded.

The final paper presents a framework for cognitive architectures based on hierarchical Bayesian models. Nakamura et al. proposed the Symbol Emergence in Robotics tool KIT (Serket), which can effectively integrate many cognitive modules developed using hierarchical Bayesian models, i.e., probabilistic generative models, without the re-implementation of each module. Integrating low-level and high-level cognitive capabilities into an integrative cognitive system requires researchers and developers to construct very complex software modules, which is expected to cause practical problems. Serket can be regarded as a practical solution to this problem and is expected to push the research field forward.

With the tremendous success of the past three special issues of this Research Topic, we organized follow-up workshops 2 and a research topic 3. Two survey papers related to the series of workshops have already been published [4,7]. We will also organize a workshop with a special emphasis on deep probabilistic generative models 4. We believe that in order to create an artificial cognitive system, i.e., a robot, it is important to integrate low-level and high-level cognitive capabilities based on machine learning methods. We hope that this special issue will contribute to accelerating robotics and machine learning studies that aim to create human-like cognitive systems that can behave in our real-world environment in collaboration with people.

Keywords: Machine learning, Cognitive Robotics, Language acquisition, neural networks, cognitive architecture, Probabilistic models, robot learning

Received: 19 Aug 2019; Accepted: 25 Sep 2019.

Copyright: © 2019 Taniguchi, Ugur, Ogata, Nagai and Demiris. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Prof. Tadahiro Taniguchi, Ritsumeikan University, Kyoto 604-8520, Japan, taniguchi@ci.ritsumei.ac.jp