
Original Research Article, provisionally accepted. The full text will be published soon.

Front. Neurorobot. | doi: 10.3389/fnbot.2017.00066

Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

 Akira Taniguchi1*, Tadahiro Taniguchi1 and Angelo Cangelosi2
  • 1The Graduate School of Information Science and Engineering, Ritsumeikan University, Japan
  • 2School of Computing and Mathematics, Plymouth University, United Kingdom

In this paper, we propose a Bayesian generative model that forms multiple categories for each sensory channel and associates words with any of four sensory channels (action, position, object, and color).
This paper focuses on cross-situational learning that exploits the co-occurrence between words and sensory-channel information in situations more complex than those of conventional cross-situational learning.
We conducted learning scenarios in a simulator and with a real humanoid iCub robot.
In each scenario, a human tutor gave the robot a sentence describing an object of visual attention and an accompanying action.
The scenarios were set as follows: the number of words per sensory channel was three or four, and the number of learning trials was 20 and 40 for the simulator and 25 and 40 for the real robot.
The experimental results showed that the proposed method accurately estimated the multiple categorizations and learned the relationships between words and the multiple sensory channels.
In addition, we conducted an action generation task and an action description task based on the word meanings learned in the cross-situational learning scenario.
The results showed that the robot could successfully use the word meanings learned with the proposed method.
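As a toy illustration of the co-occurrence principle behind cross-situational learning (this is a deliberately simplified sketch, not the paper's Bayesian generative model; all words and channel values below are hypothetical examples): across several ambiguous situations, a word's meaning can be recovered as the sensory-channel value it co-occurs with most consistently.

```python
from collections import defaultdict

def learn(situations):
    """Accumulate word/value co-occurrence counts across situations."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, values in situations:
        for w in words:
            for v in values:
                counts[w][v] += 1
    return counts

def meaning(counts, word):
    """Return the channel value most often co-occurring with `word`."""
    return max(counts[word], key=counts[word].get)

# Each situation pairs an utterance with observed sensory-channel values.
situations = [
    (["grasp", "red", "box"],   ["action:grasp", "color:red", "object:box"]),
    (["push", "red", "ball"],   ["action:push", "color:red", "object:ball"]),
    (["grasp", "blue", "ball"], ["action:grasp", "color:blue", "object:ball"]),
]

counts = learn(situations)
print(meaning(counts, "red"))    # "red" co-occurs with color:red twice
print(meaning(counts, "grasp"))  # "grasp" co-occurs with action:grasp twice
```

The Bayesian model in the paper replaces these raw counts with posterior inference over categories per sensory channel, which also handles noisy, continuous sensor data rather than discrete symbols.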

Keywords: Bayesian model, cross-situational learning, lexical acquisition, multimodal categorization, symbol grounding, word meaning

Received: 17 Jul 2017; Accepted: 21 Nov 2017.

Edited by:

Frank Van Der Velde, University of Twente, Netherlands

Reviewed by:

Yulia Sandamirskaya, University of Zurich, Switzerland
Maxime Petit, Imperial College London, United Kingdom  

Copyright: © 2017 Taniguchi, Taniguchi and Cangelosi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mr. Akira Taniguchi, Ritsumeikan University, The Graduate School of Information Science and Engineering, 3F Creation-Core, BKC, 1-1-1 Noji-Higashi, Kusatsu, Shiga, 525-8577, Japan.