Original Research Article
Autonomous Sequence Generation for a Neural Dynamic Robot: Scene Perception, Serial Order, and Object-Oriented Movement
- 1Institute of Neuroinformatics, Ruhr-University Bochum, Germany
- 2UMR7503 Laboratoire lorrain de recherche en informatique et ses applications (LORIA), France
- 3Institute of Neuroinformatics, ETH Zurich, Switzerland
Neurally inspired robotics already has a long history that includes reactive systems emulating reflexes, neural oscillators to generate movement patterns, and neural networks as trainable filters for high-dimensional sensory information. Neural inspiration has been less successful at the level of cognition. Decision-making, planning, building and using memories, for instance, are more often addressed in terms of computational algorithms than through neural process models. To move neural process models beyond reactive behavior toward cognition, the capacity to autonomously generate sequences of processing steps is critical.
We review a potential solution to this problem that is based on strongly recurrent neural networks described as neural dynamic systems. Their stable states perform elementary motor or cognitive functions while coupled to sensory inputs. The state of the neural dynamics transitions to a new motor or cognitive function when a previously stable neural state becomes unstable.
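This instability-driven transition can be illustrated with a minimal one-node neural dynamics. The sketch below is an illustration of the general principle, not the architecture reviewed here; the parameter values (resting level, self-excitation strength, sigmoid steepness) are assumptions chosen to make the node bistable. A transient input destabilizes the "off" state, and the node settles into the "on" attractor, where it remains even after the input is removed:

```python
import numpy as np

def sigmoid(u, beta=4.0):
    """Sigmoidal output nonlinearity with steepness beta."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def simulate(steps, u0, h, c, inp, tau=10.0, dt=1.0):
    """Euler-integrate tau * du/dt = -u + h + c*sigmoid(u) + inp."""
    u = u0
    for _ in range(steps):
        u += dt / tau * (-u + h + c * sigmoid(u) + inp)
    return u

# With resting level h = -2 and self-excitation c = 4, the node is
# bistable: an "off" attractor near u = -2 and an "on" attractor near u = 2.
off = simulate(2000, u0=-2.0, h=-2.0, c=4.0, inp=0.0)
on = simulate(2000, u0=2.0, h=-2.0, c=4.0, inp=0.0)

# A transient input boost destabilizes the "off" state; the node
# transitions to the "on" attractor and stays there after the input ends.
boosted = simulate(2000, u0=-2.0, h=-2.0, c=4.0, inp=3.0)
settled = simulate(2000, u0=boosted, h=-2.0, c=4.0, inp=0.0)
```

The key property is hysteresis: once the transient input has driven the node past the unstable fixed point separating the two attractors, removing the input does not return the system to the "off" state, so the transition is a discrete, persistent event generated by continuous dynamics.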
Only when a neural robotic system is capable of acting autonomously does it become useful to a human user.
We demonstrate how a neural dynamic architecture that supports autonomous sequence generation can engage in such interaction. A human user presents colored objects to the robot in a particular order, thus defining a serial order of color concepts. The user then exposes the system to a visual scene that contains the colored objects in a new spatial arrangement. The robot autonomously builds a scene representation by sequentially bringing objects into the attentional foreground. Scene memory updates if the scene changes. The robot performs visual search and then reaches for the objects in the instructed serial order. In doing so, the robot generalizes across time and space, is capable of waiting when an element is missing, and updates its action plans online when the scene changes. The entire flow of behavior emerges from a time-continuous neural dynamics without any controlling or supervisory algorithm.
Keywords: neural dynamic modeling, autonomous robot, sequence generation, scene perception, reaching movement
Received: 16 Apr 2019;
Accepted: 28 Oct 2019.
Copyright: © 2019 Tekülve, Fois, Sandamirskaya and Schöner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Mr. Jan Tekülve, Institute of Neuroinformatics, Ruhr-University Bochum, Bochum, North Rhine-Westphalia, Germany, email@example.com