Event Abstract

Scene Representation Based on Dynamic Field Theory: From Human to Machine

  • 1 Ruhr-Universität Bochum, Institut für Neuroinformatik, Germany

A typical human-robot cooperation task puts an autonomous robot in the role of an assistant for the human user. Fulfilling requests of the human user, such as “hand me the red screwdriver”, demands an internal representation of the spatial layout of the scene as well as an understanding of labels and identifiers like “red” associated with the spatial information. In addition, the representation of the scene must be updated according to ongoing changes in the outside world.

Studies investigating the nature and extent of humans’ internal representations [1] suggest that we keep only a limited, non-pictorial representation of our perception in memory. Experimental studies of human visual working memory using a change detection paradigm [2] identify space as the key dimension for binding multiple features of an object. Dynamic Field Theory, a theory of embodied cognition [4], provides process models that explain such results. These models are based on Dynamic Neural Fields, which model neural activity in the human cortex defined over continuous metric dimensions. Applying the theory to autonomous robots provides a guideline for designing a neurally plausible robotic scene representation.

We approach the problems of building up, maintaining, and updating a robotic scene representation with an architecture built from a set of Dynamic Neural Fields [5]. At the core of this architecture are three-dimensional Dynamic Neural Fields which, inspired by [2], dynamically associate two-dimensional object locations in an allocentric reference frame with extracted low-dimensional object features such as color. The unit of information representing an association in these fields is localized supra-threshold activity, called a peak, which provides both a detection decision on the input and an estimate of the continuous parameters, such as spatial position or color hue, represented by the field’s dimensions. Through the architectural connections, the three-dimensional fields are sequentially filled with associative peaks as soon as the robot perceives a scene. Exploiting the stability characteristics of the fields and a steady coupling to the current camera input, changes in object positions are tracked even for multiple moving objects, objects are memorized when they move out of view, and associations are removed automatically once an object is removed from the scene.

The resulting application is tested on the robotic platform CoRA (Cooperative Robotic Assistant). The peaks resulting from an autonomous scanning sequence are successfully held in the three-dimensional associative Dynamic Neural Field, even when objects move out of view. Here, the application shows a capacity limit of four to five objects that can be kept concurrently in the scene representation. If multiple objects are moved within the robot’s field of view, a smaller number of objects can be tracked simultaneously: the capacity limit drops to three objects for multi-object tracking, where not only the spatial positions of the objects are updated correctly, but the stored feature associations are carried along as well. A similar decrease in capacity from static to moving objects is observed in humans [3].
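To make the field dynamics underlying this architecture concrete, the following is a minimal sketch (not the CoRA implementation) of a one-dimensional Dynamic Neural Field with Amari-type dynamics in Python, assuming a Gaussian excitatory interaction kernel with global inhibition; all parameter values are illustrative, not taken from the architecture above. Localized input drives the field above threshold, and the lateral interaction stabilizes a self-sustained peak whose position encodes the estimated metric value.

    import numpy as np

    def gaussian(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    # Field discretized over one metric dimension (e.g., color hue, 0..100).
    x = np.linspace(0.0, 100.0, 101)
    dx = x[1] - x[0]

    h = -5.0                      # resting level: below threshold without input
    tau = 10.0                    # time constant of the field dynamics
    beta = 4.0                    # steepness of the sigmoid output nonlinearity
    c_exc, sigma_exc = 2.0, 3.0   # local excitatory interaction (illustrative)
    c_inh = 0.25                  # global inhibition (illustrative)

    # Interaction kernel: local excitation; global inhibition applied separately.
    kernel = c_exc * gaussian(x[:, None], x[None, :], sigma_exc)

    u = np.full_like(x, h)               # field activation u(x)
    s = 7.0 * gaussian(x, 40.0, 3.0)     # localized external input, e.g. a camera cue

    dt = 1.0
    for _ in range(500):
        f = 1.0 / (1.0 + np.exp(-beta * u))              # sigmoided field output
        lateral = kernel @ f * dx - c_inh * f.sum() * dx  # excitation minus inhibition
        u += dt / tau * (-u + h + s + lateral)            # Amari dynamics (Euler step)

    # A supra-threshold peak is both a detection decision and a metric estimate.
    if u.max() > 0:
        print("peak detected at", x[np.argmax(u)])
    else:
        print("no peak: input too weak for a detection decision")

In the architecture described above, the same mechanism operates over three dimensions (two spatial dimensions plus a feature dimension such as hue), so that a single peak represents a bound space-feature association.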

References

[1] A. Hollingworth, C. C. Williams, and J. M. Henderson. To see and remember: Visually specific information is retained in memory from previously attended objects in natural scenes. Psychonomic Bulletin & Review, 8(4):761, 2001.

[2] J. S. Johnson, J. P. Spencer, S. J. Luck, and G. Schöner. A dynamic neural field model of visual working memory and change detection. Psychological Science, 20:568–577, 2009.

[3] J. Saiki. Feature binding in object-file representations of multiple moving items. Journal of Vision, 3(1):6–21, 2003.

[4] S. Schneegans and G. Schöner. Dynamic Field Theory as a framework for understanding embodied cognition. In Handbook of Cognitive Science: An Embodied Approach, chapter 13, pages 241–271. Elsevier, 2008.

[5] S. K. U. Zibner, C. Faubel, I. Iossifidis, and G. Schöner. Scene representation for anthropomorphic robots: A dynamic neural field approach. In ISR / ROBOTIK 2010, Munich, Germany, 2010.

Keywords: computational neuroscience

Conference: Bernstein Conference on Computational Neuroscience, Berlin, Germany, 27 Sep - 1 Oct, 2010.

Presentation Type: Presentation

Topic: Bernstein Conference on Computational Neuroscience

Citation: Zibner SK, Faubel C, Iossifidis I and Schöner G (2010). Scene Representation Based on Dynamic Field Theory: From Human to Machine. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience. doi: 10.3389/conf.fncom.2010.51.00019

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 22 Sep 2010; Published Online: 23 Sep 2010.

* Correspondence: Dr. Stephan K Zibner, Ruhr-Universität Bochum, Institut für Neuroinformatik, Bochum, Germany, stephan.zibner@ini.rub.de