Event Abstract

Learning Coordinated Eye and Head Movements: Unifying Principles and Architectures

  • 1 Johann Wolfgang Goethe University, Frankfurt Institute for Advanced Studies, Germany
  • 2 University of Hamburg, Faculty of Mathematics, Informatics and Natural Sciences, Germany

Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. The minimum-variance principle of Harris and Wolpert [1] assumes additive white noise on the neural command signal whose variance is proportional to the power of the signal. Minimizing the variance of the final eye position driven by such a command signal yields biologically plausible eye movement characteristics. A second optimality principle, based on a minimum-effort rule, likewise produces realistic behavior [2]. In both studies, however, the neural substrate of the underlying computations is left unspecified.
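The core of the minimum-variance idea can be illustrated with a toy simulation (this sketch is not from the abstract; the integrator plant, the noise gain `k`, and the two example commands are assumptions for illustration). When the noise standard deviation scales with the command magnitude, a brisk high-amplitude command produces a more variable final position than a smoother command delivering the same total drive:

```python
import numpy as np

rng = np.random.default_rng(0)

def final_position_variance(u, k=0.2, n_trials=5000, dt=1.0):
    """Monte Carlo estimate of final-position variance when the command
    u(t) is corrupted by additive noise whose standard deviation grows
    with |u(t)| (signal-dependent noise). The plant is a pure integrator."""
    noise = rng.normal(size=(n_trials, len(u))) * k * np.abs(u)
    finals = dt * (u + noise).sum(axis=1)   # integrate the noisy command
    return finals.var()

# Two commands with equal total drive: a brisk, high-amplitude burst and a
# smoother, sustained one. Signal-dependent noise penalizes the brisk one.
brisk = np.array([4.0, 4.0, 0.0, 0.0])
smooth = np.array([2.0, 2.0, 2.0, 2.0])
var_brisk = final_position_variance(brisk)
var_smooth = final_position_variance(smooth)
```

Minimizing final-position variance under such noise therefore favors smooth, bell-shaped velocity profiles of the kind observed in real saccades.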

At the same time, several models of the neural substrate underlying the generation of such movements have been suggested. These include feedback models that use gaze-error feedback control to direct the gaze to the desired orientation [3,4], and feedforward models that rely on the dynamics of their burst-generator model neurons to produce the control signal [5,6]. These models, however, do not include a biologically plausible learning rule as a mechanism of optimization.

Here, we unify the two methodologies by introducing an open-loop neural controller with a biologically plausible adaptation mechanism that minimizes a proposed cost function. The cost function consists of two terms: one penalizes the visual error integrated over time, and the other penalizes large control signals through the weight values of the neural architecture; the latter term reflects the effect of signal-dependent noise [1]. The adaptation mechanism uses local weight-adaptation rules to gradually optimize the behavior with respect to the cost function.
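The structure of such a two-term cost can be sketched in a minimal toy model (an assumption-laden illustration, not the abstract's actual architecture: the linear controller, the integrator plant, and the finite-difference descent standing in for the local adaptation rules are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def trial_error(w, x, target, dt=0.01):
    """Time-integrated squared visual error when the open-loop command
    u(t) = w . x(t) drives a simple integrator plant toward the target."""
    pos, err = 0.0, 0.0
    for xt in x:
        pos += dt * float(w @ xt)          # integrator plant
        err += dt * (target - pos) ** 2    # accumulated visual error
    return err

def cost(w, x, target, lam=0.05):
    """Two-term cost: integrated visual error plus a quadratic weight
    penalty standing in for the effect of signal-dependent noise."""
    return trial_error(w, x, target) + lam * float(w @ w)

def adapt(w, x, target, eta=0.5, eps=1e-4):
    """One descent step on the cost (finite differences here; the local
    adaptation rules described in the abstract are only stood in for)."""
    grad = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        grad[i] = (cost(w + d, x, target) - cost(w - d, x, target)) / (2 * eps)
    return w - eta * grad

x = rng.normal(size=(100, 3))   # fixed presynaptic activity over one trial
target = 1.0
w = np.zeros(3)
for _ in range(200):
    w = adapt(w, x, target)
initial_cost = cost(np.zeros(3), x, target)
final_cost = cost(w, x, target)
```

Repeated trials drive the weights toward a trade-off between accurate, fast orienting and small command signals, which is the balance the two cost terms encode.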

Simulations show that the characteristics of the coordinated eye and head movements generated by our model match experimental data in many respects, including the relationships between amplitude, duration, and peak velocity in the head-restrained condition, and the relative contributions of eye and head to the total gaze shift in the head-free condition. The proposed model is not restricted to eye and head movements and may be suitable for learning various ballistic movements.

References

[1] Harris, C. M., and Wolpert, D. M. Signal-dependent noise determines motor planning. Nature 394: 780-784, 1998.
[2] Kardamakis, A. A., and Moschovakis, A. K. Optimal control of gaze shifts. J Neurosci 29: 7723-7730, 2009.
[3] Guitton, D., Munoz, D. P., and Galiana, H. L. Gaze control in the cat: studies and modeling of the coupling between orienting eye and head movements in different behavioral tasks. J Neurophysiol 64: 509-531, 1990.
[4] Goossens, H. H. L. M., and Van Opstal, A. J. Human eye-head coordination in two dimensions under different sensorimotor conditions. Exp Brain Res 114: 542-560, 1997.
[5] Freedman, E. G. Interactions between eye and head control signals can account for movement kinematics. Biol Cybern 84: 453-462, 2001.
[6] Kardamakis, A. A., Grantyn, A., and Moschovakis, A. K. Neural network simulations of the primate oculomotor system. V. Eye-head gaze shifts. Biol Cybern 102: 209-225, 2010.

Keywords: computational neuroscience

Conference: Bernstein Conference on Computational Neuroscience, Berlin, Germany, 27 Sep - 1 Oct, 2010.

Presentation Type: Presentation

Topic: Bernstein Conference on Computational Neuroscience

Citation: Saeb S, Weber C and Triesch J (2010). Learning Coordinated Eye and Head Movements: Unifying Principles and Architectures. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience. doi: 10.3389/conf.fncom.2010.51.00065

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 16 Sep 2010; Published Online: 23 Sep 2010.

* Correspondence: Dr. Sohrab Saeb, Johann Wolfgang Goethe University, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany, saeb@fias.uni-frankfurt.de