Adaptive Velocity Tuning on a Short Time Scale for Visual Motion Estimation
1. Honda Research Institute Europe GmbH, Germany
2. Technical University Darmstadt, Germany
Visual motion is a central perceptual cue that helps to improve object detection, scene interpretation and navigation. One major problem for visual motion estimation is the so-called aperture problem: visual movement cannot be estimated unambiguously from temporal correspondences between local intensity patterns alone. It is widely accepted that velocity-selective neurons in visual area MT resolve this ambiguity via a spatiotemporal integration of local motion information, which gives rise to the temporal dynamics of MT neuron responses. Several models have been proposed that reproduce these dynamical characteristics of MT neurons, e.g. [1]. All of these models are based on a set of motion detectors, each responding to the same retinotopic location but tuned to a different speed and direction. Together, the tunings sample the entire velocity space of interest densely and uniformly. For each retinotopic location, the number of motion detectors is assumed to be fixed, and the velocity tunings do not change over time.
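As an illustration of this conventional scheme, the following Python sketch (our own illustrative code, not taken from [1]; the function names, the patch-matching likelihood and all parameter values are assumptions) builds one fixed, uniform grid of velocity hypotheses shared by all locations and evaluates a simple matching likelihood over it:

```python
# Minimal sketch of the conventional fixed-bank approach: every retinotopic
# location carries the same dense, uniform grid of velocity hypotheses, and a
# likelihood for each hypothesis is obtained from temporal intensity matching.
import numpy as np

def velocity_grid(v_max=3, step=1):
    """Fixed, uniformly spaced velocity hypotheses (vx, vy), shared by all locations."""
    vs = np.arange(-v_max, v_max + 1, step)
    return np.array([(vx, vy) for vy in vs for vx in vs])   # shape (K, 2)

def local_likelihood(frame_prev, frame_curr, x, y, hypotheses, sigma=10.0, patch=2):
    """Likelihood of each velocity hypothesis at pixel (x, y) from patch matching."""
    lik = np.zeros(len(hypotheses))
    ref = frame_curr[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    for k, (vx, vy) in enumerate(hypotheses):
        xs, ys = int(x - vx), int(y - vy)            # where the patch would have come from
        cand = frame_prev[ys - patch:ys + patch + 1, xs - patch:xs + patch + 1].astype(float)
        if cand.shape != ref.shape:                  # hypothesis points outside the image
            lik[k] = 0.0
            continue
        err = np.sum((ref - cand) ** 2)
        lik[k] = np.exp(-err / (2 * sigma ** 2))
    return lik / (lik.sum() + 1e-12)                 # normalized over the fixed hypothesis set
```

In a model along the lines of [1], such local likelihoods are combined recursively with a spatiotemporal prior; the point relevant here is only that the hypothesis grid is identical at every location and never changes.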
Recent studies on the tuning of neurons in macaque area MT show that, even on a short time scale, the tunings of motion-sensitive neurons adapt strongly to the movement direction and to the temporal history of the speed of the current stimulus [2,3]. We propose a model for dynamic motion estimation that incorporates such a temporal adaptation of the response properties of the motion detectors. In contrast to existing models, it adapts not only the tuning of the motion detectors but also the number of detectors per image location.
To this end, we formulate a dynamic Bayesian filter with a transition probability that propagates velocity hypotheses over space and time, while the set of velocity hypotheses remains adaptable both in its size and in the velocity values themselves. In addition, we propose methods for adapting the number and the values of the velocity hypotheses based on the statistics of the motion detector responses. We discuss different adaptation techniques that either use velocity histograms or apply approximate expectation maximization to optimize the free parameters, in this case the velocity values and the set size. We show that reducing the number of velocity detectors while keeping them adaptive, so that they can cluster around the currently relevant velocities, has several advantages. The computational load is reduced by a factor of three while the accuracy of the estimate decreases only marginally. Moreover, motion outliers are suppressed and the estimation uncertainty is reduced, because the motion hypotheses are restricted to a minimal set that still describes the movement of the relevant scene parts.
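The following sketch illustrates the kind of adaptation step meant here (illustrative code under our own simplifying assumptions, not the exact algorithm of the abstract; the name adapt_hypotheses, the pruning threshold and the use of locally measured velocities are hypothetical choices). It performs one approximate EM-like update: each hypothesis is moved to the responsibility-weighted mean of the local velocity measurements it explains, and hypotheses with negligible total responsibility are pruned, so that the set both shrinks and clusters around the currently relevant velocities:

```python
# Minimal sketch (assumed details) of adapting the velocity hypothesis set from
# the statistics of the detector responses: an EM-like step re-places each
# hypothesis and drops rarely used ones, so the set can shrink over time.
import numpy as np

def adapt_hypotheses(hypotheses, posteriors, velocities_observed, prune_thresh=0.02):
    """
    hypotheses:          (K, 2) current velocity values of the detectors
    posteriors:          (N, K) posterior weight of each hypothesis at N image locations
    velocities_observed: (N, 2) locally measured (e.g. maximum-likelihood) velocities
    Returns a possibly smaller set of re-tuned velocity hypotheses.
    """
    # E-step statistics: total responsibility accumulated by each hypothesis
    mass = posteriors.sum(axis=0)                     # (K,)
    mass_norm = mass / (mass.sum() + 1e-12)

    # M-step: move each surviving hypothesis to the weighted mean of the
    # velocities it explains; prune detectors that explain almost nothing
    new_hyp = []
    for k in range(len(hypotheses)):
        if mass_norm[k] < prune_thresh:
            continue                                  # drop rarely used detectors
        w = posteriors[:, k:k + 1]                    # (N, 1) responsibilities
        new_hyp.append((w * velocities_observed).sum(axis=0) / (w.sum() + 1e-12))
    return np.array(new_hyp)
```

Replacing the weighted means by the peaks of a responsibility-weighted velocity histogram would give the histogram-based variant of the adaptation mentioned above.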
References
1. P. Burgi, A. Yuille and N. Grzywacz, Probabilistic Motion Estimation Based on Temporal Coherence, Neural Computation, 12, 1839-1867, 2000.
2. A. Kohn and J.A. Movshon, Adaptation changes the direction tuning of macaque MT neurons, Nature Neuroscience, 7, 764-772, 2004.
3. A. Schlack, B. Krekelberg and T. Albright, Recent History of Stimulus Speeds Affects the Speed Tuning of Neurons in Area MT, Journal of Neuroscience, 27, 11009-11018, 2007.
Conference:
Bernstein Conference on Computational Neuroscience, Frankfurt am Main, Germany, 30 Sep - 2 Oct, 2009.
Presentation Type:
Poster Presentation
Topic:
Abstracts
Citation:
Willert V and Eggert J (2009). Adaptive Velocity Tuning on a Short Time Scale for Visual Motion Estimation. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience. doi: 10.3389/conf.neuro.10.2009.14.024
Received:
25 Aug 2009;
Published Online:
25 Aug 2009.
* Correspondence:
Volker Willert, Honda Research Institute Europe GmbH, Offenbach, Germany, volker.willert@rtr.tu-darmstadt.de