Event Abstract

Learning Temporal Coincidences

  • 1 J.W.G University Frankfurt, Visual Sensorics and Information Processing Group, Germany

We propose a neuromorphic learning scheme for the fully automatic determination of the geometric and photometric relationships among retinae (or cameras in stereo and multi-camera systems). The method is based on learning a statistical model of local spatio-temporal image patches, detecting rare and expressive temporal events under this model, and matching these events across multiple views. Accumulating multi-image temporal coincidences of such events over time, in a reinforcement-learning style, allows the system to learn the desired geometric and photometric relations.
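The abstract does not specify the event model, so the following is only an illustrative sketch: a "rare temporal event" is taken to be a frame whose temporal intensity change at a pixel is an outlier under a simple Gaussian model of the frame differences, standing in for the learned spatio-temporal patch model. The threshold `z_thresh` is an assumption, not a value from the paper.

```python
import numpy as np

def detect_events(signal, z_thresh=3.0):
    """Flag frames whose temporal change is rare (illustrative stand-in
    for the learned spatio-temporal patch model; z_thresh is an assumed,
    not published, parameter)."""
    d = np.diff(signal)                              # temporal differences
    z = np.abs(d - d.mean()) / (d.std() + 1e-12)     # outlier score per frame step
    return np.flatnonzero(z > z_thresh) + 1          # frame indices of rare events

# One pixel's intensity over 50 frames: constant, then a sudden jump at frame 25.
signal = np.zeros(50)
signal[25:] = 5.0
print(detect_events(signal))  # -> [25]: only the jump counts as a rare event
```

Events found this way independently in each view are the raw material that the matching and accumulation stages below operate on.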
Our approach is based on the principle of temporal coincidences, which we put forward as a highly probable candidate both to explain the emergence of correspondence finding in binocular biological vision and to enable the automatic determination of geometric and photometric correspondences between multiple views in a technical vision system. Corresponding receptors in different views often carry a temporal signature that is far more characteristic, individual, and distinctive than the local spatial area around them. Furthermore, temporal signatures are subject to photometric transformations, but not to geometric transformations along the time axis; they can therefore be expected to be associated across images much more reliably. The resulting correspondence distributions allow the overall image-to-image mapping to be found, and stereo and multi-camera algorithms to be parameterized fully automatically.
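The photometric invariance of temporal signatures can be made concrete with zero-mean normalized cross-correlation, a standard similarity measure (used here as an illustrative choice; the abstract does not name the matching score): it is unaffected by an affine photometric change between the views, i.e. by differing camera gain and offset.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two temporal signatures.
    Invariant to affine photometric changes b' = gain * b + offset,
    so signatures match across views despite different camera settings.
    (Assumes non-constant signals; a constant signature has zero std.)"""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
sig = rng.standard_normal(50)      # intensity over 50 frames at one receptor
transformed = 2.5 * sig + 10.0     # same scene point in view 2: new gain/offset
print(zncc(sig, transformed))              # -> 1.0 (up to float rounding)
print(zncc(sig, rng.standard_normal(50)))  # near 0 for an unrelated receptor
```

A spatial patch descriptor would additionally have to cope with geometric distortion between the views; a temporal signature, read out at a single receptor, does not.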
To estimate a correspondence distribution for a single pixel between two views, we detect spatio-temporal events at the considered pixel in the first view and match them against all events occurring in a second view. Successfully matched events are then accumulated within an accumulator array. Depending on the overall scene structure, the accumulator can encode three different correspondence types: (i) if depth variations are low, the accumulator shows a sharp peak marking the true spatial correspondence; (ii) if depth variations are high, the accumulator in general contains an elongated peak, namely the part of the epipolar ray that is actually attained by 3D points in the scene; (iii) if no correspondence exists, the accumulator contains a noisy, scattered structure. Based on an eigenvalue analysis of each accumulator's covariance matrix, the accumulators can be classified according to cases (i)-(iii).
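The eigenvalue-based classification can be sketched as follows: treat the accumulator as a 2D mass distribution, compute the covariance of that distribution, and compare its eigenvalues. Both eigenvalues small means a sharp peak (case i), one eigenvalue dominating means an elongated, epipolar-ray-like structure (case ii), and both large means scatter (case iii). The thresholds are illustrative assumptions, not values from the abstract.

```python
import numpy as np

def classify_accumulator(acc, spread_thresh=4.0, elong_thresh=10.0):
    """Classify a correspondence accumulator via the eigenvalues of the
    covariance of its mass distribution (thresholds are assumed, not
    taken from the paper)."""
    w = acc / acc.sum()                       # normalize to a distribution
    ys, xs = np.indices(acc.shape)
    mx, my = (xs * w).sum(), (ys * w).sum()   # center of mass
    dx, dy = xs - mx, ys - my
    cov = np.array([[(w * dx * dx).sum(), (w * dx * dy).sum()],
                    [(w * dx * dy).sum(), (w * dy * dy).sum()]])
    lam_min, lam_max = np.linalg.eigvalsh(cov)  # eigenvalues, ascending
    if lam_max < spread_thresh:
        return "point"   # case (i): sharp peak, unique correspondence
    if lam_max > elong_thresh * max(lam_min, 1e-9):
        return "line"    # case (ii): elongated peak along the epipolar ray
    return "none"        # case (iii): scattered, no correspondence

acc = np.zeros((64, 64))
acc[30, 30] = 100.0                   # all mass in one bin
print(classify_accumulator(acc))      # -> point

acc = np.zeros((64, 64))
acc[32, 10:60] = 1.0                  # mass spread along one row
print(classify_accumulator(acc))      # -> line

rng = np.random.default_rng(1)
print(classify_accumulator(rng.random((64, 64))))  # -> none
```

In case (ii) the dominant eigenvector additionally gives the direction of the epipolar ray segment, so the classification and the geometric information come from the same covariance analysis.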
The scheme is shown to work for stationary as well as moving cameras with strongly differing viewpoints and camera settings, including substantial rotation, translation, and amplitude scaling. The only constraint is that the relative orientation between the cameras remains fixed.

Keywords: computational neuroscience

Conference: Bernstein Conference on Computational Neuroscience, Berlin, Germany, 27 Sep - 1 Oct, 2010.

Presentation Type: Presentation

Topic: Bernstein Conference on Computational Neuroscience

Citation: Conrad C, Guevara A and Mester R (2010). Learning Temporal Coincidences. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience. doi: 10.3389/conf.fncom.2010.51.00120

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 01 Sep 2010; Published Online: 23 Sep 2010.

* Correspondence: Dr. Christian Conrad, J.W.G University Frankfurt, Visual Sensorics and Information Processing Group, Frankfurt, Germany, christian.conrad@vsi.cs.uni-frankfurt.de