Event Abstract

Predictable Feature Analysis

  • 1 Ruhr-Universität Bochum, Institute for Neural Computation, Germany

Slow feature analysis (SFA) is an algorithm that has proven valuable in several fields and problems of signal and data analysis. The idea is that a drastic, yet reasonable dimensionality reduction can be obtained by focusing on slowly varying sub-signals, the so-called “slow features”. Typical data-analysis and recognition tasks, such as regression and classification, become much more feasible on the reduced signal and can be applied afterwards.
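
As an illustration, the linear core of SFA can be sketched in a few lines: whiten the signal, then project onto the directions in which the whitened signal's time derivative varies least. This is only a minimal sketch of the principle (the function name and the toy data are our own, not taken from any SFA library):

```python
import numpy as np

def linear_sfa(x, n_features):
    """Minimal linear-SFA sketch: find unit-variance projections of the
    centered, whitened signal x whose discrete time derivative has the
    smallest variance, i.e. the slowest-varying directions."""
    x = x - x.mean(axis=0)                 # center
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    white = evecs / np.sqrt(evals)         # whitening matrix
    z = x @ white                          # whitened signal (unit covariance)
    dz = np.diff(z, axis=0)                # discrete time derivative
    dcov = np.cov(dz, rowvar=False)
    devals, devecs = np.linalg.eigh(dcov)  # eigenvalues in ascending order
    return z @ devecs[:, :n_features]      # slowest directions first

# toy signal: a slow sine and a fast sine, linearly mixed
t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)
fast = np.sin(50 * t)
mix = np.column_stack([slow + fast, slow - fast])
y = linear_sfa(mix, 1)
# the extracted feature should essentially recover the slow source (up to sign)
corr = abs(np.corrcoef(y[:, 0], slow)[0, 1])
```

On this toy mixture the single extracted feature correlates almost perfectly with the slow source, illustrating how SFA recovers slowly varying sub-signals without supervision.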

Our current research focuses on handling interactive scenarios, which involve notions of control, planning and decision-making. In order to perform any kind of planning or intelligent control, it is crucial to have a model that can estimate the consequences of possible actions. In control theory, such models are usually formulated as a set of (partial) differential equations in a problem-specific manner. In contrast, we follow an SFA-inspired approach that preserves the main advantages of SFA, namely its unsupervised nature and its ability to build a model in a self-organizing fashion. We aim to achieve this by replacing the objective of slowness with an objective of predictability, because predictability is, by definition, the property required of a consequence-estimating model. We call this approach “Predictable Feature Analysis” (PFA).

This work addresses the challenging problem of recognizing and extracting predictable features from an input signal. To this end, we first have to specify what “predictable” means. While model-independent notions exist in information theory (cf. the information bottleneck approach), we consider predictability with respect to a specific prediction model. In the current setup, we regard aspects of a signal as predictable if they can be predicted by a linear autoregressive model after an optional non-linear preprocessing. This results in a nested, and therefore rather involved, optimization problem: the extracted features must be optimized for predictability, but judging their predictability is an optimization problem in itself.
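
The nested structure can be made concrete with a deliberately simple two-dimensional toy sketch: the inner problem fits a linear autoregressive model to a candidate feature series by least squares, and the outer problem searches over projection directions for the most predictable one. All function names and the brute-force angle search below are illustrative assumptions, not part of any published PFA implementation:

```python
import numpy as np

def ar_prediction_error(y, order=1):
    """Inner problem: fit a linear autoregressive model to the feature
    series y by least squares and return its mean squared one-step
    prediction error."""
    past = np.column_stack([y[order - 1 - k: len(y) - 1 - k]
                            for k in range(order)])
    future = y[order:]
    coef, *_ = np.linalg.lstsq(past, future, rcond=None)
    return np.mean((future - past @ coef) ** 2)

def most_predictable_direction(x, n_angles=180):
    """Outer problem (2-D toy version): scan unit-norm projection
    directions and keep the one whose feature series the inner AR
    model predicts best."""
    best_w, best_err = None, np.inf
    for a in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        w = np.array([np.cos(a), np.sin(a)])
        y = x @ w
        y = (y - y.mean()) / y.std()  # unit variance, so errors are comparable
        err = ar_prediction_error(y)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

# toy data: one predictable AR(1) source linearly mixed with white noise
rng = np.random.default_rng(0)
n = 5000
s_pred = np.zeros(n)
for t in range(1, n):
    s_pred[t] = 0.95 * s_pred[t - 1] + 0.1 * rng.standard_normal()
s_noise = rng.standard_normal(n)
x = np.column_stack([s_pred, s_noise]) @ np.array([[1.0, 0.4], [0.6, 1.0]])
w, err = most_predictable_direction(x)
corr = abs(np.corrcoef(x @ w, s_pred)[0, 1])
```

Even this crude search recovers the predictable source from the mixture, but it only works because the outer problem is one-dimensional here; in realistic dimensionalities the nested optimization cannot be solved by exhaustive search, which is what motivates the relaxed formulation below.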

We present a tractable algorithm with relaxed constraints and some preliminary results on artificial data sets.


The authors acknowledge support from the German Federal Ministry of Education and Research within the National Network Computational Neuroscience - Bernstein Fokus: "Learning behavioral models: From human experiment to technical assistance", grant FKZ 01GQ0951.


Creutzig, F., and Sprekeler, H. (2008). Predictive Coding and the Slowness Principle: An Information-Theoretic Approach. Neural Computation, 20(4), 1026-1041.

Franzius, M., Sprekeler, H., and Wiskott, L. (2007). Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells. PLoS Computational Biology, 3(8), e166.

Berkes, P., and Wiskott, L. (2005). Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 5(6), 579-602.

Berkes, P., and Wiskott, L. (2002). Applying Slow Feature Analysis to Image Sequences Yields a Rich Repertoire of Complex Cell Properties. Proc. Intl. Conf. on Artificial Neural Networks (ICANN'02), Lecture Notes in Computer Science, 81-86. Ed. Dorronsoro, J. R. Springer.

Wiskott, L., Berkes, P., Franzius, M., Sprekeler, H., and Wilbert, N. (2011). Slow feature analysis. Scholarpedia, 6(4), 5282.

Zito, T., Wilbert, N., Wiskott, L., and Berkes, P. (2008). Modular toolkit for Data Processing (MDP): a Python data processing framework. Front. Neuroinform. 2:8. doi: 10.3389/neuro.11.008.2008.

Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis. Springer.

Haardt, M., Hüper, K., Moore, J., and Nossek, J. (1996). Simultaneous Schur Decomposition of several matrices to achieve automatic pairing in multidimensional harmonic retrieval problems. Signal Processing VIII: Theories and Applications, Proceedings of EUSIPCO-96, Trieste, Italy, Vol. 1, 531-534.

Cardoso, J. F., and Souloumiac, A. (1996). Jacobi angles for simultaneous diagonalization. SIAM J. Matrix Anal. Appl., 17(1), 161-164.

Golub, G. H., and van Loan, C. F. (1989). Matrix Computations, 2nd edition. Johns Hopkins University Press, Baltimore, MD.

Keywords: data analysis, dimensionality reduction, feature extraction, model building, predictability, self-organization, signal analysis, unsupervised learning

Conference: Bernstein Conference 2012, Munich, Germany, 12 Sep - 14 Sep, 2012.

Presentation Type: Poster

Topic: Data analysis, machine learning, neuroinformatics

Citation: Richthofer S, Weghenkel B and Wiskott L (2012). Predictable Feature Analysis. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference 2012. doi: 10.3389/conf.fncom.2012.55.00120

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 11 May 2012; Published Online: 12 Sep 2012.

* Correspondence: Mr. Stefan Richthofer, Ruhr-Universität Bochum, Institute for Neural Computation, Bochum, North Rhine-Westphalia, 44780, Germany, stefan.richthofer@gmx.de