About this Research Topic
The brain is a complex system that adapts to sensory stimuli, allowing it to use information efficiently to solve difficult perceptual tasks. However, the principles, constraints, and goals by which the brain processes those stimuli remain an open question.
One possible principle that has been put forward is symmetry. Formally, symmetry is the property of an object (physical or mathematical) of remaining unchanged (invariant) under a set of operations or transformations. In the context of vision, and in particular of object categorization, it consists of the brain's ability to generate a representation of the essential structural character of an object that is invariant to transformations in appearance across many different points of view.
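The formal definition above can be made concrete with a toy sketch (our own illustration, not part of the original text): a function is invariant under a group of transformations when applying any transformation to its input leaves the output unchanged. Here the Euclidean norm plays the role of the invariant representation and 2-D rotations play the role of the transformation group.

```python
import numpy as np

def rotate(x, theta):
    """Apply a 2-D rotation by angle theta (a group transformation)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ x

x = np.array([3.0, 4.0])
f = np.linalg.norm  # f(x) = ||x||, invariant under rotation

# Invariance: f(g(x)) == f(x) for every transformation g in the group.
for theta in (0.5, 1.0, 2.0):
    assert np.isclose(f(rotate(x, theta)), f(x))
```

In the same spirit, an object-categorization circuit would be invariant when its identity read-out is unchanged by viewpoint transformations of the image.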
More generally, variations that are irrelevant to a task can be regarded as symmetries of that task, and populations of neurons may be able to exploit these symmetries to compress information effectively.
The question of whether and how symmetries, in sensory inputs or their statistics, shape the brain's information processing has attracted considerable interest recently, notably due to the surge of deep convolutional neural networks. These algorithms, whose architecture mimics that of the visual cortex, show impressive performance on many different tasks, most strikingly visual recognition. They seem to have an implicit bias towards invariance to transformations of the input and selectivity for complex, high-level features. To date, deep convolutional networks are considered the best models for predicting neural responses, suggesting that the brain may use analogous algorithms.
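The bias of convolutional networks towards invariance can be illustrated with a minimal numpy sketch (the signal and kernel below are made up for illustration): convolution is translation-equivariant, meaning a shift of the input produces the same shift of the feature map, and a global pooling stage on top discards the shift entirely, yielding a translation-invariant response.

```python
import numpy as np

def conv1d(x, k):
    """Valid-mode 1-D cross-correlation (the operation used in CNN layers)."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

x = np.zeros(16)
x[4:7] = [1.0, 2.0, 1.0]            # a small "object" in the signal
k = np.array([1.0, 2.0, 1.0])       # a feature detector for that object
x_shift = np.roll(x, 3)             # the same object, translated

# Equivariance: the feature map shifts along with the input.
assert np.allclose(np.roll(conv1d(x, k), 3), conv1d(x_shift, k))

# Invariance: global max pooling over the feature map discards the shift.
assert conv1d(x, k).max() == conv1d(x_shift, k).max()
```

Stacks of such layers interleaved with pooling are what gives deep convolutional networks their apparent invariance to input transformations, at the cost of the hard-wired connectivity discussed next.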
However, these models are biologically implausible in several respects. One limitation of particular interest for the present Research Topic is the following: the neurons' synaptic connectivity is not learned but hard-wired to implement convolutions, and synaptic strength updates are based on static stimuli, ignoring temporal continuity, which is crucial for capturing the notion of object transformations.
New computational models are needed that go beyond convolutional ones and incorporate local synaptic constraints and temporal continuity, together with good robustness and selectivity properties. Such models will most probably extend to other senses, such as touch, hearing, and olfaction, defining a new guiding principle for understanding how the brain efficiently compresses and uses high-dimensional information through symmetry.
The aim of this Research Topic is to cover promising, recent, and novel research trends on the role of symmetry in shaping learning, information processing, and neuronal organization in the brain, from a theoretical and computational perspective.
Areas to be covered in this Research Topic may include, but are not limited to:
• New architectures beyond convolutional networks: biologically plausible sensory representations that are simultaneously selective for object identity and invariant to nuisance variation (including but not limited to group transformations, beyond translations) in mammalian cortex.
• Sample complexity and invariant representations: is the brain using symmetries to perform efficient computation?
• Variants of common plasticity rules, or new classes of rules, that incorporate the symmetries of sensory data: how can the brain discover data symmetries and learn to represent sensory input in an invariant way?
• The role of temporal continuity, crucial in sensory experience, in defining classes of symmetries (implicit unsupervised labelling).
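The last point, temporal continuity as implicit unsupervised labelling, can be sketched with a toy example (the signals and the particular method, akin to slow feature analysis, are our assumptions, not prescribed by the text): consecutive samples of a smoothly transforming stimulus implicitly share a label, so a feature that varies slowly over time tends to track stimulus identity rather than nuisance variation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)                       # slowly varying "identity" signal
fast = rng.standard_normal(len(t))     # quickly varying nuisance

# Observed mixtures: identity and nuisance are entangled across channels.
X = np.stack([slow + 0.1 * fast, fast], axis=1)
X -= X.mean(axis=0)

# Whiten the data, then pick the unit direction whose temporal difference
# has the least variance -- i.e. the slowest feature.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = U * np.sqrt(len(X))                # whitened observations
dZ = np.diff(Z, axis=0)
evals, evecs = np.linalg.eigh(dZ.T @ dZ)
slow_feature = Z @ evecs[:, 0]         # eigenvector with smallest eigenvalue

# The extracted feature closely tracks the true slow (identity) signal.
corr = abs(np.corrcoef(slow_feature, slow)[0, 1])
assert corr > 0.9
```

No explicit labels are used anywhere: temporal adjacency alone supplies the supervisory signal, which is the sense in which time continuity acts as implicit unsupervised labelling.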
Keywords: Plasticity, Symmetry, Invariance, Equivariance, Artificial Neural Networks
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.