Symmetry as a Guiding Principle in Artificial and Brain Neural Networks

64.5K views · 39 authors · 11 articles · 4 editors
Hypothesis and Theory
23 August 2021
The Concept of Symmetry and the Theory of Perception
Zygmunt Pizlo and J. Acacio de Barros
[Figure: Three subjects' performance and the averaged performance (d') across the six types of objects in the Li and Pizlo (2011) study.]

Perceptual constancy refers to the fact that the perceived geometrical and physical characteristics of objects remain constant despite transformations of the objects, such as rigid motion. Perceptual constancy is essential in everything we do: recognizing familiar objects and scenes, planning and executing visual navigation, visuomotor coordination, and much more. Perceptual constancy would not exist without the geometrical and physical permanence of objects: their shape, size, and weight. Formally, perceptual constancy and the permanence of objects are invariants, known in mathematics and physics as symmetries. Symmetries of the laws of physics acquired their central status through the mathematical theorems of Emmy Noether, formulated and proved over 100 years ago. These theorems connect symmetries of physical laws to conservation laws through the least-action principle. We show how Noether's theorem, applied to mirror-symmetrical objects, establishes mental shape representation (perceptual conservation) through the application of a simplicity (least-action) principle. In this way, the formalism of Noether's theorem provides a computational explanation of the relation between the physical world and its mental representation.
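As a reference point for the abstract's central claim, here is the textbook one-dimensional statement of Noether's theorem; this is standard classical mechanics, not the paper's own derivation, and the symbols are the conventional ones.

```latex
% Textbook form of Noether's theorem for one degree of freedom.
% Suppose the Lagrangian L(q, \dot{q}) is invariant under the continuous
% transformation q \mapsto q + \epsilon K(q). Then along any trajectory
% satisfying the Euler--Lagrange (least-action) equation
\[
  \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}} \;=\; \frac{\partial L}{\partial q},
\]
% the quantity
\[
  Q \;=\; \frac{\partial L}{\partial \dot{q}}\,K(q)
\]
% is conserved, i.e. \dot{Q} = 0. For example, spatial translation
% invariance (K = 1) yields conservation of momentum, and time-translation
% invariance yields conservation of energy.
```

In the abstract's proposed mapping, the simplicity principle plays the role of the least-action principle, and the perceived shape of a mirror-symmetrical object is the conserved (invariant) quantity.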

Review
21 July 2021
Learning Invariant Object and Spatial View Representations in the Brain Using Slow Unsupervised Learning
Edmund T. Rolls
First, neurophysiological evidence for the learning of invariant representations in the inferior temporal visual cortex is described. This includes object and face representations in the temporal lobe visual cortex that are invariant with respect to position, size, lighting, view, and morphological transforms; representations of global object motion in the cortex in the superior temporal sulcus; and spatial view representations in the hippocampus that are invariant with respect to eye position, head direction, and place.

Second, computational mechanisms that enable the brain to learn these invariant representations are proposed. For the ventral visual system, one key adaptation is the use of information available in the statistics of the environment, in slow unsupervised learning, to learn transform-invariant representations of objects. This contrasts with deep supervised learning in artificial neural networks, which uses training with thousands of exemplars forced into different categories by neuronal teachers. Similar slow-learning principles apply to the learning of global object motion in the dorsal visual system leading to the cortex in the superior temporal sulcus. The learning rule explored in VisNet is an associative rule with a short-term memory trace. The feed-forward architecture has four stages, with convergence from stage to stage. In the brain, this type of slow learning is implemented in hierarchically organized competitive neuronal networks with convergence from stage to stage and only 4-5 stages in the hierarchy. Slow learning is also shown to help the learning of coordinate transforms using gain modulation in the dorsal visual system, extending into the parietal cortex and retrosplenial cortex. The representations learned are in allocentric spatial-view coordinates of locations in the world and are independent of eye position, head direction, and the place where the individual is located. This enables hippocampal spatial view cells to use idiothetic (self-motion) signals for navigation when the view details are obscured for short periods.
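The "associative rule with a short-term memory trace" can be made concrete with a small sketch. The following is a minimal illustration in the spirit of the VisNet trace rule, not the paper's exact formulation: the function name, parameter values, rectified activation, and row-wise weight normalization are all illustrative assumptions.

```python
import numpy as np

def trace_rule_update(w, x, y_trace_prev, eta=0.8, alpha=0.01):
    """One weight update of an associative (Hebbian) rule with a
    short-term memory trace, sketched after the VisNet trace rule.

    w            : (n_out, n_in) weights of one competitive layer
    x            : (n_in,) presynaptic firing rates for the current view
    y_trace_prev : (n_out,) trace of postsynaptic rates from the previous
                   presentation (eta = 0 recovers a plain Hebbian rule)
    """
    y = np.maximum(w @ x, 0.0)                      # rectified feed-forward activation
    y_trace = (1.0 - eta) * y + eta * y_trace_prev  # blend current rates into the trace
    w = w + alpha * np.outer(y_trace, x)            # associative update driven by the trace
    w /= np.linalg.norm(w, axis=1, keepdims=True) + 1e-12  # normalize rows, as in competitive nets
    return w, y_trace

# Example: present two transforms of the same object in succession.
rng = np.random.default_rng(0)
w = rng.random((10, 50))
trace = np.zeros(10)
for view in (rng.random(50), rng.random(50)):       # two views of one object
    w, trace = trace_rule_update(w, view, trace)
```

Because consecutive presentations in natural viewing are usually different transforms of the same object, the trace ties those transforms together, strengthening the same output neurons onto all of them; this is how transform-invariant responses can emerge from slow, unsupervised learning.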

Open for submission

Frontiers in Robotics and AI

Narrow and General Intelligence: Embodied, Self-Referential Social Cognition and Novelty Production in Humans, AI and Robots
Edited by Emily S Cross, Sheri Marina Markose, Georg Northoff, Tony J Prescott, Karl Friston
Deadline: 30 September 2022
Recommended Research Topics

Frontiers in Neurorobotics

Explainable Artificial Intelligence and Neuroscience: Cross-disciplinary Perspectives
Edited by James Leland Olds, Jeffrey L Krichmar, Huajin Tang, Juan V. Sanchez-Andres
68.9K views · 35 authors · 8 articles

Frontiers in Neuroscience

Machine Learning in Neuroscience, Volume II
Edited by Reza Lashgari, Ali Ghazizadeh, Babak A Ardekani, Hamid R Rabiee
53.9K views · 66 authors · 13 articles

Frontiers in Signal Processing

Adversarial Machine Learning and Domain Generalization in Neurophysiological Signal Analysis
Edited by Ozan Özdenizci, Ulysse Côté-Allard, Xiang Zhang, Pouya Bashivan, Moritz Grosse-Wentrup, Tomas Emmanuel Ward
11.8K views · 15 authors · 4 articles

Frontiers in Computational Neuroscience

Symmetry as a Guiding Principle in Artificial and Brain Neural Networks, Volume II
Edited by Fabio Anselmi, Ankit B Patel, Tomaso A Poggio, Eugenio Piasini
23.8K views · 22 authors · 7 articles

Frontiers in Artificial Intelligence

Artificial Intelligence in Bioinformatics and Genomics
Edited by Pan Zheng, Xiangxiang Zeng, Shihua Zhou
17.9K views · 20 authors · 4 articles