ORIGINAL RESEARCH article
Front. Netw. Physiol.
Sec. Networks of Dynamical Systems
Volume 5 - 2025 | doi: 10.3389/fnetp.2025.1693772
This article is part of the Research Topic: Self-Organization of Complex Physiological Networks: Synergetic Principles and Applications — In Memory of Hermann Haken.
Population coding and self-organized ring attractors in recurrent neural networks for continuous variable integration
Provisionally accepted
- 1 Institute of Applied Physics (RAS), Nizhny Novgorod, Russia
- 2 National Research Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- 3Laboratory of Complex Networks, Center for Neurophysics and Neuromorphic Technologies, Moscow, Russia
- 4Phystech School of Applied Mathematics and Computer Science, Moscow Institute of Physics and Technology, Dolgoprudny, Russia
Representing and integrating continuous variables, like spatial orientation, is a fundamental capability of the brain. This process often relies on ring attractors—specialized neural circuits that maintain a persistent “bump” of activity to encode a circular variable. Here, we investigate how such structures can self-organize in a recurrent neural network (RNN) trained to perform path integration on a ring. We show that by providing the network with velocity inputs encoded by a population of neurons, it autonomously develops a modular architecture. One subpopulation learns to form a stable ring attractor that accurately tracks and maintains the integrated position. A second, distinct subpopulation organizes into a dissipative structure that acts as a dynamic control unit, translating velocity inputs into directional signals for the ring attractor. Through systematic perturbations, we demonstrate that the precise topological alignment between these two modules is essential for reliable integration. Our findings illustrate how functional specialization and biologically plausible representations can emerge from a general learning objective, offering insights into the principles of self-organization in neural circuits and providing a framework for designing more interpretable and robust neuromorphic systems for navigation and control.
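To make the mechanism described above concrete, the sketch below shows a classical hand-built ring-attractor rate model: cosine-tuned recurrent weights sustain a bump of activity, and an asymmetric (derivative-of-cosine) coupling shifts the bump at a rate proportional to a velocity input, which is then read out by population-vector decoding. This is only an illustrative approximation of the general idea, not the authors' trained RNN or its self-organized modular architecture; the network size, weight parameters, nonlinearity, and shift mechanism are all assumptions chosen to make the example run.

```python
import numpy as np

# Illustrative ring-attractor sketch (not the authors' trained network).
# Cosine-tuned recurrent weights sustain an activity bump; an asymmetric
# coupling term proportional to the velocity input rotates the bump.
# All parameter values are assumptions for demonstration only.

N = 128
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred angles
d = theta[:, None] - theta[None, :]                        # pairwise angle differences

J0, J1, b = -2.0, 3.0, 1.0          # uniform inhibition, cosine excitation, tonic drive
W_sym = (J0 + J1 * np.cos(d)) / N   # symmetric weights: maintain the bump
W_asym = (J1 * np.sin(d)) / N       # asymmetric weights: shift the bump

dt, tau = 1.0, 10.0                 # Euler step and time constant (arbitrary units)

def decode(r):
    """Population-vector readout of the bump position on the ring."""
    return np.angle(np.sum(r * np.exp(1j * theta))) % (2.0 * np.pi)

def step(r, v):
    """One Euler step of threshold-linear rate dynamics; v is the velocity input."""
    h = (W_sym + v * W_asym) @ r + b
    return r + (dt / tau) * (-r + np.maximum(h, 0.0))

rng = np.random.default_rng(0)
r = 0.1 + 0.01 * rng.standard_normal(N)   # weak, noisy initial activity

for _ in range(1000):                     # let a bump self-organize with zero velocity
    r = step(r, 0.0)
start = decode(r)

for _ in range(2000):                     # integrate a constant velocity signal
    r = step(r, 0.01)

print(f"bump moved from {start:.2f} rad to {decode(r):.2f} rad")
```

In this toy model the shift is wired in by hand through `W_asym`; the point of the article is that a trained RNN can instead discover such a division of labor on its own, with one subpopulation maintaining the bump and another converting population-coded velocity inputs into the directional signal.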
Keywords: recurrent neural networks, bump attractors, population coding, continuous variable integration, nonlinear dynamics, network physiology, neural representation
Received: 27 Aug 2025; Accepted: 14 Oct 2025.
Copyright: © 2025 Kononov, Tiselko, Maslennikov and Nekorkin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Oleg Maslennikov, oleg.maov@gmail.com
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.