AUTHOR=Dietrich Robin, Waniek Nicolai, Stemmler Martin, Knoll Alois TITLE=Grid codes vs. multi-scale, multi-field place codes for space JOURNAL=Frontiers in Computational Neuroscience VOLUME=18 YEAR=2024 URL=https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2024.1276292 DOI=10.3389/fncom.2024.1276292 ISSN=1662-5188 ABSTRACT=Recent work on bats flying over long distances has revealed that single hippocampal cells have multiple place fields of different sizes. At the network level, a multi-scale, multi-field place cell code outperforms classical single-scale, single-field place codes, yet the performance boundaries of such a code remain an open question. In particular, it is unknown how general multi-field codes compare to a highly regular grid code, in which cells form distinct modules that obey attractor dynamics. In this paper, we address the coding properties of theoretical spatial coding models through rigorous analyses of comprehensive simulations. Starting from a multi-scale, multi-field network, we performed evolutionary optimization. The resulting multi-field networks sometimes retained the multi-scale property at the single-cell level, but most often converged to a single scale, with all place fields in a given cell having the same size. We compared the results against a single-scale, single-field code as well as a one-dimensional grid code, focusing on two main characteristics: the performance of the code itself and the dynamics of the network generating it. Our simulation experiments revealed that, under normal conditions, a regular grid code outperforms all other codes in decoding accuracy. However, multi-field codes are more robust against noise and lesions, such as random drop-out of neurons, because their much larger number of fields provides redundancy. The grid code, by comparison, requires far fewer neurons and fields.
Contrary to our expectations, the network dynamics of all models, from the original multi-scale model before optimization to the multi-field models that resulted from optimization, did not exhibit properties of continuous attractor networks. In fact, when position-specific external input was removed, the simulated networks did not maintain activity bumps at their original locations. Optimized multi-field codes therefore appear to strike a compromise between a place code and a grid code, reflecting a trade-off between accurate positional encoding and robustness. Surprisingly, the recurrent neural network models we implemented and optimized for either multi- or single-scale, multi-field codes did not intrinsically produce a persistent "memory" of attractor states and were thus not continuous attractor networks.
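The comparison summarized in the abstract can be illustrated with a toy one-dimensional simulation. This is a minimal sketch under illustrative assumptions, not the paper's actual model: track length, field widths, grid periods, number of fields per cell, noise level, and the template-matching decoder are all choices made here for demonstration. It builds three population codes for position on a linear track (single-field place, multi-field place, 1-D grid) and decodes a noisy population response, with optional random cell drop-out to mimic a lesion.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 100.0                       # track length (arbitrary units; assumed)
xs = np.linspace(0.0, L, 1001)  # candidate positions for the decoder
n = 40                          # neurons per population (assumed)
sigma_noise = 0.1               # additive response noise (assumed)

# Single-scale, single-field place code: one Gaussian field per cell.
pf_centers = np.linspace(0.0, L, n)
def place_single(x):
    return np.exp(-(x[None, :] - pf_centers[:, None])**2 / (2 * 4.0**2))

# Multi-field place code: several same-size fields per cell at random locations.
mf_centers = rng.uniform(0.0, L, size=(n, 4))
def place_multi(x):
    d = x[None, None, :] - mf_centers[:, :, None]   # shape (n, fields, len(x))
    return np.exp(-d**2 / (2 * 4.0**2)).sum(axis=1)

# 1-D "grid" code: periodic tuning curves grouped into modules by period.
periods = np.repeat([11.0, 17.0, 29.0, 43.0], n // 4)
phases = rng.uniform(0.0, 1.0, n) * periods
def grid(x):
    return 0.5 * (1.0 + np.cos(2 * np.pi * (x[None, :] - phases[:, None])
                               / periods[:, None]))

def decode(tuning, response):
    """Template matching: return the candidate position whose noiseless
    population response is closest (in squared error) to the observation."""
    templates = tuning(xs)                           # (n, len(xs))
    return xs[np.argmin(((templates - response[:, None])**2).sum(axis=0))]

def median_error(tuning, dropout=0.0):
    """Median absolute decoding error over test positions, with optional
    random silencing of a fraction of cells (a crude lesion model)."""
    errs = []
    for x_true in np.linspace(2.0, L - 2.0, 50):
        r = tuning(np.array([x_true]))[:, 0] + rng.normal(0, sigma_noise, n)
        if dropout > 0:
            r[rng.random(n) < dropout] = 0.0
        errs.append(abs(decode(tuning, r) - x_true))
    return float(np.median(errs))

for name, code in [("single-field", place_single),
                   ("multi-field", place_multi),
                   ("grid", grid)]:
    print(f"{name:12s}  noise only: {median_error(code):5.2f}   "
          f"40% drop-out: {median_error(code, dropout=0.4):6.2f}")
```

With these toy parameters, all three codes decode accurately under noise alone, while drop-out degrades them differently; the published result that grid codes win on accuracy but multi-field codes gain robustness from field redundancy comes from the paper's far more detailed recurrent-network simulations, not from a sketch like this one.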