%A Solovyeva, Ksenia P.
%A Karandashev, Iakov M.
%A Zhavoronkov, Alex
%A Dunin-Barkowski, Witali L.
%D 2016
%J Frontiers in Systems Neuroscience
%G English
%K neural networks, bump attractor, Hopfield networks, innate connections, self-organizing mapping, cortical column, dynamic attractor
%R 10.3389/fnsys.2015.00178
%8 2016-January-05
%9 Original Research
%+ Prof Witali L. Dunin-Barkowski, Department of Neuroinformatics, Center for Optical Neural Technologies, Scientific Research Institute for System Analysis, Russian Academy of Sciences, Moscow, Russia, wldbar@gmail.com
%+ Prof Witali L. Dunin-Barkowski, Laboratory of Functional Materials and Devices for Nanoelectronics, Department of Nanometrology and Nanomaterials, Moscow Institute of Physics and Technology, Dolgoprudny, Russia, wldbar@gmail.com
%! INNATE NEURAL ATTRACTORS
%T Models of Innate Neural Attractors and Their Applications for Neural Information Processing
%U https://www.frontiersin.org/articles/10.3389/fnsys.2015.00178
%V 9
%0 JOURNAL ARTICLE
%@ 1662-5137
%X In this work we reveal and explore a new class of attractor neural networks based on inborn connections provided by model molecular markers: the molecular marker based attractor neural networks (MMBANN). Each set of markers has a metric, which is used to establish connections between the neurons carrying those markers. We have explored the conditions for the existence of attractor states, the critical relations between their parameters, and the spectrum of single-neuron models that can implement MMBANN. In addition, we describe functional models (a perceptron and a self-organizing map, SOM) that gain significant advantages over the traditional implementations of these models when built on MMBANN. In particular, an MMBANN-based perceptron improves specificity by orders of magnitude in error probability, an MMBANN-based SOM acquires a direct neurophysiological interpretation, and the number of possible grandmother cells increases 1000-fold with MMBANN. MMBANN have sets of attractor states that can serve as finite grids for the representation of variables in computations. These grids may have dimensions d = 0, 1, 2, …. We work with static and dynamic attractor neural networks of dimensions d = 0 and 1. We also argue that the number of dimensions that can be represented by activity attractors of neural networks with N = 10^{4} elements does not exceed 8.