Models of Innate Neural Attractors and Their Applications for Neural Information Processing

In this work we reveal and explore a new class of attractor neural networks, based on inborn connections provided by model molecular markers, the molecular marker based attractor neural networks (MMBANN). Each set of markers has a metric, which is used to make connections between neurons containing the markers. We have explored conditions for the existence of attractor states, critical relations between their parameters and the spectrum of single neuron models, which can implement the MMBANN. Besides, we describe functional models (perceptron and SOM), which obtain significant advantages over the traditional implementation of these models, while using MMBANN. In particular, a perceptron, based on MMBANN, gets specificity gain in orders of error probabilities values, MMBANN SOM obtains real neurophysiological meaning, the number of possible grandma cells increases 1000-fold with MMBANN. MMBANN have sets of attractor states, which can serve as finite grids for representation of variables in computations. These grids may show dimensions of d = 0, 1, 2,…. We work with static and dynamic attractor neural networks of the dimensions d = 0 and 1. We also argue that the number of dimensions which can be represented by attractors of activities of neural networks with the number of elements N = 104 does not exceed 8.

1,..., i N  ) neuron is described by its phase function, () There are two constants: the duration of excitation, w, and the duration of refractoriness, r. If  . The rule, described above, is known as synchronous dynamics: the phases of all neurons are updated simultaneously. Sometimes, we used the asynchronous random dynamics. In this case, the updating is performed in cycles of N updates. In each cycle of updates, the order of neurons is selected randomly and, in this order, neurons are updated one by one; the freshly updated neuron takes part in updating the next neurons in that cycle of N updates. To explore the set of equilibrium states of the neural networks we introduce the accommodation mechanism into neural network dynamics (Dunin-Barkowski, Osovets, 1995). With this purpose in mind we accept that in moments, when neuron switches into the excited state, its threshold gets increases of a constant value   , while it decreases all the time exponentially with the time constant  toward the final value of 0  : Where i t is the moment of the i-th transition of the neuron into the excited state and 1, Thus, when the neural network stays for a long time in a fixed stationary state, the Hopfield energy of this state increases and the network activity moves to the adjacent state with lower energy.
A1.2. LIF model The dynamics of these neurons is described by the following equation , is the same for all excitatory neurons in the network and represents a global variable which controls the activity of the network. It is controlled by the following equations: The synaptic delays between neurons were uniformly randomly distributed in the range 1.0 -5.0 ms.

Supplementary Material 2. Number of inborn attractor states for d=0
The mean value of matrix elements is: Now, let the network have at the input one of its theoretical attractor patterns. Then, the probability, that a "foreign" neuron have L excitatory inputs is . So, the probability that at least one of (N-L) foreign neurons will get L units of excitation is: and, after non-complicated transformations, leaving only first order (by 1/N) terms, we have: 3) Comparison of (A2.2) and (A2.3) finally yields:  Fig. A1 presents a fragment of the layout of all neurons in order of the order numbers of the markers, which they contain (as each neuron has k markers, each neuron is presented k times in this layout). The neurons become excited in the same order, while the activity propagates over the bump attractor. We now consider the distance between some initial state, 0 X and states which are following it,   With L=15, k=3, M=900, we have 29 D  , which coincides with the results of the computational experiments.

Supplementary Material 4. Evaluation of k c in one-dimensional bump attractors
As each neuron takes part in k attractor states, each line of the matrix T contains about 2 k  positive (equal to 1) matrix elements. The probability of positive matrix element is 2 / kN  . That means that the probability of firing of one excessive neuron, which is not active in 0 X , is (2 / ) L kN  . On the contrary, the probability that this will not happen for any of the remaining N L  neurons is   In computational experiments, the critical value c k was defined as the value of k, such that five random networks with a given k show perfect cycles. Thus, for c k we obtain an equation: And finally, for the critical value of k, we have:

Supplementary Material 5 Trade-off relations for network attractors
This section uses the computational experiments first presented to the conference "Neuroinformatics 2013" (Karandashev, 2013). For neural network applications, it is important to know how the network behaves depending on the values of its parameters. In particular, it is important to know how stable the attractor points are. Fig. A3 shows the "error" in states, as a function of N, M at , obtained in Monte Carlo computational experiments. The synchronous L winners dynamics (Supplementary Material 1) of the network was used and the "error" was considered to be the Hamming distance between the experimentally obtained stable state and the "theoretical" attractor point which served as the initial condition. It can be seen that the error grows with M. For M less than a critical value, cr M , the "theoretical" attractor points are stable.   Here ij Τ is the average value of matrix element of Τ . The distinction between the right parts of (A5.2) and (A5.3) enables the neural network discriminate between attractor and non-attractor states. Supplementary Material 2 gives the analytical reasoning, which qualitatively explain the data of computational experiments, displayed in Figs. A3 and A4.
It is known that not all sets of points can be separated by a plane into two subsets. For example, any four points in 3-d space can be divided by a plane in any combinations, but for five points that can not always be done. A general result related to this problem has been obtained by E. Gardner (Gardner, 1988). It states that if there are randomly chosen 2R points in the R-dimensional space, painted randomly in two colors, R points in each group, then at R   , with probability approaching 1, exists the plane which separates the colored points by colors. But when the number of points exceeds 2R, the probability of separation approaches 0 as R . This result states that at maximum, we can separate only 2R points in R-dimensional space. In the case of separating a few points from the others, the situation changes. Let us have M points in R-dimensional space. Let then divide them into two parts: k points in one of them and M k  in the other. In this case (formula (40)