Hierarchical Chunking of Sequential Memory on Neuromorphic Architecture with Reduced Synaptic Plasticity

Chunking refers to a phenomenon whereby individuals group items together when performing a memory task in order to improve the performance of sequential memory. In this work, we build a bio-plausible hierarchical chunking of sequential memory (HCSM) model to explain why such improvement happens. We address this issue by linking hierarchical chunking with synaptic plasticity and neuromorphic engineering. We uncover that a chunking mechanism reduces the requirements on synaptic plasticity, since it allows synapses with a narrow dynamic range and low precision to perform a memory task. We validate a hardware version of the model through simulation, based on measured memristor behavior with narrow dynamic range in neuromorphic circuits, which reveals how chunking works and what role it plays in encoding sequential memory. Our work deepens the understanding of sequential memory and enables its incorporation into the investigation of brain-inspired computing on neuromorphic architectures.

for $i = 1, \ldots, N_0$ and $j = 1, \ldots, N_0$. Let $\mathrm{Re}(\lambda_1) \geq \ldots \geq \mathrm{Re}(\lambda_{r-1}) > 0 > \mathrm{Re}(\lambda_r) \geq \ldots \geq \mathrm{Re}(\lambda_{N_0})$ be the ordered real parts of the eigenvalues of the Hessian matrix $\nabla^2 E(x)$ at $A$. If the saddle value of $A$, defined as $\nu(A) = |\mathrm{Re}(\lambda_r)|/\mathrm{Re}(\lambda_1)$, satisfies $\nu(A) > 1$, then the saddle point $A$ is dissipative, which implies that a deviation contracts after the system state passes through the neighborhood of $A$. We are now in a position to prove that the neurons $I_k$ with coordinates $A_k$ for $1 \leq k < k_0$ are dissipative, while $A_{k_0}$ is a stable equilibrium point of the dynamical system in Eq. (2).
The encoding process is to design the $\sigma_i$ and $w_{ij}$ for $1 \leq i, j \leq N_0$. It can be checked that $A_k = [0, \ldots, \sigma_{I_k}, \ldots, 0]$ is a nontrivial fixed point of Eq. (2). In the beginning, $x = A_k$, and the neuron $I_k$ is obviously the temporary winner. The eigenvalues of $\nabla^2 E(x)$ at $A_k$ are examined case by case below.
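Since Eq. (2) is not reproduced in this excerpt, the following is a hedged reconstruction assuming the generalized Lotka-Volterra form commonly used for winnerless-competition models of sequential memory; under that assumption the fixed points and eigenvalues quoted in the text follow directly:

```latex
% Assumed form of Eq. (2) (noise omitted for the linearization):
\dot{x}_i = x_i\Big(\sigma_i - x_i - \sum_{j \neq i} w_{ij}\, x_j\Big),
\qquad i = 1, \ldots, N_0 .
% At the fixed point A_k = [0, \ldots, \sigma_{I_k}, \ldots, 0] every row of
% the Jacobian other than row I_k vanishes off the diagonal, so the
% eigenvalues can be read off the diagonal:
\lambda_{I_k} = -\sigma_{I_k}, \qquad
\lambda_j = \sigma_j - w_{j I_k}\,\sigma_{I_k} \quad (j \neq I_k).
```

Under this assumption, designing the weights so that only $\lambda_{I_{k+1}}$ is positive reproduces the single unstable direction toward $A_{k+1}$ claimed in the text.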
If $I_k > I_1$, both $w_{I_{k-1} I_k}$ and $w_{I_{k+1} I_k}$ belong to $S_1$, and Eqs. (3)-(5) imply that $\sigma_{I_{k+1}}/\sigma_{I_k} = g$, the ratio used for the Fibonacci sequence. Combining the above four cases with Eqs. (S5)-(S13): when either $I_k = I_1$, or $I_k > I_1$ with $k < k_0$, the point $A_k$ is dissipative and $I_{k+1}$ becomes the next temporal winner, since only the eigenvalue $\sigma_{I_{k+1}} - w_{I_{k+1} I_k} \sigma_{I_k}$ is positive and its eigenvector points toward $A_{k+1} = [0, \ldots, \sigma_{I_{k+1}}, \ldots, 0]$. The state therefore moves to the coordinates of the next neuron in the trace until it reaches the last one. Note that, although the noise is small, it is necessary to prevent the system state from stopping at a saddle point. For the last temporal winner neuron $I_{k_0}$, all eigenvalues of $\nabla^2 E(x)$ at $A_{k_0} = [0, \ldots, \sigma_{I_{k_0}}, \ldots, 0]$ are negative. This implies that $A_{k_0}$ is a stable equilibrium point of the dynamical system in Eq. (2).
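The winner-switching behavior proved above can be illustrated numerically. The sketch below assumes Eq. (2) takes the generalized Lotka-Volterra form $\dot{x}_i = x_i(\sigma_i - x_i - \sum_{j \neq i} w_{ij} x_j)$ plus small noise (the equation itself is not reproduced in this excerpt), and the gains, weights, and three-neuron trace $I_1 \to I_2 \to I_3$ are illustrative choices, not the paper's values. Along the trace, each coupling $w_{k+1,k}$ is chosen weak enough that $\sigma_{k+1} - w_{k+1,k}\sigma_k > 0$, so only the next neuron sees a positive eigenvalue at each saddle:

```python
import random

# Illustrative parameters (assumptions, not values from the paper):
sigma = [1.0, 1.6, 2.6]            # gains grow along the trace
# w[j][k]: inhibition from neuron k onto neuron j. Only w[k+1][k] is weak.
w = [[0.0, 2.0, 2.0],
     [0.5, 0.0, 2.0],
     [3.0, 0.5, 0.0]]
rng = random.Random(0)
x = [1.0, 1e-4, 1e-4]              # start near A_1 = [sigma_1, 0, 0]
dt = 1e-3
winners = []
for _ in range(40000):
    drift = [x[i] * (sigma[i] - x[i]
                     - sum(w[i][j] * x[j] for j in range(3) if j != i))
             for i in range(3)]
    # Euler step with small noise; states are clipped to stay non-negative.
    x = [max(0.0, x[i] + dt * drift[i] + 1e-6 * rng.gauss(0.0, 1.0))
         for i in range(3)]
    winners.append(max(range(3), key=lambda i: x[i]))
# The temporary winner switches 0 -> 1 -> 2 and then stays at the stable A_3.
```

The small noise term plays the role noted in the text: it nudges the state off each saddle so the trajectory does not stall there.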

Figures and explanations
Here eight figures are shown in this supplementary information. Figure S1 shows a typical memristor with a sandwich structure, and Figure S2 shows the memristor-based synapses. Figure S3 shows the neuron model and the Fibonacci sequence generator. Figure S4 and Figure S5 present the scalable neuromorphic architecture for HCSM and the corresponding programming scheme. Figure S6 presents the weight modulation methods.

Synaptic device fabrication and measurement.
The device structure is shown in Figure S1, and the measured pulse response in Figure S1(c). Clearly, positive pulses incrementally potentiate the weight, while negative pulses incrementally depress it. This behavior corresponds to the short/long-term potentiation (STP/LTP) and short/long-term depression (STD/LTD) processes of synaptic plasticity [5]. In Figure S1(d), simulation results of the SPICE model of the iron oxide memristor are provided, which show excellent agreement with the measurement results in Figure S1(c). It is worth noting that, since strong nonlinearity exists in the modulation process of the synaptic weight, the weight does not change significantly when a low voltage is applied, whereas the gradual tuning process starts abruptly once the amplitude of the applied pulse exceeds the threshold [6]. This combination of efficient writes and non-disturbing reads makes it possible to precisely modulate and measure the state of memristor-based neuromorphic networks during training.
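The threshold-gated, saturating weight update described above can be sketched as a simple behavioral model. The threshold, conductance bounds, and update rate below are illustrative assumptions, not the measured iron-oxide device parameters:

```python
def apply_pulse(g, v, v_th=1.0, g_min=1e-6, g_max=1e-4, alpha=0.05):
    """Return the conductance after one voltage pulse of amplitude v.

    Below the threshold the state is read without disturbance; above it,
    positive pulses gradually potentiate toward g_max and negative pulses
    gradually depress toward g_min (saturating, nonlinear update).
    All parameter values are illustrative assumptions.
    """
    if abs(v) <= v_th:
        return g                          # non-disturbing read
    if v > 0:
        return g + alpha * (g_max - g)    # incremental potentiation
    return g - alpha * (g - g_min)        # incremental depression
```

Repeated supra-threshold pulses of one polarity trace out gradual potentiation or depression curves of the kind shown in Figure S1(c), while sub-threshold read pulses leave the state untouched.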
Figure S2: Memristor-based synapses. (a) Multiple-input-one-output structure: the 'multiple memristors & single amplifier' circuit computes a multiply-and-accumulate (MAC) operation, a basic operator for most neural networks. The accumulation function of the amplifier results from the parallel structure of the memristors, which is similar to dendritic integration. (b) Scaling this structure to a 'memristor crossbar & amplifier array' readily implements the vector-matrix multiplication (VMM) operation.
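The crossbar VMM described above can be sketched in a few lines: the conductances form the weight matrix, the input voltages drive the rows, and each column amplifier sums its column currents (Kirchhoff's current law), giving one MAC per output in a single step. The conductance values below are illustrative, not measured device values:

```python
def crossbar_vmm(G, V):
    """Column currents of a crossbar: I[j] = sum_i V[i] * G[i][j].

    G[i][j] is the conductance (siemens) at row i, column j; V[i] is the
    input voltage on row i. This models the 'memristor crossbar &
    amplifier array' computing a VMM in one parallel step.
    """
    rows, cols = len(G), len(G[0])
    return [sum(V[i] * G[i][j] for i in range(rows)) for j in range(cols)]

G = [[1.0e-4, 0.5e-4],
     [0.2e-4, 0.8e-4],
     [0.6e-4, 0.1e-4]]       # 3 inputs x 2 outputs, illustrative values
V = [0.1, 0.2, 0.3]          # input voltage vector
I = crossbar_vmm(G, V)       # one MAC per output column
```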

Figure S4: Illustration of the scalability of the hierarchical neuromorphic architecture for HCSM. [Diagram labels: Winner Neuron Activator; Neurons; Clock; Activation signal.]
Each block has the same structure and function as the single chunk shown in Figure 3. The Winner Neuron Activator in the PC activates its connected CCs in turn, based on the current winner neuron. In this hierarchical way, the neuromorphic architecture scales to perform the multi-layer model in Figure 2(b). The weight calculator produces a target matrix of synaptic weights according to the pre-defined winner sequence of a specific memory-trace task, and the weights are then programmed using the modulation methods in Figure S6.

[Flowchart labels: Waveform Generator; Modulation Check: current conductance G, "If OK?".]

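The programming scheme's "waveform generator / modulation check" flow amounts to a closed-loop write-and-verify procedure: apply a pulse, read back the conductance G, and repeat until G matches the target within tolerance. The sketch below uses an illustrative saturating update rule as a stand-in for the device response; all parameter values are assumptions, not measured values:

```python
def program_weight(g, g_target, tol=1e-6, alpha=0.01,
                   g_min=1e-6, g_max=1e-4, max_pulses=10000):
    """Write-and-verify loop: pulse, read G, repeat until |G - target| <= tol.

    The saturating update per pulse is an illustrative device model, not
    the measured memristor behavior.
    """
    for _ in range(max_pulses):
        if abs(g - g_target) <= tol:
            return g                      # modulation check: OK
        if g < g_target:
            g += alpha * (g_max - g)      # potentiating pulse
        else:
            g -= alpha * (g - g_min)      # depressing pulse
    return g                              # pulse budget exhausted
```

Because the per-pulse step shrinks near the conductance bounds, a small update rate keeps the step below the tolerance window near the target, so the loop settles rather than oscillating around it.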