A Non-spiking Neuron Model With Dynamic Leak to Avoid Instability in Recurrent Networks

Recurrent circuitry components are distributed widely within the brain, including both excitatory and inhibitory synaptic connections. Recurrent neuronal networks have potential stability problems, perhaps a predisposition to epilepsy. More generally, instability risks making internal representations of information unreliable. To assess the inherent stability properties of such recurrent networks, we tested a linear summation, non-spiking neuron model with and without a “dynamic leak”, corresponding to the low-pass filtering of synaptic input current by the RC circuit of the biological membrane. We first show that the output of this neuron model, in either of its two forms, follows its input at a higher fidelity than a wide range of spiking neuron models across a range of input frequencies. Then we constructed fully connected recurrent networks with equal numbers of excitatory and inhibitory neurons and randomly distributed weights across all synapses. When the networks were driven by pseudorandom sensory inputs with varying frequency, the recurrent network activity tended to induce high frequency self-amplifying components, sometimes evident as distinct transients, which were not present in the input data. The addition of a dynamic leak based on known membrane properties consistently removed such spurious high frequency noise across all networks. Furthermore, we found that the neuron model with dynamic leak imparts a network stability that seamlessly scales with the size of the network, conduction delays, the input density of the sensory signal and a wide range of synaptic weight distributions. Our findings suggest that neuronal dynamic leak serves the beneficial function of protecting recurrent neuronal circuitry from the self-induction of spurious high frequency signals, thereby permitting the brain to utilize this architectural circuitry component regardless of network size or recurrency.


INTRODUCTION
Recurrent excitatory loops are a common feature in the central nervous system, such as in neocortical circuits (Binzegger et al., 2004; Song et al., 2005; Koestinger et al., 2018; Kar and DiCarlo, 2020), thalamocortical loops (Steriade, 1997; Hooks et al., 2013), and cerebrocerebellar and spinocerebellar loops (Allen and Tsukahara, 1974; Jörntell, 2017). Inhibitory interneurons have been described to provide lateral inhibition (Zhu and Lo, 2000; Douglas and Martin, 2009; Obermayer et al., 2018; Rongala et al., 2018) and feed-forward inhibition (Swadlow, 2003; Isaacson and Scanziani, 2011), but they also make synapses on other inhibitory neurons, thereby potentially forming recurrent disinhibitory loops as well (Jörntell and Ekerot, 2003; Pi et al., 2013; Sultan and Shi, 2018). Furthermore, such excitatory and inhibitory connectivity has been reported to be balanced (Anderson et al., 2000; Wehr and Zador, 2003; Okun and Lampl, 2008). Functionally, recurrent connections enable a network to use preceding states to impact the processing of the present state. Such state memory can, for example, improve learning performance (Sutskever et al., 2014). However, due to the many potential positive feedback loops in larger networks with extensive recurrent connections, imbalances in excitatory (E) and inhibitory (I) synaptic activity could lead to activity saturation (Brunel, 2000; Vogels and Abbott, 2005), such as observed in epilepsy (Chakravarthy et al., 2009; Liou et al., 2020), or, in milder cases, a noise-like perturbation of the information content of internal signals, which would be disadvantageous for learning.
We explored potential noise and stability issues that could arise in recurrent neuronal networks. In order to focus on the network architecture aspect of this problem, we used a non-spiking neuron model designed to be simple and computationally efficient, while embodying fundamental properties of Hodgkin-Huxley conductance-based models. The relevance of a non-spiking neuron model stems from the stochasticity inherent in neuronal spike generation (Naundorf et al., 2006; Saarinen et al., 2008; Spanne et al., 2014; Nilsson and Jörntell, 2021), which renders the spiking output of the individual neuron to some degree unreliable in terms of information content.
To compensate for such unreliability, the brain could encode each representation across a population of neurons (below referred to as an ensemble of redundant neurons), as has been observed in the brain in vivo (Spanne and Jörntell, 2015). The input-output relationships across a range of neuron types in the central nervous system in vivo indicate that overall, each neuron's spike output is a probability density function (PDF) of the underlying membrane potential of the neuron (Spanne et al., 2014). That PDF thereby approximates the membrane potential and could be considered to correspond to the spike output of an ensemble of neurons with similar inputs. Thus, simulating a non-spiking neuron and providing the PDF of the neuron as its output avoids the extreme resource demands of both simulating the highly complex spike generation stochasticity (Saarinen et al., 2008) and compensating for that stochasticity by simulating large populations of redundant neurons. Synaptic input creates modulation of the neuronal membrane potential, hence its PDF, by temporarily activating conductances that are added to the static leak conductances. The synaptic conductances and currents can modulate very rapidly but the membrane capacitance together with the static leak channels forms an RC circuit that constitutes a low-pass filter (herein, dynamic leak) for the resultant membrane potential. We hypothesized that this dynamic leak would improve network stability without compromising information transfer.
To test this hypothesis, we constructed a highly recurrent, two-layer neuronal network, with five excitatory and five inhibitory neurons in the first layer and four excitatory and four inhibitory neurons in the second layer. All neurons in both layers were reciprocally connected with randomized gains. All first layer neurons were provided with six randomized and broadly distributed input signals. A striking finding was that across all tested network configurations (synaptic weight distributions, conduction delays, and sensory input densities), the recurrent networks tended to generate high frequency components that were not present in the sensory input data. In all cases these transients were eliminated by incorporating a dynamic leak in the neuron models, without compromising the representation of the input signals.
We note that the fully reciprocal connectivity employed in the networks described herein encompasses the wide range of connectivity that has been identified experimentally in cortical and other central neural structures (see above). The strictly layered connectivity of many popular neural network models for deep learning reflects only a small subset of the known complexity of biological networks. Attempts to add limited recurrency into such models have encountered stability problems (Brunel, 2000; Vogels and Abbott, 2005), for which dynamic leak appears to offer substantial mitigation. Chowdhury et al. (2020) demonstrated that a leaky component (a low-pass filter effect) in a spiking neuron model (Leaky-Integrate-and-Fire, LIF, model) eliminated the high-frequency components from the input, which resulted in improved robustness against random noise in a multi-layer feed-forward network trained with backpropagation.

Linear Summation Neuron Model (LSM)
The neuron model implemented for this study was a non-spiking, linear input summation model with an additional dynamic leak component. For the version without dynamic leak, the activity (A_NoDyn) was given by the total weighted input activity (w * a), where a is the activity of each individual input and w is the weight of that input, summed across all individual synapses (i) (Eq. 1). Electrotonic compactness in the neuron is assumed, so that all synapses have equal impact on the activity of the neuron. This simplified model of synaptic input activity integration can be shown to be closely related to a Hodgkin-Huxley (H-H) model (see Appendix 1), for example resulting in the preservation of two key dependencies of EPSPs and IPSPs on membrane biophysics: (i) input response magnitude depends on the difference between the membrane potential and the reversal potentials of the corresponding "ion channels" (i.e., depending on whether the input is provided through an excitatory or an inhibitory synapse); (ii) input response magnitude depends on the relative shunting of synaptic currents by conductances resulting from the background of synaptic input activity (Eq. 1). The responsive properties of the LSM and the H-H neuron model are shown to be highly similar in Supplementary Figure 1.
The LSM implemented a degree of activity normalization (the denominator of Eq. 1) by introducing a static leak, which was calculated as the product of a constant (k_static) and the number of synapses on the neuron, plus a term reflecting the total number of open channels, which is activity dependent.
To mimic the effect of the RC circuit created by the ion channels and the capacitance of the membrane, we added a dynamic leak function to the neuron. To test the impact of the dynamic leak on network dynamics, we compared networks composed of neurons with the dynamic leak against the same networks when the neuron model did not include this dynamic leak. The neuron activity for the neuron model variant with dynamic leak (A_Dyn) is given by the linear summation model with an additional leak time constant (τ_Dyn). Larger neurons with more synapses tend to have longer time constants (Zhang, 2004), so we also tried various ways of scaling τ_Dyn with the number of synapses. Thereby, the dynamic leak integrates the function of the capacitance in the RC circuit of the biological neuron into the LSM (Eq. 2). The output of both model variants is thresholded at zero, 0 ≤ A_NoDyn and 0 ≤ A_Dyn (Eq. 3). Figure 1 illustrates the output activity of individual LSM neurons (Eqs. 1-3), which were isolated in the sense that they were not connected to any neuronal network beyond the provided inputs, for different input combinations (from left to right in Figure 1) of emulated excitatory and inhibitory synaptic inputs (Figures 1A,B). The input spike trains were convoluted using a kernel function in order to emulate post-synaptic-potential inputs (detailed below, Eq. 6), which were fed to the LSM neuron (Figures 1C-E). The LSM activity without dynamic leak (A_NoDyn, Figure 1F) shows the activity normalization resulting from the static leak constant (k_static = 1 for this illustration), along with the effect of the neuron output activity threshold at zero (Eq. 3). The activity of the LSM neuron would also be expected to fall back toward this zero level of activity without any external or internal input. This level hence corresponds to a threshold for spike initiation among a population of similarly connected neurons that are typically represented by the one modeled neuron.
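As a minimal illustration, the two model variants can be sketched in Python (the article's implementation was in MATLAB). The exact form of the static-leak denominator below (k_static times the number of synapses plus the total input activity, standing in for the open-channel term) is our reading of the description, not the published Eq. 1, and the dynamic leak is realized as a forward-Euler first-order (RC) low-pass filter:

```python
import numpy as np

def lsm_no_dyn(a, w, k_static=1.0):
    """LSM without dynamic leak (sketch of Eqs. 1 and 3).

    a : (n_steps, n_syn) per-synapse input activities
    w : (n_syn,) synaptic weights
    """
    n_syn = a.shape[1]
    drive = a @ w                                    # total weighted input, sum_i(w_i * a_i)
    leak = k_static * n_syn + np.abs(a).sum(axis=1)  # static leak + activity-dependent term
    return np.maximum(0.0, drive / leak)             # output thresholded at zero (Eq. 3)

def lsm_dyn(a, w, k_static=1.0, tau_dyn=1 / 100, dt=1e-3):
    """LSM with dynamic leak (sketch of Eq. 2): a first-order RC low-pass filter."""
    target = lsm_no_dyn(a, w, k_static)
    out = np.zeros_like(target)
    A = 0.0
    for t in range(len(target)):
        A += (dt / tau_dyn) * (target[t] - A)        # forward-Euler RC update
        out[t] = max(0.0, A)
    return out
```

With noisy inputs, the with-leak output follows the same mean trajectory but with the rapid fluctuations smoothed away, mirroring Figures 1F,G.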
The output activity for the LSM neuron with dynamic leak (A_Dyn, Figure 1G) exhibits a low-pass filtering effect, reflecting the RC component integrated into the LSM neuron model with the dynamic leak.

[Figure 1 caption, in part: The excitatory and inhibitory sensory input spike trains were convoluted using a kernel function (see section "Materials and Methods") to create input that resembles post-synaptic potentials. Note that the input to the LSM neuron can exceed 1 a.u., while the output of the LSM neuron cannot. Calibration applies to (C-E) (traces in the shaded region). (E) The input from summation of the excitatory and inhibitory inputs. (F) LSM (without dynamic leak, k_static = 1) output activity for the given PSP inputs. Calibration applies to (F,G). (G) LSM (with dynamic leak, k_static = 1, τ_Dyn = 1/100 s) output activity for the given PSP inputs.]

Figure 2 illustrates the impact of various static and dynamic leaks. As indicated in Figure 2A, the static leak constant acts as a normalization factor for the total neuron activity, without diminishing the underlying dynamics of that activity (Supplementary Figure 2). At very low values of the static leak constant, the mean activity reached sufficiently high levels for the reversal potential to start having a significant dampening effect on the activity dynamics (see uppermost trace in Figure 2A), substantially reducing the coefficient of variation (CV in Figure 2B). Figures 2C,D illustrate the additional impact of various values of the dynamic leak constant. Figure 2C and Supplementary Figure 3 demonstrate the filtering effect of the dynamic leak constant on the total neuron activity. A high value of this dynamic leak constant substantially smoothens the activity dynamics, which was reflected in the resulting low CV value (Figure 2D). The dynamic leak constant (τ_Dyn) was set to 1/100 s for the rest of this study, unless otherwise specified.
In Appendix 1, we show that the LSM neuron model can be derived from a simplified H-H type conductance-based neuron model. In the H-H model, the leak is proportional to the membrane voltage, and the synaptic currents are scaled depending on the membrane voltage, so that the voltage is limited to a fixed range. The differential equation describing this model suffers from numerical instability; we therefore solve it with the implicit Euler method. The model is simple enough that an analytical solution can be obtained. Key H-H model features captured by the LSM neuron are: (i) the response to a given input scales with the difference between the current activity level (membrane potential, V) and the reversal potentials of the excitatory/inhibitory inputs (which have been normalized to +1 and -1, respectively); and (ii) the impact of a given input is scaled by the degree of shunting caused by the total synaptic activity the neuron receives at that time.

Izhikevich Neuron Model (IZ)
For the Izhikevich neuron model (Izhikevich, 2003), the membrane potential (IZ_v) and the adaptation variable (IZ_u) were updated via the following nonlinear differential equations, discretized using Euler's method:

d(IZ_v)/dt = IZ_A * IZ_v^2 + IZ_B * IZ_v + IZ_C - IZ_u + IZ_k * IZ_input (4)

d(IZ_u)/dt = IZ_a * (IZ_b * IZ_v - IZ_u) (5)

When the membrane potential reached the spike depolarization threshold of 30 mV, one spike was produced, followed by a reset: IZ_v ← IZ_c and IZ_u ← IZ_u + IZ_d. The IZ_A, IZ_B, and IZ_C parameters and the spiking threshold were the standard ones of the Izhikevich artificial neuron model, whereas the parameters IZ_a, IZ_b, IZ_c, and IZ_d were selected (Table 1 and Figures 3E,F) to mimic regular spiking behavior (Izhikevich, 2003, 2004). IZ_input was the input current to the neuron model, which in this article was the weighted synaptic activity (w * a), and IZ_k is the input gain factor. Further, to analyze the IZ model behavior across different spiking and bursting behaviors, we explored the parametric space (Table 2 and Figure 3G) of IZ_a, IZ_b, IZ_c, IZ_d, and IZ_k (parameters in Eqs. 4, 5) within the boundaries identified in Izhikevich (2003). We investigated the IZ neuron model responses (Figure 3G) across 405 different parameter settings for each given input spike frequency. The parameter space was defined by the possible combinations of parameters listed in Table 2.
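A compact Python sketch of the standard Izhikevich update with regular-spiking parameters follows (the article's implementation was in MATLAB; the parameter names a, b, c, d correspond to the article's IZ_a..IZ_d, and k_in to IZ_k):

```python
import numpy as np

def izhikevich(I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0, k_in=1.0):
    """Euler-discretized Izhikevich neuron (Izhikevich, 2003), regular-spiking defaults.

    I : sequence of input currents per time step (dt in ms).
    Returns the indices of the time steps at which spikes occurred.
    """
    v, u = -65.0, b * -65.0          # resting membrane potential and adaptation
    spikes = []
    for t, It in enumerate(I):
        # Eq. 4: standard coefficients IZ_A = 0.04, IZ_B = 5, IZ_C = 140
        v += dt * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + k_in * It)
        u += dt * (a * (b * v - u))  # Eq. 5
        if v >= 30.0:                # spike depolarization threshold (mV)
            spikes.append(t)
            v, u = c, u + d          # reset after spike
    return spikes
```

A tonic current step (for example I = 10) elicits the regular spiking pattern used for the comparison in Figures 3E,F.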

Network Connectivity
Our network was a two-layer, fully connected neuronal network that comprised both inhibitory neurons (IN) and excitatory neurons (EN) (Figure 4A). This network configuration provides a simple system that includes the critical element of recurrency. The network architecture was defined by the following two rules: (a) the sensory inputs are projected as excitatory synapses to all neurons in layer 1 only; (b) all excitatory and inhibitory neurons were fully reciprocally connected, both within and between layers. Most of the analysis reported here utilized a "5 × 4" network architecture (five ENs and five INs in layer 1 and four ENs and four INs in layer 2). In the analysis of Figure 8, where different network sizes were explored, we simply scaled up the number of neurons in each layer using the same connectivity rules.
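The two connectivity rules above can be sketched as the construction of a single weight matrix. The layout below (inhibitory presynaptic neurons contributing negative weights, no autapses in the default network) is an illustrative assumption consistent with the description, not the article's exact data structure:

```python
import numpy as np

def build_network(nE1=5, nI1=5, nE2=4, nI2=4, mu=0.4, cv=0.2, seed=0):
    """Fully reciprocal two-layer network (the default "5 x 4" architecture).

    Returns W, a (N, N) matrix with W[i, j] = weight of the synapse from
    neuron j onto neuron i, plus a boolean mask of inhibitory neurons.
    Weights are Gaussian (sigma = cv * mu); outputs of inhibitory neurons
    are negated, and the diagonal is zeroed (no autapses by default).
    """
    rng = np.random.default_rng(seed)
    N = nE1 + nI1 + nE2 + nI2
    inhibitory = np.zeros(N, dtype=bool)
    inhibitory[nE1:nE1 + nI1] = True        # layer-1 INs
    inhibitory[nE1 + nI1 + nE2:] = True     # layer-2 INs
    W = rng.normal(mu, cv * mu, size=(N, N))
    W[:, inhibitory] *= -1.0                # inhibitory outputs are negative
    np.fill_diagonal(W, 0.0)                # default network has no autapses
    return W, inhibitory
```

For the autapse variant discussed below, the diagonal would simply be left (or re-filled) with sampled weights instead of being zeroed.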
A two-layer, fully reciprocally connected neuronal network architecture with self-recurrent connections (autapses) was also investigated. In this specific network architecture, in addition to the network connectivity defined above, each neuron also made a synapse onto itself. As in the default network, the sensory inputs were pseudorandomly generated and provided as excitatory input to both excitatory and inhibitory neurons. We also tested our recurrent networks with higher input sensor density (#sensors = 6, 15, 30, and 50, Figures 7C,D), the inputs of which were also pseudorandomly generated.

Pseudo-Random Inputs
For the sensory inputs to the LSM and the IZ neurons, we generated pseudorandom spike trains at several different average frequencies (50, 100, 150, and 200 Hz, Figure 3A and Supplementary Figure 4) with uniformly distributed spike times. We used the inbuilt MATLAB function "randi" to generate the spike time distributions in these spike trains. Furthermore, these spikes were convoluted to resemble post-synaptic potentials (time-continuous activity) using the following kernel equation (Mazzoni et al., 2008):

PSP(t) = τ_km * (e^(-(t - t* - τ_kl)/τ_kd) - e^(-(t - t* - τ_kl)/τ_kr)), for t ≥ t* + τ_kl (6)

where t* is the input spike time, τ_kd is the decay time (4 ms), τ_kr is the rise time (12.5 ms), τ_km is the constant setting the ratio between rise time and decay time (21.3 ms), and τ_kl is the latency time, which is zero in this case. These values were chosen based on previous work (Rongala et al., 2018). The convoluted sensor signal was then provided as synaptic input to the neuronal network.
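The difference-of-exponentials kernel and the spike-train convolution can be sketched in Python (the article used MATLAB). Note that the normalization of the kernel is our assumption; the sketch also assigns the larger of the two published time constants to the slow (decaying) exponential so that the kernel is positive, since the exact assignment cannot be recovered from the text:

```python
import numpy as np

def psp_kernel(dt=1e-4, tau_kd=4e-3, tau_kr=12.5e-3, tau_km=21.3e-3,
               tau_kl=0.0, duration=0.1):
    """Difference-of-exponentials PSP kernel (after Mazzoni et al., 2008).

    Time constants follow the article (4 ms, 12.5 ms, 21.3 ms scaling,
    zero latency); dt and duration are in seconds.
    """
    t = np.maximum(np.arange(0.0, duration, dt) - tau_kl, 0.0)
    tau_slow = max(tau_kd, tau_kr)   # slow exponential -> decay phase
    tau_fast = min(tau_kd, tau_kr)   # fast exponential -> rise phase
    return tau_km * (np.exp(-t / tau_slow) - np.exp(-t / tau_fast))

def spikes_to_psp(spike_times, dt=1e-4, duration=1.0):
    """Convolve a spike train (times in seconds) with the PSP kernel."""
    train = np.zeros(int(duration / dt))
    idx = (np.asarray(spike_times) / dt).astype(int)
    train[idx[idx < len(train)]] = 1.0
    return np.convolve(train, psp_kernel(dt), mode="full")[:len(train)]
```

The resulting time-continuous signal rises from zero at each spike time, peaks after a few milliseconds, and decays, as in Figures 1C-E.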
In order to analyze the network dynamics, we provided 50 presentations of the same pseudorandom spike trains (for each given average frequency). Each input presentation differed by the addition of random noise of ±10 ms to individual spike times (Figures 3C-E, black lines) relative to the reference pseudorandom spike train (Figure 3A, for a spike frequency of 50 Hz). These presentations were concatenated without pause or reset between them, so the input subdivided into 50 presentations was in effect one long presentation lasting 50,000 ms.

FIGURE 4 | Activity dynamics in a sample recurrent network. (A) Principles of the connectivity structure in the recurrent network studied. Note that the default network, and the one used in panels (B-I) of this figure, contained excitatory neurons (five in the input layer, four in the output layer) and inhibitory neurons (same numbers) with the same connectivity, whereas only the neurons in layer 1 also received sensory inputs (the same six sensory inputs to each neuron), with all synapses having randomly generated Gaussian weights (mu = 0.4). (B) Frequency plot of the activity in an excitatory neuron.
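The presentation scheme above (50 jittered repeats of one reference train, concatenated into a single 50,000 ms stream) can be sketched as follows; details beyond the stated ±10 ms jitter and uniform reference spike times are illustrative assumptions:

```python
import numpy as np

def jittered_presentations(rate_hz=50, n_pres=50, pres_ms=1000,
                           jitter_ms=10, seed=0):
    """Build n_pres jittered copies of one reference spike train.

    A reference train with uniformly distributed spike times (the article
    used MATLAB's "randi") is repeated with +/- jitter_ms of noise added
    to each spike time; presentations are concatenated without pause,
    giving one 50,000 ms stream for the defaults.
    """
    rng = np.random.default_rng(seed)
    n_spikes = int(rate_hz * pres_ms / 1000)
    ref = np.sort(rng.integers(0, pres_ms, n_spikes))       # reference train (ms)
    out = []
    for p in range(n_pres):
        jit = ref + rng.uniform(-jitter_ms, jitter_ms, n_spikes)
        out.append(np.clip(jit, 0, pres_ms - 1) + p * pres_ms)  # offset per presentation
    return ref, np.concatenate(out)
```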
To allow a comparison with the output of the LSM, we also convoluted the output spike trains of the spiking neuron model (IZ). The process of convolution emulated the post-synaptic response that would have been generated in a receiving neuron, whereas the LSM output itself directly corresponded to such a signal.

Synaptic Weights
All excitatory and inhibitory synaptic weights in the network were randomly distributed, including the excitatory sensory inputs to the layer 1 neurons. The synaptic weight distributions were either normal, log-normal, or binary. The normal and log-normal distributions were generated for different mean weights (µ) (values between 0.1 and 0.5), each with a fixed coefficient of variation (CV) of 20% [where sigma (σ) = (CV / 100) * µ]. For binary distributions, we tested different probabilities of high-weight synapses (w = 1) (probability varied between 10 and 50%), whereas the remainder of the synapses were set to zero weight (Figures 5A-C).
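The three weight-sampling schemes can be sketched in Python; the log-normal parameterization (matching the requested mean µ and sigma = CV·µ) is one reasonable reading of the text:

```python
import numpy as np

def sample_weights(n, kind="normal", mu=0.4, cv=0.2, p_high=0.3, seed=0):
    """Sample n synaptic weights from one of the three families used.

    cv is the coefficient of variation as a fraction (0.2 = 20%), so
    sigma = cv * mu for the normal and log-normal cases. For "binary",
    weights are 1 with probability p_high and 0 otherwise.
    """
    rng = np.random.default_rng(seed)
    sigma = cv * mu
    if kind == "normal":
        return rng.normal(mu, sigma, n)
    if kind == "lognormal":
        # Underlying normal parameters chosen so the samples have mean mu
        # and standard deviation sigma (standard moment matching).
        s2 = np.log(1.0 + (sigma / mu) ** 2)
        return rng.lognormal(np.log(mu) - s2 / 2.0, np.sqrt(s2), n)
    if kind == "binary":
        return (rng.random(n) < p_high).astype(float)
    raise ValueError(f"unknown distribution kind: {kind}")
```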

Cross-Correlation
The correlation index measure was used to compute the similarity of the responses of the neuron models (Figures 3F,G and Supplementary Figure 4E). The correlation between two signals was computed with the inbuilt MATLAB function "xcorr" (normalized, at zero lag), which produces values from 0 (uncorrelated) to 1 (identical).
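For reference, the zero-lag normalized cross-correlation reduces to a cosine similarity between the two signals; the 'coeff'-style normalization below is our assumption, inferred from the reported 0-to-1 range:

```python
import numpy as np

def correlation_index(x, y):
    """Zero-lag normalized cross-correlation, analogous to MATLAB's
    xcorr(x, y, 0, 'coeff'): the dot product of the signals divided by
    the product of their Euclidean norms."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))
```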

Frequency Analysis
We performed a continuous wavelet transform (using an inbuilt MATLAB function "cwt") in order to define the frequency composition of the input signal over time. The wavelet transform was used to extract the power of each frequency band as a function of time for the continuous neuron activity signal. Here, we reported (Figures 4-8 and Supplementary Figures 5-7), for each frequency band, the maximum power of the signal within each input presentation time window (1 s).
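The per-window power readout can be illustrated with a plain per-presentation FFT in Python. This is a stand-in for MATLAB's "cwt": it yields one power value per frequency band per 1 s window (the quantity reported), rather than a continuous wavelet scalogram from which the within-window maximum is taken:

```python
import numpy as np

def power_per_window(signal, fs=1000, win_s=1.0):
    """Power per frequency band for each presentation window.

    Splits the signal into consecutive win_s-second windows, removes each
    window's mean, and returns (freqs, P) where P[i, j] is the power of
    frequency bin j within window i.
    """
    n = int(fs * win_s)
    n_win = len(signal) // n
    segs = np.asarray(signal[:n_win * n], dtype=float).reshape(n_win, n)
    segs = segs - segs.mean(axis=1, keepdims=True)       # remove DC per window
    P = np.abs(np.fft.rfft(segs, axis=1)) ** 2           # power spectrum per window
    return np.fft.rfftfreq(n, 1.0 / fs), P
```

Plotting, per band, the values across windows reproduces the kind of presentation-by-presentation frequency plots shown in Figures 4-8.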
In Supplementary Figure 5, the frequency analysis was performed on the sensory input signals (on the convoluted signal for each given average spiking frequency) across all six sensory inputs for all 50 presentations (see section "Materials and Methods"). The maximum power was computed for each sensory input and each presentation, and the average across all six input sensors is reported in that figure.
In Figure 4, the frequency analysis was performed on the activity of one excitatory and one inhibitory neuron in layer 1 (Figures 4B,C,F,G).

Comparison to Spiking Neuron Models
We first characterized the input-output relationship of the neuron model in isolation (Figure 3) for a standardized sensor input, consisting of randomized spike times in six sensor neurons that were convoluted to time-continuous input signals. These were synaptically integrated by a single modeled neuron (Figures 3A,B). The activity of the non-spiking LSM (linear summation neuron model) was compared with that of a spiking neuron model (Izhikevich, IZ), in terms of how well their outputs (Figures 3C-E) correlated with the input (Figure 3F). The IZ neuron model was chosen for this comparison, as it was created to capture rich neuronal response dynamics with computational efficiency (Izhikevich, 2003). The spikes generated by the IZ neuron model were convoluted (see section "Materials and Methods") to a time-continuous signal (Figure 3E) in order for it to be comparable with the output of the LSM.
Both neuron models (LSM and IZ) were provided with the same pseudo-random sensory inputs (average firing frequency of 50 Hz in each of six sensors, see section "Materials and Methods") connected via six different synapses (Figure 3B). The IZ neuron model parameters for this particular comparison (Figures 3E,F) were chosen to mimic regular spiking behavior (hypothesized to be a common neuron behavior in cortex; Izhikevich, 2003, 2004). The LSM neuron without dynamic leak reproduced on average a close representation of the source convolution signal for the input, but the individual traces were considerably noisier without dynamic leak (Figure 3C) than with dynamic leak (Figure 3D). The main difference between the LSM and the IZ neuron model responses was that the IZ neuron model tended to create output dynamics that were not present in the input signal (Figures 3B,E), a consequence of the binary nature of the spike output. A cross-correlation analysis between the neurons' responses (Figure 3F) showed that the IZ neuron model reflected the input signal less faithfully than the LSM. Note that the cross-correlation is slightly poorer for the LSM with dynamic leak than without, because some of the fine timing details of the high frequency components of the underlying convoluted signal are slightly filtered out by the dynamic leak.
We tested whether this observation depended on the frequency of the spiking in the sensory inputs. The LSM consistently showed a higher correlation with the input signal than the IZ neuron model across a range of input spike frequencies (Supplementary Figure 3). Next, we tested whether the specific parameters chosen for the IZ neuron model (Figure 3E, also indicated by an asterisk in Figure 3G) were responsible for these results (Figure 3F). Therefore, we tested a range of parameter settings (405 different parameter combinations), which are known to reproduce specific output dynamics (bursting, for example) observed in a variety of neuron types in vitro (Izhikevich, 2003). The correlation analysis showed that the LSM was more consistent than the IZ model in maintaining a high correlation with the sensory inputs across the full range of sensory input spike frequencies (Figure 3G). The exception was the highest sensory input frequencies, which can be explained by the fact that the dynamics of the sensory input diminish due to the density of the inputs (Supplementary Figures 3A-D), as previously described also for neurons in vivo (Bengtsson et al., 2011). This effect, which we will refer to as the input density problem, is also evident in Figure 3G.

Network Dynamics (With and Without Neuronal Dynamic Leak)
We next investigated the activity dynamics of a standardized recurrent neuronal network implemented using the LSM (Figure 4A). The sensory input was fed as excitatory input to both the excitatory and inhibitory neurons of the first layer for 50 presentations, where sequential presentations differed by noise added to the sensory signal (see section "Materials and Methods"). In the network with the neuron model without dynamic leak, there was initially a gradual increase in the power across the higher frequency components of the activity in both excitatory and inhibitory neurons (Figures 4B,C) (more extensively illustrated in Supplementary Figure 6, where the first few presentations of sensory input evoked a lower power response). These high frequency components were not present in the sensory input (Supplementary Figure 5A) and were therefore generated by the network, most likely as a consequence of the parallel excitatory and inhibitory connections, which would be expected to lead to some degree of signal derivation (Wehr and Zador, 2003). Interestingly, in the illustrated IN1 the high frequency components gradually built up (until approximately presentation #10) and then faded away (after approximately presentation #20), despite the fact that the average intensity of the sensory input did not vary over time, which suggests a relatively rich internal dynamics in this type of recurrent network, despite its limited size. In neurons of the second layer, high frequency components typically faded away more slowly (Supplementary Figure 6). The appearance of these high frequency components was sometimes associated with the appearance of transients in the neuron activity (Figures 4D,E). In contrast, in the same network but with the neuron model including the dynamic leak, the transients and the high-frequency components of the neuron activity disappeared (Figures 4F-I).
Hence, the low-pass filtering effect of the dynamic leak "rescued" the recurrent network from generating spurious high-frequency components.
The recurrent connections of the network likely contributed strongly to these high-frequency components. An extreme case of recurrent connectivity is when a neuron makes a synapse onto itself (an autapse). It is not clear to what extent autapses exist in adult neuronal circuitry, but they have been shown to be present in early development (Lübke et al., 1996; Tamás et al., 1997), and self-recurrent connections are widely used in the fields of RNNs and computational neuroscience (Graves, 2012). To explore the impact of autapses, we used the exact same network architecture as in Figure 4 but added autapses to all neurons (Supplementary Figure 7A). In this scenario, the high-frequency components were strongly amplified (Supplementary Figures 7B-E). However, in the same network with the neuron model with the dynamic leak, the transients and the high-frequency components of the neuron activity were again effectively removed (Supplementary Figures 7F-I). We did not explore networks with autapses any further.
We next compared the frequency power distributions of the neuronal activity in this recurrent network across a range of different synaptic weight distributions (Figure 5). We studied three different types of synaptic weight distributions (Gaussian, log-normal, and binary). For each type of distribution, we tested five different mean synaptic weights (Figures 5A-C). Moreover, for each given distribution type and mean weight, we generated five random weight distributions. The average signal of these five random weight distributions was used to calculate each frequency power distribution illustrated in Figures 5D,E, where each line represents the average activity across all the neurons of the network. In the network with the neuron model without dynamic leak (Figure 5D), the relative power of the high-frequency components was amplified for mean synaptic weights of approximately 0.3-0.4 or above (µ ≥ 0.4 for Gaussian and µ ≥ 0.3 for log-normal distributions) and for p > 10% for the binary distribution, compared to the sensory input (Supplementary Figure 5). For other synaptic weight distributions (for example, µ = 0.1 for Gaussian and log-normal distributions and p = 10% for binary distributions), there was much lower overall activity in the network, which could be the reason why the high frequency components were not induced in these networks. In the network with the neuron model with the dynamic leak component, the transients and the high-frequency components of the neuron activity disappeared for all settings (Figure 5E), though the setting of the dynamic leak component used also appeared to over-dampen the sensory input dynamics between 20 and 200 Hz. Note that each curve in Figures 5D,E represents the mean across five randomized repetitions. Supplementary Figure 8 instead shows the frequency power distribution of the neuronal activity for each individual network, sorted by synaptic weight distribution.
The overlap between these frequency power distribution curves across randomized weights and different distribution means implies that there was no simple linear relationship between the network structure and the spurious high frequency components.
To further explore if the high-frequency components observed were induced by the recurrent network, we tested if we could affect the "center of gravity" of the high-frequency components by introducing different conduction delays in signal transmission between the neurons (Figure 6). In the brain in vivo, these would correspond to the axonal conduction delays and synaptic delays combined. The delays were randomized between all the neurons, and several different mean delays were tested in different simulations. Interestingly, a supplementary frequency peak component (frequency peaks that were not present in the input sensory data / network dynamics without conduction delays, indicated with dashed vertical lines in Figure 6A) was observed without dynamic leak. These supplementary frequency peaks were approximately inversely proportional to the mean conduction delay. From previous studies (Jirsa and Stefanescu, 2011;Petkoski et al., 2018) we know that conduction delays could introduce such additional dynamics into the recurrent networks. These peaks were removed by adding dynamic leak ( Figure 6B).
The high frequency components also appeared for lower average sensor input spike frequency (50 and 100 Hz, Figure 7A) and for lower number of sensory inputs per neuron (nSensors = 6 and 15, Figure 7C). In contrast, for higher input spike frequencies (150 and 200 Hz) and higher number of sensory inputs (nSensors = 30 and 50) the increased density of the inputs resulted in a paradoxical decrease in the power of the neuron activity across all frequencies analyzed (i.e., as shown in Supplementary Figure 1), most likely due to the large number of randomized inputs regressing toward the constant mean frequency of each sensory signal. In each case, in the network with the neuron model with the dynamic leak component, the high-frequency components of the neuron activity disappeared for the sensor input configurations where it had been present (Figures 7B,D).
We also explored whether the size of the network could be a factor in the appearance of the high frequency components. We found that these high frequency components appeared for different network sizes and that in those cases the network activity was "rescued" when the LSM was implemented with the dynamic leak (Figure 8). Depending on the specific synaptic weight distribution, the high frequency components became dominant to different degrees for different network sizes (Figures 8A,E), with no clear relationship between the two. The largest network as a rule had the weakest overall dynamics, which could be due to the same input density problem discussed above: because a larger network has a higher number of recurrent synaptic inputs per neuron, the density of synaptic input increased, which damped the dynamics of the neuron activity. As there is a tendency for membrane time constants to grow with the size of the neuron (Zhang, 2004), we scaled τ_Dyn with the network size (as the neurons of the larger networks had a higher number of synapses) (Figures 8C,D,G,H, for two different weight distributions). A moderate scaling of τ_Dyn (with the square root of the number of synapses, Figures 8C,G) actually increased the dynamics of some network sizes, while eliminating high frequency components. In contrast, a linear scaling (Figures 8D,H) instead appeared to dampen such dynamics and, unsurprisingly, also low-pass filtered signals well below 100 Hz for the largest networks.
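The two τ_Dyn scalings tested above can be written down directly; the baseline values (tau_base, n_syn_base) below are illustrative placeholders, not the paper's settings:

```python
import math

# Assumed baseline values for the smallest network (illustrative only).
tau_base = 0.01    # s, baseline tau_Dyn
n_syn_base = 30    # recurrent synapses per neuron in the smallest network

def tau_sqrt(n_syn):
    """Moderate scaling: tau_Dyn grows with the square root of synapse count."""
    return tau_base * math.sqrt(n_syn / n_syn_base)

def tau_linear(n_syn):
    """Linear scaling: much stronger low-pass filtering for large networks."""
    return tau_base * (n_syn / n_syn_base)
```

For a network with 4x the synapses per neuron, the square-root rule doubles τ_Dyn while the linear rule quadruples it, which is why the linear scaling also filtered out signals well below 100 Hz in the largest networks.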

DISCUSSION
We explored the properties of a non-spiking neuron model derived from the differential conductance-based H-H model when deployed in various recurrent neuronal networks. We found that in these recurrent networks, many different factors tended to trigger network induction of high frequency signal components of somewhat unpredictable magnitude and distribution (e.g., Figures 5-8 and Supplementary Figure 8). These signal components were not present in the input data (Supplementary Figure 5) and sometimes peaked to create overt spurious transients (Figures 4B,C). The dynamic leak in our neuron model invariably "rescued" the recurrent networks from their tendency to self-generate these high-frequency signal components (Figures 4-8 and Supplementary Figure 7). Corresponding to the capacitive component and the ion channels of the membrane circuit, dynamic leak is an inevitable feature of real neurons. Furthermore, this low-pass filter component made the behavior of recurrent networks of different sizes more predictable.
We worked under the scenario that neuronal networks in the brain are recurrent and that excitatory and inhibitory connections are both pervasive, without any a priori assumed structure. Our network architecture contained the circuitry elements of previously reported "classical" network connectivity patterns (feedback and feedforward inhibition, for example). Feedforward and feedback inhibition running in parallel with excitatory connections was likely the main network feature that caused the signal derivation effects, i.e., the high frequency components, in the networks without the dynamic leak. The inclusion of autapses in the recurrent network strongly amplified these high frequency components (Supplementary Figure 7), presumably primarily through self-amplification of excitatory neurons. Note, however, that in a recurrent network any local circuitry feature will, at the global level, automatically give rise to other functional network features as well. Hence, in contrast to a non-recurrent, feedforward neuronal network, in a recurrent network these circuitry features become less clear-cut from a functional point of view. This could cause additional dynamic network effects, which could, for example, explain our observation of a gradual build-up of high frequency power components (Figure 4 and Supplementary Figure 5) while a steady sensory input level kept the network activity up. Understanding such network dynamics at a deeper level was outside the scope of this paper, but would need to be addressed if such networks are to be used in a functional setting.
In our recurrent networks, apparently spurious high frequency components could be induced for different types of synaptic weight distributions, conduction delays between neurons, sensory input densities and network sizes. It was hard to predict under which exact conditions such high frequency components would become more or less dominant (e.g., Figure 8 and Supplementary Figure 8), but in each case the dynamic leak effectively canceled them out. From the point of view of the functionality of a processing recurrent network, the fact that the frequency distribution of any given network did not match that of the sensory input is not automatically a disadvantage, because the goal of a processing network is not to perfectly replicate the sensory input. However, the fact that these high frequency components sometimes took the shape of clear-cut transients with no obvious counterpart in the sensor signal suggests that, at least in part, they should be considered spurious, i.e., noise injected into the signal by the dynamics of the specific network.
In some cases, the activity of the network became highly suppressed relative to the sensory input (e.g., for low mean weights in Figure 5 and for the largest network in Figure 8). This effect can be ascribed to the input density problem: when too many unrelated but continuously active synaptic inputs converge on the same neuron, their signal dynamics tend to cancel out, leaving the neuron with very little signal dynamics (Bengtsson et al., 2011). As these signals, due to the network structure, are paralleled by inhibitory connections, once the signal dynamics is lost, inhibition and excitation cancel each other out and the activity dynamics of the network as a whole is lost.
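The cancellation at the core of the input density problem can be illustrated with a toy calculation: the fluctuations of the mean of N independent, continuously active inputs shrink roughly as 1/sqrt(N). The signal statistics and input counts below are illustrative, not taken from the paper's simulations:

```python
import numpy as np

# Toy demonstration: dense convergence of unrelated inputs washes out
# signal dynamics (assumed unit-variance noise inputs).
rng = np.random.default_rng(2)
n_steps = 5000

def summed_input_std(n_inputs):
    """Std of the mean of n_inputs independent unit-variance signals."""
    signals = rng.normal(0.0, 1.0, size=(n_inputs, n_steps))
    return signals.mean(axis=0).std()

few = summed_input_std(6)     # sparse convergence: dynamics preserved
many = summed_input_std(600)  # dense convergence: dynamics largely cancel
```

With 100x more converging inputs, the residual dynamics of the summed drive drop by roughly a factor of ten, leaving little signal for the downstream neuron.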
How would spiking neuron networks fare with respect to rescuing a recurrent network from spurious high frequency components? The phasic nature of discrete spike output would be expected to worsen the problem, whereas refractoriness would tend to dampen it. Refractoriness could certainly rescue the system from the extreme transients observed in networks that included autapses. Refractoriness, however, would not rescue the system from high frequency components generated through longer range recurrent excitatory loops.
Global stability has long been a concern in recurrent neural networks (RNNs) because of the non-linear dynamics that can arise from the recurrency and the resulting feedback loops (Shen and Wang, 2011; Zhu and Shen, 2013). Periodic oscillations and stability issues in RNNs can arise, for example, as a consequence of input noise (Pham et al., 1998) and neuron activation delays (Gopalsamy and Leung, 1996). Moreover, such oscillations are inherent to any dynamical system with recurrency and amplification, as exemplified by parasitic oscillations in electronics and steady-state error in control theory. Apparently, the central nervous system, with its prodigious recurrent loops (see section "Introduction") and intrinsic noise, has found a way to avoid such oscillations. In this study we explored the possibility that neuronal leak dampens such oscillations in a recurrent neuronal network.
Recurrent neuronal networks with balanced excitatory and inhibitory synaptic connections have been extensively studied previously (Brunel, 2000; Vogels and Abbott, 2005; Vogels et al., 2011; Rubin et al., 2017), using spiking neuron models (employing integrate-and-fire or related mechanisms for spike generation). In these studies, the recurrent connections were sparsely distributed with an overall connection probability of 1-2%, and a ratio of 4:1 excitatory to inhibitory neurons. These studies point out that factors such as high connection probability and unbalanced excitation-inhibition tend to produce network instability (Rubin et al., 2017) and, in some cases, failure of signal propagation across the layers of those neuronal networks (Vogels and Abbott, 2005). Given the stability we observed across a wide range of recurrent network configurations, always at 100% connection probability (though weighted), the LSM with dynamic leak would seem beneficial for ensuring stable recurrent neuronal network behavior across a range of network sizes and connection densities.
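The network configuration used here — equal numbers of excitatory and inhibitory neurons, 100% connection probability, and randomly distributed weights — can be sketched as a weight matrix. The weight statistics and network size below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

# Sketch of a fully connected recurrent network with equal E/I populations
# and random weights (assumed uniform weight magnitudes for illustration).
rng = np.random.default_rng(3)
n_exc, n_inh = 40, 40
n = n_exc + n_inh

# Random magnitude for every synapse: full (100%) connectivity, weighted.
magnitudes = rng.uniform(0.0, 0.1, size=(n, n))

# Sign follows the presynaptic neuron's type: the first n_exc columns are
# excitatory (+), the remaining columns inhibitory (-).
signs = np.concatenate([np.ones(n_exc), -np.ones(n_inh)])
W = magnitudes * signs[np.newaxis, :]

np.fill_diagonal(W, 0.0)  # no autapses in the baseline configuration
```

Assigning the sign by presynaptic column enforces Dale's principle, and zeroing the diagonal matches the baseline networks, where autapses were only added in a supplementary variant.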
The present findings suggest that the biological feature of neuronal dynamic leak, which causes the polarization (i.e., the activity) of the neuron to settle toward resting potential with a time constant, is an important functional feature. It allows brain networks to fully utilize recurrent neuronal network architectures with variable numbers of participating neurons without risking self-generated noise embodied as high frequency components and spurious transients.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are publicly available. This data can be found here: https://figshare.com/projects/Non-Spiking_Neuron_Model_With_Dynamic_Leak_Rongala_et_al_2021_/112854. Further inquiries can be directed to the corresponding author/s.

AUTHOR CONTRIBUTIONS
UR: design of study, design of model, data analysis, and writing the manuscript. JE and MK: design of model. GL: design of model and writing the manuscript. HJ: design of study, design of model, and writing the manuscript. All authors contributed to the article and approved the submitted version.

FIGURE | A comparison between the output responses for the LSM (green line is the mean across 50 presentations) and the H-H model (derived using the backward Euler method; blue line is the mean across 50 presentations), for a given pseudo-random sensory input at 50 Hz for each of six sensors (see Figure 3). The responses of the LSM output were offset by 0.1 activity (a.u.) in order to visualize the coherence between the responses of both neuron models. The cross-correlation (with zero lag) was 0.99.

APPENDIX 1 Neuron Model Derivation From H-H Model
We first describe the rationale for our Linear Summation neuron model (LSM). In brief, the LSM aims to provide a simple and computationally efficient neuron model, while capturing important characteristics of H-H conductance models (Hodgkin and Huxley, 1952;Rongala et al., 2018). The membrane potential in the LSM model is normalized between +1 and -1 with a resting potential of zero. The output of an LSM neuron is a continuous, non-spiking signal that reflects the portion of the membrane potential that exceeds some threshold, which we assumed to be the zero resting potential. This would be suitable to represent one neuron or a population of similarly connected neurons that is biased by background activity to be at or near spontaneous activity (such as is hypothesized for stochastic resonance to prevent dead bands) (Tougaard, 2002). This continuous output signal is intended to reflect the mean spike rate that a population of similarly connected neurons would transmit to other centers in the nervous system.
In H-H models, the various ion channels associated with ionic pumps and leaks define a resting membrane potential at which there are no net currents. Any change in membrane potential away from this resting potential will settle back to it according to the combined conductance of all of these ion channels, which is called the static leak. Synaptic activation leads to the opening of specific ion channels, which in H-H models is represented as a change of synaptic conductance. The ion(s) that are made permeable by the synapse have a reversal potential that differs from the resting membrane potential. When the ion channels of a particular synapse are open, the membrane potential is driven toward that reversal potential, with a strength that depends on the synaptic conductance relative to the static leak conductance.
The synaptic currents charge the neuron, which is modeled as a single capacitor (assuming that the electrotonic distances between different parts of the neuron are negligible). Various synaptic signals are thus integrated and converted into a dynamically changing membrane potential. The static leak is in parallel with this capacitor, thereby defining a time-constant τ for these dynamic changes. The effect is that of low-pass filtering of the integrated synaptic currents (Lindner, 2014) to produce the membrane potential that defines the output state of the neuron. In an H-H model, the output state is created by converting the membrane voltage into patterns of spike output, with the help of a threshold for spike generation. Spike generation is omitted in the LSM model and the output of the neuron is instead the part of the membrane potential that exceeds some threshold (herein equal to the resting membrane potential).
First, we describe a conductance-based model similar to previously presented models (Hodgkin and Huxley, 1952; Rongala et al., 2018) and then show how a model similar to the LSM can be derived. In this category of ion channel conductance-based models, the dynamics of each type of ion channel are lumped together into a single conductance. Compared to other conductance-based models, no spike generation is modeled and the neuron's resting potential is set to zero. The neuron is modeled as a capacitor with capacitance (C) that is charged by excitatory (I_syn,exc) and inhibitory (I_syn,inh) synaptic currents and discharged by a leak current (I_leak). The neuron's membrane potential (V), measured across the capacitor, therefore obeys the following equation.
C dV/dt = I_leak + I_syn,exc + I_syn,inh   (A1)

The leak current is proportional to the membrane potential through a constant conductance g_L, according to Ohm's law:

I_leak = -g_L * V

If the synaptic currents are zero, the neuron's membrane potential will therefore decay to zero, i.e., this model neuron's resting membrane potential is zero. At each excitatory synapse i, the firing rate of the presynaptic neuron (ν_i^+) produces a current. The synapse has a baseline conductance (w_i^+), called the synaptic weight, which is scaled by the presynaptic neuron's firing rate. The direction and magnitude of the current are determined by the difference between the constant reversal potential (E_exc) and the neuron's membrane potential. For an excitatory synapse, the current would reverse if the membrane potential rose above the positive reversal potential, so synaptic activity can never drive the membrane potential above the reversal potential; if the neuron has already reached this voltage, its voltage cannot increase further. The current contributed by an individual excitatory synapse can thus be modeled as

I_syn,exc,i = w_i^+ * ν_i^+ * (E_exc - V)

The same model applies to the current generated by an inhibitory synapse i, with reversal potential (E_inh), synaptic weight (w_i^-) and presynaptic firing rate (ν_i^-). The reversal potential of an inhibitory synapse is set so that the neuron's potential cannot decrease below this potential.
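The membrane equation above can be sketched as a simple forward Euler integration. All parameter values (C, g_L, reversal potentials, weights, rates, time step) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Minimal sketch of the conductance-based neuron: Euler integration of
# C dV/dt = I_leak + I_syn_exc + I_syn_inh, with assumed parameters.
C, g_L = 1.0, 1.0
E_exc, E_inh = 1.0, -1.0   # membrane potential normalized to [-1, +1]
dt = 0.001                 # integration time step (s)

def step(V, w_exc, rate_exc, w_inh, rate_inh):
    """One Euler step of the membrane equation."""
    I_leak = -g_L * V                                # drives V back to rest (0)
    I_exc = np.sum(w_exc * rate_exc) * (E_exc - V)   # bounded above by E_exc
    I_inh = np.sum(w_inh * rate_inh) * (E_inh - V)   # bounded below by E_inh
    return V + dt * (I_leak + I_exc + I_inh) / C

def output(V):
    """Non-spiking output: the part of V above the resting potential (0)."""
    return max(V, 0.0)
```

Note that the driving-force terms (E - V) make the reversal potentials self-limiting: excitation alone can push V toward but never beyond E_exc, and likewise for inhibition and E_inh.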