Self-Organization Toward Criticality by Synaptic Plasticity
- 1International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction, University of Tübingen, Tübingen, Germany
- 2Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- 3Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- 4Department of Physics, University of Göttingen, Göttingen, Germany
- 5Department of Computer Science, University of Tübingen, Tübingen, Germany
- 6Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany
Self-organized criticality has been proposed to be a universal mechanism for the emergence of scale-free dynamics in many complex systems, and possibly in the brain. While such scale-free patterns were identified experimentally in many different types of neural recordings, the biological principles behind their emergence remained unknown. Utilizing different network models and motivated by experimental observations, synaptic plasticity was proposed as a possible mechanism to self-organize brain dynamics toward a critical point. In this review, we discuss how various biologically plausible plasticity rules operating across multiple timescales are implemented in the models and how they alter the network’s dynamical state through modification of number and strength of the connections between the neurons. Some of these rules help to stabilize criticality, some need additional mechanisms to prevent divergence from the critical state. We propose that rules that are capable of bringing the network to criticality can be classified by how long the near-critical dynamics persists after their disabling. Finally, we discuss the role of self-organization and criticality in computation. Overall, the concept of criticality helps to shed light on brain function and self-organization, yet the overall dynamics of living neural networks seem to harnesses not only criticality for computation, but also deviations thereof.
More than 30 years ago, Per Bak, Chao Tang, and Kurt Wiesenfeld discovered a strikingly simple way to generate scale-free relaxation dynamics and pattern statistics that had been observed in systems as different as earthquakes [2, 3], snow avalanches, forest fires, or river networks [6, 7]. Thereafter, hopes were expressed that this self-organization mechanism for scale-free emergent phenomena would explain how any complex system in nature worked, and hence it did not take long until the hypothesis was sparked that brains should be self-organized critical as well.
The idea that potentially the most complex object we know, the human brain, self-organizes to a critical state was explored early on by theoretical studies [9–12], but it took more than 15 years until the first scale-free “neuronal avalanches” were discovered. Since then, we have seen a continuous and very active interaction between experiment and theory. The initial, simple, and optimistic idea that the brain is self-organized critical, similar to a sandpile, has been refined and diversified. Now we have a multitude of neuroscience-inspired models, some showing classical self-organized critical dynamics, but many employing a set of crucial parameters to switch between critical and non-critical states [12–16]. Likewise, the views on neural activity have been extended: we now have the means to quantify the distance to criticality even from the very few neurons we can record in parallel. Overall, we have observed in experiments how developing networks self-organize to a critical state [18–20], and how states may change from wakefulness to deep sleep [21–25], under drugs, or in a disease like epilepsy [27–30]. Criticality was mainly investigated in in vivo neural activity during resting-state dynamics [31–34], but there are also some studies of task-induced changes and of dynamics in the presence of external stimuli [35–39]. These results show how criticality and deviations thereof can be harnessed for computation, but can also reflect cases where self-organization fails.
Parallel to the rapid accumulation of experimental data, models describing the complex brain dynamics were developed to draw a richer picture. It is worth noting that the seminal sandpile model already bears a striking similarity to the brain: the distribution of heights at each site of the system corresponds beautifully to the membrane potentials of neurons, and in both systems, small perturbations can lead to scale-free distributed avalanches. However, whereas in the sandpile the number of grains naturally obeys a conservation law, the number of spikes or the summed potential in a neural network does not.
This points to a significant difference between classical SOC models and the brain: while in the SOC model the conservation law fixes the interaction between sites [40–44], in neuroscience, connection strengths are ever-changing. Incorporating biologically plausible interactions is one of the largest challenges, but also the greatest opportunity, for building the neuronal equivalent of a SOC model. Synaptic plasticity rules governing changes in the connection strengths often couple the interactions to the activity on different timescales. Thus, they can serve as the perfect mechanism for self-organizing and tuning the network’s activity to the desired regime.
Here we systematically review biologically plausible models of avalanche-related criticality with plastic connections. We discuss the degree to which they can be considered SOC proper, quasi-critical, or hovering around a critical state. We examine how they can be tuned toward and away from the classical critical state, and in particular, which biological control mechanisms determine self-organization. Our main focus is on models that exhibit scale-free dynamics as measured by avalanche size distributions. Such models are usually referred to as critical, although the presence of power laws in avalanche properties is not a sufficient condition for the dynamics to be critical [45–48].
Modeling Neural Networks With Plastic Synapses
Let us briefly introduce the very basics of neural networks, the modeling of neural circuits, and synaptic plasticity. Most of this knowledge can be found in greater detail in neuroscience textbooks [49–51]. The human brain contains about 80 billion neurons. Each neuron is connected to thousands of other neurons. The connections between the neurons are located on fine and long trees of “cables”. Each neuron has one such tree to collect signals from other neurons (the dendritic tree), and a different tree to send out signals to another set of neurons (the axonal tree). Biophysically, the connections between two neurons are realized by synapses. These synapses are special: only if a synapse is present between a dendrite and an axon can one neuron activate the other (but not necessarily conversely). The strength or weight of a synapse determines how strongly the activity of the presynaptic neuron influences the postsynaptic one.
Before we turn to studying synaptic plasticity in a model, the complexity of a living brain has to be reduced to a simplified model. Typically, neural networks are modeled with a few hundred or a few thousand neurons. These neurons are either spiking, or approximated by “rate neurons” which represent the joint activity of an ensemble of neurons. Such rate neurons also exist in vivo, e.g., in small animals, releasing graded potentials instead of spikes. Of all neurons in the human cortex, about 80% are excitatory; when active, excitatory neurons contribute to activating their post-synaptic neurons (i.e., the neurons to which they send their signal). The other 20% of neurons are inhibitory, bringing their post-synaptic neurons further away from their firing threshold. Effectively, an inhibitory neuron is modeled as having negative outgoing synaptic weights.
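As a toy illustration of this excitatory/inhibitory split, one can sketch a random weight matrix in which 80% of the neurons project positive weights and 20% project negative ones. The connection probability, weight scale, and inhibitory gain below are illustrative assumptions, not parameters of any specific model discussed in this review.

```python
import numpy as np

def random_ei_weights(n=100, frac_exc=0.8, p_conn=0.1, w=0.1, g=4.0, seed=0):
    """Random network with frac_exc excitatory and (1 - frac_exc) inhibitory
    neurons; inhibitory neurons have negative (and here stronger) outgoing
    weights. Column j holds the outgoing weights of neuron j."""
    rng = np.random.default_rng(seed)
    n_exc = int(frac_exc * n)
    sign = np.ones(n)
    sign[n_exc:] = -g                     # inhibitory columns: negative, gain g
    mask = rng.random((n, n)) < p_conn    # sparse random connectivity
    np.fill_diagonal(mask, False)         # no self-connections
    return mask * w * sign[np.newaxis, :]
```

With these assumptions, the sign of a column immediately identifies the neuron type, mirroring the convention that inhibition enters the model as negative synaptic weights.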
Numerous types of plasticity mechanisms shape activity propagation in neuronal systems. One type of plasticity acts at the synapses, regulating their creation and deletion and determining changes in their weights; another type regulates the intrinsic excitability of the neuron itself.
The reasons for, and mechanisms of, changing synaptic strength and neural excitability differ broadly. Changes of synaptic strengths and excitability in the brain occur on different timescales, which might be particularly important for maintaining critical dynamics. Some are very rapid, acting within tens of milliseconds or with every spike; others only make changes on the order of hours or even slower. For this review, we simplified the classification into three temporally and functionally distinct classes (Figure 1).
FIGURE 1. Schematic examples of synaptic plasticity. (A) Short-term synaptic depression acts on the timescale of spiking activity and does not generate long-lasting changes. (B) For spike-timing-dependent plasticity (STDP), a synapse is potentiated upon causal pairing of pre- and postsynaptic activity (framed orange) and depressed upon anti-causal pairing (framed green), forming long-lasting changes after multiple repetitions of pairing. (C) Homeostatic plasticity adjusts presynaptic weights (or excitability) to maintain a stable firing rate. After reduction of a neuron’s firing rate (e.g., after a lesion and reduction of input), the strengths of incoming excitatory synapses are increased to re-establish the neuron’s target firing rate. In contrast, if the actual firing rate is higher than the target rate, then synapses are weakened and the neuron returns to its target firing rate, on the timescale of hours or days.
The timescale of a plasticity rule influences how it contributes to the state and collective dynamics of brain networks. At the first level, we separate short-term plasticity, acting on the timescale of tens of milliseconds, from long-term plasticity, acting with a time constant of minutes to days. As an illustration of short-term plasticity, we present prominent examples of short-term depression (see Short-Term Synaptic Plasticity). Among the long-term plasticity rules, we separate two distinct classes. First, plasticity rules that are explicitly associated with learning structures for specific activity propagation, such as Hebbian and spike-timing-dependent plasticity (STDP, Figure 1, middle). Second, homeostatic plasticity that maintains a stable firing rate by up- or down-regulating neuronal excitability or synaptic strength to achieve a stable target firing rate over long times. This plasticity rule is particularly active after sudden or gradual changes in input to a neuron or neural network, and aims at re-establishing the neuron’s firing rate (Figure 1, right).
Criticality in Network Models
Studying the distributions of avalanches is a common way to characterize critical dynamics in network models. Depending on the model, avalanches can be defined in different ways. When it is meaningful to impose a separation of timescales (STS), an avalanche is measured as the entire cascade of events following a small perturbation (e.g., activation of a single neuron) until the activity dies out. However, the STS cannot be completely mapped to living neural systems due to the presence of spontaneous activity or external input. External input and spontaneous activation obscure the pauses between avalanches and make an unambiguous separation difficult. In models, such external input can be explicitly incorporated to make them more realistic. To extract avalanches from living networks or from models with input, a pragmatic approach is often chosen. If the recorded signal can be approximated by a point process (e.g., spikes recorded from neurons), the data is summed over all signal sources (e.g., electrodes or neurons) and then binned in small time bins. This way, we obtain a single discrete time series representing the number of events in each time bin. An avalanche is then defined as a sequence of active bins between two silent bins. If the recorded signal is continuous (like EEG, fMRI, and LFP), it is first thresholded at a certain level and then binned in time. For each signal source (e.g., each electrode or channel), an individual binary sequence is obtained: one if the signal in the bin is larger than the threshold and zero otherwise. After that, the binary data is summed up across all the signal sources, and the same definition as above is applied. Another option to define avalanches in continuous signals is to first sum the signals across different sources (e.g., electrodes) and then threshold the compound continuous signal.
In this method, the beginning of an avalanche is defined as a crossing of the threshold by the compound activity process from below, and the end as a crossing from above [53, 54]. In this case, the proper measure of avalanche size is the integral of the threshold-subtracted compound process between the two crossings.
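For point-process data, the binning procedure above reduces to a few lines. The sketch below assumes the spikes have already been summed across all sources and binned into integer counts; an avalanche is then a maximal run of non-empty bins bounded by empty bins.

```python
def avalanche_sizes(binned_counts):
    """Avalanche sizes from a population-summed, time-binned event series.
    An avalanche is a maximal run of non-empty bins; its size is the total
    number of events within that run."""
    sizes, current = [], 0
    for count in binned_counts:
        if count > 0:
            current += count        # avalanche continues
        elif current > 0:
            sizes.append(current)   # a silent bin ends the avalanche
            current = 0
    if current > 0:                 # avalanche still running at recording end
        sizes.append(current)
    return sizes
```

For example, the binned series [0, 2, 1, 0, 0, 3, 0] contains two avalanches, of sizes 3 and 3. Note that the result depends on the chosen bin size, which is exactly the bias discussed below.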
While both binning and thresholding methods are widely used, concerns were raised that, depending on the bin size [8, 21, 52, 56], the value of the threshold, or the intensity of input, the distribution of observed avalanches and the estimated power-law exponents might be altered. Therefore, to characterize critical dynamics using avalanches, it is important to investigate the fundamental scaling relations between the exponents of avalanche size, duration, and shape to avoid misleading results [58, 59], or instead use approaches to assess criticality that do not require the definition of avalanches [17, 60]. We elaborate on these challenges and bias-free solutions in a review book chapter; for the remainder of this review, we assume that avalanches can be assessed unambiguously.
The timescale of a plasticity rule might play a deciding role in its ability to reach and maintain closeness to criticality. While short-term plasticity acts very quickly, it does not generate long-lasting, stable modifications of the network; it can, however, serve as a feedback between activity and connection strength. Long-term plasticity, on the other hand, takes longer to act, but can result in a stable convergence to critical dynamics (Figure 2). To summarize their properties:
• Long-term plasticity is timescale-separated from activity propagation, whereas short-term plasticity evolves at similar timescales.
• Long-term plasticity can self-organize a network to a critical state.
• Short-term plasticity constitutes an inseparable part of the network dynamics. It generates critical statistics in the data, working as a negative feedback.
• The core difference: long-term plasticity, after convergence, can be switched off and the system will remain at criticality, whereas switching off short-term plasticity will almost surely destroy the apparent critical dynamics.
• There is a continuum of mechanisms on different timescales between these two extremes. Rules from this continuum can generate critical states that persist for varying durations after the rule is disabled, potentially even indefinitely.
FIGURE 2. Classical plasticity rules and set-points of network activity. (A) Short-term plasticity serves as immediate feedback (top). The resulting long-term behavior of the network hovers near the critical point (orange trace, bottom panel). (B) Long-term plasticity results in slow (timescale of hours or longer) convergence to the fixed point of the global coupling strength. In some settings, this fixed point may correspond to the second-order phase-transition point (bottom), rendering the critical point a global attractor of the dynamics.
Short-Term Synaptic Plasticity
Short-term plasticity (STP) describes activity-related changes in connection strength on a timescale close to that of activity propagation, typically on the order of hundreds to thousands of milliseconds. There are two dominant contributors to short-term synaptic plasticity: the depletion of synaptic resources used for synaptic transmission, and the transient accumulation of residual calcium in the presynaptic terminal, which increases the release probability.
At every spike, a synapse consumes resources. In more detail, at the presynaptic side, vesicles from the readily-releasable pool are fused with the membrane; once fused, a vesicle is not available until it is replaced by a new one. This fast fusion and slow refilling of the readily-releasable pool leads to synaptic depression, i.e., decreasing coupling strength after one or more spikes (Figure 1A). Synapses whose dynamics are dominated by depletion are called depressing synapses. At the same time, for some types of synapses, recent firing increases the probability of release for the vesicles in the readily-releasable pool. This mechanism leads to an increase of the coupling strength for a range of firing frequencies. Synapses with measurable contributions from this mechanism are called facilitating synapses. Hence, depending on their past activity, some synapses lower their release (i.e., activation) probability, others increase it, leading effectively to a weakening or strengthening of the synaptic strength.
Short-term plasticity (STP) appears to be an inevitable consequence of synaptic physiology. Nonetheless, numerous studies found that it can play an essential role in multiple brain functions. The most straightforward role is in the temporal filtering of inputs, i.e., short-term depression will result in low-pass filtering that can be employed to reduce redundancy in the incoming signals. Additionally, it was shown to explain aspects of working memory.
We consider a network of neurons (in the simplest case, non-leaky threshold integrators) that interact by exchanging spikes. The state of each neuron is described by its membrane potential, which integrates the inputs; once the potential crosses the firing threshold, the neuron emits a spike that is transmitted to its postsynaptic partners, and the potential is reset.
To model the changes in connection strength associated with short-term synaptic plasticity, it is sufficient to introduce two additional dynamic variables: the fraction of available synaptic resources, which is depleted at every presynaptic spike and recovers with a fixed time constant (Eq. 2), with δ denoting the Dirac delta function that marks the spike times. To add synaptic facilitation, we equip each synapse with a utilization variable that is transiently increased by each spike and decays back to its baseline (Eq. 3).
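A minimal numerical sketch of these two variables, in the spirit of the standard Tsodyks–Markram formulation: x tracks the available resources (depression) and u the utilization (facilitation). All parameter values below are illustrative assumptions, not the parameters of any specific model reviewed here.

```python
def stp_efficacies(spike_times, tau_d=0.2, tau_f=0.6, U=0.2, dt=1e-3, t_max=2.0):
    """Per-spike synaptic efficacy u*x for a synapse with short-term
    depression (resource variable x) and facilitation (utilization u),
    integrated with a simple Euler scheme."""
    x, u = 1.0, U
    spike_bins = {int(t / dt) for t in spike_times}
    efficacies = []
    for step in range(int(t_max / dt)):
        x += dt * (1.0 - x) / tau_d   # resources recover toward 1
        u += dt * (U - u) / tau_f     # utilization decays toward baseline U
        if step in spike_bins:
            u += U * (1.0 - u)        # facilitation: spike boosts utilization
            efficacies.append(u * x)  # transmitted efficacy of this spike
            x -= u * x                # depression: resources are consumed
    return efficacies
```

With these assumed constants, a dense spike train depresses: the efficacy of late spikes falls below that of the first, which is exactly the negative feedback exploited by the models below.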
Including depressing synapses (Eq. 2) in the integrate-and-fire neuronal network was shown to increase the range of coupling parameters leading to power-law scaling of the avalanche size distribution, as compared to the network without synaptic dynamics (Figure 3). If facilitation (Eq. 3) is included in the model, an additional first-order transition arises (Figure 3). Both models have an analytical mean-field solution. In the limit of infinite network size, critical dynamics is obtained for any sufficiently large coupling parameter. It was later suggested that the state reached by the system equipped with depressing synapses is not SOC but self-organized quasi-criticality, as it is not locally energy conserving.
FIGURE 3. Short-term plasticity increases the range of the near-critical regime. Left: the model without plasticity reaches the critical point only for a single value of the coupling parameter. Right: with short-term plasticity, near-critical dynamics is obtained over a broad range of coupling parameters.
The mechanism by which depressing synapses extend the near-critical region is rather intuitive. If a large event propagates through the network, the massive usage of synaptic resources effectively decouples the network. This in turn prevents the next large event for a while, until the resources are recovered. At the same time, a series of small events allows the connection strengths to build up, increasing the probability of a large avalanche. Thus, for coupling parameters above the critical value, the negative feedback generated by synaptic depression brings the system closer to the critical state. Complementarily, short-term facilitation can help to shift slightly subcritical systems toward the critical state.
A network with STD is essentially a two-dimensional dynamical system (with one variable corresponding to activity, and the other to the momentary coupling strength). Critical behavior is observed in the activity dimension over a long period of time, while the coupling hovers around its mean value in response to the changing activity. If the plasticity is “switched off”, the system may be close to, or relatively far from, the critical point of the network. The probability that the system happens to be precisely at its critical state when plasticity is switched off goes to zero, because 1) criticality presents only a single point along this one-dimensional phase transition, and 2) for large system sizes, even the smallest parameter deviation results in a big difference in the observed avalanche distribution, rendering the probability of switching off plasticity at the moment of critical coupling strength effectively zero.
In critical systems, not only the avalanche size distribution but also the absence or presence of correlations between avalanches is of interest. Already in the classical Bak-Tang-Wiesenfeld model, subsequent avalanche sizes are not statistically independent, whereas in the branching process they are. Hence, the correlation structure of subsequent avalanche sizes allows inference about the underlying model and self-organization mechanisms. In the presence of such correlations, fitting the avalanche distributions and investigating power-law statistics requires proper care. For models with short-term plasticity, both the avalanche sizes and the inter-avalanche intervals are correlated, and similar correlations were observed in neuronal data in vitro.
After the first publication, short-term depression was employed in multiple models discussing other mechanisms or different models of individual neurons. To name just a few: in binary probabilistic networks, in networks with long-term plasticity [75, 76], and in spatially pre-structured networks [77, 78]. In one of the few studies using leaky integrate-and-fire neurons, short-term depression was also found to result in critical dynamics if neuronal avalanches are defined by following the causal activation chains between the neurons. However, it was shown later that the causal definition of avalanches leads to power-law statistics even in clearly non-critical systems. In all cases, short-term plasticity contributes to the generation of a stable critical regime over a broad parameter range.
Long-Term Synaptic Plasticity and Network Reorganization
Long-term modifications in neuronal networks are created by two mechanisms: long-term synaptic plasticity and structural plasticity (i.e., changes of the topology). With long-term synaptic plasticity, synaptic weights change on a timescale of hours or slower, but the adjacency matrix of the network remains unchanged. With structural plasticity, in contrast, new synapses are created or existing ones removed. Both mechanisms can contribute to self-organizing the network dynamics toward or away from criticality.
Three types of long-term synaptic plasticity have been proposed as possible mechanisms for SOC: Hebbian plasticity, spike-timing-dependent plasticity (STDP), and homeostatic plasticity. In Hebbian plasticity, connections between near-synchronously active neurons are strengthened. In STDP, a temporally asymmetric rule is applied, where weights are strengthened or weakened depending on the order of pre- and post-synaptic spike times. Last, homeostatic plasticity adapts the synaptic strength as a negative feedback, decreasing excitatory synaptic strength if the firing rate is too high and increasing it otherwise. Thereby, it stabilizes the network’s firing rate. In the following, we discuss how each of these mechanisms can contribute to creating self-organized critical dynamics and deviations thereof.
Hebbian plasticity is typically formulated in a slogan-like form: neurons that fire together, wire together. This means that connections between neurons with similar spike timing will be strengthened. This rule can imprint stable attractors into the network’s dynamics, constituting the best candidate mechanism for memory formation. Hebbian plasticity in its standard form does not reduce coupling strength; thus, without additional stabilization mechanisms, it leads to runaway excitation. Additionally, the presence of stable attractors makes it hard to maintain a scale-free distribution of avalanche sizes.
The first papers uniting Hebbian-like plasticity and criticality came from Lucilla de Arcangelis’ and Hans J. Herrmann’s labs [81–83]. In a series of publications, they demonstrated that a network of non-leaky integrators, equipped with plasticity and stabilizing synaptic scaling, develops both power-law scaling of avalanches (with exponent 1.2 or 1.5, depending on the external drive) and power-law scaling of the spectral density [81, 82]. In a follow-up paper, they realized multiple logical gates using an additional supervised learning paradigm.
Using Hebbian-like plasticity to imprint patterns in the network and simultaneously maintain critical dynamics is a very non-trivial task. Uhlig et al. achieved it by alternating Hebbian learning epochs with epochs of synaptic-strength normalization that return the network to a critical state. The memory capacity of the trained network was close to the maximum possible capacity, and the network remained close to criticality. However, a network without homeostatic regulation toward the critical state achieved better retrieval. This might point to the possibility that classical criticality is not an optimal substrate for storing simple memories as attractors. However, in the so-far unstudied setting of storing memories as dynamic attractors, the critical system’s sensitivity might make it the best solution.
Spike-timing-dependent plasticity (STDP) is a form of activity-dependent plasticity in which synaptic strength is adjusted as a function of the relative timing of spikes in pre- and post-synaptic neurons. It can appear both in the form of long-term potentiation (LTP) and long-term depression (LTD). Suppose the post-synaptic neuron fires shortly after the pre-synaptic neuron. In that case, the connection from the pre- to the post-synaptic neuron is strengthened (LTP); but if the post-synaptic neuron fires before the pre-synaptic neuron, the connection is weakened (LTD), Figure 1B. Millisecond-resolution measurements of pre- and postsynaptic spikes by Markram et al. [86–88], together with the theoretical model proposed by Gerstner et al., put forward STDP as a mechanism for sequence learning. Shortly after that, other theoretical studies [90–94] incorporated STDP in their models as a local learning rule.
Different functional forms of STDP are observed in different brain areas and across various species. For example, STDP in hippocampal excitatory synapses appears to have equal temporal windows for LTD and LTP [86, 96, 97], while in neocortical synapses it exhibits longer LTD temporal windows [98, 99]. Interestingly, an even broader variety of STDP kernels was observed for inhibitory connections.
The classical STDP is often modeled by modifying the synaptic weight by an amount that decays exponentially with the time lag between pre- and postsynaptic spikes, with opposite signs for causal and anti-causal pairings (Figure 1B).
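The classical asymmetric kernel can be written in a few lines. The amplitudes and time constants below are illustrative assumptions; as noted above, in many cortical fits the depression window is somewhat longer than the potentiation window.

```python
import math

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.017, tau_minus=0.034):
    """Weight change for a single spike pair; dt = t_post - t_pre in seconds.
    dt > 0 (causal pairing) -> potentiation; dt < 0 (anti-causal) -> depression."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)     # LTP branch
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)   # LTD branch
    return 0.0
```

In a simulation, this kernel is typically accumulated over all nearby spike pairs and combined with soft or hard bounds on the weight, the variants contrasted in the models below.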
There are two types of critical points that can be attained by networks with STDP. The first transition type is characterized by the statistics of weights in the converged network: at this point, synaptic coupling strengths or the fluctuations in coupling strengths follow a power-law distribution. The second transition type is related to the network’s dynamics; it is characterized by the presence of scale-free avalanches [101, 102, 106]. In these models, STDP is usually accompanied by fine-tuning of some parameters or properties of the network to create critical dynamics. This suggests that STDP alone might not be sufficient for SOC.
Rubinov et al. developed a leaky integrate-and-fire (LIF) network model with modular connectivity (Figures 4A,B). In their model, STDP only gives rise to power-law distributions of avalanches when the ratio of connections between and within modules is tuned to a particular value. Their results were unchanged for STDP rules with both soft and hard bounds. However, they reported that switching off the STDP dynamics leads to a deterioration of the critical state, which disappears completely after a while. This property places the model in between truly long-term and short-term mechanisms. Additionally, avalanches were defined based on the activity of modules (simultaneous activation of a large number of neurons within a module). In this modular definition of activity, SOC is achieved by potentiating within-module synaptic weights during module activation and depressing weights between module activations. While the module-based definition of avalanches could be relevant to the dynamics of cell assemblies in the brain, or to more coarse-grained activity such as local field potentials (LFP), further investigation of avalanche statistics based on the activity of individual neurons is required.
FIGURE 4. Different STDP rules and their role in creating SOC. (A) Classical STDP rule with asymmetric temporal windows.
The observation of power-law avalanche distributions was later extended to a network of Izhikevich neurons with a temporally shifted soft-bound STDP rule (Figures 4C,D). The shift of the boundary between potentiation and depression reduces the immediate synchronization between pre- and post-synaptic neurons, which eventually stabilizes the synaptic weights and the post-synaptic firing rate, similar to homeostatic regulation. In the model, the STDP time shift is set equal to the axonal delay time constant, which also acts as a control parameter for the dynamical state of the network. The authors showed that for a physiologically plausible time constant, the network self-organizes to a regime with scale-free avalanche statistics.
Homeostatic plasticity is a mechanism that regulates neural activity on a long timescale [108–113]. In a nutshell, one assumes that every neuron has some intrinsic target activity rate. Homeostatic plasticity then presents a negative feedback loop that maintains that target rate and thereby stabilizes the network dynamics. In general, it reduces (increases) excitatory synaptic strength or neural excitability if the spike rate is above (below) the target rate, Figure 1C. This mechanism can stabilize the potentially unconstrained positive feedback loop of Hebbian-type plasticity [114–121]. The physiological mechanisms of homeostatic plasticity are not fully disentangled yet. It can be implemented by a number of physiological candidate mechanisms, such as redistribution of synaptic efficacy [63, 122], synaptic scaling [108–110, 123], adaptation of membrane excitability [112, 124], or interactions with glial cells [125, 126]. Recent results highlight the involvement of homeostatic plasticity in generating robust yet complex dynamics in recurrent networks [127–129].
In models, homeostatic plasticity was identified as one of the primary candidates to tune networks to criticality. The mechanism is straightforward: taking the analogy of the branching process, where one neuron (or unit) on average activates m neurons in the subsequent time step, the stable sustained activity that is the goal function of the homeostatic regulation requires m = 1, which is precisely the critical point of the branching process.
Similar ideas have been proposed and implemented first in simple models and later also in more detailed models. In the latter, homeostatic regulation tunes the ratio between excitatory and inhibitory synaptic strength [53, 129, 134–136]. It then turned out that, due to the diverging temporal correlations which emerge at criticality, the timescale of homeostasis would also have to diverge. If the timescale of homeostasis is faster than the timescale of the dynamics, then the network does not converge to a critical point but hovers around it, potentially resembling supercritical dynamics [14, 135]. It is now clear that self-organization to a critical state (instead of hovering around a critical state) requires that the timescale of homeostasis is slower than that of the network dynamics [14, 135].
Whether a system self-organizes to a critical state, or to a sub- or supercritical one, is determined by a further parameter which has been overlooked for a while: the rate of external input. This rate should be close to zero in critical systems to foster a separation of timescales [52, 137]. Hence, basically all models that studied criticality were implemented with a vanishing external input rate. In neural systems, however, sensory input, spontaneous activation, and other brain areas provide continuous drive, and hence a separation of timescales is typically not realized. As a consequence, avalanches merge, coalesce, and separate [51, 56, 137]. It turns out that under homeostatic plasticity, the external input strength can become a control parameter for the dynamics: if the input strength is high, the system self-organizes to a subcritical state (Figure 5, right). With weaker input, the network approaches a critical state (Figure 5, middle). However, when the input is too weak, the pauses between bursts get so long that the timescale of the homeostasis again plays a role, and the network does not converge to a single state but hovers between sub- and supercritical dynamics (Figure 5, left). This study shows that under homeostasis, the external input strength determines the collective dynamics of the network.
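This interplay of homeostatic regulation and external drive can be sketched with a minimal, driven branching model whose coupling m is nudged toward a target rate; all parameters here are illustrative assumptions. In this toy model, the stationary activity obeys a ≈ hN/(1 − m), so holding the rate at the target r_target drives m toward 1 − h/r_target: strong input yields a subcritical fixed point, and the fixed point approaches criticality (m → 1) only as the input vanishes.

```python
import numpy as np

def homeostatic_branching(steps=30000, n=100, h=0.01, r_target=0.1,
                          eta=1e-3, seed=0):
    """Branching network of n units with external drive h per unit.
    Homeostasis slowly adjusts the branching parameter m to hold the
    activity at r_target * n. Returns the average m over the final
    quarter of the simulation."""
    rng = np.random.default_rng(seed)
    m, active = 0.5, 1
    history = []
    for _ in range(steps):
        # each active unit triggers ~m others; the drive adds ~h*n activations
        active = min(n, rng.poisson(active * m + h * n))
        m = max(0.0, m + eta * (r_target - active / n))  # homeostatic feedback
        history.append(m)
    return float(np.mean(history[-steps // 4:]))
```

With h = 0.01 and r_target = 0.1, m settles near 1 − h/r_target = 0.9, i.e., slightly subcritical, consistent with the in vivo regime discussed in the text; increasing h pushes the fixed point further from criticality.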
FIGURE 5. Homeostatic plasticity regulation can create different types of dynamics in the network depending on input strength h, target firing rate
Assuming that in vivo, cortical activity is subject to some level of non-zero input, one expects a slightly subcritical state, which is indeed found consistently across different animals [14, 17, 30, 139, 140]. In vitro systems, however, which lack external input, are expected to show bursty avalanche dynamics, potentially hovering around a critical point with excursions to supercriticality [14, 136]. Such burst behavior is indeed characteristic for in vitro systems [8, 14, 19, 59].
Recently, Ma and colleagues characterized in experiments how homeostatic scaling might re-establish close-to-critical dynamics in vivo after perturbing sensory input (Figure 6). Past theoretical results would predict that after blocking sensory input in a living animal, the spike rate should diminish and, on the timescale of homeostatic plasticity, a state close to critical or even supercritical would be attained [14, 136]. In the recent experiment, however, the behavior is more intricate. Soon after blocking visual input, the network became subcritical (branching ratio m smaller than one [17, 130]) and not supercritical. It then recovered to a close-to-critical state within two days, potentially compensating for the lack of input by coupling more strongly to other brain areas. The avalanche-size distributions agree with the transient deviation to subcritical dynamics. This deviation to subcriticality is the opposite of what one might have expected under reduced input, and apparently cannot be attributed to concurrent rate changes (which otherwise can challenge the identification of avalanche distributions): the firing rate only started to decrease one day after blocking visual input. The authors attribute this delay in rate decay to excitation and inhibition reacting with different time constants to the blocking of visual input.
FIGURE 6. Homeostatic regulation in the visual cortex of rats tunes the network dynamics to near criticality. (A) (top) Firing rates of excitatory neurons during 7 days of recording exhibit a biphasic response to monocular deprivation (MD). At 37 h following MD, firing rates were maximally suppressed (blue arrow) but came back to baseline by 84 h (orange arrow). Rates are normalized to 24 h of baseline recordings before MD. (bottom) The distance to criticality coefficient (DCC) measured in the same recordings. The mean DCC was significantly increased (blue arrow) upon MD, but was restored to baseline levels (near-critical regime) at 48 h (orange arrow). (B) An example of the estimation of the DCC (right) using the power-law exponents from the avalanche-size distribution (left) and the avalanche-duration distribution (middle). Solid gray traces show avalanche distributions in shuffled data. The DCC is defined as the difference between the empirical scaling exponent (dashed gray line) and the theoretical value (solid gray line) predicted from the exponents for a critical system. (C) Avalanche-size distributions and DCCs computed from 4 h of single-unit data in three example animals show the diversity of experimental observations (reproduced from the bioRxiv version of the paper under a CC-BY-NC-ND 4.0 international license).
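The DCC estimation illustrated in panel (B) can be sketched in code: fit the avalanche-size exponent τ and the duration exponent α, fit γ from the scaling ⟨S⟩(D) ∝ D^γ, and take the distance to the crackling-noise prediction γ_pred = (α − 1)/(τ − 1). The sketch below uses simple continuous maximum-likelihood fits without cutoffs, a simplification of the authors' actual pipeline:

```python
import numpy as np

def ml_exponent(x, xmin=1.0):
    """Continuous maximum-likelihood power-law exponent (Clauset-style)."""
    x = np.asarray(x, float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.log(x / xmin).sum()

def dcc(sizes, durations, n_bins=20):
    """Distance to criticality coefficient: |gamma_fit - gamma_pred|,
    with gamma_pred = (alpha - 1)/(tau - 1) from the scaling relation."""
    tau = ml_exponent(sizes)        # P(S) ~ S^-tau
    alpha = ml_exponent(durations)  # P(D) ~ D^-alpha
    # <S>(D) ~ D^gamma, fitted over log-spaced duration bins
    edges = np.logspace(np.log10(durations.min()),
                        np.log10(durations.max()), n_bins + 1)
    idx = np.digitize(durations, edges)
    d_mid, s_mean = [], []
    for b in range(1, n_bins + 1):
        sel = idx == b
        if sel.sum() > 10:          # skip poorly sampled bins
            d_mid.append(durations[sel].mean())
            s_mean.append(sizes[sel].mean())
    gamma_fit = np.polyfit(np.log(d_mid), np.log(s_mean), 1)[0]
    return abs(gamma_fit - (alpha - 1.0) / (tau - 1.0))
```

For self-consistent (critical-like) synthetic avalanches, the fitted γ matches the prediction and the DCC is close to zero; deviations from the scaling relation inflate it.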
Overall, although the exact implementation of homeostatic plasticity on the pre- and postsynaptic sides of excitatory and inhibitory neurons remains a topic of current research, the general mechanism allows for the long-term convergence of the system to the critical point (Figure 1B). Homeostasis also contributes importantly to many models that include different learning mechanisms, stabilizing them.
Network Rewiring and Growth
Specific network structures such as small-world [68, 81, 141, 142] or scale-free [75, 83, 143–146] networks were found to be beneficial for the emergence of critical dynamics. These network structures are particularly interesting since they have also been observed in both anatomical and functional brain networks [147–150]. To create such topologies in neural networks, long-term plasticity mechanisms have been used. For instance, scale-free and small-world structures emerge as a consequence of STDP between the neurons. In addition, Hebbian plasticity can generate small-world networks.
Another prominent form of network structure is the hierarchical modular network (HMN), which can sustain a critical regime for a broader range of control parameters [77, 101, 152]. Unlike a conventional critical point, where only a single value of the control parameter leads to scale-free avalanches, in HMNs power-law scaling emerges for a wide range of parameters. This extended critical-like region can correspond to a Griffiths phase in statistical mechanics. Different rewiring algorithms have been proposed to generate HMNs from an initially randomly connected or a fully connected modular network [101, 152].
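A generic way to obtain such a topology, in the spirit of the HMN constructions cited above (the parameters and the binary-tree hierarchy below are our illustrative choices, not a specific published algorithm), is to wire neurons with a connection probability that decays with the hierarchical distance between their modules:

```python
import numpy as np

def hierarchical_modular_net(n_levels=3, module_size=8,
                             p0=0.9, alpha=0.5, seed=0):
    """Hierarchical modular network: dense connectivity inside the
    smallest modules, connection probability reduced by a factor alpha
    for each hierarchical level separating two neurons."""
    rng = np.random.default_rng(seed)
    n = module_size * 2 ** n_levels
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # hierarchical distance: levels until both sit in one module
            bi, bj, level = i // module_size, j // module_size, 0
            while bi != bj:
                bi, bj, level = bi // 2, bj // 2, level + 1
            if rng.random() < p0 * alpha ** level:
                A[i, j] = 1
    return A
```

With the defaults, connection density is about 0.9 within the smallest modules and decays to roughly 0.11 between the two top-level halves, producing the nested-module structure characteristic of HMNs.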
Experimental observations in developing neural cultures suggest that connections between neurons grow in such a way that the dynamics of the network eventually self-organizes to a critical point (i.e., scale-free avalanches are observed) [18, 19]. Motivated by this observation, different models have been developed to explain how neural networks can grow connections to achieve and maintain such critical dynamics using homeostatic structural plasticity [19, 153–158] (for a review see ). In addition to homeostatic plasticity, other rewiring rules inspired by Hebbian learning were also proposed to bring the network dynamics toward criticality [160–162]. However, the implementation of such network reorganizations seems to be less biologically plausible.
In most of the models with homeostatic structural plasticity, the growth of neuronal axons and dendrites is modeled as an expanding (or shrinking) circular neurite field. The growth of the neurite field for each neuron is defined based on the neuron’s firing rate (or internal
FIGURE 7. Growing connections based on the homeostatic structural plasticity in a network model leads to SOC. (A) Size of neurite fields (top) and spiking activity (bottom) change during the network growth process (from 25 sample neurons). From left to right: initial state (red), state with average growth (blue), stationary state reaching the homeostatic target rate (green). (B) Corresponding scaled total overlaps of 25 sample neurons (gray) and the population average (black) to the three different time points in (A). (C) Avalanche-size (top) and avalanche-duration (bottom) distributions. If the homeostatic target rate
Tetzlaff et al. proposed a slightly different mechanism, where two neurite fields, one for axonal growth and one for dendritic growth, are assigned to each neuron. While changes in the size of the dendritic neurite fields follow the same rule as explained above, the neurite fields of axons follow the exact opposite rule. The model simulations start with all excitatory neurons, but in the middle phase 20% of the neurons are changed into inhibitory ones. This switch is motivated by the transition of GABAergic transmission from excitatory to inhibitory during development. They showed that when the network dynamics converge to a steady-state regime, avalanche-size distributions follow a power law.
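The common core of these growth models can be condensed into a few lines: circular fields expand while activity is below a homeostatic target and shrink above it, and connection strength is read off from field overlap. The following is a deliberately crude sketch; the rate dynamics, parameters, and the one-dimensional overlap rule are our simplifications, not the equations of any one of the cited papers:

```python
import numpy as np

def grow_neurite_fields(positions, target_rate=0.5, rho=0.005, steps=4000):
    """Homeostatic structural growth sketch: neuron i has a circular
    neurite field of radius r[i]; coupling is the overlap of two fields;
    r[i] grows while the neuron's rate is below the homeostatic target."""
    n = len(positions)
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    r = np.full(n, 0.1)           # initial field radii
    rate = np.zeros(n)
    for _ in range(steps):
        # coupling ~ field overlap (zero when fields do not touch)
        w = np.clip(r[:, None] + r[None, :] - dist, 0.0, None)
        np.fill_diagonal(w, 0.0)  # no self-coupling
        # crude rate proxy: sigmoid of recurrent input plus weak drive
        rate = 1.0 / (1.0 + np.exp(-(w @ rate + 0.05 - 1.0)))
        # homeostatic growth: expand below target, shrink above
        r = np.clip(r + rho * (target_rate - rate), 0.0, None)
    return r, rate

rng = np.random.default_rng(0)
radii, rates = grow_neurite_fields(rng.random((30, 2)))
```

After the growth phase, the fields stabilize at sizes where each neuron's rate sits near the homeostatic target; in the models discussed above, it is this stationary state in which power-law avalanche statistics appear.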
Hybrid Mechanisms of Learning and Task Performance
In living neural networks, multiple plasticity mechanisms occur simultaneously. The joint contribution of diverse mechanisms has been studied in the context of criticality in a set of models [75, 164, 165]. A combination with homeostatic-like regulation is typically necessary to stabilize Hebbian or spike-timing-dependent plasticity (STDP), e.g., when learning binary tasks such as an XOR rule with Hebbian plasticity or sequences with STDP [166–170]. These classic plasticity rules have been paired with regulatory normalization of synaptic weights to avoid self-amplified destabilization [119–121]. Additionally, short-term synaptic depression stabilizes the critical regime, and if it is augmented with meta-plasticity the stability interval increases even further, possibly allowing for stable learning.
In a series of studies, Scarpetta and colleagues investigated how sequences can be memorized by STDP while criticality is maintained [166–168]. By controlling the excitability of the neurons, they achieved a balance between partial replays and noise, resulting in power-law distributed avalanche sizes and durations. They later reformulated the model and used the average connection strength as a control parameter, obtaining similar results [167, 168]. Whereas STDP fosters the formation of sequence memory, Hebbian plasticity is known to form assemblies (associations) and, in the Hopfield network, enables memory completion and recall. A number of studies showed that the formation of such Hebbian ensembles is also possible while maintaining critical dynamics [84, 168, 172]. These studies show that critical dynamics can be maintained in networks that are learning classical tasks.
Critical networks can support not only memory but also real computations such as logical operations (OR, AND, or even XOR) [75, 83]. To achieve this, the authors built upon a model with Hebbian-like plasticity that was previously shown to bring the network to a critical point. They added a central learning signal resembling dopaminergic neuromodulation. The authors demonstrated, both with and without short-term plasticity, that the network can be trained to solve an XOR-gate task.
These examples lead to the natural question of whether criticality is always optimal for learning. The criticality hypothesis attracted much attention precisely because models at criticality show properties supporting optimal task performance. Core properties of criticality are the maximization of the dynamic range [174, 175], high sensitivity to input, and diverging spatial and temporal correlation lengths [176, 177]. In recurrent network models and experiments, such boosts of input sensitivity and memory have been demonstrated by tuning networks systematically toward and away from criticality [174, 178–182].
When not explicitly incorporating a mechanism that drives the network to criticality, learning networks can be pushed away from criticality to a subcritical regime [15, 16, 170, 183, 184]. This is in line with the results above that networks with homeostatic mechanisms become subcritical under increasing network input (Figure 5). Subcritical dynamics might indeed be favorable when reliable task performance is required, as the inherent variability of critical systems may translate into performance variability [52, 185–190].
Recently, the optimal working points of recurrent neural networks on a neuromorphic chip were demonstrated to depend on task complexity [15, 16]. The neuromorphic chip implements spiking integrate-and-fire neurons with STDP-like depressive plasticity and slow homeostatic recovery of synaptic strength. It was found that complex tasks, which require integration of information over long time-windows, indeed profit from critical dynamics, whereas for simple tasks the optimal working point of the recurrent network was in the subcritical regime [15, 16]. Criticality thus seems to be optimal particularly when a task makes use of this large variability, or explicitly requires the long-range correlation in time or space, e.g., for active memory storage.
We summarized how different types of plasticity contribute to the convergence to and maintenance of the critical state in neuronal models. Short-term plasticity rules generally led to hovering around the critical point, which extends critical-like dynamics over an extensive range of parameters. Long-term homeostatic network growth and homeostatic plasticity could, in some settings, create a global attractor at the critical state. Long-term plasticity associated with learning sequences, patterns, or tasks required additional mechanisms (i.e., homeostasis) to maintain criticality.
The first problem with finding the best recipe for criticality in the brain is our inability to identify the brain's state from the observations we can make. We are slowly learning how to deal with strong subsampling (under-observation) of the brain network [17, 20, 56, 191–193]. However, even if we obtained a perfectly resolved observation of all activity in the brain, we would face the problem of constant input and spontaneous activation, which renders it impossible to find natural pauses between avalanches and hence makes avalanche-based analyses ambiguous. Hence, multiple avalanche-independent options were proposed as alternative assessments of criticality in the brain: 1) detrended fluctuation analysis allows capturing the scale-free behavior in long-range temporal correlations of EEG/MEG data; 2) critical slowing down suggests closeness to a bifurcation point; 3) divergence of the susceptibility in a maximum entropy model fitted to the neural data, divergence of the Fisher information, or the renormalization-group approach indicate closeness to criticality in the sense of thermodynamic phase transitions; and 4) estimating the branching parameter directly became feasible even from a small set of neurons; this estimate returns a quantification of the distance to criticality [17, 39]. It was recently pointed out that results from fitting maximum entropy models [198, 199] and coarse-graining [200, 201] based on empirical correlations should be interpreted with caution. Finding the best way to unite these definitions, and to select the most suitable ones for a given experiment, remains largely an open problem.
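The branching-parameter approach (option 4) rests on the fact that the lag-k regression slope of the activity decays as r_k = b·m^k, where subsampling only rescales the prefactor b and therefore cancels out when m is extracted from the decay across lags [17]. Below is a bare-bones sketch of this multistep-regression idea; the published estimator uses a proper exponential fit with confidence intervals rather than the simple log-linear fit shown here:

```python
import numpy as np

def branching_ratio_mr(activity, max_lag=20):
    """Estimate the branching ratio m from the decay of the lag-k
    regression slopes, r_k = b * m**k: subsampling rescales only the
    prefactor b, so the estimate of m is unaffected by it."""
    A = np.asarray(activity, float)
    lags = np.arange(1, max_lag + 1)
    slopes = np.array([np.polyfit(A[:-k], A[k:], 1)[0] for k in lags])
    keep = slopes > 0                      # log fit needs positive slopes
    log_m = np.polyfit(lags[keep], np.log(slopes[keep]), 1)[0]
    return np.exp(log_m)

# Driven branching process with m = 0.9, then binomial subsampling:
rng = np.random.default_rng(3)
A = np.empty(100_000)
a = 100.0
for t in range(A.size):
    a = rng.poisson(0.9 * a + 10.0)
    A[t] = a
m_full = branching_ratio_mr(A)
m_sub = branching_ratio_mr(rng.binomial(A.astype(int), 0.1))
# both estimates recover m close to 0.9, despite observing only 10% of spikes
```

A conventional lag-1 regression on the subsampled series would instead return b·m and thus systematically underestimate the distance to criticality.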
Investigating the criticality hypothesis for brain dynamics has strongly evolved over the past decades, but it is far from concluded. On the experimental side, sampling limits our access to collective neural dynamics [20, 202], and hence it is not perfectly clear yet how close to a critical point different brain areas operate. For cortex in awake animals, evidence points to a close-to-critical, but slightly subcritical, state [30, 139, 140]. The precise working point might well depend on the specific brain area, cognitive state, and task requirements [15, 16, 32, 35, 36, 179, 188, 190, 203–206]. Thus, instead of self-organizing precisely to criticality, the brain could make use of the divergence of processing capabilities around the critical point. Thereby, each brain area might optimize its computational properties by tuning itself toward and away from criticality in a flexible, adaptive manner. In the past decades, the community has revealed the local plasticity rules that would enable such tuning and adaptation of the working point. Unlike classical physics systems, which are constrained by conservation laws, the brain and the propagation of neural activity are more flexible and hence can in principle adopt a large repertoire of working points, depending on task requirements.
Criticality has been very inspiring for understanding brain dynamics and function. We assume that being perfectly critical is not an optimal solution for many brain areas during different task epochs. However, studying and modeling brain dynamics from a criticality point of view allows us to make sense of high-dimensional neural data and its large variability, and to formulate meaningful hypotheses about dynamics and computation, many of which still wait to be tested.
RZ, VP, and AL designed the research. RZ and AL prepared the figures. All authors contributed to writing and reviewing the manuscript.
This work was supported by a Sofja Kovalevskaja Award from the Alexander von Humboldt Foundation, endowed by the Federal Ministry of Education and Research (RZ, AL), Max Planck Society (VP), SMARTSTART 2 program provided by Bernstein Center for Computational Neuroscience and Volkswagen Foundation (RZ). We acknowledge the support from the BMBF through the Tübingen AI Center (FKZ: 01IS18039B).
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
We are very thankful to SK, MG, and SA for reading the initial version of the manuscript and their constructive comments. The authors thank the International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction (IMPRS-MMFD) for supporting RZ. We acknowledge support by Open Access Publishing Fund of University of Tübingen.
9. Chen DM, Wu S, Guo A, Yang ZR. Self-organized criticality in a cellular automaton model of pulse-coupled integrate-and-fire neurons. J Phys A: Math Gen (1995) 28:5177–82. doi:10.1088/0305-4470/28/18/009
10. Corral Á, Pérez CJ, Diaz-Guilera A, Arenas A. Self-organized criticality and synchronization in a lattice model of integrate-and-fire oscillators. Phys Rev Lett (1995) 74:118–21. doi:10.1103/PhysRevLett.74.118
11. Herz AVM, Hopfield JJ. Earthquake cycles and neural reverberations: collective oscillations in systems with pulse-coupled threshold elements. Phys Rev Lett (1995) 75:1222–5. doi:10.1103/PhysRevLett.75.1222
13. Haldeman C, Beggs J. Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys Rev Lett (2005) 94:058101. doi:10.1103/PhysRevLett.94.058101
15. Cramer B, Stöckel D, Kreft M, Wibral M, Schemmel J, Meier K, et al. Control of criticality and computation in spiking neuromorphic networks with plasticity. Nat Commun (2020) 11:2853. doi:10.1038/s41467-020-16548-3
16. Prosi J, Khajehabdollahi S, Giannakakis E, Martius G, Levina A. The dynamical regime and its importance for evolvability, task performance and generalization. arXiv [preprint arXiv:2103.12184] (2021).
18. Yada Y, Mita T, Sanada A, Yano R, Kanzaki R, Bakkum DJ, et al. Development of neural population activity toward self-organized criticality. Neuroscience (2017) 343:55–65. doi:10.1016/j.neuroscience.2016.11.031
21. Priesemann V, Valderrama M, Wibral M, Le Van Quyen M. Neuronal avalanches differ from wakefulness to deep sleep–evidence from intracranial depth recordings in humans. PLoS Comput Biol (2013) 9:e1002985. doi:10.1371/journal.pcbi.1002985
22. Lo CC, Chou T, Penzel T, Scammell TE, Strecker RE, Stanley HE, et al. Common scale-invariant patterns of sleep–wake transitions across mammalian species. Proc Natl Acad Sci (2004) 101:17545–8. doi:10.1073/pnas.0408242101
23. Allegrini P, Paradisi P, Menicucci D, Laurino M, Piarulli A, Gemignani A. Self-organized dynamical complexity in human wakefulness and sleep: different critical brain-activity feedback for conscious and unconscious states. Phys Rev E (2015) 92:032808. doi:10.1103/PhysRevE.92.032808
24. Bocaccio H, Pallavicini C, Castro MN, Sánchez SM, De Pino G, Laufs H, et al. The avalanche-like behaviour of large-scale haemodynamic activity from wakefulness to deep sleep. J R Soc Interf (2019) 16:20190262. doi:10.1098/rsif.2019.0262
25. Lombardi F, Gómez-Extremera M, Bernaola-Galván P, Vetrivelan R, Saper CB, Scammell TE, et al. Critical dynamics and coupling in bursts of cortical rhythms indicate non-homeostatic mechanism for sleep-stage transitions and dual role of VLPO neurons in both sleep and wake. J Neurosci (2020) 40:171–90. doi:10.1523/JNEUROSCI.1278-19.2019
28. Meisel C, Storch A, Hallmeyer-Elgner S, Bullmore E, Gross T. Failure of adaptive self-organized criticality during epileptic seizure attacks. PLoS Comput Biol (2012) 8:e1002312. doi:10.1371/journal.pcbi.1002312
29. Arviv O, Medvedovsky M, Sheintuch L, Goldstein A, Shriki O. Deviations from critical dynamics in interictal epileptiform activity. J Neurosci (2016) 36:12276–92. doi:10.1523/JNEUROSCI.0809-16.2016
30. Hagemann A, Wilting J, Samimizad B, Mormann F, Priesemann V. No evidence that epilepsy impacts criticality in pre-seizure single-neuron activity of human cortex. arXiv: 2004.10642 [physics, q-bio] (2020).
32. Tagliazucchi E, von Wegner F, Morzelewski A, Brodbeck V, Jahnke K, Laufs H. Breakdown of long-range temporal dependence in default mode and attention networks during deep sleep. Proc Natl Acad Sci (2013) 110(38):15419–24. doi:10.1073/pnas.1312848110
33. Hahn G, Ponce-Alvarez A, Monier C, Benvenuti G, Kumar A, Chavane F, et al. Spontaneous cortical activity is transiently poised close to criticality. PLoS Comput Biol (2017) 13:e1005543. doi:10.1371/journal.pcbi.1005543
34. Petermann T, Thiagarajan TC, Lebedev MA, Nicolelis MA, Chialvo DR, Plenz D. Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proc Natl Acad Sci USA (2009) 106:15921–6. doi:10.1073/pnas.0904089106
35. Yu S, Ribeiro TL, Meisel C, Chou S, Mitz A, Saunders R, et al. Maintained avalanche dynamics during task-induced changes of neuronal activity in nonhuman primates. eLife Sci (2017) 6:e27119. doi:10.7554/eLife.27119
37. Gautam SH, Hoang TT, McClanahan K, Grady SK, Shew WL. Maximizing sensory dynamic range by tuning the cortical state to criticality. PLoS Comput Biol (2015) 11:e1004576. doi:10.1371/journal.pcbi.1004576
38. Stephani T, Waterstraat G, Haufe S, Curio G, Villringer A, Nikulin VV. Temporal signatures of criticality in human cortical excitability as probed by early somatosensory responses. J Neurosci (2020) 40:6572–83. doi:10.1523/JNEUROSCI.0241-20.2020
39. de Heuvel J, Wilting J, Becker M, Priesemann V, Zierenberg J. Characterizing spreading dynamics of subsampled systems with nonstationary external input. Phys Rev E (2020) 102:040301. doi:10.1103/PhysRevE.102.040301
52. Priesemann V, Wibral M, Valderrama M, Pröpper R, Le Van Quyen M, Geisel T, et al. Spike avalanches in vivo suggest a driven, slightly subcritical brain state. Front Syst Neurosci (2014) 8:108. doi:10.3389/fnsys.2014.00108
53. Poil SS, Hardstone R, Mansvelder HD, Linkenkaer-Hansen K. Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks. J Neurosci (2012) 32:9817–23. doi:10.1523/JNEUROSCI.5990-11.2012
59. Friedman N, Ito S, Brinkman BAW, Shimono M, DeVille REL, Dahmen KA, et al. Universal critical dynamics in high resolution neuronal avalanche data. Phys Rev Lett (2012) 108:208102. doi:10.1103/PhysRevLett.108.208102
60. Linkenkaer-Hansen K, Nikouline VV, Palva JM, Ilmoniemi RJ. Long-range temporal correlations and scaling behavior in human brain oscillations. J Neurosci (2001) 21:1370–7.
61. Priesemann V, Levina A, Wilting J. Assessing criticality in experiments. In: N Tomen, JM Herrmann, and U Ernst, editors. The functional role of critical dynamics in neural systems. Springer Series on Bio- and Neurosystems. Cham: Springer International Publishing (2019). p. 199–232. doi:10.1007/978-3-030-20965-0_11
70. Bonachela JA, De Franciscis S, Torres JJ, Munoz MA. Self-organization without conservation: are neuronal avalanches generically critical? J Stat Mech Theor Exp (2010) 2010:P02015. doi:10.1088/1742-5468/2010/02/P02015
73. Levina A, Herrmann JM, Geisel T. Dynamical synapses give rise to a power-law distribution of neuronal avalanches. In: Y Weiss, B Schölkopf, and J Platt., editors. Advances in neural information processing systems. Cambridge, MA: MIT Press (2006), 18. p. 771–8.
74. Kinouchi O, Brochini L, Costa AA, Campos JGF, Copelli M. Stochastic oscillations and dragon king avalanches in self-organized quasi-critical systems. Sci Rep (2019) 9:1–12. doi:10.1038/s41598-019-40473-1
76. Zeng G, Huang X, Jiang T, Yu S. Short-term synaptic plasticity expands the operational range of long-term synaptic changes in neural networks. Neural Networks (2019) 118:140–7. doi:10.1016/j.neunet.2019.06.002
86. Bi G, Poo M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci (1998) 18:10464–72. doi:10.1523/JNEUROSCI.18-24-10464.1998
87. Markram H, Helm PJ, Sakmann B. Dendritic calcium transients evoked by single back-propagating action potentials in rat neocortical pyramidal neurons. J Physiol (1995) 485:1–20. doi:10.1113/jphysiol.1995.sp020708
101. Rubinov M, Sporns O, Thivierge JP, Breakspear M. Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons. PLoS Comput Biol (2011) 7:e1002038. doi:10.1371/journal.pcbi.1002038
102. Khoshkhou M, Montakhab A. Spike-timing-dependent plasticity with axonal delay tunes networks of Izhikevich neurons to the edge of synchronization transition with scale-free avalanches. Front Syst Neurosci (2019) 13:73. doi:10.3389/fnsys.2019.00073
109. Lissin DV, Gomperts SN, Carroll RC, Christine CW, Kalman D, Kitamura M, et al. Activity differentially regulates the surface expression of synaptic AMPA and NMDA glutamate receptors. Proc Natl Acad Sci (1998) 95:7097–102. doi:10.1073/pnas.95.12.7097
110. O’Brien RJ, Kamboj S, Ehlers MD, Rosen KR, Fischbach GD, Huganir RL. Activity-dependent modulation of synaptic AMPA receptor accumulation. Neuron (1998) 21:1067–78. doi:10.1016/S0896-6273(00)80624-8
114. Bienenstock EL, Cooper LN, Munro PW. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J Neurosci (1982) 2:32–48. doi:10.1523/JNEUROSCI.02-01-00032.1982
120. Keck T, Toyoizumi T, Chen L, Doiron B, Feldman DE, Fox K, et al. Integrating Hebbian and homeostatic plasticity: the current state of the field and future research directions. Philos Trans R Soc B: Biol Sci (2017) 372:20160158. doi:10.1098/rstb.2016.0158
126. Virkar YS, Shew WL, Restrepo JG, Ott E. Feedback control stabilization of critical dynamics via resource transport on multilayer networks: how glia enable learning dynamics in the brain. Phys Rev E (2016) 94:042310. doi:10.1103/PhysRevE.94.042310
127. Naudé J, Cessac B, Berry H, Delord B. Effects of cellular homeostatic intrinsic plasticity on dynamical and computational properties of biological recurrent neural networks. J Neurosci (2013) 33:15032–43. doi:10.1523/JNEUROSCI.0870-13.2013
129. Hellyer PJ, Jachs B, Clopath C, Leech R. Local inhibitory plasticity tunes macroscopic brain dynamics and allows the emergence of functional brain networks. NeuroImage (2016) 124:85–95. doi:10.1016/j.neuroimage.2015.08.069
133. Rocha RP, Koçillari L, Suweis S, Corbetta M, Maritan A. Homeostatic plasticity and emergence of functional networks in a whole-brain model at criticality. Sci Rep (2018) 8:15682. doi:10.1038/s41598-018-33923-9
134. Girardi-Schappo M, Brochini L, Costa AA, Carvalho TTA, Kinouchi O. Synaptic balance due to homeostatically self-organized quasicritical dynamics. Phys Rev Res (2020) 2:012042. doi:10.1103/PhysRevResearch.2.012042
135. Brochini L, de Andrade Costa A, Abadi M, Roque AC, Stolfi J, Kinouchi O. Phase transitions and self-organized criticality in networks of stochastic spiking neurons. Scientific Rep (2016) 6:35831. doi:10.1038/srep35831
138. Zierenberg J, Wilting J, Priesemann V, Levina A. Description of spreading dynamics by microscopic network models and macroscopic branching processes can differ due to coalescence. Phys Rev E (2020a) 101:022301. doi:10.1103/PhysRevE.101.022301
140. Wilting J, Priesemann V. Between perfectly critical and fully irregular: a reverberating model captures and predicts cortical spike propagation. Cereb Cortex (2019a) 29:2759–70. doi:10.1093/cercor/bhz049
151. Siri B, Quoy M, Delord B, Cessac B, Berry H. Effects of Hebbian learning on the dynamics and structure of random networks with inhibitory and excitatory neurons. J Physiol-Paris (2007) 101:136–48. doi:10.1016/j.jphysparis.2007.10.003
156. Kalle Kossio FY, Goedeke S, van den Akker B, Ibarz B, Memmesheimer RM. Growing critical: self-organized criticality in a developing neural system. Phys Rev Lett (2018) 121(5):058301. doi:10.1103/PhysRevLett.121.058301
159. van Ooyen A, Butz-Ostendorf M. Homeostatic structural plasticity can build critical networks. In: N Tomen, JM Herrmann, and U Ernst, editors. The functional role of critical dynamics in neural systems. Springer Series on Bio- and Neurosystems. Cham: Springer International Publishing (2019). p. 117–37. doi:10.1007/978-3-030-20965-0_7
166. Scarpetta S, Giacco F, Lombardi F, de Candia A. Effects of Poisson noise in a IF model with STDP and spontaneous replay of periodic spatiotemporal patterns, in absence of cue stimulation. Biosystems (2013) 112:258–64. doi:10.1016/j.biosystems.2013.03.017
167. Scarpetta S, de Candia A. Alternation of up and down states at a dynamical phase-transition of a neural network with spatiotemporal attractors. Front Syst Neurosci (2014) 8:88. doi:10.3389/fnsys.2014.00088
168. Scarpetta S, Apicella I, Minati L, de Candia A. Hysteresis, neural avalanches, and critical behavior near a first-order transition of a spiking neural network. Phys Rev E (2018) 97:062305. doi:10.1103/PhysRevE.97.062305
175. Zierenberg J, Wilting J, Priesemann V, Levina A. Tailored ensembles of neural networks optimize sensitivity to stimulus statistics. Phys Rev Res (2020b) 2:013115. doi:10.1103/PhysRevResearch.2.013115
178. Shew WL, Yang H, Petermann T, Roy R, Plenz D. Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. J Neurosci (2009) 29:15595–600. doi:10.1523/JNEUROSCI.3864-09.2009
181. Boedecker J, Lampe T, Riedmiller M. Modeling effects of intrinsic and extrinsic rewards on the competition between striatal learning systems. Front Psychol (2013) 4:739. doi:10.3389/fpsyg.2013.00739
184. Del Papa B, Priesemann V, Triesch J. Fading memory, plasticity, and criticality in recurrent networks. In: N Tomen, JM Herrmann, and U Ernst., editors. The functional role of critical dynamics in neural systems. Springer series on bio- and neurosystems. Cham: Springer International Publishing (2019). p. 95–115. doi:10.1007/978-3-030-20965-0_6
188. Wilting J, Dehning J, Pinheiro Neto J, Rudelt L, Wibral M, Zierenberg J, et al. Operating in a reverberating regime enables rapid tuning of network states to task requirements. Front Syst Neurosci (2018) 12. doi:10.3389/fnsys.2018.00055
189. Wilting J, Priesemann V. 25 years of criticality in neuroscience - established results, open controversies, novel concepts. Curr Opin Neurobiol (2019b) 58:105–11. doi:10.1016/j.conb.2019.08.002
191. Ribeiro TL, Copelli M, Caixeta F, Belchior H, Chialvo DR, Nicolelis MA, et al. Spike avalanches exhibit universal dynamics across the sleep-wake cycle. PLoS One (2010) 5:e14129. doi:10.1371/journal.pone.0014129
192. Spitzner FP, Dehning J, Wilting J, Hagemann A, Neto JP, Zierenberg J, et al. MR. Estimator, a toolbox to determine intrinsic timescales from subsampled spiking activity. arXiv:2007.03367 [physics, q-bio] (2020).
196. Hidalgo J, Grilli J, Suweis S, Munoz MA, Banavar JR, Maritan A. Information-based fitness and the emergence of criticality in living systems. Proc Natl Acad Sci (2014) 111:10095–100. doi:10.1073/pnas.1319166111
197. Meshulam L, Gauthier JL, Brody CD, Tank DW, Bialek W. Coarse graining, fixed points, and scaling in a large population of neurons. Phys Rev Lett (2019) 123:178103. doi:10.1103/PhysRevLett.123.178103
198. Nonnenmacher M, Behrens C, Berens P, Bethge M, Macke JH. Signatures of criticality arise from random subsampling in simple population models. PLoS Comput Biol (2017) 13:e1005718. doi:10.1371/journal.pcbi.1005718
202. Neto JP, Spitzner FP, Priesemann V. A unified picture of neuronal avalanches arises from the understanding of sampling effects. arXiv:1910.09984 [cond-mat, physics:nlin, physics:physics, q-bio] (2020).
204. Clawson WP, Wright NC, Wessel R, Shew WL. Adaptation towards scale-free dynamics improves cortical stimulus discrimination at the cost of reduced detection. PLoS Comput Biol (2017) 13:e1005574. doi:10.1371/journal.pcbi.1005574
205. Carhart-Harris RL, Muthukumaraswamy S, Roseman L, Kaelen M, Droog W, Murphy K, et al. Neural correlates of the LSD experience revealed by multimodal neuroimaging. Proc Natl Acad Sci (2016) 113:4853–8. doi:10.1073/pnas.1518377113
Keywords: self-organized criticality, neuronal avalanches, synaptic plasticity, learning, neuronal networks, homeostasis, synaptic depression, self-organization
Citation: Zeraati R, Priesemann V and Levina A (2021) Self-Organization Toward Criticality by Synaptic Plasticity. Front. Phys. 9:619661. doi: 10.3389/fphy.2021.619661
Received: 20 October 2020; Accepted: 09 February 2021;
Published: 22 April 2021.
Edited by:Attilio L. Stella, University of Padua, Italy
Reviewed by:Ignazio Licata, Institute for Scientific Methodology (ISEM), Italy
Jae Woo Lee, Inha University, South Korea
Samir Suweis, University of Padua, Italy
Copyright © 2021 Zeraati, Priesemann and Levina. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Anna Levina, email@example.com