REVIEW article

Front. Phys., 23 December 2020

Sec. Interdisciplinary Physics

Volume 8 - 2020 | https://doi.org/10.3389/fphy.2020.583213

Mechanisms of Self-Organized Quasicriticality in Neuronal Network Models

  • 1. Laboratório de Física Estatística e Biologia Computacional, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Departamento de Física, Universidade de São Paulo, Ribeirão Preto, Brazil

  • 2. Departamento de Física, Universidade Federal de Pernambuco, Recife, Brazil

Abstract

The critical brain hypothesis states that there are information processing advantages for neuronal networks working close to the critical region of a phase transition. If this is true, we must ask how the networks achieve and maintain this critical state. Here, we review several proposed biological mechanisms that turn the critical region into an attractor of a dynamics in network parameters like synapses, neuronal gains, and firing thresholds. Since neuronal networks (biological and models) are not conservative but dissipative, we expect not exact criticality but self-organized quasicriticality, where the system hovers around the critical point.

1 Introduction

Thirty-three years after the initial formulation of the self-organized criticality (SOC) concept [1] (and 37 years after the self-organizing extremal invasion percolation model [2]), one of the most active areas that employ these ideas is theoretical neuroscience. However, neuronal networks, similar to earthquakes and forest fires, are nonconservative systems, in contrast to canonical SOC systems like sandpile models [3, 4]. To model such systems, one uses nonconservative networks of elements represented by cellular automata, discrete time maps, or differential equations. Such models have features distinct from those of conservative systems. A large fraction of them, in particular neuronal networks, have been described as displaying self-organized quasi-criticality (SOqC) [5-7] or weak criticality [8, 9], which is the subject of this review.

The first person who made an analogy between brain activity and a critical branching process was probably Alan Turing, in his memorable paper Computing Machinery and Intelligence [10]. Decades later, the idea that SOC models could be important to describe the activity of neuronal networks was in the air as early as 1995 [11-16], eight years before the fundamental 2003 experimental article of Beggs and Plenz [17] reporting neuronal avalanches. This occurred because several authors, working with models for earthquakes and pulse-coupled threshold elements, noticed the formal analogy between such systems and networks of integrate-and-fire neurons. Critical learning was also conjectured by Chialvo and Bak [18-20]. However, in the absence of experimental support, these works, although prescient, were basically theoretical conjectures. A historical question would be to determine to what extent this early literature motivated Beggs and Plenz to perform their experiments.

Since 2003, however, the study of criticality in neuronal networks has developed into a research paradigm, with a large literature, diverse experimental approaches, and several problems addressed theoretically and computationally (some reviews include Refs. [7, 21-27]). One of the main results is that information processing seems to be optimized at a second-order absorbing phase transition [28-42]. This transition occurs between no activity (the absorbing phase) and nonzero steady-state activity (the active phase). Such a transition is familiar from the SOC literature and pertains to the directed percolation (DP) or the conservative-DP (C-DP or Manna) universality classes [7, 42-45].

An important question is how neuronal networks self-organize toward the critical region. The question arises because, like earthquake and forest-fire models, neuronal networks are not conservative systems, which means that in principle they cannot be exactly critical [5, 6, 45, 46]. In these networks, we can vary control parameters like the strength of synapses and obtain subcritical, critical, and supercritical behavior. The critical point is therefore achieved only by fine-tuning.

Over time, several authors proposed different biological mechanisms that could eliminate the fine-tuning and make the critical region a self-organized attractor. The obtained criticality is not perfect, but it is sufficient to account for the experimental data. Also, the mechanisms (mainly based on dynamic synapses but also on dynamic neuronal gains and adaptive firing thresholds) are biologically plausible and should be viewed as a research topic per se.

The literature about these homeostatic mechanisms is vast, and we do not intend to present an exhaustive review. However, we discuss here some prototypical mechanisms and try to connect them to self-organized quasicriticality (SOqC), a concept developed to account for nonconservative systems that hover around but do not exactly sit on the critical point [5-7].

For a better comparison between the models, we will not rely on the original notation of the reviewed articles, but will try to use a universal notation instead. For example, the synaptic strength between a presynaptic neuron j and a postsynaptic neuron i will always be denoted by $W_{ij}$ (notice the convention in the order of the indexes), the membrane potential is $V_i$, the binary firing state is $X_i$, the gain of the firing function is $\Gamma_i$, and the firing threshold is $\theta_i$. To prevent an excess of index subscripts as is usual in dynamical systems, like $V_{i,t}$, we use the convention $V_i(t)$ for continuous time and $V_i[t]$ for discrete time.

Last, before we begin, a few words about the fine-tuning problem. Even perfect SOC systems are in a sense fine-tuned: they must be conservative and require an infinite separation of time scales, with driving rate $h \to 0$ and dissipation rate $\epsilon \to 0$, with $h/\epsilon \to 0$ [3, 4, 7, 43, 45]. For homeostatic systems, we turn a control parameter like the coupling W into a time-dependent slow variable $W_{ij}(t)$ by imposing a local dynamics on the individual $W_{ij}$. This dynamics could depend on new parameters (here called hyperparameters) which need some tuning (in some cases, this tuning can be very coarse in the large τ case). Have we exchanged the fine-tuning of W for several tuning operations on the homeostatic hyperparameters? Not exactly, as nicely discussed by Hernandez-Urbina and Herrmann [47]:

To Tune or Not to Tune

In this article, we have shown how systems self-organize into a critical state through [homeostasis]. Thus, we became relieved from the task of fine-tuning the control parameter W, but instead, we acquire a new task: that of estimating the appropriate values for parameters B and D. Is there no way to be relieved from tuning any parameter in the system?

The issue of tuning or not tuning depends mainly on what we understand by control parameter. (…) a control parameter can be thought of a knob or dial that when turned the system exhibits some quantifiable change. We say that the system self-organizes if nobody turns that knob but the system itself. In order to achieve this, the elements comprising the system require a feedback mechanism to be able to change their inner dynamics in response to their surroundings. (…) The latter does not require an external entity to turn the dial for the system to exhibit critical dynamics. However, its internal dynamics are configured in a particular way in order to allow feedback mechanisms at the level of individual elements.

Did we fine-tune their configuration? Yes. Otherwise, we would have not achieved what was desired, as nothing comes out of nothing. Did we change control parameter from W to B and D? No, the control parameter is still intact, and now it is “in the hands” of the system. (…) Last and most importantly, the new configuration stresses the difference between global and local mechanisms. The control parameter W (the dial) is an external quantity that observes and governs the global (i.e., the collective), whereas [homeostasis] provides the system with local mechanisms that have an effect over the collective. This is the main feature of a complex system.

2 Plastic Synapses

Consider an absorbing-state second-order phase transition where the activity is $\rho = 0$ below a critical point $E_c$ and $\rho \propto (E - E_c)^\beta$ for $E > E_c$ (Eq. 1), where E is a generic control parameter (see Figures 1A,B). For topologies such as random and complete graphs, one typically obtains $\beta = 1$, which is consistent with a transition in the mean-field directed percolation (DP) class (or perhaps the conservative-DP (Manna) class usual in SOC models, which has the same mean-field exponents but different ones below the upper critical dimension; see Refs. 3, 7, 42, 48).

FIGURE 1

The basic idea underlying most of the proposed mechanisms for homeostatic self-organization is to define a slow dynamics in the individual links $W_{ij}$ such that if the network is in the subcritical state, their average value grows toward $W_c$, but if the network is in the supercritical state, it decreases toward $W_c$ (see Figure 1C). Ideally, these mechanisms should be local, that is, they should not have access to global network information such as the density of active sites ρ (the order parameter) but rather only to the local firing of the neurons connected by $W_{ij}$. In the following, we give several examples from the literature.

2.1 Short-Term Synaptic Plasticity

Markram and Tsodyks [49, 50] proposed a short-term synaptic model that inspired several authors in the area of self-organization to criticality. The Markram–Tsodyks (MT) dynamics is

$$\frac{dx_j}{dt} = \frac{1 - x_j}{\tau} - u\,x_j\,\delta(t - t_j^s)\,, \quad (2)$$

$$\frac{du}{dt} = \frac{U - u}{\tau_u} + U(1 - u)\,\delta(t - t_j^s)\,, \quad (3)$$

where $x_j$ is the available neurotransmitter resource, u is the fraction used after the presynaptic firing at time $t_j^s$ (so that the effective synaptic efficacy is $W_{ij} = A u x_j$), A and U are baseline constants (hyperparameters), and τ and $\tau_u$ are recovery time constants.

In an influential article, Levina, Herrmann, and Geisel (LHG) [51] proposed to use depressing–recovering synapses. In their model, we have leaky integrate-and-fire (LIF) neurons in a complete-graph topology. As a self-organizing mechanism, they used a simplified version of the MT dynamics with constant u, that is, only Eq. 2. They studied the system varying A and found that, although we need some tuning in the hyperparameter A, any initial distribution of synapses converges to a stationary distribution. We will refer to Eq. 2 with constant u as the LHG dynamics. These authors found quasicriticality for appropriate ranges of A and τ. Levina et al. also studied synapses with the full MT model in Refs. 52, 53.

Bonachela et al. [6] studied in depth the LHG model and found that, like forest-fire models, it is an instance of SOqC. The system presents the characteristic hovering around the critical point in the form of stochastic sawtooth oscillations in the that do not disappear in the thermodynamic limit. Using the same model, Wang and Zhou [54] showed that the LHG dynamics also works in hierarchical modular networks, with an apparent improvement in SOqC robustness in this topology.

Note that the LHG dynamics can be written in terms of the synaptic efficacy $W_{ij} = A u x_j$ by multiplying Eq. 2 by $Au$, leading to

$$\frac{dW_{ij}}{dt} = \frac{1}{\tau}\left(Au - W_{ij}\right) - u\,W_{ij}\,\delta(t - t_j^s)\,. \quad (4)$$

Brochini et al. [55] studied a complete graph of stochastic discrete time LIFs [56, 57] and proposed a discrete time LHG dynamics:

$$W_{ij}[t+1] = W_{ij}[t] + \frac{1}{\tau}\left(A - W_{ij}[t]\right) - u\,W_{ij}[t]\,X_j[t]\,, \quad (5)$$

where the firing index $X_j[t] \in \{0,1\}$ denotes spikes. Kinouchi et al. [58], in the same system, studied the stability of the fixed points of the joint neuronal LHG dynamics. They found that, for the average synaptic value W, the fixed point is $W^* = A/(1 + u\tau\rho^*)$, meaning that for large τ, the system approaches the critical point if $A > 1$. However, since it is not biologically plausible to assume an infinite recovering time τ, one always obtains a system which is slightly supercritical. This work also showed that the fixed point is a barely stable focus, around which the system is excited by finite size (demographic) noise, leading to the characteristic sawtooth oscillations of SOqC. A similar scenario was already found by Grassberger and Kantz for forest-fire models [59].
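To make the fixed-point argument concrete, the following minimal sketch iterates a mean-field branching activity together with a discrete LHG-like rule for the mean coupling. All numbers (A = 1.6, u = 0.1, τ = 500, a tiny external drive of 10⁻⁴ to escape the absorbing state, and the network size) are illustrative choices, not parameters taken from Refs. 55, 58:

```python
import numpy as np

def simulate_lhg(N=10000, A=1.6, u=0.1, tau=500.0, steps=20000, seed=42):
    """Mean coupling W under a discrete LHG-like rule on a complete graph
    of binary stochastic neurons (illustrative sketch, invented numbers)."""
    rng = np.random.default_rng(seed)
    W, rho = 0.5, 0.1       # mean synaptic strength and fraction of firing neurons
    ws = []
    for _ in range(steps):
        # branching-like activity update with a tiny external drive
        p = min(1.0, W * rho + 1e-4)
        rho = rng.binomial(N, p) / N
        # averaged LHG rule: slow recovery toward A, depression by the activity
        W += (A - W) / tau - u * W * rho
        ws.append(W)
    return W, ws
```

With these values, W climbs from 0.5 and then hovers slightly below 1, the critical point of the toy activity map p = Wρ, with demographic fluctuations around it.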

The discrete time LHG dynamics was also studied for cellular automata neurons in random networks with an average of K neighbors connected by probabilistic synapses $P_{ij}$ (Costa et al. [60], Campos et al. [61], and Kinouchi et al. [58]):

$$P_{ij}[t+1] = P_{ij}[t] + \frac{1}{\tau}\left(\frac{A}{K} - P_{ij}[t]\right) - u\,P_{ij}[t]\,X_j[t]\,,$$

with an upper limit $P_{\max}$. Summing over the K postsynaptic neighbors i, we get an equation for the local branching ratio $\sigma_j = \sum_i P_{ij}$:

$$\sigma_j[t+1] = \sigma_j[t] + \frac{1}{\tau}\left(A - \sigma_j[t]\right) - u\,\sigma_j[t]\,X_j[t]\,.$$

It has been found that such depressing synapses induce correlations inside the synaptic matrix, affecting the global branching ratio $\sigma = \langle \sigma_j \rangle$, so that criticality does not occur at the branching ratio $\sigma = 1$ but rather when the largest eigenvalue of the synaptic matrix is $\Lambda = 1$, with $\sigma \neq 1$ [61].

After examining this diverse literature, it seems that any homeostatic dynamics of the form

$$\frac{dW_{ij}}{dt} = R - D \quad (8)$$

can self-organize the networks, where R and D are the recovery and depressing processes, for example,

$$\frac{dW_{ij}}{dt} = \frac{1}{\tau}\left(A - W_{ij}\right) - u\,W_{ij}\,\delta(t - t_j^s)\,. \quad (9)$$

In particular, the simplest mechanism would be

$$\frac{dW_{ij}}{dt} = \frac{1}{\tau} - u\,\delta(t - t_j^s)\,, \quad (10)$$

a usual dynamics in SOC models [5, 7]. This means that the full LHG dynamics, and also the full MT dynamics, is a sufficient but not a necessary condition for SOqC.

The average for this dynamics is

$$\frac{dW}{dt} = \frac{1}{\tau} - u\,\rho(t)\,,$$

where ρ(t) is the time-dependent network activity. The stationary state is $\rho^* = 1/(u\tau)$, and if τ is large, this means that $\rho^* \approx 0$. Also, if we use Eq. 1, we get $W^* \approx W_c$. The dissipative term u should also be small, meaning that, if we desire absolute separation of time scales, we need $u \to 0$ and $\tau \to \infty$ with $u\tau \to \infty$, as is usual in other SOC systems [3, 5, 7, 43, 45].
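A two-line Euler integration of this averaged dynamics shows the approach to $W_c$. The numbers below ($W_c = 1$, τ = 1000, u = 0.1, and Eq. 1 taken with β = 1) are arbitrary illustrations; the stationary value is $W^* = W_c\,(1 + 1/(u\tau))$, slightly above the critical point, and approaches $W_c$ as uτ grows:

```python
def relax_to_Wc(W0=0.2, Wc=1.0, tau=1000.0, u=0.1, dt=0.1, steps=200000):
    """Euler integration of dW/dt = 1/tau - u*rho(W), with
    rho(W) = max(0, (W - Wc)/Wc) standing in for Eq. 1 with beta = 1."""
    W = W0
    for _ in range(steps):
        rho = max(0.0, (W - Wc) / Wc)   # activity vanishes below Wc
        W += dt * (1.0 / tau - u * rho)
    return W
```

Starting from the subcritical value W = 0.2, the coupling drifts up at rate 1/τ, crosses $W_c$, and settles at $W^* = 1.01$ for these numbers.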

Here, for biological plausibility, it is better to assume a large but finite recovery time, say hundreds of ms, in comparison with 1 ms for spikes. Also, to obtain SOqC, u need not be small. We must have $A > 1$ because $A < 1$ produces subcritical activity [6, 51, 58]. So, moderate u, $A > 1$, and large τ seem to be the coarse tuning conditions for homeostasis. This produces the hovering of the average value W(t) around the critical point $W_c$, with the characteristic sawtooth oscillations of SOqC and power-law avalanches for some decades.

We observe that the original LHG model [6, 51] had a recovery time $\tau \propto N$ to produce the infinite separation of time scales in the large-N limit. This, however, did not prevent the SOqC hovering stochastic oscillations in the thermodynamic limit. Moreover, a recovery time proportional to N is a very unrealistic feature for biological synapses. Curiously, if we use a finite τ instead, the oscillations are damped in the thermodynamic limit because the fixed point continues to be an attractive focus, but the demographic noise vanishes. On the other hand, when we use $\tau \propto N$, the fixed point loses its stability and continues to be perturbed even by the vanishing fluctuations [58].

As early as 1998, Kinouchi [62] proposed the synaptic dynamics

$$\frac{dW_{ij}}{dt} = \frac{1}{\tau}\left(A - W_{ij}\right) - u\,\delta(t - t_j^s)\,,$$

with small but finite τ and u. The difference here from the former mechanisms is that, like in Eq. 10, depression is not proportional to $W_{ij}$ (but recovery is). He also discussed the several concepts of SOC circulating at the time and called this homeostatic scenario self-tuned criticality, which is equivalent to a SOqC system with finite separation of time scales.

Hsu and Beggs [63] studied a model for the activity of the local field potential at electrode i, in which a spontaneous activity $H_i$ is used to prevent the freezing of the system in the absorbing state (this is similar to a time-dependent SOC drive term h) and the probabilistic coupling between electrodes is $P_{ij}$. Firing-rate homeostasis and critical homeostasis are achieved by increasing H and P if the firing rate is too low, and decreasing them if it is too high, compared to a target firing rate, where the comparison uses a moving average of the rate over a memory window.

Hsu and Beggs found that this dynamics leads to a critical branching ratio $\sigma \approx 1$. They also found that the target firing rate can be maintained by this homeostasis. Their update rule for $P_{ij}$ (their Eq. 15) reminds us of the depressing–recovering synaptic rule of Eq. 9: in the small-rate limit (as used by the authors), it reduces to a recovery term plus a depression term proportional to the activity. A similar reasoning applies to the equation for $H_i$, which could be identified with the homeostatic threshold Eq. 60 discussed in Section 4.
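The shared negative-feedback logic (not the actual Hsu–Beggs equations; the toy response curve and all constants below are invented) can be sketched as a coupling P nudged by the difference between a target rate and a moving average of the measured rate:

```python
def rate_homeostasis(r_target=0.2, tau=100.0, memory=0.9, steps=20000):
    """A scalar coupling P is raised when the smoothed rate is below the
    target and lowered when above (toy response curve, invented numbers)."""
    P = 0.01                               # coupling, cf. P_ij
    r_avg = 0.0                            # moving average of the rate
    rate = lambda p: min(1.0, 2.0 * p)     # toy rate-vs-coupling curve
    for _ in range(steps):
        r_avg = memory * r_avg + (1.0 - memory) * rate(P)
        P += (r_target - r_avg) / tau      # negative feedback
    return P, r_avg
```

For these numbers the feedback settles at the coupling whose rate equals the target (P = 0.1, rate 0.2), illustrating how rate homeostasis and coupling homeostasis can be one and the same loop.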

In another article, Hsu et al. [64] extended the model to include distance-dependent connectivity and Hebbian learning. Changing the homeostasis equations to our standard notation, $H_i$ is now a probability of spontaneous firing, $\rho^*$ is a target average activity, and $d_{ij}$ is the distance between electrodes i and j. The homeostasis also uses the input ratio of a site. Remember that, for a critical branching process, this ratio equals one; these values were chosen as homeostatic targets.

Shew et al. [65] studied experimentally the visual cortex of the turtle and proposed a (complete graph) self-organizing model for the input synapses and the cortical synapses. The stochastic neurons fire with a linear saturating function in which, like in Eq. 13, an external field accounts for external stimuli. For both types of synapses, they used the discrete time LHG dynamics, Eq. 5, and concluded that the computational model accounts very well for the experimental data.

Hernandez-Urbina and Herrmann [47] studied a discrete time IF model where they define a local measure called node success: the fraction of the postsynaptic targets of a presynaptic neuron j that fire in response to its spike. Here, A is the adjacency matrix of the network, with $A_{ij} = 1$ if j projects onto i ($A_{ij} = 0$ otherwise). Note that we reversed the indices as compared with the original notation [47]. Observe that the node success measures how many postsynaptic neurons are excited by the presynaptic neuron j.

The authors then define the node success–driven plasticity (NSDP), composed of a drive term and a dissipation term, where $\Delta t_j$ is the time difference between the spike of node j occurring at the current time step t and its previous spike (the last spike), while B and D are constants. Notice that the drive term is larger if the node success is small and the dissipation term is larger if the firing rate (inferred locally as $1/\Delta t_j$) is large [compare with Eq. 8].

They analyzed the relation among the avalanche critical exponents, the largest eigenvalue of the weight matrix, and the data collapse of the avalanche shapes for several network topologies. All results are compatible with (quasi-)criticality. They also found that if they stop NSDP and introduce STDP, the criticality vanishes, but if the two dynamics act together, criticality survives.

Levina et al. [66] proposed a model in a complete graph in which the branching ratio σ is estimated as the local branching of a neuron that initiates an avalanche. The homeostatic rule is to increase the synapses if $\sigma < 1$ and decrease them if $\sigma > 1$. The network converges, with SOqC oscillations, to $\sigma \approx 1$.

2.2 Meta-Plasticity

Peng and Beggs [67] studied a square lattice ($N = L \times L$) of IF neurons with open boundary conditions. A random neuron receives a small increment of voltage (slow drive). If the voltage of a presynaptic neuron j is above a threshold, it fires and distributes charge to its neighbors, where Θ is the Heaviside function. The self-organization is made by a LHG dynamics plus a meta-plasticity term, where $F_a$ is the total fraction of neurons at the boundary that fired during the a-th avalanche and $u[a+1]$ is the updated value of u after the avalanche. Notice that the meta-plasticity term differs from the MT model of Eq. 3, because the hyperparameter u is updated on a much slower time scale. Peng and Beggs show that this dynamics converges automatically to good values for the parameter u; that is, we no longer need to set the u value in advance. We observe, however, that $F_a$ is a nonlocal variable.

2.3 Hebbian Synapses

Ever since Donald Hebb’s proposal that neurons that fire together wire together [68-70], several attempts have been made to implement this idea in models of self-organization. However, a pure Hebbian mechanism can lead to diverging synapses, so that some kind of normalization or decay also needs to be included in Hebbian plasticity.

In 2006, de Arcangelis, Perrone-Capano, and Herrmann introduced a neuronal network with Hebbian synaptic dynamics [71] that we call the APH model. There are several small variations in the models proposed by de Arcangelis et al., but perhaps the simplest one [72] is given by the following neuronal dynamics on a square lattice of neurons: if at time t a presynaptic neuron j has a membrane potential above a firing threshold, it fires, sending neurotransmitters to all its (nonrefractory) neighbors (Eq. 28). Then, neuron j enters a refractory period of one time step. In the synaptic self-organizing dynamics, active (inactive) synapses are the ones used (not used) in Eq. 28; the normalizing sum in Eq. 30 runs over all synaptic modifications and the total number of bonds, a step which involves nonlocal information and amounts to a kind of synaptic rescaling. If the synaptic strength falls below some threshold, the synapse is deleted (pruning), so that this mechanism sculpts the network architecture. So, co-activation of pre- and postsynaptic neurons makes the synapse grow, and inactive synapses are depressed, which means that it is a Hebbian process. Several authors explored the APH model in different contexts, including learning phenomena [72-80].
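A minimal sketch of the generic idea, Hebbian growth plus rescaling (this is not the APH rule itself, and all parameter values are invented), shows how the normalization step keeps the total synaptic strength bounded while co-active links still gain at the expense of inactive ones:

```python
import numpy as np

def hebbian_with_normalization(N=50, steps=2000, eta=0.01, p_active=0.2, seed=3):
    """Hebbian strengthening of co-active links plus a global rescaling
    that keeps the total synaptic strength fixed (not the APH rule itself)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.5, 1.5, size=(N, N))
    np.fill_diagonal(W, 0.0)
    total0 = W.sum()                      # conserved total strength
    for _ in range(steps):
        x = (rng.random(N) < p_active).astype(float)  # random activity pattern
        W += eta * np.outer(x, x)         # fire together -> wire together
        np.fill_diagonal(W, 0.0)
        W *= total0 / W.sum()             # normalization stops divergence
    return W, total0
```

Without the rescaling line, every weight would grow without bound; with it, potentiation of some links necessarily depresses the rest, which is the competitive flavor of the APH mechanism.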

Çiftçi [81] studied a neuronal SIRS model on the C. elegans neuronal network topology. The spontaneous activation rate (the drive) is small, and the recovery rate to the susceptible state is q; the author studied the system as a function of the ratio between these rates (separation of time scales). The probability that a neuron j activates its neighbor i is $P_{ij}$ (defined through a probability of synaptic failure in the author’s notation). The synaptic update occurs after an avalanche (of size S) and affects two neighbors that are active at the same time (Hebbian term).

Çiftçi found robust self-organization to quasicriticality. The author notes, however, that S is nonlocal information.

Uhlig et al. [82] considered the effect of LHG synapses in the presence of an associative Hebbian synaptic matrix. They found that, although the two processes are not irreconcilable, the critical state has detrimental effects on the attractor recovery. They interpret this as a suggestion that the standard paradigm of memories as fixed point attractors should be replaced by more general approaches like transient dynamics [83].

2.4 Spike Time–Dependent Plasticity

Rubinov et al. [84] studied a hierarchical modular network of LIF neurons with STDP plasticity. The synaptic currents are modeled by double exponentials of the presynaptic firing times. The synaptic weight changes at every spike of a presynaptic neuron, following an STDP rule with weight-dependent potentiation and depression functions (see Ref. 84 for details). The authors show an association among modularity, low cost of wiring, STDP, and self-organized criticality in a neurobiologically realistic model of neuronal activity.
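The classic pair-based STDP window (a generic textbook form with invented constants, not the exact weight-dependent rule of Ref. 84) can be written as a function of the spike-time difference:

```python
import math

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Generic pair-based STDP window (invented constants): potentiation
    when the presynaptic spike precedes the postsynaptic one
    (dt = t_post - t_pre > 0), depression otherwise."""
    if dt > 0:
        return A_plus * math.exp(-dt / tau_plus)
    return -A_minus * math.exp(dt / tau_minus)
```

The asymmetry A_minus > A_plus, common in such sketches, biases the rule toward depression, one of the ways STDP alone can be kept from blowing up the weights.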

Del Papa et al. [85] explored the interaction between criticality and learning in the context of self-organized recurrent networks (SORN). The network has a fixed ratio of inhibitory to excitatory neurons, which interact via excitatory–excitatory, excitatory–inhibitory, and inhibitory–excitatory synapses (no inhibitory self-coupling). Synapses are dynamic, and so are the excitatory thresholds. The binary neurons evolve by comparing their summed input with their thresholds, plus a membrane noise term. Synapses and thresholds evolve following five combined dynamics (STDP, inhibitory STDP, synaptic normalization, intrinsic plasticity, and structural plasticity), where the intrinsic plasticity drives each neuron toward a desired activity level and, in the structural plasticity process, new excitatory synapses are added with a small probability. The authors found that this SORN model presents well-behaved power-law avalanche statistics and that the plastic mechanisms are necessary to drive the network to criticality, but not to maintain it critical; that is, the plasticity can be turned off after the networks reach the critical region. Also, they found that noise was essential to produce the avalanches but degrades the learning performance. From this, they conclude that the relation between criticality and learning is more complex, and it is not obvious if criticality optimizes learning.

Levina et al. [86] studied the combined effect of LHG synapses, a homeostatic branching parameter σ, and STDP. They found that these mechanisms cooperate in extending the robustness of the critical state to variations in the hyperparameter A (see Eq. 2).

Stepp et al. [87] examined a LIF neuronal network which has both Markram–Tsodyks dynamics and spike time–dependent plasticity (STDP, both excitatory and inhibitory). They found that, although the MT dynamics produces some self-organization, the STDP mechanism increases the robustness of the network criticality.

Delattre et al. [88] included in the STDP synaptic change a resource depletion term, where the resource availability recovers with a slow time constant and is depleted as a function of a continuous estimator of the network firing rate, a saturating term in the denominator ensuring that depletion is fast and recovery is slow. They called this mechanism network spiking–dependent plasticity and showed that, in contrast to pure STDP, it leads to power-law avalanches with a branching ratio around one.

2.5 Homeostatic Neurite Growth

Kossio et al. [89] studied IF neurons randomly distributed in a plane, with neurites distributed within circles of radii $R_i$ that evolved according to

$$\frac{dR_i}{dt} = \frac{1}{\tau} - u \sum_k \delta(t - t_i^k)\,, \quad (46)$$

where $t_i^k$ are the spike times of neuron i, with τ and u constants. Since the connections $W_{ij}$ are given by the product of a constant g and the overlapping areas of the synaptic discs, Eq. 46 is not much different from the simple synaptic dynamics of Eq. 10, with constant drive and decay due to spikes.
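In discrete time, the spirit of Eq. 46 is a radius that grows by a constant amount per step and retracts at each spike; the sketch below uses invented constants:

```python
def neurite_radius(spike_train, tau=100.0, u=0.5, dt=1.0, R0=1.0):
    """Constant outgrowth with spike-triggered retraction, in the spirit
    of Eq. 46 (invented constants). spike_train holds 0/1 per time bin."""
    R, out = R0, []
    for s in spike_train:
        R += dt / tau - u * s   # grow slowly, retract by u at each spike
        R = max(R, 0.0)         # a radius cannot be negative
        out.append(R)
    return out
```

A quiet neuron keeps growing its neurites (and hence its overlaps and couplings), while an active one retracts them, which is exactly the negative feedback needed for self-organization.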

Tetzlaff et al. [90] studied experimentally neuronal avalanches during the maturation of cell cultures, finding that criticality is achieved in a third stage of the dendrite/axon growth process. They modeled the system using neurons with a membrane potential and a calcium concentration, where a sign factor defines excitatory (inhibitory) neurons and a random number provides noise. Dendritic and axonal spatial distributions are again represented by their radii, whose dynamics are governed by the calcium dynamics. Finally, the effective connection $W_{ij}$ is defined in terms of the distance between the neurons. It essentially represents the overlap of the axonal and dendritic zones, which can be understood as an abstract representation of the probability of synapse formation.

3 Dynamic Neuronal Gains

For all-to-all topologies as used in Refs. 6, 51, 53, 55, the number of synapses is $O(N^2)$, which means that simulations become impractical for large N. Brochini et al. [55] discovered that, in their model with stochastic neurons, adaptation in a single parameter per neuron (the dynamic gain) is sufficient to self-organize the network. This reduces the number of dynamic equations from $O(N^2)$ to $O(N)$, enabling large-scale simulations.

The stochastic neuron has a probabilistic firing function, say, a linear saturating function or a rational function, where $X = 1$ means a spike, V is the membrane potential, θ is the threshold, and Γ is the neuronal gain.

Now, let us assume that each neuron i has its own neuronal gain $\Gamma_i$. Several adaptive dynamics work, similar to the LHG rule and even simpler ones, in which $\Gamma_i$ slowly recovers and decreases at each spike of neuron i (Eqs 55 and 56).

Costa et al. [91] and Kinouchi et al. [58] studied the stability of the fixed points of the mechanisms given by Eqs 55 and 56 and concluded that the fixed point is a barely stable focus for large τ, which means that demographic noise creates the hovering around the critical point (the sawtooth SOqC stochastic oscillations). The peaks of these oscillations correspond to large excursions in the supercritical region, producing the so-called dragon king avalanches [77].
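A sketch with one gain per neuron (O(N) dynamic variables; all parameter values are invented, not those of Refs. 55, 58, 91) illustrates the mechanism: each gain recovers slowly toward a baseline A and drops whenever its own neuron spikes, and the mean gain ends up hovering near the critical value of the toy activity map:

```python
import numpy as np

def dynamic_gains(N=2000, A=1.6, u=0.1, tau=500.0, steps=5000, seed=7):
    """One adaptive gain per neuron (O(N) equations, invented numbers):
    slow recovery toward A, drop at each spike of the same neuron."""
    rng = np.random.default_rng(seed)
    gamma = np.full(N, 0.5)
    rho = 0.1                                  # network activity
    for _ in range(steps):
        p = np.clip(gamma * rho + 1e-4, 0.0, 1.0)
        X = (rng.random(N) < p).astype(float)  # spikes
        rho = X.mean()
        gamma += (A - gamma) / tau - u * gamma * X
    return gamma.mean(), rho
```

Note that the update of each $\Gamma_i$ uses only that neuron's own spike, so the rule is strictly local, in contrast with synaptic rules that need presynaptic information.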

Zierenberg et al. [92] considered a cellular automaton neuronal model with binary states and probabilistic synapses scaled by a homeostatic factor $\nu_i$. The homeostasis is given by a negative feedback (Eq. 58) that increases $\nu_i$ when the activity of neuron i is below a target level $\rho^*$ and decreases it otherwise, with a time constant $\tau_h$ of the homeostatic process. Notice that this mechanism depends only on the activity of the postsynaptic neuron i, not the presynaptic neuron j as in the LHG model. So, $\nu_i$ plays the same role of the neuronal gain discussed above.

Indeed, for a cellular automata model similar to those of Refs. 60, 61, a probabilistic synapse with neuronal gains could be written as $P_{ij} = \Gamma_i W_{ij}$. Rewriting Eq. 58 in this form shows that, in Zierenberg et al., we have a neuronal gain dynamics similar to Eq. 10, with the system hovering around the critical point and the ubiquitous sawtooth oscillations.

4 Adaptive Firing Thresholds

Girardi-Schappo et al. [93] examined a network with excitatory and inhibitory stochastic LIF neurons. They found a phase diagram very similar to that of the Brunel model [94], with synchronous regular (SR), asynchronous regular (AR), synchronous irregular (SI), and asynchronous irregular (AI) states. Close to the balanced state, they found an absorbing-active second-order phase transition with a critical point. The self-organization of the excitatory and inhibitory synapses was accomplished by a LHG dynamics.

They noticed, however, that for these stochastic LIF systems, the critical point also requires a zero field, where the field depends on the external input I and the leakage parameter μ. While setting a zero field for the critical point of spin systems is natural, obtaining zero field in this case demands self-organization, which is done by an adaptive firing threshold (Eq. 60) in which the threshold decays slowly and increases whenever the neuron fires.

Notice the plus sign in this last term: if the postsynaptic neuron fires ($X_i = 1$), then the threshold must increase to hinder new firings. This mechanism is biologically plausible and also explains classical firing rate adaptation. Remembering that $\rho \propto h^{1/\delta_h}$ at the critical point, where $\delta_h$ is the field critical exponent, from Eq. 60, we obtain a vanishing field for a large threshold time constant.
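In discrete time, the idea reads: the threshold leaks toward zero and jumps by u at each spike (invented constants below), so that a tonically firing neuron settles at $\theta^* = u\tau$ while a silent one relaxes back down:

```python
def adaptive_threshold(spike_train, tau=100.0, u=0.1, theta0=1.0):
    """Threshold that leaks toward zero and jumps by u at each spike
    (the plus sign: firing raises the threshold). Invented constants."""
    theta, out = theta0, []
    for X in spike_train:
        theta += -theta / tau + u * X
        out.append(theta)
    return out
```

This is the same negative-feedback template as the synaptic and gain rules, now acting on the field rather than on the coupling.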

As already seen, Del Papa et al. [85] considered a similar threshold dynamics, Eq. 41. Bienenstock and Lehmann [95] also studied, at the mean field level, the joint evolution of firing thresholds and dynamic synapses (see Section 6.3).

5 Topological Self-Organization

Consider a cellular automata model [29, 32, 60, 61] in a network with average degree K and average probabilistic synaptic weight $\langle P \rangle$. The critical branching ratio is $\sigma = K\langle P \rangle = 1$; that is, the critical average weight is $P_c = 1/K$. Notice that we can study networks with any K, even the complete graph, where $K = N - 1$. In this network, what is critical is the activity, which does not depend on the topology (the degree K).

In another sense, we call a network topology critical if there is a barely infinite percolating cluster, which for a random network occurs at a critical average degree $K_c$. Several authors, starting in 2000 with Bornholdt and Rohlf [96], explored the self-organization toward this type of topological criticality [22, 97-104].

So, we can have a network with critical activity ($\sigma = 1$) and any K, or a topologically critical network with a well-defined $K_c$. The two concepts (activity criticality and topological criticality) are different, but sometimes topological criticality also presents a phase transition with power-law avalanches and critical phenomena. The topological phase transition is continuous and has a critical point, in general related to the formation of a percolating cluster of nodes; in the Bornholdt and Rohlf (BR) model, however, it is related to an order-chaos phase transition, not to an absorbing state phase transition.

We present here a more advanced version of the BR model [97]. It follows the idea of deleting synapses between correlated neurons and creating synapses between uncorrelated neurons. The correlation $C_{ij}(T)$ between the activities of neurons i and j is calculated over a time window T, with the stochastic neurons evolving as threshold units.

The self-organization procedure is as follows:

  • Choose at random a pair of neurons.

  • Calculate the correlation $C_{ij}(T)$.

  • Define a threshold α. If $|C_{ij}(T)| < \alpha$, i receives a new link randomly drawn from a uniform distribution from site j, and if $|C_{ij}(T)| \geq \alpha$, the link is deleted.

  • Then, continue updating the network state and self-organizing the network.
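One self-organization step of this procedure can be sketched as follows (a loose adaptation, not the exact BR rule; the weight distribution, the correlation estimator, and the threshold value are placeholders):

```python
import numpy as np

def rewire_step(states, W, alpha=0.8, rng=None):
    """One BR-like rewiring step (loose sketch): pick a random pair (i, j);
    if their activity correlation over the window is weak, add a random
    link j -> i, otherwise delete it. states has shape (time, neurons)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    N = W.shape[0]
    i, j = rng.choice(N, size=2, replace=False)
    si, sj = states[:, i], states[:, j]
    if si.std() == 0.0 or sj.std() == 0.0:
        c = 1.0                            # frozen units count as correlated
    else:
        c = float(np.corrcoef(si, sj)[0, 1])
    if abs(c) < alpha:
        W[i, j] = rng.uniform(-1.0, 1.0)   # add link from j to i
    else:
        W[i, j] = 0.0                      # delete link
    return W
```

The feedback is antagonistic to a frozen (highly correlated) network and to a fully decorrelated one, which is what pushes the connectivity toward the order-chaos boundary.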

Interesting analytic results for this class of topological models were obtained by Droste et al. [105]. The self-organized connectivity hovers around the value where the order-chaos transition occurs. We must notice, however, that this seems to be a very low degree for biological neuronal networks. Kuehn [106] studied how the topological dynamics time scale τ and the noise level D affect the BR model, finding that optimal convergence to the critical point occurs for finite values of τ and D.

Zeng et al. [107] combined the rewiring rules of the BR model with the neuronal dynamics of the APH model. They obtained an interesting result: the final topology is a small-world network with a large number of neighbors. This avoids the criticism made above about the low degree of the BR model.

6 Self-Organization to Other Phase Transitions

6.1 First-Order Transition

Mejias et al. [108] studied a neuronal population model written in terms of the firing density, combining a (deterministic) firing function with a zero-mean Gaussian noise of fixed amplitude. They used a depressing average synaptic weight inspired by a noisy LHG model, with its own synaptic noise amplitude. Within a certain range of noise, they observed up–down states with irregular intervals, leading to a power-law distribution of permanence times T in the upstate. Notice that this model already starts with the mean-field equations; it is not a microscopic model (although a microscopic model perhaps could be constructed from it).

Millman et al. [109] obtained similar results at a first-order phase transition, but now in a random network of LIF neurons with an average of K neighbors and chemical synapses. The synapses follow the LHG mechanism, written in the authors' notation in terms of the probability of releasing vesicles and the available synaptic resources. They found that the branching ratio is close to one in the upstate, with power-law avalanches with size exponent 3/2 and lifetime exponent 2.

Di Santo et al. [110, 111] and Buendía et al. [7, 46] studied the self-organization toward a first-order phase transition (called self-organized bistability or SOB). The simplest self-organizing dynamics was used in a two-dimensional model where A is the maximum level of charging, D is the diffusion constant, and the activity is driven by a zero-mean Gaussian noise with amplitude ρ; in the authors' original notation, E is a (former) control parameter. In the limit of infinite separation of time scales, this self-organization is conservative and can produce a tuning to the Maxwell point, with power-law avalanches (with mean-field exponents) and dragon-king quasi-periodic events.

Relaxing the conditions of infinite separation of time scales and bulk conservation, the authors studied the model with an LHG dynamics [7, 46, 111], where W is the synaptic weight and I a small external input. They found that this is the equivalent SOqC version for first-order phase transitions, obtaining hysteretic up–down activity, which has been called self-organized collective oscillations (SOCOs) [7, 46, 111]. They also observed bistability phenomena.
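A minimal mean-field caricature helps to visualize this LHG-type hovering. The two-variable toy system below is our own sketch (not the authors' Landau–Ginzburg equations, and all parameter values are assumptions): the activity rho is critical at W = 1, while W slowly recovers toward a supercritical resting value A and is depressed by activity, so the dynamics settles slightly above the critical point. With noise and finite size, this fixed point becomes the center of the stochastic sawtooth oscillations discussed in the text.

```python
# Toy mean-field sketch of an LHG-type self-organizing loop:
#   d(rho)/dt = (W - 1)*rho - rho**2 + I     activity, critical at W = 1
#   dW/dt     = (A - W)/tau - u*W*rho        slow recovery, fast depression

def simulate(steps=200_000, dt=0.01, A=1.5, tau=100.0, u=0.1, I=1e-3):
    rho, W = 0.01, 0.5
    traj = []
    for _ in range(steps):
        drho = (W - 1.0) * rho - rho * rho + I
        dW = (A - W) / tau - u * W * rho
        rho += dt * drho
        W += dt * dW
        traj.append((rho, W))
    return traj

traj = simulate()
rho_final, W_final = traj[-1]
# The weight settles slightly above the critical value W = 1,
# i.e., the system hovers in a weakly supercritical state.
print(f"rho = {rho_final:.4f}, W = {W_final:.4f}")
```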

Cowan et al. [112] also found hysteresis cycles due to bistability in an IF model that combines an excitatory feedback loop with anti-Hebbian synapses in its input pathway. This leads to avalanches both in the upstate and in the downstate, each with power-law statistics (size exponents close to the mean-field value 3/2). The hysteresis loop leads to a sawtooth oscillation in the average synaptic weight. This is similar to the SOCO scenario.

6.2 Hopf Bifurcation

Absorbing-active phase transitions are associated with transcritical bifurcations in the low-dimensional mean-field description of the order parameter. Other bifurcations (say, between fixed points and periodic orbits) can also appear in the low-dimensional reduction of systems exhibiting other phase transitions, such as those between steady states and collective oscillations. They are critical in the sense that they present phenomena like critical slowing down (power-law relaxation to the stationary state) and critical exponents. Some authors have explored homeostatic self-organization toward such bifurcation lines.

In what can be considered a precursor in this field, Bienenstock and Lehmann [95] proposed to apply a Hebbian-like dynamics at the level of rate dynamics to the Wilson–Cowan equations, showing that the model self-organizes near a Hopf bifurcation to/from oscillatory dynamics.

  • The model has excitatory and inhibitory stochastic neurons, where, as before, a binary variable denotes the firing of each neuron. The update process is an asynchronous (Glauber) dynamics whose firing probability is a sigmoidal function of the neuron's input, with a slope given by the neuronal gain.

The authors proposed a covariance-based regulation for the synapses and a homeostatic process for the firing thresholds. The homeostatic mechanisms are driven by the variance of the excitatory activity and by the excitatory–inhibitory covariance, each relaxing with its own time constant toward a target constant.
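The two regulation loops can be summarized in a schematic update step (our own simplified rendition; the functional form, parameter names, and values are assumptions, not the equations of Ref. [95]):

```python
def homeostatic_step(theta, w, var_E, cov_EI, dt=0.01,
                     tau_theta=50.0, tau_w=50.0,
                     var_target=0.1, cov_target=0.05):
    """Schematic regulation step: the firing threshold theta rises when
    the excitatory variance var_E exceeds its target (damping activity),
    and the synaptic weight w tracks the E-I covariance cov_EI toward
    its own target. Parameter names and values are assumptions."""
    theta += (dt / tau_theta) * (var_E - var_target)
    w += (dt / tau_w) * (cov_target - cov_EI)
    return theta, w

theta, w = homeostatic_step(1.0, 0.5, var_E=0.2, cov_EI=0.0)
print(theta, w)   # both quantities nudged upward in this case
```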

The authors show that there are Hopf and saddle-node lines in this system and that the regulated system self-organizes at the crossing of these lines. The system thus stays very close to the oscillatory bifurcation, showing great sensitivity to external inputs.

As commented, this article is a pioneer in the sense of searching for homeostatic self-organization at a phase transition in a neuronal network in 1998, well before the work of Beggs and Plenz [17]. However, we must recognize some deficiencies that later models tried to avoid. First, all the synapses and thresholds have the same value, instead of an individual dynamics for each one, as we saw in the preceding sections. Most importantly, the excitatory and inhibitory network activities are nonlocal quantities, not locally accessible to the homeostatic equations (Eqs 76 and 77).

  • Magnasco et al. [113] examined a very stylized model of neural activity with time-dependent anti-Hebbian synapses, whose update rule involves the Kronecker delta. They found that the system self-organizes around a Hopf bifurcation, showing power-law avalanches and hovering phenomena similar to SOqC.

6.3 Edge of Synchronization

Khoshkhou and Montakhab [114] studied a random network of Izhikevich neurons. The parameters a, b, c, and d are chosen to obtain regular spiking excitatory neurons and fast spiking inhibitory neurons. The synaptic input is composed of chemical double-exponential pulses with rise and decay time constants, involving the axonal delays from j to i, the reversal potential of the synapses, and the in-degree of node i.

The inhibitory synapses are fixed, but the excitatory ones evolve with an STDP dynamics: when the postsynaptic neuron i fires, each synapse is potentiated or depressed by an amount that depends on the spike-time difference between neurons i and j, decaying exponentially with that difference.

This system presents a transition from out-of-phase to synchronized spiking. The authors show that the STDP dynamics self-organizes the system, in a robust way, to the border of this transition, where critical features like avalanches (coexisting with oscillations) appear.
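An additive STDP window of this exponential type can be sketched as follows (the amplitudes and time constants are illustrative assumptions, not the values used in Ref. [114]):

```python
import math

def stdp_update(dt, a_plus=0.05, a_minus=0.055, tau_plus=20.0, tau_minus=20.0):
    """Additive STDP window: dt = t_post - t_pre (in ms, say).
    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses,
    and both contributions decay exponentially with |dt|."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

# Apply the update to an excitatory weight, clipped to [0, g_max]:
g, g_max = 0.5, 1.0
g = min(max(g + stdp_update(5.0), 0.0), g_max)   # causal pair: g increases
print(g)
```

A slight dominance of depression (a_minus > a_plus) is a common stabilizing choice in this kind of rule.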

7 Concluding Remarks

In this review, we described several examples of self-organization mechanisms that drive neuronal networks to the border of a phase transition (mostly a second-order absorbing phase transition, but also first-order, synchronization, Hopf, and order-chaos transitions). Remarkably, in all cases it is possible to detect neuronal avalanches with mean-field exponents similar to those obtained in the experiments of Beggs and Plenz [17].

By using a standardized notation, we recognized several common features between the proposed homeostatic mechanisms. Most of them are variants of the fundamental drive-dissipation dynamics of SOC and SOqC and can be grouped into a few classes.

Following Hernandez-Urbina and Herrmann [47], we stress that the coarse tuning of hyperparameters in homeostatic SOqC is not equivalent to the fine-tuning of the original control parameter. This homeostasis is a bona fide self-organization, in the same sense that the regulation of body temperature is self-organized (although presumably there are hyperparameters in that regulation). The advantage of these explicit homeostatic mechanisms is that they are biologically inspired and could be studied in future experiments to determine which are most relevant to cortical activity.

Due to nonconservative dynamics and the lack of an infinite separation of time scales, all these mechanisms lead to SOqC [57], not SOC. In particular, conservative sandpile models should not be used to model neuronal avalanches because neurons are not conservative. The presence of SOqC is revealed by stochastic sawtooth oscillations in the former control parameter, leading to large excursions in the supercritical and subcritical phases. However, hovering around the critical point seems to be sufficient to account for the current experimental data. Also, perhaps the omnipresent stochastic oscillations could be detected experimentally (some authors conjecture that they are the basis for brain rhythms [91]).

One suggestion for further research is to eliminate nonlocal variables in the homeostatic mechanisms. Another is to study how the branching ratio σ, or better, the largest eigenvalue of the synaptic matrix, depends on the self-organization hyperparameters (as done in Ref. [61]). As several results in this review have shown, the dependence of criticality on the hyperparameters is always weaker than the dependence on the original control parameter. Finally, one could devise local metaplasticity rules for the hyperparameters, similar to those of Peng and Beggs [67] (which, however, are unfortunately nonlocal). An intuitive possibility is that, at each level of metaplasticity, the need for coarse tuning of the hyperparameters decreases, so that criticality becomes more robust.

Funding

This article was produced as part of the activities of FAPESP Research, Innovation, and Dissemination Center for Neuromathematics (Grant No. 2013/07699-0, São Paulo Research Foundation). We acknowledge the financial support from CNPq (Grant Nos. 425329/2018-6, 301744/2018-1 and 2018/20277-0), FACEPE (Grant No. APQ-0642-1.05/18), and Center for Natural and Artificial Information Processing Systems (CNAIPS)-USP. Support from CAPES (Grant Nos. 88882.378804/2019-01 and 88882.347522/2010-01) and FAPESP (Grant Nos. 2018/20277-0 and 2019/12746-3) is also gratefully acknowledged.

Statements

Author contributions

OK and MC contributed to conception and design of the study; RP organized the database of revised articles and made Figure 1; OK and MC wrote the manuscript. All authors contributed to manuscript revision, and read and approved the submitted version.

Acknowledgments

The authors thank Miguel Muñoz for discussions and advice.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  • 1. Bak P, Tang C, Wiesenfeld K. Self-organized criticality: an explanation of the 1/f noise. Phys Rev Lett (1987). 59:381. 10.1103/physrevlett.59.381

  • 2. Wilkinson D, Willemsen JF. Invasion percolation: a new form of percolation theory. J Phys A Math Gen (1983). 16:3365. 10.1088/0305-4470/16/14/028

  • 3. Jensen HJ. Self-organized criticality: emergent complex behavior in physical and biological systems. Vol. 10. Cambridge: Cambridge University Press (1998).

  • 4. Pruessner G. Self-organised criticality: theory, models and characterization. Cambridge: Cambridge University Press (2012).

  • 5. Bonachela JA, Muñoz MA. Self-organization without conservation: true or just apparent scale-invariance? J Stat Mech (2009). 2009:P09009. 10.1088/1742-5468/2009/09/P09009

  • 6. Bonachela JA, de Franciscis S, Torres JJ, Muñoz MA. Self-organization without conservation: are neuronal avalanches generically critical? J Stat Mech (2010). 2010:P02015. 10.1088/1742-5468/2010/02/P02015

  • 7. Buendía V, di Santo S, Bonachela JA, Muñoz MA. Feedback mechanisms for self-organization to the edge of a phase transition. Front Phys (2020). 8:333. 10.3389/fphy.2020.00333

  • 8. Palmieri L, Jensen HJ. The emergence of weak criticality in SOC systems. EPL (2018). 123:20002. 10.1209/0295-5075/123/20002

  • 9. Palmieri L, Jensen HJ. The forest fire model: the subtleties of criticality and scale invariance. Front Phys (2020). 8:257. 10.3389/fphy.2020.00257

  • 10. Turing AM. Computing machinery and intelligence. Mind (1950). LIX:433. 10.1093/mind/lix.236.433

  • 11. Usher M, Stemmler M, Olami Z. Dynamic pattern formation leads to 1/f noise in neural populations. Phys Rev Lett (1995). 74:326. 10.1103/PhysRevLett.74.326

  • 12. Corral Á, Pérez CJ, Díaz-Guilera A, Arenas A. Synchronization in a lattice model of pulse-coupled oscillators. Phys Rev Lett (1995). 75:3697. 10.1103/PhysRevLett.75.3697

  • 13. Bottani S. Pulse-coupled relaxation oscillators: from biological synchronization to self-organized criticality. Phys Rev Lett (1995). 74:4189. 10.1103/PhysRevLett.74.4189

  • 14. Chen D-M, Wu S, Guo A, Yang ZR. Self-organized criticality in a cellular automaton model of pulse-coupled integrate-and-fire neurons. J Phys A Math Gen (1995). 28:5177. 10.1088/0305-4470/28/18/009

  • 15. Herz AVM, Hopfield JJ. Earthquake cycles and neural reverberations: collective oscillations in systems with pulse-coupled threshold elements. Phys Rev Lett (1995). 75:1222. 10.1103/PhysRevLett.75.1222

  • 16. Middleton AA, Tang C. Self-organized criticality in nonconserved systems. Phys Rev Lett (1995). 74:742. 10.1103/PhysRevLett.74.742

  • 17. Beggs JM, Plenz D. Neuronal avalanches in neocortical circuits. J Neurosci (2003). 23:11167–77. 10.1523/JNEUROSCI.23-35-11167.2003

  • 18. Stassinopoulos D, Bak P. Democratic reinforcement: a principle for brain function. Phys Rev E (1995). 51:5033. 10.1103/physreve.51.5033

  • 19. Chialvo DR, Bak P. Learning from mistakes. Neuroscience (1999). 90:1137–48. 10.1016/S0306-4522(98)00472-2

  • 20. Bak P, Chialvo DR. Adaptive learning by extremal dynamics and negative feedback. Phys Rev E (2001). 63:031912. 10.1103/PhysRevE.63.031912

  • 21. Chialvo DR. Emergent complex neural dynamics. Nat Phys (2010). 6:744–50. 10.1038/nphys1803

  • 22. Hesse J, Gross T. Self-organized criticality as a fundamental property of neural systems. Front Syst Neurosci (2014). 8:166. 10.3389/fnsys.2014.00166

  • 23. Plenz D, Niebur E, editors. Criticality in neural systems. Hoboken: John Wiley & Sons (2014).

  • 24. Cocchi L, Gollo LL, Zalesky A, Breakspear M. Criticality in the brain: a synthesis of neurobiology, models and cognition. Prog Neurobiol (2017). 158:132–52. 10.1016/j.pneurobio.2017.07.002

  • 25. Muñoz MA. Colloquium: criticality and dynamical scaling in living systems. Rev Mod Phys (2018). 90:031001. 10.1103/RevModPhys.90.031001

  • 26. Wilting J, Priesemann V. 25 years of criticality in neuroscience—established results, open controversies, novel concepts. Curr Opin Neurobiol (2019). 58:105–11. 10.1016/j.conb.2019.08.002

  • 27. Zeraati R, Priesemann V, Levina A. Self-organization toward criticality by synaptic plasticity. arXiv (2020). 2010.07888.

  • 28. Haldeman C, Beggs JM. Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys Rev Lett (2005). 94:058101. 10.1103/PhysRevLett.94.058101

  • 29. Kinouchi O, Copelli M. Optimal dynamical range of excitable networks at criticality. Nat Phys (2006). 2:348–51. 10.1038/nphys289

  • 30. Copelli M, Campos PRA. Excitable scale free networks. Eur Phys J B (2007). 56:273–78. 10.1140/epjb/e2007-00114-7

  • 31. Wu A-C, Xu X-J, Wang Y-H. Excitable Greenberg-Hastings cellular automaton model on scale-free networks. Phys Rev E (2007). 75:032901. 10.1103/PhysRevE.75.032901

  • 32. Assis VRV, Copelli M. Dynamic range of hypercubic stochastic excitable media. Phys Rev E (2008). 77:011923. 10.1103/PhysRevE.77.011923

  • 33. Beggs JM. The criticality hypothesis: how local cortical networks might optimize information processing. Phil Trans R Soc A (2008). 366:329–43. 10.1098/rsta.2007.2092

  • 34. Ribeiro TL, Copelli M. Deterministic excitable media under Poisson drive: power law responses, spiral waves, and dynamic range. Phys Rev E (2008). 77:051911. 10.1103/PhysRevE.77.051911

  • 35. Shew WL, Yang H, Petermann T, Roy R, Plenz D. Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. J Neurosci (2009). 29:15595–600. 10.1523/JNEUROSCI.3864-09.2009

  • 36. Larremore DB, Shew WL, Restrepo JG. Predicting criticality and dynamic range in complex networks: effects of topology. Phys Rev Lett (2011). 106:058101. 10.1103/PhysRevLett.106.058101

  • 37. Shew WL, Yang H, Yu S, Roy R, Plenz D. Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. J Neurosci (2011). 31:55–63. 10.1523/JNEUROSCI.4637-10.2011

  • 38. Shew WL, Plenz D. The functional benefits of criticality in the cortex. Neuroscientist (2013). 19:88–100. 10.1177/1073858412445487

  • 39. Mosqueiro TS, Maia LP. Optimal channel efficiency in a sensory network. Phys Rev E (2013). 88:012712. 10.1103/PhysRevE.88.012712

  • 40. Wang C-Y, Wu ZX, Chen MZQ. Approximate-master-equation approach for the Kinouchi-Copelli neural model on networks. Phys Rev E (2017). 95:012310. 10.1103/PhysRevE.95.012310

  • 41. Zierenberg J, Wilting J, Priesemann V, Levina A. Tailored ensembles of neural networks optimize sensitivity to stimulus statistics. Phys Rev Res (2020). 2:013115. 10.1103/physrevresearch.2.013115

  • 42. Galera EF, Kinouchi O. Physics of psychophysics: two coupled square lattices of spiking neurons have huge dynamic range at criticality. arXiv (2020). 11254.

  • 43. Dickman R, Vespignani A, Zapperi S. Self-organized criticality as an absorbing-state phase transition. Phys Rev E (1998). 57:5095. 10.1103/PhysRevE.57.5095

  • 44. Muñoz MA, Dickman R, Vespignani A, Zapperi S. Avalanche and spreading exponents in systems with absorbing states. Phys Rev E (1999). 59:6175. 10.1103/PhysRevE.59.6175

  • 45. Dickman R, Muñoz MA, Vespignani A, Zapperi S. Paths to self-organized criticality. Braz J Phys (2000). 30:27–41. 10.1590/S0103-97332000000100004

  • 46. Buendía V, di Santo S, Villegas P, Burioni R, Muñoz MA. Self-organized bistability and its possible relevance for brain dynamics. Phys Rev Res (2020). 2:013318. 10.1103/PhysRevResearch.2.013318

  • 47. Hernandez-Urbina V, Herrmann JM. Self-organized criticality via retro-synaptic signals. Front Phys (2017). 4:54. 10.3389/fphy.2016.00054

  • 48. Lübeck S. Universal scaling behavior of non-equilibrium phase transitions. Int J Mod Phys B (2004). 18:3977–4118. 10.1142/s0217979204027748

  • 49. Markram H, Tsodyks M. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature (1996). 382:807–10. 10.1038/382807a0

  • 50. Tsodyks M, Pawelzik K, Markram H. Neural networks with dynamic synapses. Neural Comput (1998). 10:821–35. 10.1162/089976698300017502

  • 51. Levina A, Herrmann JM, Geisel T. Dynamical synapses causing self-organized criticality in neural networks. Nat Phys (2007a). 3:857–60. 10.1038/nphys758

  • 52. Levina A, Herrmann JM. Dynamical synapses give rise to a power-law distribution of neuronal avalanches. Adv Neural Inf Process Syst (2006). 771–8.

  • 53. Levina A, Herrmann JM, Geisel T. Phase transitions towards criticality in a neural system with adaptive interactions. Phys Rev Lett (2009). 102:118110. 10.1103/PhysRevLett.102.118110

  • 54. Wang SJ, Zhou C. Hierarchical modular structure enhances the robustness of self-organized criticality in neural networks. New J Phys (2012). 14:023005. 10.1088/1367-2630/14/2/023005

  • 55. Brochini L, de Andrade Costa A, Abadi M, Roque AC, Stolfi J, Kinouchi O. Phase transitions and self-organized criticality in networks of stochastic spiking neurons. Sci Rep (2016). 6:35831. 10.1038/srep35831

  • 56. Gerstner W, van Hemmen JL. Associative memory in a network of 'spiking' neurons. Netw Comput Neural Syst (1992). 3:139–64. 10.1088/0954-898X_3_2_004

  • 57. Galves A, Löcherbach E. Infinite systems of interacting chains with memory of variable length—a stochastic model for biological neural nets. J Stat Phys (2013). 151:896–921. 10.1007/s10955-013-0733-9

  • 58. Kinouchi O, Brochini L, Costa AA, Campos JGF, Copelli M. Stochastic oscillations and dragon king avalanches in self-organized quasi-critical systems. Sci Rep (2019). 9:1–12. 10.1038/s41598-019-40473-1

  • 59. Grassberger P, Kantz H. On a forest fire model with supposed self-organized criticality. J Stat Phys (1991). 63:685–700. 10.1007/BF01029205

  • 60. Costa AdA, Copelli M, Kinouchi O. Can dynamical synapses produce true self-organized criticality? J Stat Mech (2015). 2015:P06004. 10.1088/1742-5468/2015/06/P06004

  • 61. Campos JGF, Costa AdA, Copelli M, Kinouchi O. Correlations induced by depressing synapses in critically self-organized networks with quenched dynamics. Phys Rev E (2017). 95:042303. 10.1103/PhysRevE.95.042303

  • 62. Kinouchi O. Self-organized (quasi-)criticality: the extremal Feder and Feder model. arXiv (1998). 9802311.

  • 63. Hsu D, Beggs JM. Neuronal avalanches and criticality: a dynamical model for homeostasis. Neurocomputing (2006). 69:1134–36. 10.1016/j.neucom.2005.12.060

  • 64. Hsu D, Tang A, Hsu M, Beggs JM. Simple spontaneously active Hebbian learning model: homeostasis of activity and connectivity, and consequences for learning and epileptogenesis. Phys Rev E (2007). 76:041909. 10.1103/PhysRevE.76.041909

  • 65. Shew WL, Clawson WP, Pobst J, Karimipanah Y, Wright NC, Wessel R. Adaptation to sensory input tunes visual cortex to criticality. Nat Phys (2015). 11:659–63. 10.1038/nphys3370

  • 66. Levina A, Ernst U, Herrmann JM. Criticality of avalanche dynamics in adaptive recurrent networks. Neurocomputing (2007b). 70:1877–81. 10.1016/j.neucom.2006.10.056

  • 67. Peng J, Beggs JM. Attaining and maintaining criticality in a neuronal network model. Physica A (2013). 392:1611–20. 10.1016/j.physa.2012.11.013

  • 68. Hebb DO. The organization of behavior: a neuropsychological theory. Hoboken: J. Wiley; Chapman & Hall (1949).

  • 69. Turrigiano GG, Nelson SB. Hebb and homeostasis in neuronal plasticity. Curr Opin Neurobiol (2000). 10:358–64. 10.1016/S0959-4388(00)00091-X

  • 70. Kuriscak E, Marsalek P, Stroffek J, Toth PG. Biological context of Hebb learning in artificial neural networks, a review. Neurocomputing (2015). 152:27–35. 10.1016/j.neucom.2014.11.022

  • 71. de Arcangelis L, Perrone-Capano C, Herrmann HJ. Self-organized criticality model for brain plasticity. Phys Rev Lett (2006). 96:028107. 10.1103/PhysRevLett.96.028107

  • 72. Lombardi F, Herrmann HJ, de Arcangelis L. Balance of excitation and inhibition determines 1/f power spectrum in neuronal networks. Chaos (2017). 27:047402. 10.1063/1.4979043

  • 73. Pellegrini GL, de Arcangelis L, Herrmann HJ, Perrone-Capano C. Activity-dependent neural network model on scale-free networks. Phys Rev E (2007). 76:016107. 10.1103/PhysRevE.76.016107

  • 74. de Arcangelis L. Are dragon-king neuronal avalanches dungeons for self-organized brain activity? Eur Phys J Spec Top (2012). 205:243–57. 10.1140/epjst/e2012-01574-6

  • 75. de Arcangelis L, Herrmann HJ. Activity-dependent neuronal model on complex networks. Front Physiol (2012). 3:62. 10.3389/fphys.2012.00062

  • 76. Lombardi F, Herrmann HJ, Perrone-Capano C, Plenz D, de Arcangelis L. Balance between excitation and inhibition controls the temporal organization of neuronal avalanches. Phys Rev Lett (2012). 108:228703. 10.1103/PhysRevLett.108.228703

  • 77. de Arcangelis L, Lombardi F, Herrmann HJ. Criticality in the brain. J Stat Mech (2014). 2014:P03026. 10.1088/1742-5468/2014/03/P03026

  • 78. Lombardi F, Herrmann HJ, Plenz D, de Arcangelis L. On the temporal organization of neuronal avalanches. Front Syst Neurosci (2014). 8:204. 10.3389/fnsys.2014.00204

  • 79. Lombardi F, de Arcangelis L. Temporal organization of ongoing brain activity. Eur Phys J Spec Top (2014). 223:2119–30. 10.1140/epjst/e2014-02253-4

  • 80. van Kessenich LM, de Arcangelis L, Herrmann HJ. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches. Sci Rep (2016). 6:32071. 10.1038/srep32071

  • 81. Çiftçi K. Synaptic noise facilitates the emergence of self-organized criticality in the Caenorhabditis elegans neuronal network. Netw Comput Neural Syst (2018). 29:1–19. 10.1080/0954898X.2018.1535721

  • 82. Uhlig M, Levina A, Geisel T, Herrmann JM. Critical dynamics in associative memory networks. Front Comput Neurosci (2013). 7:87. 10.3389/fncom.2013.00087

  • 83. Rabinovich M, Huerta R, Laurent G. Neuroscience: transient dynamics for neural processing. Science (2008). 321:48–50. 10.1126/science.1155564

  • 84. Rubinov M, Sporns O, Thivierge JP, Breakspear M. Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons. PLoS Comput Biol (2011). 7:e1002038. 10.1371/journal.pcbi.1002038

  • 85. Del Papa B, Priesemann V, Triesch J. Criticality meets learning: criticality signatures in a self-organizing recurrent neural network. PLoS One (2017). 12:e0178683. 10.1371/journal.pone.0178683

  • 86. Levina A, Herrmann JM, Geisel T. Theoretical neuroscience of self-organized criticality: from formal approaches to realistic models. In: Plenz D, Niebur E, editors. Criticality in neural systems. Hoboken: Wiley Online Library (2014). 417–36. 10.1002/9783527651009.ch20

  • 87. Stepp N, Plenz D, Srinivasa N. Synaptic plasticity enables adaptive self-tuning critical networks. PLoS Comput Biol (2015). 11:e1004043. 10.1371/journal.pcbi.1004043

  • 88. Delattre V, Keller D, Perich M, Markram H, Muller EB. Network-timing-dependent plasticity. Front Cell Neurosci (2015). 9:220. 10.3389/fncel.2015.00220

  • 89. Kossio FYK, Goedeke S, van den Akker B, Ibarz B, Memmesheimer RM. Growing critical: self-organized criticality in a developing neural system. Phys Rev Lett (2018). 121:058301. 10.1103/PhysRevLett.121.058301

  • 90. Tetzlaff C, Okujeni S, Egert U, Wörgötter F, Butz M. Self-organized criticality in developing neuronal networks. PLoS Comput Biol (2010). 6:e1001013. 10.1371/journal.pcbi.1001013

  • 91. Costa A, Brochini L, Kinouchi O. Self-organized supercriticality and oscillations in networks of stochastic spiking neurons. Entropy (2017). 19:399. 10.3390/e19080399

  • 92. Zierenberg J, Wilting J, Priesemann V. Homeostatic plasticity and external input shape neural network dynamics. Phys Rev X (2018). 8:031018. 10.1103/PhysRevX.8.031018

  • 93. Girardi-Schappo M, Brochini L, Costa AA, Carvalho TT, Kinouchi O. Synaptic balance due to homeostatically self-organized quasicritical dynamics. Phys Rev Res (2020). 2:012042. 10.1103/PhysRevResearch.2.012042

  • 94. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci (2000). 8:183–208. 10.1023/A:1008925309027

  • 95. Bienenstock E, Lehmann D. Regulated criticality in the brain? Adv Complex Syst (1998). 1:361–84. 10.1142/S0219525998000223

  • 96. Bornholdt S, Rohlf T. Topological evolution of dynamical networks: global criticality from local dynamics. Phys Rev Lett (2000). 84:6114. 10.1103/PhysRevLett.84.6114

  • 97. Bornholdt S, Röhl T. Self-organized critical neural networks. Phys Rev E (2003). 67:066118. 10.1103/PhysRevE.67.066118

  • 98. Rohlf T. Self-organization of heterogeneous topology and symmetry breaking in networks with adaptive thresholds and rewiring. Europhys Lett (2008). 84:10004. 10.1209/0295-5075/84/10004

  • 99. Gross T, Sayama H. Adaptive networks. Berlin: Springer (2009). 1–8. 10.1007/978-3-642-01284-6_1

  • 100. Rohlf T, Bornholdt S. Self-organized criticality and adaptation in discrete dynamical networks. In: Adaptive networks. Berlin: Springer (2009). 73–106.

  • 101. Meisel C, Gross T. Adaptive self-organization in a realistic neural network model. Phys Rev E (2009). 80:061917. 10.1103/PhysRevE.80.061917

  • 102. Min L, Gang Z, Tian-Lun C. Influence of selective edge removal and refractory period in a self-organized critical neuron model. Commun Theor Phys (2009). 52:351. 10.1088/0253-6102/52/2/31

  • 103. Rybarsch M, Bornholdt S. Avalanches in self-organized critical neural networks: a minimal model for the neural SOC universality class. PLoS One (2014). 9:e93090. 10.1371/journal.pone.0093090

  • 104. Cramer B, Stöckel D, Kreft M, Wibral M, Schemmel J, Meier K, et al. Control of criticality and computation in spiking neuromorphic networks with plasticity. Nat Commun (2020). 11:1–11. 10.1038/s41467-020-16548-3

  • 105. Droste F, Do AL, Gross T. Analytical investigation of self-organized criticality in neural networks. J R Soc Interface (2013). 10:20120558. 10.1098/rsif.2012.0558

  • 106. Kuehn C. Time-scale and noise optimality in self-organized critical adaptive networks. Phys Rev E (2012). 85:026103. 10.1103/PhysRevE.85.026103

  • 107. Zeng H-L, Zhu C-P, Guo Y-D, Teng A, Jia J, Kong H, et al. Power-law spectrum and small-world structure emerge from coupled evolution of neuronal activity and synaptic dynamics. J Phys: Conf Ser (2015). 604:012023. 10.1088/1742-6596/604/1/012023

  • 108. Mejias JF, Kappen HJ, Torres JJ. Irregular dynamics in up and down cortical states. PLoS One (2010). 5:e13651. 10.1371/journal.pone.0013651

  • 109. Millman D, Mihalas S, Kirkwood A, Niebur E. Self-organized criticality occurs in non-conservative neuronal networks during 'up' states. Nat Phys (2010). 6:801–5. 10.1038/nphys1757

  • 110. di Santo S, Burioni R, Vezzani A, Muñoz MA. Self-organized bistability associated with first-order phase transitions. Phys Rev Lett (2016). 116:240601. 10.1103/PhysRevLett.116.240601

  • 111. di Santo S, Villegas P, Burioni R, Muñoz MA. Landau–Ginzburg theory of cortex dynamics: scale-free avalanches emerge at the edge of synchronization. Proc Natl Acad Sci USA (2018). 115:E1356–65. 10.1073/pnas.1712989115

  • 112. Cowan JD, Neuman J, Kiewiet B, van Drongelen W. Self-organized criticality in a network of interacting neurons. J Stat Mech (2013). 2013:P04030. 10.1088/1742-5468/2013/04/p04030

  • 113. Magnasco MO, Piro O, Cecchi GA. Self-tuned critical anti-Hebbian networks. Phys Rev Lett (2009). 102:258102. 10.1103/PhysRevLett.102.258102

  • 114. Khoshkhou M, Montakhab A. Spike-timing-dependent plasticity with axonal delay tunes networks of Izhikevich neurons to the edge of synchronization transition with scale-free avalanches. Front Syst Neurosci (2019). 13:73. 10.3389/fnsys.2019.00073

Keywords

self-organized criticality, neuronal avalanches, self-organization, neuronal networks, adaptive networks, homeostasis, synaptic depression, learning

Citation

Kinouchi O, Pazzini R and Copelli M (2020) Mechanisms of Self-Organized Quasicriticality in Neuronal Network Models. Front. Phys. 8:583213. doi: 10.3389/fphy.2020.583213

Received

14 July 2020

Accepted

19 October 2020

Published

23 December 2020

Volume

8 - 2020

Edited by

Attilio L. Stella, University of Padua, Italy

Reviewed by

Srutarshi Pradhan, Norwegian University of Science and Technology, Norway

Ignazio Licata, Institute for Scientific Methodology (ISEM), Italy

Copyright

*Correspondence: Osame Kinouchi,

This article was submitted to Interdisciplinary Physics, a section of the journal Frontiers in Physics

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
