Mechanisms of self-organized quasicriticality in neuronal network models

The critical brain hypothesis states that there are information processing advantages for neuronal networks working close to the critical region of a phase transition. If this is true, we must ask how the networks achieve and maintain this critical state. Here we review several proposed biological mechanisms that turn the critical region into an attractor of a dynamics in network parameters like synapses, neuronal gains and firing thresholds. Since neuronal networks (both biological and models) are nonconservative (dissipative), we expect not exact criticality but self-organized quasicriticality (SOqC), where the system hovers around the critical point.


I. INTRODUCTION
Thirty-three years after the initial formulation of the self-organized criticality (SOC) concept [1] (and thirty-seven years after the self-organizing extremal invasion percolation model [2]), one of the most active areas that employ these ideas is theoretical neuroscience.
However, neuronal networks, similarly to earthquakes and forest fires, are nonconservative systems, in contrast to canonical SOC systems like sandpile models [3,4]. To model such systems, one uses nonconservative networks of elements represented by cellular automata, discrete time maps or differential equations. Such models have features distinct from those of conservative systems. A large fraction of them, in particular neuronal networks, has been described as displaying self-organized quasicriticality (SOqC) [5][6][7] or weak criticality [8,9], which is the subject of this review.
The first person to draw an analogy between brain activity and a critical branching process was probably Alan Turing, in his memorable paper Computing Machinery and Intelligence [10]. Decades later, the idea that SOC models could be important to describe the activity of neuronal networks was in the air as early as 1995 [11][12][13][14][15][16], eight years before the fundamental 2003 experimental paper of Beggs and Plenz [17] reporting neuronal avalanches. This occurred because several authors, working with models for earthquakes and pulse-coupled threshold elements, noticed the formal analogy between such systems and networks of integrate-and-fire neurons. Critical learning was also conjectured by Chialvo and Bak [18][19][20]. However, in the absence of experimental support, these works, although prescient, were basically theoretical conjectures. A historical question would be to determine to what extent this early literature motivated Beggs and Plenz to perform their experiments.
An important question is how neuronal networks self-organize toward the critical region.
The question arises because, like earthquake and forest-fire models, neuronal networks are not conservative systems, which means that in principle they cannot be exactly critical [5,6,44,45]. In these networks, we can vary control parameters like the strength of synapses and obtain subcritical, critical and supercritical behavior. The critical point is therefore achieved only by fine-tuning.
Over time, several authors proposed different biological mechanisms that could eliminate the fine-tuning and make the critical region a self-organized attractor. The obtained criticality is not perfect, but it is sufficient to account for the experimental data. Also, the mechanisms (mainly based on dynamic synapses but also on dynamic neuronal gains and adaptive firing thresholds) are biologically plausible and should be viewed as a research topic per se.
The literature about these homeostatic mechanisms is vast and we do not intend to present an exhaustive review. However, we discuss here some prototypical mechanisms and try to connect them to self-organized quasicriticality (SOqC), a concept developed to account for non-conservative systems that hover around but do not exactly sit on the critical point [5][6][7].
For a better comparison between the models, we will not rely on the original notation of the reviewed articles, but will try to use a universal notation instead. For example, the synaptic strength between a presynaptic neuron j and a postsynaptic neuron i will be always denoted by W ij (notice the convention in the order of the indexes), the membrane potential is V i , the binary firing state is s i ∈ {0, 1}, the gain of the firing function is Γ i , the firing threshold is θ i and so on. To prevent an excess of index subscripts as is usual in dynamical systems, like W ij,t , we use the convention W ij (t) for continuous time and W ij [t] for discrete time.
Lastly, before we begin, a few words about the fine-tuning problem. Even perfect SOC systems are in a sense fine-tuned: they must be conservative and require an infinite separation of time scales, with driving rate 1/τ → 0^+ and dissipation rate u → 0^+ with 1/(τu) → 0 [3,4,7,42,44]. In the words of Hernandez-Urbina and Herrmann [46], a control parameter is a dial that, when turned, makes the system exhibit some quantifiable change. We say that the system self-organizes if nobody turns that knob but the system itself. In order to achieve this, the elements comprising the system require a feedback mechanism to be able to change their inner dynamics in response to their surroundings. (...) The latter does not require an external entity to turn the dial for the system to exhibit critical dynamics. However, its internal dynamics are configured in a particular way in order to allow feedback mechanisms at the level of individual elements.
Did we fine-tune their configuration? Yes. Otherwise, we would not have achieved what was desired, as nothing comes out of nothing. Did we change the control parameter from W to A, B, C, and D? No, the control parameter is still intact, and now it is "in the hands" of the system. (...) Lastly and most importantly, the new configuration stresses the difference between global and local mechanisms. The control parameter W (the dial) is an external quantity that observes and governs the global (i.e., the collective), whereas [homeostasis] provides the system with local mechanisms that have an effect over the collective. This is the main feature of a complex system.

II. PLASTIC SYNAPSES
Consider an absorbing state second order phase transition where the activity is ρ = 0 below a critical point E_c and ρ ∝ (E − E_c)^β for E > E_c, where E is a generic control parameter, see Fig. 1A and B. For topologies such as random and complete graphs, one typically obtains β = 1, which is consistent with a transition in the Mean-Field Directed Percolation (DP) class (or perhaps the Compact-DP (Manna) class usual in SOC models, which has the same mean-field exponents but different ones below the upper critical dimension, see [3,7,41,47]).
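As a minimal numerical illustration of this mean-field exponent (our sketch, not taken from any of the reviewed papers), the logistic update ρ[t+1] = σρ[t](1 − ρ[t]) has stationary activity ρ* = 1 − 1/σ, which vanishes linearly as the branching parameter σ crosses σ_c = 1, i.e., β = 1:

```python
# Mean-field toy model: rho[t+1] = sigma * rho[t] * (1 - rho[t]).
# The stationary activity rho* = 1 - 1/sigma vanishes linearly at
# sigma_c = 1, giving the mean-field DP exponent beta = 1.

def stationary_rho(sigma, steps=100_000, rho0=0.5):
    rho = rho0
    for _ in range(steps):
        rho = sigma * rho * (1.0 - rho)
    return rho

eps = 1e-3
r1 = stationary_rho(1.0 + eps)
r2 = stationary_rho(1.0 + 2 * eps)
print(r1, r2, r2 / r1)  # ratio close to 2 => rho* ~ (E - E_c)^1
```

Doubling the distance to the critical point doubles the stationary activity, the signature of β = 1.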
The basic idea underlying most of the proposed mechanisms for homeostatic self-organization is to define a slow dynamics in the individual links E_i(t) (i = 1, . . . , N) such that, if the network is in the subcritical state, their average value E(t) increases toward E_c and, if the network is in the supercritical state, E(t) decreases toward E_c, see Fig. 1C. Ideally, these mechanisms should be local, that is, they should not have access to global network information such as the density of active sites ρ (the order parameter) but rather only to the local firing of the neurons connected by E_i. In the following, we give several examples from the literature.
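A mean-field caricature of this feedback loop (our sketch; the hyperparameters A, τ, u and the drive h are hypothetical values): activity depresses the control parameter E, silence lets it recover, so E(t) settles near E_c = 1 without any external tuning:

```python
# Mean-field sketch of the feedback: the activity rho follows a logistic
# update with control parameter E; E recovers toward a baseline A on a
# slow time scale tau and is depressed by the ongoing activity.
A, tau, u, h = 2.0, 1000.0, 0.1, 1e-4   # hypothetical hyperparameters
E, rho = 0.5, 0.01                      # start in the subcritical state
for t in range(200_000):
    rho = min(1.0, E * rho * (1.0 - rho) + h)   # fast activity update
    E += (A - E) / tau - u * E * rho            # slow homeostatic update
print(E, rho)  # E settles close to E_c = 1, rho stays small
```

Note that the update of E uses only the instantaneous activity, not the knowledge of where E_c is.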

A. Short-Term Synaptic Plasticity
Markram and Tsodyks [48,49] proposed a short-term synaptic plasticity model that inspired several authors in the area of self-organization toward criticality. The Markram-Tsodyks (MT) dynamics is:

dJ_ij/dt = (1/τ)(A/u − J_ij) − u J_ij δ(t − t̂_j) ,   (2)

du/dt = (U − u)/τ_u + U(1 − u) δ(t − t̂_j) ,   (3)

where J_ij is the available neurotransmitter resource, u is the fraction used after the presynaptic firing at time t̂_j (so that the effective synaptic efficacy is W_ij(t) = u(t)J_ij(t)), A and U are baseline constants (hyperparameters), and τ and τ_u are recovery time constants.
In an influential paper, Levina, Herrmann and Geisel (LHG) [50] proposed to use depressing-recovering synapses. In their model, we have leaky integrate-and-fire (LIF) neurons in a complete-graph topology. As a self-organizing mechanism, they used a simplified version of the MT dynamics with constant u, that is, only Eq. (2). They studied the system varying A and found that, although some tuning of the hyperparameter A is needed, any initial distribution of synapses P(W_ij(t = 0)) converges to a stationary distribution. We will refer to Eq. (2) with constant u as the LHG dynamics. These authors found quasicriticality for 1.7 < A < 2.3, u ∈ ]0, 1] and τ ∝ N. Levina et al.
also studied synapses with the full MT model in [51,52].
Bonachela et al. [6] studied the LHG model in depth and found that, like forest-fire models, it is an instance of SOqC. The system presents the characteristic hovering around the critical point in the form of stochastic sawtooth oscillations in W(t) that do not disappear in the thermodynamic limit. Using the same model, Wang and Zhou [53] showed that the LHG dynamics also works in hierarchical modular networks, with an apparent improvement in SOqC robustness in this topology.
Note that the LHG dynamics can be written in terms of the synaptic efficacy W_ij = uJ_ij by multiplying Eq. (2) by u, leading to:

dW_ij/dt = (1/τ)(A − W_ij) − u W_ij δ(t − t̂_j) .   (4)

Brochini et al. [54] studied a complete graph of stochastic discrete time LIFs [55,56] and proposed a discrete time LHG dynamics:

W_ij[t + 1] = W_ij[t] + (1/τ)(A − W_ij[t]) − u W_ij[t] s_j[t] ,   (5)

where the firing index s_j[t] ∈ {0, 1} denotes spikes. Kinouchi et al. [57], in the same system, studied the stability of the fixed points of the joint neuronal-LHG dynamics. They found that, for the average synaptic value W, the fixed point is W* = W_c + O((A − 1)/(uτ)), meaning that for large uτ the system approaches the critical point W_c if A > 1. However, since it is not biologically plausible to assume an infinite recovery time τ, one always obtains a system that is slightly supercritical. This work also showed that the fixed point is a barely stable focus, around which the system is excited by finite-size (demographic) noise, leading to the characteristic sawtooth oscillations of SOqC. A similar scenario was already found by Grassberger for forest-fire models [58].
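The discrete-time LHG dynamics can be sketched in a few lines (our illustration, not the authors' code; N, A, u and τ are hypothetical values). A complete graph of stochastic neurons fires with probability given by the mean synaptic input, and each presynaptic weight is depressed at a spike and slowly recovers:

```python
import numpy as np

# Discrete-time LHG sketch on a complete graph of stochastic neurons:
# each neuron fires with probability equal to its mean synaptic input;
# the presynaptic weight W_j drops by u*W_j when neuron j spikes and
# slowly recovers toward the baseline A (critical value W_c = 1 here).
rng = np.random.default_rng(42)
N, A, u, tau, T = 500, 2.0, 0.1, 500.0, 30_000
W = np.full(N, 0.5)                   # start subcritical
s = rng.random(N) < 0.1               # initial spikes
mean_W = []
for t in range(T):
    field = (W * s).sum() / N         # mean input, same for every neuron
    s = rng.random(N) < min(1.0, field)
    if not s.any():                   # tiny external drive: avoid freezing
        s[rng.integers(N)] = True
    W += (A - W) / tau - u * W * s    # LHG update, per presynaptic neuron
    mean_W.append(W.mean())
late = float(np.mean(mean_W[T // 2:]))
print(late)  # hovers around W_c = 1, with SOqC sawtooth oscillations
```

Plotting mean_W against time would show the stochastic sawtooth oscillations described above: slow recovery punctuated by activity bursts that deplete the synapses.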
The discrete time LHG dynamics was also studied for cellular automata neurons in random networks, where the synapses are probabilities P_ij ∈ [0, 1] with an upper limit P_max = 1. Summing over the K postsynaptic contacts, one gets an equation of the same form for the local branching ratio σ_j[t] = Σ_i P_ij[t]. It has been found that such depressing synapses induce correlations inside the synaptic matrix, affecting the global branching ratio σ[t] = ⟨σ_j[t]⟩, so that criticality does not occur at the branching ratio σ_c = 1 but rather when the largest eigenvalue of the synaptic matrix equals unity.

After examining this diverse literature, it seems that any homeostatic dynamics of the form:

dW_ij/dt = R − D ,   (8)

can self-organize the networks, where R and D are the recovery and depressing processes, for example:

R = (A − W_ij)/τ ,  D = u W_ij δ(t − t̂_j) .   (9)

In particular, the simplest mechanism would be:

dW_ij/dt = 1/τ − u δ(t − t̂_j) ,   (10)

a usual dynamics in SOC models [5,7]. This means that the full LHG dynamics, and also the full MT dynamics, is a sufficient but not a necessary condition for SOqC.
The average W = ⟨W_ij⟩ for this dynamics is:

dW/dt = 1/τ − u ρ(t) ,

where ρ(t) is the time-dependent network activity. The stationary state is ρ* = 1/(τu) and, if τu is large, this means that ρ* ≈ 0, that is, the system sits close to the absorbing phase transition. The dissipative term u should also be small, meaning that, if we desire absolute separation of time scales, one needs 1/τ → 0^+, u → 0^+, 1/(τu) → 0, as is usual in other SOC systems [3,5,7,42,44].
Here, for biological plausibility, it is better to assume a large but finite recovery time, say τ ∈ [100, 10000] ms, in comparison with 1 ms for spikes. Also, to obtain SOqC, u need not be small. We must have A > 1, because A < 1 produces subcritical activity [6,50,57]. So, a coarse tuning of hyperparameters (A > 1, finite u, large but finite τ) is sufficient. We observe that the original LHG model [6,50] had τ ∝ N to produce the infinite separation of time scales in the large-N limit. This, however, did not prevent the SOqC hovering stochastic oscillations in the thermodynamic limit. Moreover, a recovery time proportional to N is a very unrealistic feature for biological synapses. Curiously, if we use a finite τu instead, the oscillations are damped in the thermodynamic limit because the fixed point (ρ*, W*) continues to be an attractive focus but the demographic noise vanishes. On the other hand, when we use τu → ∞, the fixed point loses its stability and continues to be perturbed even by the vanishing N → ∞ fluctuations [57].
As early as 1998, Kinouchi [61] proposed the synaptic dynamics:

dW_ij/dt = W_ij/τ − u δ(t − t̂_j) ,

with small but finite τ and u. The difference here from the former mechanisms is that, like in Eq. (10), depression is not proportional to W_ij (but recovery is). He also discussed the several concepts of SOC current at the time, calling such homeostatic systems self-tuned critical (STC), which is equivalent to a SOqC system with finite separation of time scales.
Hsu and Beggs [62] studied a model for the activity A_i(t) of the local field potential at electrode i, in which H_i(t) is a spontaneous activity used to prevent the freezing of the system in the absorbing state (this is similar to a time-dependent SOC drive term h) and the probabilistic couplings are P_ij ∈ [0, 1]. Firing rate homeostasis and critical homeostasis are achieved by increasing or decreasing H_i (with rate k_S) and P_ij (with rate k_P) when the firing rate is too low or too high compared to a target firing rate s_0 = 1/τ_0:

dP_ij/dt = k_P (s_0 − ⟨s_i⟩) P_ij ,   (15)

where ⟨. . .⟩ represents a moving average over a memory window τ_m.
Hsu and Beggs found that for k S /k P ≈ 0.5, this dynamics leads to a critical branching ratio σ = 1. They also found that the target firing rate s 0 can be maintained by this homeostasis. Equation (15) reminds us of the depressing-recovering synaptic rule of Eq. (9).
Indeed, if we examine the small k_P limit (as used by the authors), we have:

dP_ij/dt = P_ij/τ − u P_ij ⟨s_i⟩ ,

where now τ = 1/(k_P s_0) and u = k_P. A similar reasoning applies to the equation for H_i, which could be identified with the homeostatic threshold equation (60). In another paper, Hsu et al. [63] extended the model to include distance-dependent connectivity and Hebbian learning. Changing the homeostasis equations to our standard notation, H_i ∈ [0, 1] is now a probability of spontaneous firing, s_0 is a target average activity and D_ij is the distance between electrodes i and j. The input ratio is η_i(t) = Σ_j P_ij(t).
Remember that, for a critical branching process, η i = 1. These values were chosen as homeostatic targets.
Shew et al. [64] studied experimentally the visual cortex of the turtle and proposed a (complete graph) self-organizing model for the input synapses Ω_i and the cortical synapses W_ij. The stochastic neurons fire with a linear saturating function in which, like in Eq. (13), H_i accounts for external stimuli. For both types of synapses they used the discrete time LHG dynamics, Eq. (5), and concluded that the computational model accounts very well for the experimental data.
Hernandez-Urbina and Herrmann [46] studied a discrete time IF model where they define a local measure called node success, φ_j, in terms of the adjacency matrix A of the network, with A_ij = 1 if j projects onto i (A_ij = 0 otherwise). Note that we reversed the indices as compared with the original notation [46].
Observe that φ j measures how many postsynaptic neurons are excited by the presynaptic neuron j.
The authors then define the node success driven plasticity (NSDP), in which the synaptic change depends on ∆t_j = t − t̂_j, the time difference between the spike of node j occurring at the current time step t and its previous spike, which occurred at t̂_j (the last spike), while B and D are constants. Notice that the drive term is larger if the node success is small, and the dissipation term is larger if the firing rate (inferred locally as ρ̂ = 1/∆t_j) is large (compare with Eq. (8)).
They analyzed the relation between the avalanche critical exponents, the largest eigenvalue Λ of the weight matrix and the data collapse of the avalanche shapes for several network topologies. All results are compatible with (quasi-)criticality. They also found that if they stop NSDP and introduce STDP, criticality vanishes, but if the two dynamics are applied together, criticality survives.
Levina et al. [65] proposed a model in a complete graph in which the branching ratio σ is estimated as the local branching σ_i of a neuron that initiates an avalanche. The homeostatic rule is to increase the synapses if σ_i < 1 and to decrease them if σ_i > 1. The network converges, with SOqC oscillations, to σ* ≈ σ_c = 1.
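The flavor of such a rule can be isolated in a tiny sketch (ours, with hypothetical parameters): a Galton-Watson avalanche is seeded by one neuron, the number of first-generation descendants serves as the noisy local estimate σ_i, and the connection probability p is nudged up when σ_i < 1 and down when σ_i > 1:

```python
import numpy as np

# Branching-ratio homeostasis sketch: offspring ~ Binomial(K, p), so the
# true branching ratio is sigma = K*p.  After each avalanche we nudge p
# using only the (noisy, local) first-generation estimate sigma_i.
rng = np.random.default_rng(7)
K, p, eta = 10, 0.02, 1e-3       # degree, initial prob., learning rate
for avalanche in range(50_000):
    sigma_i = rng.binomial(K, p)         # descendants of the seed neuron
    p += eta * (1.0 - sigma_i) / K       # increase if subcritical, etc.
    p = min(max(p, 0.0), 1.0)
print(K * p)  # self-organizes toward the critical branching ratio 1
```

This is a stochastic-approximation scheme: the update averages to zero only at Kp = 1, so p hovers around p_c = 1/K with SOqC-like fluctuations set by η.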

B. Meta-Plasticity
Peng and Beggs [66] studied a square lattice (K = 4) of IF neurons with open boundary conditions. A random neuron receives a small increment of voltage (slow drive). If the voltage of a presynaptic neuron j goes above the threshold θ = 1, it fires and distributes voltage to its neighbors through the synapses W_ij. The self-organization is made by an LHG dynamics plus a meta-plasticity term in the dynamics of u, where X_a is the total fraction of neurons at the boundary that fired during the a-th avalanche and u_{a+1} is the updated value of u after that avalanche. Notice that the meta-plasticity term differs from the MT model of Eq. (3), because the hyperparameter u is updated on a much slower time scale. Peng and Beggs showed that this dynamics converges automatically to good values of the parameter u, that is, we no longer need to set the u value in advance. We observe, however, that X_a is a non-local variable.

C. Hebbian Synapses
Ever since Donald Hebb's proposal that neurons that fire together wire together [67][68][69], several attempts have been made to implement this idea in models of self-organization.
However, a pure Hebbian mechanism can lead to diverging synapses, so some kind of normalization or decay must also be included in Hebbian plasticity. One example is given by the following neuronal dynamics on a square lattice of L × L neurons. If at time t a presynaptic neuron j has a membrane potential above a firing threshold, V_j[t] > θ, it fires, sending neurotransmitters to all its (non refractory) neighbors in proportion to the normalized weights W_ij = W_ij / Σ_l^{nn} W_lj. Then, neuron j enters a refractory period of one time step. The synaptic self-organizing dynamics strengthens the synapses used during an avalanche and decays the unused ones.

Çiftçi [80] studied a neuronal SIRS model on the C. elegans neuronal network topology.
The spontaneous activation rate (the drive) is h = 1/τ → 0^+ and the recovery rate to the susceptible state is q. The author studied the system as a function of q/h (separation of time scales q ≫ h). The probability that a neuron j activates its neighbor i is P_ij (g_ij = 1 − P_ij is the probability of synaptic failure in the author's notation). The synaptic update occurs after an avalanche (of size S): the P_ij of pairs of neighbors that were active at the same time is strengthened (Hebbian term), while a synapse that was not used is weakened. Çiftçi found robust self-organization to quasicriticality. The author notes, however, that S is a non-local information.
Uhlig et al. [81] considered the effect of LHG synapses in the presence of an associative Hebbian synaptic matrix. They found that, although the two processes are not irreconcilable, the critical state has detrimental effects on the attractor recovery. They interpret this as a suggestion that the standard paradigm of memories as fixed-point attractors should be replaced by more general approaches like transient dynamics [82].
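The need for normalization mentioned at the beginning of this subsection can be seen in a toy example (ours; sizes and rates are arbitrary): pure Hebbian increments only add up and grow without bound, while a column normalization of the type W_ij/Σ_l W_lj keeps the total presynaptic resource fixed:

```python
import numpy as np

# Why pure Hebb diverges: repeated Delta_W = eta * pre * post only adds
# positive increments.  Column normalization (dividing each column by
# its sum, as in W_ij / sum_l W_lj) keeps the resources bounded.
rng = np.random.default_rng(0)
N, eta, steps = 20, 0.05, 400
s = rng.random(N) < 0.5                  # a fixed binary activity pattern
W_pure = rng.uniform(0.5, 1.5, (N, N))
W_norm = W_pure.copy()
for _ in range(steps):
    dW = eta * np.outer(s, s)            # Hebbian co-activation term
    W_pure += dW                         # grows without bound
    W_norm += dW
    W_norm /= W_norm.sum(axis=0, keepdims=True)  # per-column normalization
print(W_pure.max(), W_norm.sum(axis=0)[:3])  # blown up vs. columns sum to 1
```

The normalized matrix implements competition: a synapse can only grow at the expense of the other synapses of the same presynaptic neuron.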

D. Spike-Time Dependent Plasticity
Rubinov et al. [83] studied a hierarchical modular network of LIF neurons with STDP plasticity. The synapses are modeled by double-exponential pulses triggered at the presynaptic firing times {t_j}. The synaptic weight changes at every spike of a presynaptic neuron, following the STDP rule with weight-dependent potentiation and depression functions A_+(W_ij) and A_−(W_ij), see [83] for details. The authors show an association between modularity, low cost of wiring, STDP and self-organized criticality in a neurobiologically realistic model of neuronal activity.
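The asymmetry of such STDP rules can be written compactly. Below is a generic exponential STDP window (a common textbook form, not the specific weight-dependent A_±(W_ij) of [83]; amplitudes and time constants are hypothetical):

```python
import math

# Generic STDP window: potentiation when the presynaptic spike precedes
# the postsynaptic one (dt = t_post - t_pre > 0), depression otherwise.
def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)    # causal pair -> potentiate
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)  # anti-causal -> depress
    return 0.0

print(stdp_dw(10.0), stdp_dw(-10.0))  # positive, then negative
```

With a_minus slightly larger than a_plus, uncorrelated spike pairs produce net depression, which is one common way of bounding the weights.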
Del Papa et al. [84] explored the interaction between criticality and learning in the context of self-organized recurrent networks (SORN). The ratio between inhibitory and excitatory neurons is N_I/N_E = 0.2. These neurons interact via W^{EE}, W^{IE} and W^{EI} synapses (no inhibitory self-coupling). The synapses are dynamic, and so are the excitatory thresholds θ_i^E. The neurons evolve as binary units subject to a membrane noise η_i[t]. Synapses and thresholds evolve following five combined plasticity mechanisms, among them an intrinsic plasticity that pushes each neuron toward a desired activity level µ_IP. In the structural plasticity process, excitatory synapses are added with probability p(N_E). The authors found that this SORN model presents well behaved power-law avalanche statistics and that the plastic mechanisms are necessary to drive the network to criticality, but not to maintain it critical, that is, the plasticity can be turned off after the network reaches the critical region. Also, they found that noise was essential to produce the avalanches, but degraded the learning performance.
From this, they conclude that the relation between criticality and learning is more complex than usually assumed, and it is not obvious that criticality optimizes learning.
Levina et al. [85] studied the combined effect of LHG synapses, a homeostatic branching parameter W_h and STDP. They found that these mechanisms cooperate in extending the robustness of the critical state to variations of the hyperparameter A, see Eq. (2).
Stepp et al. [86] examined a LIF neuronal network which has both Markram-Tsodyks dynamics and spike-timing-dependent plasticity (both excitatory and inhibitory STDP). They found that, although the MT dynamics produces some self-organization, the STDP mechanism increases the robustness of the network criticality.
Delattre et al. [87] included in the STDP synaptic change ∆W_+ a resource depletion term, with the resource availability η(t) recovering with time constant τ_η. Here, α(t) is a continuous estimator of the network firing rate, and the factor η_0(α(t)) = (1 + α/k)^{−1} ensures that depletion is fast and recovery is slow (k = 20 Hz). They called this mechanism network spiking dependent plasticity and showed that, in contrast to pure STDP, it leads to power-law avalanches with a branching ratio around one.

E. Homeostatic Neurite Growth
Kossio et al. [88] studied IF neurons randomly distributed in a plane, with neurites distributed within circles of radii R_i that evolve according to:

dR_i/dt = 1/τ − u Σ_k δ(t − t_i^k) ,   (46)

where {t_i^k} are the spike times of neuron i, with τ and u constants. Since the connections are given by W_ij = g O_ij, where g is a constant and O_ij are the overlapping areas of the synaptic discs, Eq. (46) is not much different from the simple synaptic dynamics of Eq. (10), with constant drive and decay due to spikes.
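Since W_ij = g O_ij, the only geometric ingredient of this mechanism is the lens-shaped overlap of two discs. The standard circle-circle intersection formula makes the sketch self-contained (our code; g, the radii and the distances are hypothetical):

```python
import math

# Area of overlap of two discs of radii r1, r2 whose centers are a
# distance d apart (the O_ij entering W_ij = g * O_ij).
def disc_overlap(r1, r2, d):
    if d >= r1 + r2:                      # disjoint neurite fields
        return 0.0
    if d <= abs(r1 - r2):                 # one field inside the other
        r = min(r1, r2)
        return math.pi * r * r
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

print(disc_overlap(1.0, 1.0, 1.0))  # partial overlap of two unit discs
```

Growing R_i between spikes and shrinking it at each spike, as in Eq. (46), then translates directly into a drive-dissipation dynamics for the effective weights W_ij.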
Tetzlaff et al. [89] studied experimentally neuronal avalanches during the maturation of cell cultures, finding that criticality is achieved in a third stage of the dendrite/axon growth process. They modeled the system using neurons with membrane potential V_i(t) < 1 and a calcium dynamics C_i(t), where k_+ > 0 (k_− < 0) defines excitatory (inhibitory) neurons and η_j(t) ∈ [0, 1] is a random number. Dendritic and axonal spatial distributions are again represented by their radii R_i and A_i, whose dynamics are governed by the calcium concentration. Finally, the effective connection is defined in terms of the distance D_ij between the neurons and essentially represents the overlap of the axonal and dendritic zones, which can be understood as an abstract representation of the probability of synapse formation.

III. DYNAMIC NEURONAL GAINS
For all-to-all topologies as used in [6,50,52,54], dynamic synapses are computationally costly, since one must update N^2 equations; an alternative is to transfer the homeostatic dynamics to the N neuronal gains. The stochastic neuron has a probabilistic firing function, say, a linear saturating function or a rational function, where s = 1 means a spike, V is the membrane potential, θ is the threshold and Γ is the neuronal gain. Now, let us assume that each neuron i has its own neuronal gain Γ_i. Several adaptive dynamics work, similar to LHG and even simpler. Costa et al. [90] and Kinouchi et al. [57] studied the stability of the fixed points of the mechanisms given by Eqs. (55) and (56) and concluded that the fixed point solution (ρ*, Γ*) lies close to the critical point. The homeostasis is given by a negative feedback with time constant τ_hp that pushes the firing of neuron i toward a target level r*. Notice that this mechanism depends only on the activity of the postsynaptic neuron i, not the presynaptic neuron j as in the LHG model. They noticed, however, that for these stochastic LIF systems, the critical point also requires a zero field h = I − (1 − µ)θ, where I is the external input and µ is the leakage parameter.
While setting h = 0 for the critical point of spin systems is natural, obtaining zero field in this case demands self-organization, which is done by an adaptive firing threshold:

θ_i[t + 1] = θ_i[t] − θ_i[t]/τ_θ + u_θ s_i[t] .   (60)

Notice the plus sign in the last term: if the postsynaptic neuron fires (s_i = 1), then the threshold must increase to hinder new firings. This mechanism is biologically plausible and also explains classical firing rate adaptation. Remembering that ρ = ⟨s_i⟩ ∝ h^{1/δ_h} at the critical point, where δ_h is the field critical exponent, from Eq. (60) we have h ∝ [1/(τ_θ u_θ)]^{δ_h} ≈ 0 for large τ_θ u_θ.
Bienenstock and Lehmann [94] also studied, at the mean field level, the joint evolution of firing thresholds and dynamic synapses (see Sec. VI B).
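A minimal sketch in the spirit of the dynamic-gain mechanisms of this section (our own variant with hypothetical parameters, not the exact Eqs. (55)-(56)): the gain Γ_i grows slowly and is reduced whenever the postsynaptic neuron itself fires, so only local information is used:

```python
import numpy as np

# Dynamic-gain sketch: complete graph with fixed weights W = 1; the gain
# Gamma_i grows slowly (rate 1/tau) and is cut by a fraction u whenever
# the postsynaptic neuron i itself spikes (no presynaptic information).
rng = np.random.default_rng(3)
N, u, tau, T = 500, 0.1, 500.0, 30_000
gain = np.full(N, 0.5)
s = rng.random(N) < 0.1
trace = []
for t in range(T):
    field = s.sum() / N                        # mean input with W = 1
    s = rng.random(N) < np.minimum(1.0, gain * field)
    if not s.any():
        s[rng.integers(N)] = True              # weak external drive
    gain += gain / tau - u * gain * s          # postsynaptic-only feedback
    trace.append(gain.mean())
mean_gain = float(np.mean(trace[T // 2:]))
print(mean_gain)  # hovers near the critical value Gamma_c = 1
```

Only N gain equations are updated per step, instead of the N^2 synaptic equations of the LHG mechanism, while the hovering around criticality is qualitatively the same.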

V. TOPOLOGICAL SELF-ORGANIZATION
Consider a cellular automata model [28,31,59,60] in a network with average degree K and average probabilistic synaptic weight P = ⟨P_ij⟩. The critical branching ratio is σ = PK = 1, that is, the critical average weight is P_c = 1/K. Notice that we can study networks with any K, even the complete graph, where P_c = 1/(N − 1). In such networks, what is critical is the activity, which does not depend on the topology (the degree K).
So, we can have a critical network with a W_c and any K, or a topologically critical network with a well defined K_c. The two concepts (activity criticality and topological criticality) are different, but sometimes a topological criticality also presents a phase transition with power-law avalanches and critical phenomena. The topological phase transition is continuous and has a critical point, related to the formation of a percolating cluster of nodes, but in the Bornholdt and Rohlf (BR) model it is related to an order-chaos phase transition, not to an absorbing state phase transition.
We present here a more advanced version of the BR model [96]. It follows the idea of deleting synapses from correlated neurons and increasing synapses of uncorrelated neurons.
The correlation C_ij(T) between the activities of neurons i and j is calculated over a time window T of the stochastic neuronal dynamics. The self-organization procedure is: • Choose at random a pair (i, j) of neurons; • Calculate the correlation C_ij(T); • Define a threshold α. If C_ij(T) > α, the link is deleted, and if C_ij(T) < α, i receives a new link W_ij from site j, randomly drawn from a uniform distribution on [−1, 1].
• Then, continue updating the network state {s i } and self-organizing the network.
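The decision step of this correlation-based rewiring can be isolated in a few lines (our sketch; the window T, the threshold α and the direction of the rule follow the anti-Hebbian idea of pruning links between correlated neurons described above):

```python
import numpy as np

# Rewiring-decision sketch: estimate the pair correlation over a window
# of T steps; delete the link if the pair is strongly correlated,
# otherwise add one (anti-Hebbian topological rule).
def rewire_decision(si, sj, alpha=0.5):
    si = np.asarray(si, dtype=float)
    sj = np.asarray(sj, dtype=float)
    norm = si.std() * sj.std()
    if norm == 0:
        return "delete"  # frozen pairs count as perfectly correlated
    c_ij = np.mean((si - si.mean()) * (sj - sj.mean())) / norm
    return "delete" if abs(c_ij) > alpha else "add"

rng = np.random.default_rng(1)
a = (rng.random(200) < 0.5).astype(int)  # spike train over T = 200 steps
b = (rng.random(200) < 0.5).astype(int)  # an independent neuron
print(rewire_decision(a, a), rewire_decision(a, b))  # -> delete add
```

Identical (or frozen) pairs are pruned, while statistically independent pairs receive new links, which is the local feedback that drives the degree toward K_c.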
Interesting analytic results for this class of topological models were obtained by Droste et al. [104]. The self-organized connectivity is about K_c ≈ 2, where the order-chaos transition occurs. We must notice, however, that K = 2 seems to be a very low degree for biological neuronal networks. Kuehn [105] also studied the role of time-scale separation and noise in this class of self-organized critical adaptive networks.

VI. OTHER PHASE TRANSITIONS

A. First Order Transitions

Mejias et al. [107] studied a neuronal population model with firing rate ν(t), which can be written in terms of the firing density ρ = ν/ν_max, with a (deterministic) firing function S(z) = (1/2)[1 + tanh(z)], a zero-mean Gaussian noise η(t) and a noise amplitude D_η. They used a depressing average synaptic weight inspired by a noisy LHG model, with synaptic noise amplitude D_W. Within a certain range of noise, they observed up-down states with irregular intervals, leading to a distribution of permanence times T in the up state as P(T) ∝ T^{−3/2}. Notice that this model already starts from the mean-field equations; it is not a microscopic model (although a microscopic model perhaps could be constructed from it).
Millman et al. [108] obtained similar results at a first order phase transition, but now in a random network of LIF neurons with an average of K neighbors and chemical synapses. The synapses follow the LHG mechanism, with W_ij(t) = p_r U_ij(t) in the authors' notation (p_r is the probability of vesicle release, U_ij(t) are the synaptic resources) and A = p_r. They found that the branching ratio is close to one in the up state, with power-law avalanches with size exponent 3/2 and lifetime exponent 2.
Di Santo et al. [109,110] and Buendia et al. [7,45] studied the self-organization toward a first-order phase transition (called self-organized bistability or SOB). The simplest self-organizing dynamics was used in a two-dimensional model, where ω, a > 0, b < 0 are constants, A is the maximum level of charging, D is the diffusion constant and η(x, t) is a zero-mean Gaussian noise with demographic amplitude (proportional to √ρ). The authors' original notation is h = 1/τ, ε = u, and E is a (former) control parameter. In the limit 1/τ → 0^+, u → 0^+, 1/(τu) → 0, this self-organization is conservative and can produce a tuning to the Maxwell point, with power-law avalanches (with mean-field exponents) and dragon-king quasi-periodic events.
Relaxing the conditions of infinite separation of time scales and bulk conservation, the authors studied the model with an LHG dynamics [7,45,110], where W is the synaptic weight and I is a small input. They found that this is the equivalent SOqC version for first order phase transitions, obtaining hysteretic up-down activity, which has been called self-organized collective oscillations (SOCO) [7,45,110]. They also observed bistability phenomena.
Cowan et al. [111] also found hysteresis cycles due to bistability in an IF model, arising from the combination of an excitatory feedback loop with anti-Hebbian synapses in its input pathway. This leads to avalanches both in the up state and in the down state, each one with power-law statistics (size exponents close to 3/2). The hysteresis loop leads to a sawtooth oscillation in the average synaptic weight. This is similar to the SOCO scenario.

B. Hopf bifurcation
Absorbing-active phase transitions are associated to transcritical bifurcations in the low-dimensional mean-field description of the order parameter. Other bifurcations (say, between fixed points and periodic orbits) can also appear in the low-dimensional reduction of systems exhibiting other phase transitions, such as transitions between steady states and collective oscillations.
They are critical in the sense that they present phenomena like critical slowing down (power-law relaxation to the stationary state), critical exponents etc. Some authors explored homeostatic self-organization toward such bifurcation lines.
In what can be considered a precursor in this field, Bienenstock and Lehmann [94] proposed to apply a Hebbian-like dynamics at the level of rate dynamics to the Wilson-Cowan equations, having shown that the model self-organizes near a Hopf bifurcation to/from oscillatory dynamics.
The model has excitatory and inhibitory stochastic neurons, where, as before, the binary variable s ∈ {0, 1} denotes the firing of a neuron. The update process is an asynchronous (Glauber) dynamics with neuronal gain Γ.
The authors proposed a covariance-based regulation for the synapses W_EE and W_IE and a homeostatic process for the firing thresholds θ_E(t), θ_I(t). In the homeostatic mechanisms, c_EE ≡ ⟨(ρ_E(t) − ⟨ρ_E⟩)^2⟩ is the variance of the excitatory activity ρ_E(t), c_IE ≡ ⟨(ρ_E(t) − ⟨ρ_E⟩)(ρ_I(t) − ⟨ρ_I⟩)⟩ is the excitatory-inhibitory covariance, τ_EE, τ_IE, τ_E, τ_I are time constants, and Θ_EE, Θ_IE, Θ_E, Θ_I are target constants.
The authors show that there are Hopf and saddle-node lines in this system and that the regulated system self-organizes at the crossing of these lines. So, the system is very close to the oscillatory bifurcation, showing great sensibility to external inputs.
As commented, this paper was a pioneer in searching for homeostatic self-organization at a phase transition in a neuronal network in 1998, well before the work of Beggs and Plenz [17]. However, we must recognize some deficiencies that later models tried to avoid. First, all the synapses and thresholds have the same value, instead of an individual dynamics for each one, as we saw in the preceding sections. Most importantly, the network activities ρ_E and ρ_I are non-local quantities, not locally accessible to Eqs. (76)-(77).
Magnasco et al. [112] examined a very stylized model of neural activity with time-dependent anti-Hebbian synapses: where δ_ij is the Kronecker delta. They found that the system self-organizes around a Hopf bifurcation, showing power-law avalanches and hovering phenomena similar to SOqC.
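One plausible reading of such an anti-Hebbian rule, with the Kronecker-delta term acting as a restoring drive (an illustrative assumption, not necessarily the exact equation of Ref. [112]), can be simulated as:

```python
import numpy as np

def simulate_antihebbian(N=10, steps=20000, dt=0.01, eps=0.1, seed=0):
    """Linear rate dynamics x' = W x with an assumed anti-Hebbian update
    dW_ij/dt = eps * (delta_ij - x_i * x_j). Returns the real parts of the
    spectrum of W, which in this mechanism hover near zero (Hopf edge)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(N) * 0.1
    W = rng.standard_normal((N, N)) * 0.1
    for _ in range(steps):
        x = x + dt * (W @ x)
        W = W + dt * eps * (np.eye(N) - np.outer(x, x))
        x = np.clip(x, -10.0, 10.0)  # keep the sketch numerically bounded
    return np.linalg.eigvals(W).real
```

The identity term slowly excites the dynamics while the −x_i x_j term suppresses large activity, so the eigenvalues are pushed back and forth across the imaginary axis, the hovering behavior characteristic of SOqC.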

C. Edge of synchronization
Khoshkhou and Montakhab [113] studied a random network in which each node i has K_i = K neighbors.
The cells are Izhikevich neurons described by: The parameters a, b, c and d are chosen to yield regular-spiking excitatory neurons and fast-spiking inhibitory neurons. The synaptic input is composed of chemical double-exponential pulses with time constants τ_s and τ_f: where τ_ij are axonal delays from neuron j to neuron i, V_0 is the reversal potential of the synapses, and K_i is the in-degree of node i.
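For reference, the standard Izhikevich equations can be integrated with a simple Euler scheme. The parameters below are the common regular-spiking choice, and the delayed double-exponential synaptic currents of Ref. [113] are replaced here by a constant input I for simplicity:

```python
def izhikevich_step(v, u, I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of the Izhikevich neuron (regular-spiking parameters).
    v is the membrane potential, u the recovery variable, I the input current.
    Returns the updated (v, u) and a boolean spike flag."""
    v = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    spiked = v >= 30.0          # spike cutoff
    if spiked:
        v, u = c, u + d         # after-spike reset
    return v, u, spiked
```

Fast-spiking inhibitory neurons are obtained with a different (a, b, c, d) quadruple; in the full model I would be the sum of the delayed double-exponential synaptic pulses.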
The inhibitory synapses are fixed, but the excitatory ones evolve under spike-timing-dependent plasticity (STDP).
If the firing difference is ∆t = t_post − t_pre, then, when the postsynaptic neuron i fires, the synapses change by: This system presents a transition from out-of-phase to synchronized spiking. The authors show that STDP dynamics robustly self-organizes the system to the border of this transition, where critical features like avalanches (coexisting with oscillations) appear.
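The classic pair-based exponential STDP window can be sketched as follows; the amplitudes and time constants are illustrative, and Ref. [113] may use different values:

```python
import math

def stdp_update(delta_t, A_plus=0.01, A_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for the spike-time difference
    delta_t = t_post - t_pre (in ms). Illustrative exponential window."""
    if delta_t > 0:       # pre fires before post: potentiation
        return A_plus * math.exp(-delta_t / tau_plus)
    elif delta_t < 0:     # post fires before pre: depression
        return -A_minus * math.exp(delta_t / tau_minus)
    return 0.0
```

Causal pre-before-post pairs strengthen a synapse and acausal pairs weaken it, with both effects decaying exponentially in |∆t|; it is this competition that drives the network to the border of the synchronization transition.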

VII. CONCLUDING REMARKS
In this review we described several examples of self-organization mechanisms that drive neuronal networks to the border of a phase transition (mostly a second-order absorbing phase transition, but also first-order, synchronization, Hopf and order-chaos transitions). Surprisingly, in all cases it is possible to detect neuronal avalanches with mean-field exponents similar to those obtained in the experiments of Beggs and Plenz [17]. By using a standardized notation, we identified several common features among the proposed homeostatic mechanisms. Most of them are variants of the fundamental drive-dissipation dynamics of SOC and SOqC and can be grouped into a few classes.
Following Hernandez-Urbina and Herrmann [46], we stress that the coarse tuning of the hyperparameters of homeostatic SOqC is not equivalent to the fine tuning of the original control parameter. This homeostasis is bona fide self-organization, in the same sense that the regulation of body temperature is self-organized (although presumably there are hyperparameters in that regulation too). The advantage of these explicit homeostatic mechanisms is that they are biologically inspired and could be studied in future experiments to determine which are most relevant to cortical activity.
Due to the nonconservative dynamics and the lack of an infinite separation of time scales, all these mechanisms lead to SOqC [5][6][7], not SOC. In particular, conservative sandpile models should not be used to model neuronal avalanches, because neurons are not conservative. The presence of SOqC is revealed by stochastic sawtooth oscillations of the former control parameter, with large excursions into the supercritical and subcritical phases.
However, hovering around the critical point seems sufficient to account for the current experimental data. Also, the omnipresent stochastic oscillations could perhaps be detected experimentally (some authors conjecture that they are the basis of brain rhythms [90]).
One suggestion for further research is to eliminate non-local variables from the homeostatic mechanisms. Another is to study how the branching ratio σ, or better, the largest eigenvalue Λ of the synaptic matrix, depends on the self-organization hyperparameters (as done in Ref. [60]).
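As a concrete illustration, Λ can be computed directly from the synaptic matrix and used to rescale it to the critical point; the random matrix below is purely illustrative:

```python
import numpy as np

def largest_eigenvalue(W):
    """Largest-magnitude eigenvalue Lambda of the synaptic matrix W.
    In mean-field models, Lambda = 1 marks the critical point."""
    return np.max(np.abs(np.linalg.eigvals(W)))

# Purely illustrative: a random excitatory matrix rescaled to criticality
rng = np.random.default_rng(1)
N, sigma = 200, 1.0
W = rng.random((N, N))
W *= sigma / largest_eigenvalue(W)  # now Lambda = sigma = 1
```

Tracking this quantity during a self-organization run gives the sawtooth excursions around Λ = 1 that signal SOqC.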
As several results in this review have shown, the dependence of criticality on the hyperparameters is always weaker than the dependence on the original control parameter. Finally, one could devise local metaplasticity rules for the hyperparameters, similarly to Peng and Beggs [66] (whose rule, however, is unfortunately nonlocal). An intuitive possibility is that, at each level of metaplasticity, the need for coarse tuning of the hyperparameters decreases and criticality becomes more robust.

CONFLICT OF INTEREST STATEMENT
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

AUTHOR CONTRIBUTIONS
OK and MC contributed to the conception and design of the study; RP organized the database of reviewed articles; OK and MC wrote the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.