ORIGINAL RESEARCH article

Front. Netw. Physiol., 09 August 2024

Sec. Networks of Dynamical Systems

Volume 4 - 2024 | https://doi.org/10.3389/fnetp.2024.1423023

How synaptic function controls critical transitions in spiking neuron networks: insight from a Kuramoto model reduction

  • 1. Department of Control Theory, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia

  • 2. Department of Mathematics and Statistics and Neuroscience Institute, Georgia State University, Atlanta, GA, United States


Abstract

The dynamics of synaptic interactions within spiking neuron networks play a fundamental role in shaping emergent collective behavior. This paper studies a finite-size network of quadratic integrate-and-fire neurons interconnected via a general synaptic function that accounts for synaptic dynamics and time delays. Through asymptotic analysis, we transform this integrate-and-fire network into the Kuramoto-Sakaguchi model, whose parameters are explicitly expressed via synaptic function characteristics. This reduction yields analytical conditions on synaptic activation rates and time delays determining whether the synaptic coupling is attractive or repulsive. Our analysis reveals alternating stability regions for synchronous and partially synchronous firing, dependent on slow synaptic activation and time delay. We also demonstrate that the reduced microscopic model predicts the emergence of synchronization, weakly stable cyclops states, and non-stationary regimes remarkably well in the original integrate-and-fire network and its theta neuron counterpart. Our reduction approach promises to open the door to rigorous analysis of rhythmogenesis in networks with synaptic adaptation and plasticity.

1 Introduction

Cooperative rhythms play a pivotal role in brain functioning. Fully or partially synchronized oscillations, observed across various frequency bands, underlie fundamental processes such as perception, cognition, and motor control (Churchland and Sejnowski, 1992; Mizuseki and Buzsaki, 2014; Kopell et al., 2000). Extensive research has focused on the emergence of cooperative rhythms in networks of spiking and bursting neurons, encompassing synchronization (Kopell et al., 2000; Brunel, 2000; Börgers and Kopell, 2003; Somers and Kopell, 1993; Izhikevich, 2007; Belykh et al., 2005; Ermentrout and Terman, 2010), partial and cluster synchronization (Achuthan and Canavier, 2009; Shilnikov et al., 2008; Belykh and Hasler, 2011; Schöll, 2016; Berner et al., 2021a), neural bumps (Laing and Chow, 2001; Gutkin et al., 2001), and chimera states (Olmi et al., 2011; Omelchenko et al., 2013).

Networks of spiking neurons with fast synaptic connections are often modeled via pulsatile on-off coupling, which sharply activates upon the arrival of a spike from a presynaptic cell. Such interactions are conveniently represented by networks of quadratic integrate-and-fire (QIF) models, particularly suitable for large-scale simulations and analysis of cooperative dynamics (Gerstner and Kistler, 2002). The macroscopic dynamics of QIF networks have received extensive attention through the reduction to low-dimensional model descriptions, especially in the thermodynamic limit of infinitely large networks (Montbrió et al., 2015; Pazó and Montbrió, 2016; Devalle et al., 2017; Esnaola-Acebes et al., 2017; Devalle et al., 2018; Schmidt et al., 2018; Pietras et al., 2019; Montbrió and Pazó, 2020; Lin et al., 2020; Gast et al., 2020; Taher et al., 2020; Byrne et al., 2022; Clusella et al., 2022; Clusella and Montbrió, 2024; Ratas and Pyragas, 2018; Pyragas and Pyragas, 2022, 2023; Coombes, 2023; Ferrara et al., 2023). Notably, Montbrió et al. (2015) derived exact macroscopic equations for QIF networks, uncovering an effective coupling between the firing rate and the mean membrane potential that governs network dynamics. Pietras et al. (2023) offered an analytical description of QIF network macroscopic dynamics, extending beyond the Ott-Antonsen ansatz (Ott and Antonsen, 2008) and exploring various choices of fast synaptic pulse profiles. The impact of synaptic time delay on the collective dynamics of integrate-and-fire networks with sharply activated synaptic coupling, modeled by the Dirac delta function, has also been extensively explored (Ernst et al., 1995; Devalle et al., 2018; Ratas and Pyragas, 2018; Pyragas and Pyragas, 2022, 2023). In particular, Devalle et al. (2018) reduced a QIF model with synaptic delay to a set of firing rate equations to analyze the existence and stability of partially synchronous states.
Ratas and Pyragas (2018) employed a Lorentzian ansatz to characterize macroscopic oscillations of a QIF network with heterogeneous time-delayed delta-function synapses. However, there is a lack of analytical studies on the role of slower synaptic activation, potentially in the presence of time delays, in controlling critical phase transitions in QIF networks. Nevertheless, since the seminal paper by Van Vreeswijk et al. (1994), it has been recognized that slow inhibitory and excitatory synapses can reverse their roles, with slow inhibition favoring synchronization (Golomb and Rinzel, 1993; Terman et al., 1998; Elson et al., 2002). While predicting the exact rates of synaptic activation inducing such critical transitions in conductance-based spiking models may be challenging, analytically tractable QIF networks offer promising avenues for such exploration.

Toward this goal, this paper investigates a finite-size network of QIF neurons globally connected via a general kernel function that governs both synaptic activation and synaptic time delay. We analytically illustrate how the shape of the kernel function impacts neuron interaction, significantly altering the microscopic and macroscopic behavior of QIF networks representing Type I neuron populations. This is achieved by reducing QIF networks and their phase analog, theta neuron networks, to the Kuramoto-Sakaguchi (KS) model. Here, oscillator frequencies, coupling strength, and the Sakaguchi phase lag parameter are determined by the pulse profile’s first and second terms in the Fourier expansion. We conduct this reduction under the weak coupling assumption, utilizing the intermediate step of representing the QIF network as a generalized Winfree model, subsequently reduced to the KS model.

In our recent study (Munyayev et al., 2023), we elucidated the qualitative connection between the dynamics of QIF networks incorporating synaptic dynamics and neuronal refractoriness, and the second-order Kuramoto model with high-order mode coupling. Here, we use multi-scale analysis to derive exact relationships between the QIF network with an arbitrary synaptic activation function and the KS model. Specifically, we establish explicit conditions on the parameters of the general kernel function that lead to critical transitions, determining whether the coupling is attractive or repulsive. Consequently, these conditions dictate the emergence of stable synchronization or nonstationary generalized splay states (Berner et al., 2021b) and cyclops states (Munyayev et al., 2023). Our analysis reveals alternating stability regions for network synchronization, dependent on both the (slow) synaptic activation and time delay. With some important caveats, this finding can be interpreted as an analogous stability criterion for synchronization in time-delayed phase oscillator networks (Earl and Strogatz, 2003).

Our approach serves as a connecting link between two alternative methodologies for describing macroscopic dynamics: QIF networks and theta neurons, and Winfree-type models (Pazó and Montbrió, 2014; Gallego et al., 2017; Montbrió and Pazó, 2018; Pazó et al., 2019; Pazó and Gallego, 2020; Manoranjani et al., 2021; Bick et al., 2020). Our KS model reduction of the generalized Winfree model with a general synaptic activation function can be seen as an extension of the work of Montbrió and Pazó (2018), where a two-population Kuramoto model was derived from a network of Winfree oscillators featuring a feedback loop between fast excitation and slow inhibition.

The structure of this paper is outlined as follows. Section 2 presents the QIF network model, its theta neuron equivalent, and the general synaptic activation function. Section 3 details the transformation of the theta neuron model into the generalized Winfree model. We expand the pulse profile as a Fourier series and further simplify the model to the KS model using weak coupling-enabled averaging techniques. Section 4 focuses on a specific example of synaptic activation, presenting a class of kernel functions. We establish exact conditions determining whether the synaptic coupling is attractive, promoting synchronization, or repulsive, favoring splay and cyclops states. Section 5 offers numerical validation of the derived conditions and presents a comparison between the dynamics of the QIF network, the theta neuron model, and the reduced KS model. We demonstrate that the KS model accurately predicts firing rates and times, capturing the emergence of synchronization, weakly stable cyclops states, and non-stationary regimes. Section 6 contains concluding remarks and discussions.

2 The general QIF network and its theta neuron representation

Physiologically, excitable neurons are commonly categorized into two types. We focus here on Type I neurons, a group encompassing cortical excitatory pyramidal neurons. When subjected to a sufficiently large input stimulus, these neurons exhibit action potentials at an arbitrarily low rate, signaling the disappearance of a resting state through a saddle-node bifurcation. The canonical model used to describe Type I neurons is the QIF neuron model, which characterizes neurons’ dynamics near the spiking threshold Izhikevich (2007).

This study investigates a globally coupled network of QIF neurons interacting through chemical synapses. Each neuron's microscopic state is characterized by its individual membrane potential V_n, governed by the following ordinary differential equation (Izhikevich, 2007):

dV_n/dt = V_n^2 + I_n + g s(t), n = 1, ..., N. (1)

Here, I_n, n = 1, ..., N, represents external constant currents applied to the neurons, g is a common synaptic weight controlling the total strength of synaptic inputs, and s(t) is a time-varying input drive. When the membrane potential of the nth neuron reaches the threshold value V_peak, the neuron generates a spike, and its voltage resets to V_reset. In the absence of the input drive s(t), the intrinsic applied current I_n = 0 places the corresponding nth neuron at a saddle-node bifurcation, marking the onset of periodic firing. Thus, if I_n < 0, the neuron is in the excitable regime, while if I_n > 0, it is in the oscillatory regime.
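For readers who want to experiment, the hybrid reset dynamics can be sketched in a few lines of Python. This is a minimal illustration assuming the canonical QIF form dV/dt = V^2 + I + g s(t), with large finite threshold and reset values standing in for the infinite limits; the function and parameter names (simulate_qif, v_peak, v_reset) are ours, not the paper's.

```python
def simulate_qif(I, g=0.0, s=lambda t: 0.0, v0=0.0,
                 v_peak=100.0, v_reset=-100.0, dt=1e-4, t_end=20.0):
    """Euler integration of a single canonical QIF neuron,
    dv/dt = v^2 + I + g*s(t), with the hybrid reset rule
    v -> v_reset whenever v reaches v_peak.  Returns spike times."""
    v, t, spikes = v0, 0.0, []
    while t < t_end:
        v += dt * (v * v + I + g * s(t))
        t += dt
        if v >= v_peak:          # spike emitted: record time and reset
            spikes.append(t)
            v = v_reset
    return spikes
```

For I = 1 and thresholds of ±100, the interspike interval is (arctan(100) − arctan(−100)) ≈ 3.12, close to the infinite-threshold value π/√I of the oscillatory regime.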

The last term on the right-hand side of (Eq. 1) represents synaptic interactions characterized by the coupling strength g of the global synaptic drive s(t). In general, the mean population synaptic activity s(t) can be expressed by the following recurrent input equation:

s(t) = (1/N) Σ_{n=1}^{N} Σ_{k} ∫_{-∞}^{t} h(t - t') δ(t' - t_n^(k)) dt'. (2)

This equation accounts for relaxation processes and describes a specific type of neuron activation and its sensitivity to stimuli from other cells, including signal duration and post-spike latency. Here, t_n^(k) denotes the time of the kth spike of the nth neuron, δ(t) represents the Dirac delta function, and h(t) is the normalized synaptic activation caused by a single presynaptic spike with a time scale τ. Notably, the integral transformation with the kernel h(t) acts as a low-pass filter.

The QIF-neuron model (Eq. 1) describes the membrane potential and operates as a hybrid dynamical system, incorporating instantaneous resets to a base value upon spike emission. While this formulation provides a direct physical interpretation, discontinuities can pose challenges for certain applications. Fortunately, a smooth change of coordinates exists, transforming the QIF-neuron dynamics into a space where the membrane potential is represented by a phase variable on the unit circle. This representation captures the nonlinear spike-generating mechanisms of Type I neurons, ensuring smooth solutions within a compact domain. In the limit V_peak = -V_reset → ∞, the transformation V_n = tan(θ_n/2) (Ermentrout and Kopell, 1986) converts the membrane potential description of the QIF-neuron model (Eq. 1) into a canonical theta neuron model of a population of Type I neurons coupled by an excitatory or inhibitory synaptic drive. In this case, each neuron's dynamics is governed by the following equation:

dθ_n/dt = 1 - cos θ_n + (1 + cos θ_n)(I_n + g s(t)), n = 1, ..., N, (3)

where n represents the index of the nth neuron, and its state is characterized by the phase angle θ_n. We assume that the constant excitability parameters I_n, akin to fixed input currents, have slightly different values across the network elements. Moreover, we consider I_n > 0, placing each neuron in an oscillatory regime indicative of periodic spiking. A neuron is deemed to spike or produce an action potential when θ_n crosses π while increasing. Consequently, in addition to a common external input, each cell receives stimulation from other cells. Thus, the neurons are recurrently coupled via the synaptic current g s(t).
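A minimal sketch of this phase representation, assuming the canonical Ermentrout-Kopell form of the theta neuron (variable names are ours):

```python
import numpy as np

def theta_step(theta, I, I_syn, dt):
    """One Euler step of the canonical theta neuron,
    dtheta/dt = 1 - cos(theta) + (1 + cos(theta)) * (I + I_syn);
    a spike is registered when theta crosses pi from below."""
    return theta + dt * (1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * (I + I_syn))

def free_period(I):
    """Free-running period of an uncoupled theta neuron, pi / sqrt(I), I > 0."""
    return np.pi / np.sqrt(I)
```

For an uncoupled neuron with I = 1, the phase velocity is identically 2, so the period equals π/√I = π; the flow slows near θ = 0 (the rest/threshold ghost) and speeds up near θ = π (the spike) for other values of I.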

The last term on the right-hand side of (Eq. 3) accounts for chemical interactions among neurons. The coupling strength is assumed to be uniform across all neurons. We model the synaptic activity acting on a neuron with the following expression:

s(t) = (1/N) Σ_{m=1}^{N} ∫_0^∞ h(t') P_q(θ_m(t - t')) dt', (4)

where the function

P_q(θ) = a_q (1 - cos θ)^q, a_q = 2^q (q!)^2 / (2q)!, (5)

determines the shape of the pulsatile chemical synapse. The positive integer parameter q controls the sharpness of P_q, with higher values yielding sharper peaks. Note that as q → ∞, the smooth profile P_q converges to a periodic train of δ-pulses so that the synaptic drive takes the form of (Eq. 6), where δ_T denotes the Dirac comb with period T. In this limit and under the assumption V_peak = -V_reset → ∞, the theta neuron model (Eqs 3, 4) fully aligns with the QIF-neuron model (Eqs 1, 2). It is noteworthy that these models exhibit an unconventional characteristic: in contrast to conductance-based models, inhibitory δ-pulse coupling promotes synchronization, thus being considered attractive, while excitatory δ-pulse coupling is repulsive (Izhikevich, 2007).
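A hedged numerical sketch of the pulse family, assuming the Ariaratnam-Strogatz form a_q (1 − cos θ)^q normalized to unit circular mean; the normalization constant below is the standard one for this family and should be checked against the paper's Eq. 5.

```python
import numpy as np
from math import comb

def pulse(theta, q):
    """Smooth synaptic pulse P_q(theta) = a_q * (1 - cos(theta))**q, peaked at
    the spike phase theta = pi.  The constant a_q = 2**q * (q!)**2 / (2q)!
    normalizes the circular mean of P_q to 1; as q -> infinity, P_q approaches
    a 2*pi-periodic train of delta pulses."""
    a_q = 2.0 ** q / comb(2 * q, q)
    return a_q * (1.0 - np.cos(theta)) ** q
```

Increasing q narrows the pulse around θ = π while keeping its area fixed, which is exactly the sharpness control described above.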

Note that the limiting δ-pulse case with q → ∞ offers a convenient framework in which the kernel function h(t) controls synaptic activation and deactivation after the instantaneous occurrence of a presynaptic action potential. The use of δ-pulses makes the analytical expressions in the KS model more explicit and manageable (see the next section). However, the general Eqs 4, 5 for the synaptic input provide a more comprehensive and accurate description that captures the nuances of synaptic transmission processes. The release of neurotransmitters from the presynaptic neuron that induces a chemically activated synaptic current is non-instantaneous. Thus, the general pulse function P_q more adequately describes this gradual neurotransmitter release that takes place as the presynaptic neuron's state approaches the spike activation threshold.

In this work, we primarily focus on the pulse shape defined by (Eq. 5), originally proposed in Ariaratnam and Strogatz (2001) and widely adopted in recent studies of pulse-coupled phase oscillators (O'Keeffe and Strogatz, 2016; Pazó and Montbrió, 2014; Gallego et al., 2017; Pazó et al., 2019; Bick et al., 2020) and populations of theta neurons (Luke et al., 2013; So et al., 2014; Laing, 2014; Laing, 2015; Montbrió et al., 2015; Pazó and Montbrió, 2016; Chandra et al., 2017; Goel and Ermentrout, 2002; Bick et al., 2020; Pietras et al., 2023). However, our approach is directly applicable to alternative pulse shapes satisfying common properties such as unimodality, normalization, symmetry, and localization around the spike phase, considered in previous studies (Gallego et al., 2017; Pietras et al., 2023).

In the following, we explore how the shape of the kernel function h(t), governing synaptic activation, impacts the network's collective behavior. To tackle this analytically, we will demonstrate that the model (Eqs 3, 4) under consideration can be reduced to a more analytically tractable KS model. The process involves several key steps: first, leveraging the assumption I_n > 0, we introduce an alternative phase representation for (Eqs 3, 4). Subsequently, we will derive the KS model of phase oscillators by employing multiple-time-scale analysis in the weak chemical synaptic coupling scenario.

3 Deriving the KS model from the theta-neuron model: an asymptotic analysis

We begin by assuming that each neuron operates within an oscillatory regime in the absence of interaction, i.e., I_n > 0. Hence, we can use the dynamical variable transformation (Eq. 7) to a uniformly rotating phase, where Ω is an unknown frequency parameter to be determined subsequently. It is notable that Ω is close to 2√⟨I⟩, where ⟨I⟩ denotes the mean value of the distributed external constant currents I_n. However, a more precise determination of Ω is feasible for finite-size networks, as will be shown later in this section.

The transformation (Eq. 7) transitions the model (Eqs 3, 4) to the alternative phase representation (Eqs 8, 9). This representation remains consistent with the original description; notably, the occurrence of spikes for the nth neuron is still defined by its phase crossing π. However, given that in the model (Eqs 3, 4) all cells receive constant external inputs and all units operate within the oscillatory regime, each phase uniformly rotates in the absence of network interaction. Hence, the representation (Eqs 8, 9) proves more convenient for subsequent analysis. Thus, we derive the governing equation for the introduced dynamical variables, which may be viewed as a generalized Winfree model that, in turn, can be reduced to the KS model. Further elaboration on the derivation and technical intricacies of this approach is presented below.

For further analysis, it is convenient to expand the symmetric pulse P_q into a Fourier series with respect to the phase. This expansion takes the form of (Eq. 10), and its coefficients can be expressed analytically via (Eq. 11) in terms of binomial coefficients, the gamma function, and the hypergeometric function. Specifically, for the first two Fourier series coefficients A_1 and A_2 of the symmetric pulse in (Eq. 10), we obtain the expressions (Eqs 12, 13). Noteworthy, in the limit q → ∞, i.e., for δ-pulses, all coefficients in the Fourier series (Eq. 11) converge to the same value.
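Under the assumed pulse family a_q (1 − cos θ)^q, these coefficients can be checked numerically against a closed form; note how they approach a common value for large q, consistent with the δ-pulse limit stated above. The coefficient convention A_k below (expansion in cos(k(θ − π))) is our assumption, not necessarily the paper's Eq. 10.

```python
import numpy as np
from math import comb

def fourier_coeff(q, k, n=4096):
    """Numerical k-th cosine coefficient A_k of the assumed pulse
    P_q(theta) = a_q * (1 - cos(theta))**q in the expansion
    P_q(theta) = 1 + sum_{k>=1} A_k * cos(k * (theta - pi))."""
    a_q = 2.0 ** q / comb(2 * q, q)
    grid = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    p = a_q * (1.0 - np.cos(grid)) ** q
    return 2.0 * np.mean(p * np.cos(k * (grid - np.pi)))

def fourier_coeff_exact(q, k):
    """Closed form A_k = 2 * (q!)**2 / ((q+k)! * (q-k)!), written as a
    product of ratios so it stays in floating point for large q."""
    val = 2.0
    for j in range(1, k + 1):
        val *= (q - k + j) / (q + j)
    return val
```

For q = 3 this gives A_1 = 1.5 and A_2 = 0.6, while for q of a few hundred A_1 and A_2 are nearly equal, mirroring the convergence of all coefficients in the δ-pulse limit.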

We proceed by assuming that the synaptic coupling is weak, allowing us to express the coupling strength as proportional to a small parameter ε. Similarly, we assume that the deviations of the external inputs I_n from their mean value are small, of order ε.

These assumptions enable a multiple-time-scale analysis. To facilitate this analysis, we introduce a separation of time scales into fast and slow times (Eq. 14) and represent each phase variable as an asymptotic series with respect to the small parameter ε (Eq. 15).

Substituting the series (Eq. 15) with the times (Eq. 14) and the Fourier expansion (Eq. 11) into (Eqs 8, 9) and collecting the zeroth order in ε, we obtain (Eq. 16); taking (Eq. 16) into account, at the next order we arrive at (Eq. 17), where each corresponding complex coefficient is determined by (Eq. 18).

In (Eq. 16), the first term describes the fast, free-running rotation of period 2π/Ω, while the slow phase drifts induced by synaptic interaction are characterized by the set of slow variables. Hence, these slow variables can be considered constant over time scales comparable to the period of the corresponding fast rotation. Consequently, the standard averaging method can be applied to derive the KS model corresponding to (Eqs 3, 4). Notably, in this case, the Sakaguchi phase shift emerges naturally due to the complex nature of the coefficient determined in (Eq. 18).

In accordance with the averaging procedure, after substituting the expression (Eq. 17) into (Eq. 8), the next step of our asymptotic approach involves collecting all terms of first order in ε, which yields a set of equations for the first-order corrections to the phases.

To eliminate the secular terms that grow without bound as the fast time tends to infinity, we impose the conditions (Eq. 19).

This yields a solution for the first-order corrections without secular terms. Note that (Eq. 19) measures the rate of change of the slow phases with respect to the slow time scale. Finally, returning to the original time variable, we find that the dynamics of the slow phases is approximately described by the KS model (Eq. 20), with the frequencies, coupling strength, and phase lag given by (Eq. 21).

To determine the unknown parameter Ω, we set the mean value of the intrinsic frequencies in the KS model (Eq. 20) to zero. Hence, the value of Ω can be found by solving the nonlinear algebraic equation (Eq. 22). This choice of the optimal value of the parameter Ω yields a better quantitative match between the numerical simulation results of the theta neuron model (Eqs 3, 4) and the KS model (Eq. 20), compared to the conventional choice Ω = 2√⟨I⟩. In the limiting case q → ∞, where the shape of the pulsatile chemical synapse is determined by Eq. 6, the first two Fourier series coefficients A_1 and A_2 in (Eqs 12, 13) converge to the same value and, hence, Eq. 22 for Ω can be solved analytically. This yields the simpler expressions (Eq. 23) for the frequencies, coupling strength, and phase lag of the KS model (Eq. 20) with coefficients (Eqs 21a, 21b, 21c), where the complex coefficient is determined by (Eq. 24).
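The target of this reduction can be sketched in its standard form. The parameter values below are illustrative placeholders, not the quantities given by Eqs 21-24:

```python
import numpy as np

def ks_step(phi, omega, K, alpha, dt):
    """One Euler step of the Kuramoto-Sakaguchi model,
    dphi_n/dt = omega_n + (K/N) * sum_m sin(phi_m - phi_n - alpha)."""
    diff = phi[None, :] - phi[:, None]                  # diff[n, m] = phi_m - phi_n
    coupling = (K / len(phi)) * np.sin(diff - alpha).sum(axis=1)
    return phi + dt * (omega + coupling)
```

For identical frequencies and zero phase lag, positive K drives two test oscillators to in-phase locking while negative K drives them to anti-phase, the attractive/repulsive dichotomy exploited in the next section.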

Note that the oscillator frequencies, the coupling strength, and the Sakaguchi phase lag parameter in the KS model (Eq. 20) are explicitly defined through the pulse profile's first and second Fourier series coefficients A_1 and A_2. In particular, the expressions (Eq. 23) clearly indicate that for the δ-pulses, the sign of the coupling strength in the KS model is opposite to the sign of the coupling in the original QIF model. The following section will show that this property carries over to the general finite-width pulses defined by (Eq. 5). For a specific class of the activation function h(t), we will also demonstrate that increasing the pulse width decreases the coupling strength and practically does not affect the Sakaguchi phase lag parameter. The direct dependence on the properties of synaptic activation enables a straightforward assessment of the role of synaptic interactions in critical phase transitions and dynamics. Specifically, the coupling strength and the Sakaguchi parameter directly reflect the impact of synaptic activation on critical phase transitions and the synchronization behavior of the neuron population. In particular, determining whether the coupling in the theta neuron model (Eqs 3, 4) is attractive or repulsive presents a challenge due to the complexity of the system, whereas the KS model (Eq. 20) makes this assessment straightforward. In the subsequent section, we delve into this process, focusing on a particular synaptic activation profile.

4 The role of synaptic profile: a combined effect of activation, deactivation, and time delays

To demonstrate the important effects arising from the specific selection of the shape of a "low-pass filter" in synaptic activation, we examine the following class of kernel functions:

h(t) = t^(m-1) e^(-t/τ) / ((m-1)! τ^m) Θ(t), (25)

where Θ(t) is the Heaviside step function and m is an integer parameter pivotal to the network dynamics, as will become evident subsequently. The chosen kernel corresponds to the Green's function of a non-homogeneous linear differential equation of order m. In this case, the equation for the mean synaptic activity is generated by an mth-order linear differential operator and contains an external signal that represents an average profile of spike pulses (Eq. 26). The source term on the right-hand side of (Eq. 26) is presented in three interchangeable forms corresponding to the QIF model (Eqs 1, 2), the theta neuron model (Eqs 3, 4), and their averaged representation through the KS model (Eq. 20). This term can be interpreted as the population firing rate, which induces a post-synaptic current in response to the arrival of spikes. In the limit τ → 0, one might assume that the interaction between neurons becomes instantaneous. However, if the characteristic time scale of the post-synaptic response is not negligibly small, it becomes imperative to take into account the individual adaptation dynamics of the synaptic variable, encompassing its activation, deactivation, and time delay.

To describe the adaptation dynamics of the synaptic variable, it is common to assume that it obeys a first-order (Devalle et al., 2017; Bick et al., 2020; Afifurrahman et al., 2020; Afifurrahman et al., 2021; Pietras et al., 2023) or second-order (Mohanty and Politi, 2006; Zillmer et al., 2007; Bolotov et al., 2016; Chen et al., 2017) ordinary differential equation, corresponding to m = 1 and m = 2 in (Eq. 26), respectively.

In the case m = 1, the mean population synaptic activity is governed by the standard relaxation rule. Here, when the nth neuron fires at time t_n^(k), generating the kth spike in the form of a Dirac delta function, the mean activity instantaneously changes and subsequently decays exponentially in the absence of further firings. The parameter τ acts as a synaptic time constant.

For m = 2, when the nth neuron fires at time t_n^(k) and the kth Dirac delta pulse is generated, the synaptic variable is augmented by the function defined by (Eq. 25) with m = 2, coinciding with the so-called alpha-function pulse (Mohanty and Politi, 2006; Zillmer et al., 2007; Bolotov et al., 2016; Chen et al., 2017). For this alpha-pulse created by a spiking neuron, τ determines both the signal's width and the time at which it attains its maximum value (Figure 1).

FIGURE 1

Synaptic dynamics s(t), defined by (Eq. 4), induced by presynaptic spikes with a linearly increasing phase: (A, B) δ-pulses as defined by (Eq. 6) and (C, D) finite-width pulses as defined by (Eq. 5). The same synaptic time constant τ is used in all cases.

We extend the argument to arbitrary values of m and τ, making the model of synaptic adaptation (Eq. 26) encompass a broad spectrum of realistic biophysical scenarios, ranging from fast non-delayed to slow delayed activation. These scenarios include neurotransmitter release into the synaptic cleft and the opening/closing of postsynaptic ion channels, characterized by distinct time scales such as latency (time delay), rise, and decay times, as reflected in the kernel function (Eq. 25).

Our choice of the kernel function for synaptic activation aligns with the gamma distribution (Zwillinger, 1992), a continuous probability distribution characterized by two parameters: the scale parameter τ and the shape parameter m. This distribution reaches its maximum value at t = (m - 1)τ, with mean value mτ, variance mτ^2, and skewness 2/√m. The skewness, reflecting the asymmetry of the distribution about its mean, is maximal for the exponential case m = 1 and diminishes for larger values of m, indicating increased symmetry. While both m and τ influence the synaptic dynamics, m has a more significant impact on the synaptic time delay, whereas τ predominantly shapes the synaptic activation profile. Figures 1, 2 demonstrate how the parameters m and τ determine the time profile of the post-synaptic response and its characteristics.
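A sketch of the kernel under this gamma-distribution reading; the closed form below reproduces the stated unit normalization, peak latency (m − 1)τ, and mean mτ (the function name is ours):

```python
import numpy as np
from math import factorial

def kernel(t, tau, m):
    """Gamma-type synaptic kernel h(t) = t**(m-1) * exp(-t/tau) / ((m-1)! * tau**m)
    for t >= 0 (zero otherwise): the Green's function of (1 + tau*d/dt)**m.
    m = 1 gives an exponential decay; m = 2 is the classical alpha function."""
    t = np.asarray(t, dtype=float)
    tc = np.maximum(t, 0.0)                       # causal part only
    h = tc ** (m - 1) * np.exp(-tc / tau) / (factorial(m - 1) * tau ** m)
    return np.where(t >= 0.0, h, 0.0)
```

Raising m at fixed τ pushes the peak to later times (latency grows like (m − 1)τ) while the relative width shrinks like 1/√m, matching the increased symmetry noted above.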

FIGURE 2

Characteristics of the synaptic dynamic profile s(t) (Eq. 4) as a function of m: (A) peak latency, (B) maximum value, and (C) full width at half maximum, induced by spikes for three different values of τ (cyan, red, and orange markers).

Towards our objective of deriving explicit conditions for the attractiveness or repulsiveness of the synaptic coupling governed by (Eq. 24) with the kernel function (Eq. 25), we calculate the complex coefficient together with its modulus and argument (Eq. 27), where Ω is determined from (Eq. 22). While the hypergeometric function, a factor defining the coupling strength in the KS model (Eq. 20) with coefficients (Eq. 27), cannot be expressed via elementary functions (except for the limiting δ-pulse case with q → ∞), it is evidently positive. Consequently, the coupling strength in the KS model (Eq. 20) and the coupling strength in the QIF model (Eq. 1) have opposite signs. Therefore, a positive coupling strength in the KS model corresponds to attractive coupling, provided that the Sakaguchi phase lag parameter satisfies |α| < π/2. According to (Eq. 21), this is the case if the inequality (Eq. 28) holds.

Solving the inequality (Eq. 28), we obtain the intervals of parameters (Eq. 29) that correspond to attractive coupling in the KS model (Eq. 20) and, therefore, in the QIF (Eqs 1, 2) and theta-neuron (Eqs 3, 4) models.

Figure 3 displays the regions, as defined by (Eq. 29), where the coupling in the KS model is attractive (blue) or repulsive (red). These regions exhibit an alternating pattern as functions of the synaptic time constant τ and the common external input for a fixed pulse sharpness q. Increasing the parameter m, which primarily controls the time delay, makes the synaptic coupling more sensitive and results in more alternating zones. This alternating pattern of attractive and repulsive coupling resembles the stability criterion for synchronization in time-delayed phase oscillator networks, where the time delay controls the sign of the derivative of the periodic coupling function (Earl and Strogatz, 2003).
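Eq. 29 itself is not reproduced here, but the mechanism behind the alternation can be illustrated with a simple proxy: the gamma kernel's transfer function accumulates a phase lag m·arctan(Ωτ) at the firing frequency Ω, and every crossing of an odd multiple of π/2 flips the sign of the lag's cosine and hence the effective coupling. The sketch below counts these flips; it illustrates the mechanism only and is not the paper's exact condition.

```python
import numpy as np

def lag_cos_sign(tau, m, Omega=1.0):
    """Sign of cos(arg H) for the gamma kernel's transfer function
    H(i*Omega) = 1 / (1 + i*Omega*tau)**m at an assumed firing frequency Omega.
    The phase lag is m*arctan(Omega*tau); each crossing of an odd multiple of
    pi/2 flips the effective coupling between attractive and repulsive."""
    return np.sign(np.cos(m * np.arctan(Omega * np.asarray(tau, dtype=float))))

def n_alternations(m, taus):
    """Count sign flips of the proxy over a range of synaptic time constants."""
    return int(np.count_nonzero(np.diff(lag_cos_sign(taus, m))))
```

Since the total lag saturates at mπ/2, a first-order kernel (m = 1) never flips, while larger m yields more alternating zones, in line with Figure 3.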

FIGURE 3

Regions of attractive (blue) and repulsive (red) coupling in the theta neuron model (Eqs 3, 4), corresponding to the KS model regions defined by (Eq. 29). The colors represent the sign of the coupling strength as a function of the synaptic time constant τ and the common external input, with panels (A), (B), and (C) corresponding to increasing values of m; the remaining parameters are fixed. The yellow points A, B, C, and D indicate the parameter values used for the numerical simulations of Figures 6-9.

Figure 4 provides further insight into how the synchronization properties of the KS model, critically controlled by the coupling strength and the Sakaguchi parameter, depend on the original parameters of the QIF model. Notably, both quantities exhibit a weak dependence on the pulse sharpness q, suggesting that the pulse width has minimal impact on the synchronization dynamics, except for the case of biologically irrelevant wide pulses, for which the coupling strength approaches 0 and the coupling becomes significantly weaker. In contrast, both quantities are highly sensitive to variations in m, with the Sakaguchi parameter producing the alternating regions of attractive and repulsive coupling illustrated in Figure 3.

FIGURE 4

Coupling strength (A) and Sakaguchi parameter (B) as functions of the synaptic adaptation/delay parameter m and the finite pulse width parameter q. The dependencies are calculated analytically via (Eq. 21) and (Eq. 27) with the remaining parameters fixed. The coupling strength increases as the pulse width decreases (via increasing q) and decreases as synaptic adaptation slows down and experiences larger time delays (via increasing m). The Sakaguchi parameter is highly sensitive to m and only weakly dependent on q.

In the following, we offer additional evidence supporting the predictive power of the derived KS model. We show numerically that it effectively captures the emergence of robust dynamical regimes like synchronization and more intricate partially synchronized dynamics such as weakly stable cyclops states and non-stationary generalized splay states in both the QIF and theta neuron models.

5 Dynamical equivalence of the models: numerical validation

We conduct numerical computations using a widely accepted fifth-order Runge–Kutta method with a fixed time step of 0.01, providing additional validation for our analytical findings and predictions.

To characterize the dynamical regimes, we utilize both microscopic measures (pairwise phase differences and firing times) and macroscopic indicators such as the first- and second-order complex Kuramoto order parameters (Daido, 1992; Skardal et al., 2011), Z_k = R_k e^(iψ_k) = (1/N) Σ_n e^(ikθ_n), k = 1, 2, where R_k and ψ_k define the magnitude and the phase of the kth-moment Kuramoto order parameter, respectively (Eq. 30). The first-order scalar parameter R_1 characterizes the degree of phase synchrony, with R_1 = 1 corresponding to full phase synchrony. The second-order scalar parameter R_2 determines the degree of cluster synchrony, where R_1 = 0 corresponds to generalized splay states (Berner et al., 2021b) and their particular case of cyclops states (Munyayev et al., 2023). We will also use the firing rate, which measures the average rate at which neurons emit spikes, as another important macroscopic observable to characterize the dynamical regimes; it is computed from the formula of Pietras et al. (2023), given in (Eq. 31), in the simulations later in this section.
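The moment order parameters follow the standard definition and can be computed directly:

```python
import numpy as np

def kuramoto_order(theta, k=1):
    """k-th moment Kuramoto order parameter Z_k = (1/N) * sum_n exp(i*k*theta_n).
    Returns (R_k, psi_k): the magnitude and phase of Z_k."""
    z = np.exp(1j * k * np.asarray(theta, dtype=float)).mean()
    return np.abs(z), np.angle(z)
```

Full synchrony gives R_1 = R_2 = 1; an N-oscillator splay state gives R_1 = R_2 = 0; two balanced anti-phase clusters give R_1 = 0 with R_2 = 1, illustrating how the two moments separate the regimes discussed here.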

To identify the time steps corresponding to neuron spike events in both the QIF-neuron model and the theta neuron model, we monitored the sign changes of the relevant threshold-crossing functions of V_n and θ_n, along with the signs of their time derivatives. Subsequently, we determined the spike moment using linear interpolation within each time step.
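This detection-plus-interpolation step can be sketched as follows (a helper of ours operating on the unwrapped phase, where crossings of π modulo 2π mark the spikes):

```python
import numpy as np

def spike_times(t, theta):
    """Locate upward crossings of theta = pi (mod 2*pi) in a sampled phase
    trajectory and refine each crossing time by linear interpolation
    within the enclosing time step."""
    th = np.unwrap(np.asarray(theta, dtype=float))
    t = np.asarray(t, dtype=float)
    times = []
    top = int(np.floor((th.max() - np.pi) / (2.0 * np.pi)))
    for k in range(top + 1):                     # one level per full rotation
        level = np.pi + 2.0 * np.pi * k
        idx = np.nonzero((th[:-1] < level) & (th[1:] >= level))[0]
        for i in idx:
            frac = (level - th[i]) / (th[i + 1] - th[i])
            times.append(t[i] + frac * (t[i + 1] - t[i]))
    return sorted(times)
```

For a phase that varies slowly within each step, the interpolated crossing is accurate to second order in the step size, which is why coarse sampling suffices for the firing-time comparisons below.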

Figures 5, 6 illustrate the excellent correspondence between the emergence of full synchronization and non-stationary generalized splay states in the theta neuron model (Eqs 3, 4) and the KS model (Eq. 20) within the ranges of attractive coupling (point A in Figure 3) and repulsive coupling (point B in Figure 3), respectively. In the case of full synchronization (Figure 5), the first- and second-order scalar parameters (Eq. 30), R_1 and R_2, approach unity but cannot reach 1 exactly due to the intrinsic parameter mismatch. Likewise, the first-order scalar parameter R_1 associated with the non-stationary generalized splay state oscillates close to 0 (Figure 6). Figure 7 demonstrates that the KS model maintains its excellent predictive power for the emergence of full synchronization and non-stationary generalized splay states in large networks of 100 and 1,000 theta neurons.

FIGURE 5

Dynamical equivalence between the theta neuron (Eqs 3, 4) and KS models (Eq. 20), demonstrated via the onset of full synchronization. (A) The evolution of the first (solid curves) and second (dashed curves) order parameters for the theta neuron (green curves) and KS model (red curves), including the transient period. Initial phases , are uniformly distributed over the interval . (B) The colors depict the phase differences in the theta neuron model converging to imperfect full synchronization, subject to mismatched parameters that are uniformly distributed on the segment , , . (C) Comparison between the dynamics of the phase differences for (thick red curve) and (thick blue curve) in the theta neuron model and recalculated from phases using the relation (Eq. 7) for (thin cyan curve) and (thin red curve) in the KS model. Note the perfect alignment of the phase-difference dynamics in the two models. Parameters: , , , , correspond to point on Figure 3.

FIGURE 6

Dynamical equivalence between the theta neuron (Eqs 3, 4) and KS models (Eq. 20), demonstrated via the onset of a non-stationary generalized splay state with an oscillating R_1. Notations are as in Figure 5. (A) The evolution of the first (solid curves) and second (dashed curves) order parameters for the theta neuron (green curves) and KS model (red curves), including the transient period. (B) The colors depict the phase differences θ_n − θ_21 in the theta neuron model. (C) Comparison between the dynamics of the phase differences θ_n − θ_21 for n = 17 (thick red curve) and n = 20 (thick blue curve) in the theta neuron model and θ_n − θ_21 for n = 17 (thin cyan curve) and n = 20 (thin red curve) in the KS model. Mismatch parameters are chosen from a uniform distribution , , . Other parameters: , , , , and correspond to point on Figure 3 and yield the frequency parameter , calculated from (Eq. 22) [not shown].

FIGURE 7

Large-size networks. Dynamical equivalence between the theta neuron (Eqs 3, 4) and KS models (Eq. 20), demonstrated via the onset of full synchronization (A,B) and a non-stationary generalized splay state with an oscillating R_1 (C,D) for (A,C) and (B,D). Notations are as in Figures 5, 6. Parameters: , , , , (A,B); , , , and (C,D).

Figures 8–10 illustrate the remarkable agreement between cooperative dynamics in the QIF and KS models. Specifically, Figure 8 depicts the onset of full synchronization, as evidenced by synchronized firing rates and times determined via (Eq. 31). The slight discrepancy in the firing times between the QIF and KS models may stem from various sources, such as accumulated numerical errors and the approximate calculation of the frequency parameter derived from (Eq. 22) for selecting the KS model parameters. Figure 9 provides evidence that the KS model predicts even non-stationary, asynchronous firing in the QIF model nearly perfectly. Figure 10 illustrates the predictive power of the derived KS model in discerning stable complex cluster patterns such as cyclops states in the QIF model. Introduced in Munyayev et al. (2023), cyclops states are formed by two distinct, coherent clusters and a solitary oscillator reminiscent of the Cyclops’ eye. While detecting stable cyclops states can be challenging in the QIF model, the KS model provides a more convenient and constructive approach.
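The sliding-window firing rate reported in the following figures can be sketched as follows (a minimal illustration of ours; the function and the window value are assumptions, not the paper's code):

```python
import numpy as np

def sliding_firing_rate(spike_times, n_neurons, t_grid, window):
    """Population firing rate: number of spikes emitted by the whole network
    within the window (t - window, t], per neuron and per unit time."""
    s = np.sort(np.asarray(spike_times, dtype=float))
    tg = np.asarray(t_grid, dtype=float)
    right = np.searchsorted(s, tg, side="right")
    left = np.searchsorted(s, tg - window, side="right")
    return (right - left) / (n_neurons * window)
```

For example, two neurons each firing once within a unit-length window yield a rate of one spike per neuron per unit time.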

FIGURE 8

Onset of full synchronization in the QIF (Eq. 1) and KS models (Eq. 20). (A) Firing rate and (B) firing times of QIF neurons (cyan curves and round markers) and oscillators of the KS model (black curves and cross markers), with the firing times recorded at . Each row in (B) represents the firing times of a neuron/oscillator. Inset (C) zooms in on the firing time pattern from (B). Initial conditions are as in Figure 5. Parameters: , , , , , , , correspond to point in Figure 3. The firing rate was calculated within a sliding time window of .

FIGURE 9

Diagram similar to Figure 8 showing a nearly perfect match for asynchronous firing rate (A) and firing times (B) in the QIF network (cyan curves and round markers) and the KS model (black curves and cross markers). Parameters: , , , , , , , correspond to point in Figure 3. Other notations and settings are as in Figure 8.

FIGURE 10

Firing rate (A) and firing times (B) of a three-cluster cyclops state in the QIF network (cyan curves and round markers) and the KS model (black curves and cross markers). (C) Snapshots of the cyclops state phase distributions in the KS model at two time instants. The oscillators’ coloring represents their phases. The cyclops state is formed by a solitary oscillator (red) and two coherent clusters, each composed of 10 oscillators (orange and blue). The initial phases are chosen near a cyclops state. Parameters: , , , , , , , . Other notations and settings are as in Figure 8.

In the numerical calculations of the relatively small-size QIF networks presented in Figures 8–10, we employed the fourth-order Runge–Kutta method together with the spike-event identification procedure described above. To ensure better consistency between the QIF-neuron model and the theta neuron model, we used sufficiently large values of and , specifically . We did not account for the time interval required for the membrane potential to reach . However, this approach becomes computationally expensive for simulating large networks of QIF neurons. Therefore, in the numerical calculations presented in Figures 11, 12, we employed the algorithm described in Pazó and Montbrió (2016). This algorithm, based on the Euler method, accounts for the time it takes the dynamic variable associated with the membrane potential to pass through the singularity after exceeding and reach the final value . A detailed description of the algorithm and its parameters is available in the Supplemental Material of Pazó and Montbrió (2016). Figures 11, 12 confirm that the main analytical predictions for the dynamics of the QIF model remain valid for large network sizes, regardless of the calculation scheme used. Specifically, Figure 11 validates the prediction of coupling attractiveness at point in Figure 3 and demonstrates the onset of full synchronization in the QIF network of 1,000 neurons. Similarly, Figure 12 confirms the coupling repulsiveness at point in Figure 3 and illustrates the emergence of asynchronous dynamics. Notably, the asynchronous firing rate in the 21-node QIF network shown in Figure 9, corresponding to point , may resemble weak coherence patterns. In contrast, its 1,000-node counterpart in Figure 12, with parameters also corresponding to point , exhibits nearly perfect asynchronous dynamics.
Our numerous additional simulations of 1,000-node QIF networks for a large set of parameters, including those near the boundary of the alternating regions of attractive and repulsive coupling in Figure 3, further validated the consistency of the predictions for cooperative dynamics and the critical transitions in the QIF networks [not shown].
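For orientation, here is what the naive alternative looks like: Euler integration of a single QIF neuron dV/dt = V² + η with an instantaneous reset from a large peak value to a large negative reset value, neglecting the passage time through the singularity that the Pazó–Montbrió algorithm accounts for. All values below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def qif_spikes(eta, t_end, dt=1e-4, v_peak=100.0, v_reset=-100.0, v0=0.0):
    """Euler integration of a single QIF neuron dV/dt = V^2 + eta.
    The membrane potential is reset instantaneously from v_peak to v_reset,
    i.e., the finite passage time through the singularity is neglected."""
    v, t = v0, 0.0
    spikes = []
    while t < t_end:
        v += dt * (v * v + eta)
        t += dt
        if v >= v_peak:
            spikes.append(t)  # record the spike at the threshold crossing
            v = v_reset       # instantaneous reset
    return np.array(spikes)

# For eta = 1 the exact solution from V(0) = 0 is V(t) = tan(t), so the first
# threshold crossing occurs near arctan(v_peak).
spikes = qif_spikes(eta=1.0, t_end=5.0)
```

This simple scheme reproduces the exact spike time well at small dt, but its cost grows with network size and peak value, which is precisely why the refined algorithm is preferable for the large networks in Figures 11, 12.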

FIGURE 11

Firing rate (A) and firing times (B) in the QIF network demonstrating the transition to full synchronization starting from random initial conditions. Parameters , , , , , , and correspond to point in Figure 3. The simulations use the algorithm from Pazó and Montbrió (2016), which accounts for the neurons’ refractory time. The mean synaptic activation was calculated within a sliding time window of .

FIGURE 12

Firing rate (A) and firing times (B) in the QIF network accompanying the transition from synchronous to asynchronous dynamics. Parameters , , , , , , and correspond to point in Figure 3. Other notations and settings are as in Figure 11.

6 Conclusions

Understanding the influence of synaptic dynamics, including activation rates, deactivation processes, and latency, on collective dynamics in neuronal networks is of significant importance. Considerable advancements have been made in analyzing the role of fast or time-delayed synapses in integrate-and-fire neuron networks. However, there remains a scarcity of analytical studies exploring the influence of slower synaptic dynamics, potentially in the presence of time delays, on controlling critical phase transitions in neuronal networks.

In this paper, we have made substantial contributions to advancing analytical methods in this field. We studied a finite-size network of QIF neurons globally interconnected via a generalized kernel function governing both synaptic activation and time delay. Our analytical exploration demonstrated how the shape of the kernel function profoundly affects neuron interaction, thereby significantly modifying the microscopic and macroscopic behavior of QIF networks. To achieve this, we reduced the QIF and theta neuron network models to the Kuramoto–Sakaguchi model. In this model, oscillator frequencies, coupling strength, and the Sakaguchi phase lag parameter are determined by the Fourier terms of the pulse profile series expansion.
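In its standard form, the finite-size Kuramoto–Sakaguchi model obtained by the reduction reads dθ_n/dt = ω_n + (K/N) Σ_m sin(θ_m − θ_n − α). A minimal sketch of its right-hand side (the mapping of ω_n, K, and α to the synaptic kernel's Fourier coefficients is given by the equations in the text and is not reproduced here):

```python
import numpy as np

def ks_rhs(theta, omega, K, alpha):
    """Right-hand side of the Kuramoto-Sakaguchi model:
    d(theta_n)/dt = omega_n + (K/N) * sum_m sin(theta_m - theta_n - alpha)."""
    diffs = theta[None, :] - theta[:, None]  # diffs[n, m] = theta_m - theta_n
    return omega + (K / theta.size) * np.sin(diffs - alpha).sum(axis=1)
```

For identical oscillators with K > 0 and α = 0 the coupling is attractive: a lagging phase is pulled forward and a leading one pulled back, so full synchrony is stable, whereas a sign flip of the effective coupling favors splay-type states — the attractive/repulsive dichotomy analyzed above.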

We established exact conditions determining whether synaptic coupling is attractive, fostering synchronization, or repulsive, promoting splay and cyclops states. Furthermore, we demonstrated a remarkable correspondence between the dynamics of the derived KS model and the original QIF and theta neuron models. Specifically, the KS model accurately predicted firing rates and times, capturing the emergence of synchronization, weakly stable cyclops states, and non-stationary regimes in the QIF model. Our reduction approach complements the work by Ratas and Pyragas (2018), which assumed a Lorentzian distribution of the input currents and employed the thermodynamic limit to reduce the QIF model with time-delayed coupling to macroscopic equations that characterize the mean membrane potential, the spiking rate, and the mean synaptic current. The bifurcation analysis of these macroscopic equations, performed in Ratas and Pyragas (2018), revealed alternating parameter regions where the QIF network exhibits macroscopic self-oscillations as a function of input current heterogeneity and time-delayed coupling. While sharing some similarities and goals, our approach is fundamentally different. It reduces finite-size QIF networks with an arbitrary current distribution and arbitrary synaptic activation function to the finite-size microscopic KS model. This allows for characterizing fine dynamical patterns, including cluster and cyclops states, which might be out of reach for a macroscopic description. In light of this, our reduction approach to an analytically tractable Kuramoto model holds promise in facilitating constructive analysis of rhythmogenesis in QIF networks. 
The reduced KS model makes a variety of methods and analytical machinery applicable to the collective dynamics of QIF models, including the constructive selection of complex patterns, such as chimeras, whose existence and emergence might be easier to deduce from the reduced Kuramoto model description. Furthermore, the reduction approach holds the potential for extensions that incorporate synaptic adaptation, Hebbian learning, and complex network structure via a node-degree block approximation.

Statements

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

LAS: Conceptualization, Formal Analysis, Investigation, Methodology, Visualization, Writing–review and editing. VOM: Formal Analysis, Investigation, Methodology, Visualization, Writing–review and editing. MIB: Formal Analysis, Investigation, Methodology, Visualization, Writing–review and editing. GVO: Conceptualization, Investigation, Methodology, Supervision, Writing–review and editing. IB: Conceptualization, Investigation, Methodology, Supervision, Writing–original draft.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by the Ministry of Science and Higher Education of the Russian Federation under project No. 0729-2020-0036 (to GO and MB), the Russian Science Foundation under project No. 22-12-00348 (to VM and LS), the Georgia State University Brains and Behavior Program, and the National Science Foundation (United States) under Grants No. CMMI-2009329 and CMMI-1953135 (to IB).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Achuthan, S., and Canavier, C. C. (2009). Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. J. Neurosci. 29, 5218–5233. doi: 10.1523/JNEUROSCI.0426-09.2009

2. Afifurrahman, A., Ullner, E., and Politi, A. (2021). Collective dynamics in the presence of finite-width pulses. Chaos 31, 043135. doi: 10.1063/5.0046691

3. Afifurrahman, Ullner, E., and Politi, A. (2020). Stability of synchronous states in sparse neuronal networks. Nonlinear Dyn. 102, 733–743. doi: 10.1007/s11071-020-05880-4

4. Ariaratnam, J. T., and Strogatz, S. H. (2001). Phase diagram for the Winfree model of coupled nonlinear oscillators. Phys. Rev. Lett. 86, 4278–4281. doi: 10.1103/PhysRevLett.86.4278

5. Belykh, I., De Lange, E., and Hasler, M. (2005). Synchronization of bursting neurons: what matters in the network topology. Phys. Rev. Lett. 94, 188101. doi: 10.1103/PhysRevLett.94.188101

6. Belykh, I., and Hasler, M. (2011). Mesoscale and clusters of synchrony in networks of bursting neurons. Chaos 21, 016106. doi: 10.1063/1.3563581

7. Berner, R., Vock, S., Schöll, E., and Yanchuk, S. (2021a). Desynchronization transitions in adaptive networks. Phys. Rev. Lett. 126, 028301. doi: 10.1103/PhysRevLett.126.028301

8. Berner, R., Yanchuk, S., Maistrenko, Y., and Schöll, E. (2021b). Generalized splay states in phase oscillator networks. Chaos 31, 073128. doi: 10.1063/5.0056664

9. Bick, C., Goodfellow, M., Laing, C. R., and Martens, E. A. (2020). Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: a review. J. Math. Neurosci. 10, 9. doi: 10.1186/s13408-020-00086-9

10. Bolotov, M., Osipov, G., and Pikovsky, A. (2016). Marginal chimera state at cross-frequency locking of pulse-coupled neural networks. Phys. Rev. E 93, 032202. doi: 10.1103/PhysRevE.93.032202

11. Börgers, C., and Kopell, N. (2003). Synchronization in networks of excitatory and inhibitory neurons with sparse, random connectivity. Neural Comput. 15, 509–538. doi: 10.1162/089976603321192059

12. Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 8, 183–208. doi: 10.1023/a:1008925309027

13. Byrne, Á., Ross, J., Nicks, R., and Coombes, S. (2022). Mean-field models for EEG/MEG: from oscillations to waves. Brain Topogr. 35, 36–53. doi: 10.1007/s10548-021-00842-4

14. Chandra, S., Hathcock, D., Crain, K., Antonsen, T. M., Girvan, M., and Ott, E. (2017). Modeling the network dynamics of pulse-coupled neurons. Chaos 27, 033102. doi: 10.1063/1.4977514

15. Chen, B., Engelbrecht, J. R., and Mirollo, R. (2017). Cluster synchronization in networks of identical oscillators with α-function pulse coupling. Phys. Rev. E 95, 022207. doi: 10.1103/PhysRevE.95.022207

16. Churchland, P. S., and Sejnowski, T. J. (1992). The Computational Brain. MIT Press.

17. Clusella, P., and Montbrió, E. (2024). Exact low-dimensional description for fast neural oscillations with low firing rates. Phys. Rev. E 109, 014229. doi: 10.1103/PhysRevE.109.014229

18. Clusella, P., Pietras, B., and Montbrió, E. (2022). Kuramoto model for populations of quadratic integrate-and-fire neurons with chemical and electrical coupling. Chaos 32, 013105. doi: 10.1063/5.0075285

19. Coombes, S. (2023). Next generation neural population models. Front. Appl. Math. Stat. 9, 1128224. doi: 10.3389/fams.2023.1128224

20. Daido, H. (1992). Order function and macroscopic mutual entrainment in uniformly coupled limit-cycle oscillators. Prog. Theor. Phys. 88, 1213–1218. doi: 10.1143/ptp.88.1213

21. Devalle, F., Montbrió, E., and Pazó, D. (2018). Dynamics of a large system of spiking neurons with synaptic delay. Phys. Rev. E 98, 042214. doi: 10.1103/physreve.98.042214

22. Devalle, F., Roxin, A., and Montbrió, E. (2017). Firing rate equations require a spike synchrony mechanism to correctly describe fast oscillations in inhibitory networks. PLoS Comput. Biol. 13, e1005881. doi: 10.1371/journal.pcbi.1005881

23. Earl, M. G., and Strogatz, S. H. (2003). Synchronization in oscillator networks with delayed coupling: a stability criterion. Phys. Rev. E 67, 036204. doi: 10.1103/PhysRevE.67.036204

24. Elson, R. C., Selverston, A. I., Abarbanel, H. D., and Rabinovich, M. I. (2002). Inhibitory synchronization of bursting in biological neurons: dependence on synaptic time constant. J. Neurophysiol. 88, 1166–1176. doi: 10.1152/jn.2002.88.3.1166

25. Ermentrout, B., and Kopell, N. (1986). Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math. 46, 233–253. doi: 10.1137/0146017

26. Ermentrout, G. B., and Terman, D. H. (2010). Mathematical Foundations of Neuroscience, Vol. 35. Springer Science and Business Media.

27. Ernst, U., Pawelzik, K., and Geisel, T. (1995). Synchronization induced by temporal delays in pulse-coupled oscillators. Phys. Rev. Lett. 74, 1570–1573. doi: 10.1103/PhysRevLett.74.1570

28. Esnaola-Acebes, J. M., Roxin, A., Avitabile, D., and Montbrió, E. (2017). Synchrony-induced modes of oscillation of a neural field model. Phys. Rev. E 96, 052407. doi: 10.1103/PhysRevE.96.052407

29. Ferrara, A., Angulo-Garcia, D., Torcini, A., and Olmi, S. (2023). Population spiking and bursting in next-generation neural masses with spike-frequency adaptation. Phys. Rev. E 107, 024311. doi: 10.1103/PhysRevE.107.024311

30. Gallego, R., Montbrió, E., and Pazó, D. (2017). Synchronization scenarios in the Winfree model of coupled oscillators. Phys. Rev. E 96, 042208. doi: 10.1103/PhysRevE.96.042208

31. Gast, R., Schmidt, H., and Knösche, T. R. (2020). A mean-field description of bursting dynamics in spiking neural networks with short-term adaptation. Neural Comput. 32, 1615–1634. doi: 10.1162/neco_a_01300

32. Gerstner, W., and Kistler, W. M. (2002). Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press.

33. Goel, P., and Ermentrout, B. (2002). Synchrony, stability, and firing patterns in pulse-coupled oscillators. Phys. D 163, 191–216. doi: 10.1016/s0167-2789(01)00374-8

34. Golomb, D., and Rinzel, J. (1993). Dynamics of globally coupled inhibitory neurons with heterogeneity. Phys. Rev. E 48, 4810–4814. doi: 10.1103/physreve.48.4810

35. Gutkin, B. S., Laing, C. R., Colby, C. L., Chow, C. C., and Ermentrout, G. B. (2001). Turning on and off with excitation: the role of spike-timing asynchrony and synchrony in sustained neural activity. J. Comput. Neurosci. 11, 121–134. doi: 10.1023/a:1012837415096

36. Izhikevich, E. M. (2007). Dynamical Systems in Neuroscience. Cambridge, MA: MIT Press.

37. Kopell, N., Ermentrout, G., Whittington, M. A., and Traub, R. D. (2000). Gamma rhythms and beta rhythms have different synchronization properties. Proc. Natl. Acad. Sci. 97, 1867–1872. doi: 10.1073/pnas.97.4.1867

38. Laing, C. R. (2014). Derivation of a neural field model from a network of theta neurons. Phys. Rev. E 90, 010901. doi: 10.1103/PhysRevE.90.010901

39. Laing, C. R. (2015). Exact neural fields incorporating gap junctions. SIAM J. Appl. Dyn. Syst. 14, 1899–1929. doi: 10.1137/15m1011287

40. Laing, C. R., and Chow, C. C. (2001). Stationary bumps in networks of spiking neurons. Neural Comput. 13, 1473–1494. doi: 10.1162/089976601750264974

41. Lin, L., Barreto, E., and So, P. (2020). Synaptic diversity suppresses complex collective behavior in networks of theta neurons. Front. Comput. Neurosci. 14, 44. doi: 10.3389/fncom.2020.00044

42. Luke, T. B., Barreto, E., and So, P. (2013). Complete classification of the macroscopic behavior of a heterogeneous network of theta neurons. Neural Comput. 25, 3207–3234. doi: 10.1162/NECO_a_00525

43. Manoranjani, M., Gopal, R., Senthilkumar, D., and Chandrasekar, V. (2021). Role of phase-dependent influence function in the Winfree model of coupled oscillators. Phys. Rev. E 104, 064206. doi: 10.1103/PhysRevE.104.064206

44. Mizuseki, K., and Buzsaki, G. (2014). Theta oscillations decrease spike synchrony in the hippocampus and entorhinal cortex. Philos. Trans. R. Soc. B 369, 20120530. doi: 10.1098/rstb.2012.0530

45. Mohanty, P., and Politi, A. (2006). A new approach to partial synchronization in globally coupled rotators. J. Phys. A: Math. Gen. 39, L415–L421. doi: 10.1088/0305-4470/39/26/l01

46. Montbrió, E., and Pazó, D. (2018). Kuramoto model for excitation-inhibition-based oscillations. Phys. Rev. Lett. 120, 244101. doi: 10.1103/PhysRevLett.120.244101

47. Montbrió, E., and Pazó, D. (2020). Exact mean-field theory explains the dual role of electrical synapses in collective synchronization. Phys. Rev. Lett. 125, 248101. doi: 10.1103/PhysRevLett.125.248101

48. Montbrió, E., Pazó, D., and Roxin, A. (2015). Macroscopic description for networks of spiking neurons. Phys. Rev. X 5, 021028. doi: 10.1103/physrevx.5.021028

49. Munyayev, V. O., Bolotov, M. I., Smirnov, L. A., Osipov, G. V., and Belykh, I. (2023). Cyclops states in repulsive Kuramoto networks: the role of higher-order coupling. Phys. Rev. Lett. 130, 107201. doi: 10.1103/PhysRevLett.130.107201

50. O’Keeffe, K. P., and Strogatz, S. H. (2016). Dynamics of a population of oscillatory and excitable elements. Phys. Rev. E 93, 062203. doi: 10.1103/PhysRevE.93.062203

51. Olmi, S., Politi, A., and Torcini, A. (2011). Collective chaos in pulse-coupled neural networks. Europhys. Lett. 92, 60007. doi: 10.1209/0295-5075/92/60007

52. Omelchenko, I., Omel’chenko, E., Hövel, P., and Schöll, E. (2013). When nonlocal coupling between oscillators becomes stronger: patched synchrony or multichimera states. Phys. Rev. Lett. 110, 224101. doi: 10.1103/PhysRevLett.110.224101

53. Ott, E., and Antonsen, T. M. (2008). Low dimensional behavior of large systems of globally coupled oscillators. Chaos 18, 037113. doi: 10.1063/1.2930766

54. Pazó, D., and Gallego, R. (2020). The Winfree model with non-infinitesimal phase-response curve: Ott–Antonsen theory. Chaos 30, 073139. doi: 10.1063/5.0015131

55. Pazó, D., and Montbrió, E. (2014). Low-dimensional dynamics of populations of pulse-coupled oscillators. Phys. Rev. X 4, 011009. doi: 10.1103/physrevx.4.011009

56. Pazó, D., and Montbrió, E. (2016). From quasiperiodic partial synchronization to collective chaos in populations of inhibitory neurons with delay. Phys. Rev. Lett. 116, 238101. doi: 10.1103/PhysRevLett.116.238101

57. Pazó, D., Montbrió, E., and Gallego, R. (2019). The Winfree model with heterogeneous phase-response curves: analytical results. J. Phys. A: Math. Theor. 52, 154001. doi: 10.1088/1751-8121/ab0b4c

58. Pietras, B., Cestnik, R., and Pikovsky, A. (2023). Exact finite-dimensional description for networks of globally coupled spiking neurons. Phys. Rev. E 107, 024315. doi: 10.1103/PhysRevE.107.024315

59. Pietras, B., Devalle, F., Roxin, A., Daffertshofer, A., and Montbrió, E. (2019). Exact firing rate model reveals the differential effects of chemical versus electrical synapses in spiking networks. Phys. Rev. E 100, 042412. doi: 10.1103/PhysRevE.100.042412

60. Pyragas, V., and Pyragas, K. (2022). Mean-field equations for neural populations with q-Gaussian heterogeneities. Phys. Rev. E 105, 044402. doi: 10.1103/PhysRevE.105.044402

61. Pyragas, V., and Pyragas, K. (2023). Effect of Cauchy noise on a network of quadratic integrate-and-fire neurons with non-Cauchy heterogeneities. Phys. Lett. A 480, 128972. doi: 10.1016/j.physleta.2023.128972

62. Ratas, I., and Pyragas, K. (2018). Macroscopic oscillations of a quadratic integrate-and-fire neuron network with global distributed-delay coupling. Phys. Rev. E 98, 052224. doi: 10.1103/physreve.98.052224

63. Schmidt, H., Avitabile, D., Montbrió, E., and Roxin, A. (2018). Network mechanisms underlying the role of oscillations in cognitive tasks. PLoS Comput. Biol. 14, e1006430. doi: 10.1371/journal.pcbi.1006430

64. Schöll, E. (2016). Synchronization patterns and chimera states in complex networks: interplay of topology and dynamics. Eur. Phys. J. Spec. Top. 225, 891–919. doi: 10.1140/epjst/e2016-02646-3

65. Shilnikov, A., Gordon, R., and Belykh, I. (2008). Polyrhythmic synchronization in bursting networking motifs. Chaos 18, 037120. doi: 10.1063/1.2959850

66. Skardal, P. S., Ott, E., and Restrepo, J. G. (2011). Cluster synchrony in systems of coupled phase oscillators with higher-order coupling. Phys. Rev. E 84, 036208. doi: 10.1103/PhysRevE.84.036208

67. So, P., Luke, T. B., and Barreto, E. (2014). Networks of theta neurons with time-varying excitability: macroscopic chaos, multistability, and final-state uncertainty. Phys. D 267, 16–26. doi: 10.1016/j.physd.2013.04.009

68. Somers, D., and Kopell, N. (1993). Rapid synchronization through fast threshold modulation. Biol. Cybern. 68, 393–407. doi: 10.1007/BF00198772

69. Taher, H., Torcini, A., and Olmi, S. (2020). Exact neural mass model for synaptic-based working memory. PLoS Comput. Biol. 16, e1008533. doi: 10.1371/journal.pcbi.1008533

70. Terman, D., Kopell, N., and Bose, A. (1998). Dynamics of two mutually coupled slow inhibitory neurons. Phys. D 117, 241–275. doi: 10.1016/s0167-2789(97)00312-6

71. Van Vreeswijk, C., Abbott, L., and Bard Ermentrout, G. (1994). When inhibition not excitation synchronizes neural firing. J. Comput. Neurosci. 1, 313–321. doi: 10.1007/BF00961879

72. Zillmer, R., Livi, R., Politi, A., and Torcini, A. (2007). Stability of the splay state in pulse-coupled networks. Phys. Rev. E 76, 046102. doi: 10.1103/PhysRevE.76.046102

73. Zwillinger, D. (1992). The Handbook of Integration. New York: A K Peters/CRC Press.

Keywords

network physiology, integrate-and-fire models, theta neurons, Kuramoto model, synaptic activation, time delay, synchronization, partial synchronization

Citation

Smirnov LA, Munyayev VO, Bolotov MI, Osipov GV and Belykh I (2024) How synaptic function controls critical transitions in spiking neuron networks: insight from a Kuramoto model reduction. Front. Netw. Physiol. 4:1423023. doi: 10.3389/fnetp.2024.1423023

Received

25 April 2024

Accepted

16 July 2024

Published

09 August 2024

Volume

4 - 2024

Edited by

Eckehard Schöll, Technical University of Berlin, Germany

Reviewed by

Ernest Montbrio, Pompeu Fabra University, Spain

Simona Olmi, National Research Council (CNR), Italy

Copyright

*Correspondence: Igor Belykh,

