On the Influence of Amplitude on the Connectivity between Phases

In recent studies, functional connectivities have been reported to display characteristics of complex networks that have been suggested to concur with those of the underlying structural, i.e., anatomical, networks. Do functional networks always agree with structural ones? In all generality, this question can be answered with “no”: for instance, a fully synchronized state would imply isotropic homogeneous functional connections irrespective of the “real” underlying structure. A proper inference of structure from function and vice versa requires more than a sole focus on phase synchronization. We show that functional connectivity critically depends on amplitude variations, which implies that, in general, phase patterns should be analyzed in conjunction with the corresponding amplitude. We discuss this issue by comparing the phase synchronization patterns of interconnected Wilson–Cowan models vis-à-vis Kuramoto networks of phase oscillators. For the interconnected Wilson–Cowan models we derive analytically how connectivity between phases explicitly depends on the generating oscillators’ amplitudes. In consequence, the link between neurophysiological studies and computational models always requires the incorporation of the amplitude dynamics. Supplementing synchronization characteristics by amplitude patterns, as captured by, e.g., spectral power in M/EEG recordings, will certainly aid our understanding of the relation between structural and functional organizations in neural networks at large.


Introduction
The interplay between structural and functional brain networks has become a popular topic of research in recent years. It is currently believed that the topologies of structural and functional networks in various empirical systems may disagree (Sporns and Kötter, 2004), but systematic analyses tackling this issue are few and far between. In a combined neural mass and graph theoretical model of electroencephalographic signals, it was found that patterns of functional connectivity are influenced by, but not identical to, those of the corresponding structural level (Ponten et al., 2010). In this and many other studies, functional connectivity has been defined through the synchronization between activities at different nodes.
Neurons synchronize their firing pattern in accordance with different behavioral states. On a larger scale, synchronous activities are considered to stem from meso-scale neural populations that oscillate at certain frequencies with certain amplitudes. That is, oscillatory activity may yield synchronization characteristics within a neural population or between populations (Salenius and Hari, 2003). The amplitude of a single oscillatory neural population reflects the degree of synchronization of its neurons, that is, it measures local synchrony. By contrast, synchronization between two or more oscillatory neural populations is typically defined by their (relative) phase variance. Changes in instantaneous phase locking or coherence reflect changes in more global, distributed synchronization, i.e., between ensembles or between areas. In fact, synchronized activity across neural networks is believed to offer an effective mechanism for information transfer.

Network of Wilson-Cowan models
As said, we are going to place individual Wilson-Cowan models at every node k of the network under study. Every model contains distinct populations of excitatory and inhibitory neurons that are described by their firing rates. If e_n denotes the firing rate of an excitatory neuron and i_n the firing rate of an inhibitory neuron, then a neural mass description can be obtained by averaging over the neural population in terms of E = (1/N_E) ∑_n e_n and I = (1/N_I) ∑_n i_n, where N_E and N_I are the numbers of excitatory and inhibitory neurons. By this averaging, E and I represent the mean firing rates of all excitatory and inhibitory neurons, respectively, of the neural population in question, i.e., that at node k.
Within that population, every neuron receives input from all other neurons of the population. Furthermore, the excitatory units individually receive constant external inputs p_n, whose average is given by P = (1/N_E) ∑_n p_n. The sum of all inputs is (instantaneously) integrated in time when it exceeds some threshold u. This thresholding is realized by means of a sigmoid function S. Without loss of generality we here chose S[x] = (1 + e^(−x))^(−1); we note that, in general, the thresholds may differ between excitatory and inhibitory units. In consequence, the mean firing rates of the neural population can be cast in a dynamical system of the form

Ė = −a_E E + S[c_EE E − c_EI I + P − u_E]
İ = −a_I I + S[c_IE E − c_II I − u_I].

The characteristics of this dynamical system range from a mere fixed-point relaxation to limit cycle oscillations (self-sustained oscillations) depending on the parameter settings (Wilson and Cowan, 1972), in particular on the choice of the external input P. That input is usually chosen at random. In the current study, we restrict all parameter values to the regime within which the dynamics displays self-sustained oscillations; see Appendix.
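To make this concrete, the single-population dynamics can be integrated numerically. The following Python sketch assumes the logistic sigmoid and the parameter values listed later in the Simulations section; the input value P, the initial conditions, and the reconstructed sign convention of the sigmoid arguments are illustrative assumptions, and whether the node actually oscillates depends on these choices.

```python
import numpy as np

def S(x):
    # logistic sigmoid serving as the firing-rate threshold function
    return 1.0 / (1.0 + np.exp(-x))

# parameter values as listed in the Simulations section
aE, aI = 1.2, 2.0
cEE, cII, cIE, cEI = 5.0, 1.0, 6.0, 10.0
uE, uI = 2.0, 3.5
P = 1.0  # constant external input (illustrative choice)

dt, n_steps = 1e-2, 100_000   # Euler-forward scheme with step-size 10^-2
E, I = 0.1, 0.1
trace = np.empty(n_steps)
for t in range(n_steps):
    dE = -aE * E + S(cEE * E - cEI * I + P - uE)
    dI = -aI * I + S(cIE * E - cII * I - uI)
    E, I = E + dt * dE, I + dt * dI
    trace[t] = E

# after transients, the spread of E indicates whether the node oscillates
late = trace[50_000:]
print(late.max() - late.min())
```

Since the sigmoid is bounded, the firing rates remain confined to the unit interval regardless of the input.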
To combine Wilson-Cowan models in a network, the different populations are now connected via their excitatory units by virtue of the sum of all E_l entering the dynamics of E_k (see Figure 1). The dynamics at node k then becomes

Ė_k = −a_E E_k + S[c_EE E_k − c_EI I_k + P_k + h ∑_l C_kl E_l − u_E]
İ_k = −a_I I_k + S[c_IE E_k − c_II I_k − u_I].   (1)

In words, all Wilson-Cowan oscillators located at nodes l in the network drive the change of the firing rate of the excitatory units E_k. The connectivity is given by the real-valued matrix C_kl that has vanishing diagonal elements, i.e., C_kk = 0. That connectivity matrix is scaled via the overall coupling strength h. It is important to note that the connectivity matrix C_kl is here always identified as the structural connectivity.
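The network version can be sketched along the same lines; the coupling enters as h ∑_l C_kl E_l inside the excitatory sigmoid. The fully connected matrix, the coupling value h, and the input interval are illustrative assumptions.

```python
import numpy as np

def S(x):
    # logistic sigmoid as firing-rate threshold function
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
N = 10
C = np.ones((N, N)) - np.eye(N)   # fully connected, no self-connections (C_kk = 0)
h = 0.5                            # overall coupling strength (illustrative value)
P = rng.uniform(-0.25, 0.25, N)    # constant random inputs (default interval of the paper)

aE, aI = 1.2, 2.0
cEE, cII, cIE, cEI = 5.0, 1.0, 6.0, 10.0
uE, uI = 2.0, 3.5

dt, n_steps = 1e-2, 20_000
E = rng.uniform(0, 1, N)
I = rng.uniform(0, 1, N)
for _ in range(n_steps):
    coupling = h * C @ E            # sum_l C_kl E_l drives the excitatory units
    dE = -aE * E + S(cEE * E - cEI * I + P - uE + coupling)
    dI = -aI * I + S(cIE * E - cII * I - uI)
    E, I = E + dt * dE, I + dt * dI

print(E.round(3))
```

The boundedness of S again keeps all firing rates within the unit interval, whatever the structural matrix C_kl.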
As the different Wilson-Cowan models display self-sustained oscillations, it seems natural to describe them in terms of their amplitude and phase dynamics. The required transforms and approximations are summarized in the Appendix, and the outcomes reveal a phase dynamics similar to the seminal Kuramoto network of phase oscillators. The Kuramoto model and its link to the here-discussed network of Wilson-Cowan models will be briefly sketched in the following two sub-sections.
As mentioned above, the effect of increasing h in the isotropic case is to increase the phase synchrony amongst the oscillators. Suppose the coupling is weak, i.e., smaller than the critical value, h ≪ h_c; then the oscillators' phases disperse, whereas for strong coupling, h ≫ h_c, the oscillators become synchronous, i.e., the phases are locked at fixed differences. In the intermediate case h ≈ h_c, clusters of synchronous oscillators may emerge. However, many other oscillators, whose natural frequencies are at the tails of the distribution, are not locked into a cluster. In other words, as h increases, the interaction functions overcome the dispersion of the natural frequencies ω_k, resulting in a transition from incoherence to partial and then full synchronization (Acebron et al., 2005; Breakspear et al., 2010).

Linking neural mass models to phase oscillators
When deriving the Kuramoto network from the Wilson-Cowan oscillator network, the major ingredient is to average every oscillator over one cycle, assuming that its amplitude and phase change slowly as compared to the oscillator's frequency. That is, time-dependent amplitude and phase are fixed, the system is integrated over one period to remove all harmonic oscillations, and, subsequently, amplitude and phase are again considered to be time-dependent (Guckenheimer and Holmes, 1990); this procedure is also referred to as a combination of the rotating wave approximation and the slowly varying amplitude approximation (Haken, 1974). As shown in more detail in the Appendix, the phase dynamics of the system (1) can in this way be approximated as

φ̇_k = ω_k + h ∑_l C_kl (R_l/R_k) [S′(x^(0)_{E,k}) + (R_k²/8) S‴(x^(0)_{E,k})] sin(φ_l − φ_k),

with S′ and S‴ referring to the first and third derivative of the sigmoid function S. The parameter x^(0)_{E,k} is the argument of the sigmoid evaluated at the point (E_k^(0), I_k^(0)) defining the unstable node within the limit cycle of the Wilson-Cowan model (1) at network node k. For more details, including the definition of the natural frequency, we refer to the Appendix. Considering the case that the amplitudes R_k are reasonably small, this phase dynamics can be further simplified to the form (2) with the connectivity

D_kl = (R_l/R_k) S′(x^(0)_{E,k}) C_kl.   (3)

Kuramoto network of phase oscillators
The collective behavior of a network of oscillators, whose states are each captured by a single scalar phase φ_k, can, in first approximation, be represented by the set of N coupled differential equations

φ̇_k = ω_k + h ∑_{l=1}^{N} D_kl sin(φ_l − φ_k).   (2)
That is, the k-th oscillator, with natural frequency ω_k, adjusts its phase according to the input from other oscillators through the pair-wise phase interaction function sin(φ_l − φ_k). The connectivity matrix D_kl is again scaled by an overall coupling strength h. As will be sketched below, h serves as a bifurcation parameter in that small values of h yield a network behavior that essentially agrees with the entirely uncoupled case (i.e., the phases are not synchronized), whereas h larger than a certain critical value h_c causes the phases to synchronize. The frequencies ω_k are distributed according to a specified probability density, usually taken to be a symmetric, unimodal distribution (e.g., a Lorentzian or Gaussian distribution) with mean ω_0. Although the sinusoidal interaction function is an approximation, it still permits a variety of highly non-trivial solutions. As such, the model (2) can be viewed as the canonical form for synchronization in extended, oscillatory media. We note that the connectivity matrix D_kl also represents a structural connectivity that does not necessarily agree with that of the Wilson-Cowan model; see below.
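The bifurcation in h described above can be reproduced with a few lines of Python. This is a minimal sketch of (2) for a homogeneous network; the Gaussian frequency distribution, the Euler step, and the two coupling values are illustrative assumptions.

```python
import numpy as np

def simulate_kuramoto(h, n=66, steps=10_000, dt=1e-2, seed=1):
    """Euler-forward integration of phi_dot_k = omega_k + h * sum_l D_kl sin(phi_l - phi_k)
    for a homogeneous network (D_kl = 1, no self-connections)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)          # symmetric, unimodal frequency distribution
    phi = rng.uniform(0, 2 * np.pi, n)
    D = np.ones((n, n)) - np.eye(n)
    for _ in range(steps):
        phi += dt * (omega + h * (D * np.sin(phi[None, :] - phi[:, None])).sum(axis=1))
    return np.abs(np.exp(1j * phi).mean())    # order parameter r

# below vs. above the critical coupling: phases disperse vs. lock
r_weak, r_strong = simulate_kuramoto(h=0.001), simulate_kuramoto(h=0.2)
print(round(r_weak, 2), round(r_strong, 2))
```

For weak coupling r stays near the finite-size floor of order N^(−1/2), while for strong coupling the phases lock and r approaches one.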
Strictly speaking, the system (2) does not represent the Kuramoto model in its original form, as there the coupling between nodes k and l was considered isotropic and homogeneous, i.e., D_kl = 1 for all connections, by which the model reduces to

φ̇_k = ω_k + h ∑_{l=1}^{N} sin(φ_l − φ_k).

For the sake of legibility, however, we here refer to (2) also as the Kuramoto model.

By varying the overall coupling strength h, we induced qualitative differences in synchronization, as the order parameter was expected to undergo well-defined bifurcations from an unlocked state to in-phase locking. We simulated both the network of Wilson-Cowan oscillators and the Kuramoto network. For the Wilson-Cowan model, we defined the phase as the quadrant-corrected inverse tangent of the ratio of excitatory and inhibitory units at node k, i.e., φ_k = arctan(E_k/I_k); this phase largely agreed with the Hilbert phase of E_k because of the smoothness of the Wilson-Cowan limit cycle. For the Kuramoto network, the phase was, of course, the state variable under study, which did not require any further definition. The primary outcome variable in all simulations was, hence, r(h) for different network types and, in the case of the Wilson-Cowan network, distinct ranges of input values P_k, as will be explained below in all detail. In addition, we computed the phase locking index of the pair-wise relative phases between nodes, which served as the definition of the functional networks. The precise transform of the Kuramoto network dynamics to the dynamics of relative phases is beyond the scope of the current paper. To study potentially "erroneous" simulations of the phase dynamics, and thus possible "misinterpretations" of structural connectivity when solely looking at functional networks defined via phase synchrony, we ignored for the Kuramoto network the amplitude dependency (3) of the connectivity matrix and simply identified D_kl with C_kl.
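The quadrant-corrected inverse tangent is the two-argument arctangent. A small sketch with synthetic, mean-centered signals standing in for the excitatory and inhibitory rates (the amplitude and the rotation sense are arbitrary choices here):

```python
import numpy as np

# synthetic limit-cycle deviations standing in for the centered rates
t = np.linspace(0, 10 * 2 * np.pi, 5000)   # ten full cycles
Ek = 0.3 * np.cos(t)
Ik = 0.3 * np.sin(t)

# quadrant-corrected inverse tangent of the ratio E/I -> atan2(E, I)
phase = np.arctan2(Ek, Ik)

# unwrap to obtain a monotonous phase evolution across branch cuts
unwrapped = np.unwrap(phase)
total = unwrapped[-1] - unwrapped[0]
print(total)   # magnitude ~ 20*pi, i.e. ten full cycles
```

Note that the signals must be centered about the limit cycle's interior fixed point for the angle to wind around the origin; applying arctan2 to raw, uncentered rates would not yield a proper phase.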
We further accelerated the numerical simulations by adding some small dynamic noise (Stratonovich, 1963; Risken, 1989; Daffertshofer, 1998), so-called Langevin forces G_k(t), in the form of mean-centered Gaussian white noise. The simulated dynamics hence read, for the Kuramoto network,

φ̇_k = ω_k + h ∑_l C_kl sin(φ_l − φ_k) + G_k(t),   (5)

while (4) denotes the Wilson-Cowan network (1) supplemented by Langevin forces in the same manner. Recall that the connectivity in (5) differs from (2) by means of D_kl → C_kl.
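The paper does not spell out the discretization of the Langevin forces; a standard Euler-Maruyama sketch is given below, with the noise increment scaling as √dt and a variance of 2ε per unit time as one common convention (an assumption on our part).

```python
import numpy as np

def euler_maruyama_step(phi, omega, C, h, eps, dt, rng):
    """One Euler-forward step of the noisy phase dynamics; the Gaussian
    white-noise increment scales with sqrt(dt), not with dt
    (assumed convention: noise variance 2*eps per unit time)."""
    drift = omega + h * (C * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
    noise = np.sqrt(2.0 * eps * dt) * rng.standard_normal(phi.size)
    return phi + dt * drift + noise

# sanity check: without coupling and with zero frequencies the phases simply
# diffuse, so their variance grows like 2*eps*t
rng = np.random.default_rng(6)
n, eps, dt, steps = 66, 1e-2, 2e-2, 50
C, omega = np.zeros((n, n)), np.zeros(n)
finals = []
for _ in range(200):
    phi = np.zeros(n)
    for _ in range(steps):
        phi = euler_maruyama_step(phi, omega, C, h=0.0, eps=eps, dt=dt, rng=rng)
    finals.append(phi)
var = np.var(np.concatenate(finals))
print(var)   # should be close to 2*eps*steps*dt = 0.02
```

The √dt scaling is the essential point: scaling the noise with dt instead would make its effect vanish as the step-size shrinks.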
Throughout the simulations we fixed the parameter settings as: a_E = 1.2, a_I = 2, c_EE = 5, c_II = 1, c_IE = 6, c_EI = 10, u_E = 2, u_I = 3.5. The strength of the dynamical noise was always considered very small (it only served to accelerate the numerics, not to discuss the impact of stochastic forces). It was set to ε = 10⁻⁴ for all simulations of the Wilson-Cowan network (4) and to ε = 10⁻² for the Kuramoto network (5). Simulations were realized using a simple Euler-forward scheme with step-size 10⁻². Per run, a total number of 10⁵ samples was simulated. For each network, simulations were repeated with 10 different realizations of constant but random inputs P_k (Wilson-Cowan oscillators) or constant but random natural frequencies ω_k (Kuramoto oscillators). In addition, for the small-world network, new C_kl matrices were generated with a different rewiring pattern for each realization. Each of these 10 realizations was again repeated five times with different initial values of E_k and I_k, or φ_k. The resulting r(h) values were computed over the final 100 samples of every run and averaged over all simulations.

In sum, the phase dynamics can, in good approximation, be cast into the form of a Kuramoto network provided the connectivity matrix is corrected by means of (3). This correction yields a non-trivial amplitude dependence of the connectivity at the level of the phase dynamics. Since S is a sigmoid function, S′ is bell-shaped, implying a change in connectivity D_kl whenever the parameter x^(0)_{E,k} is altered, e.g., by shifting the center of the Wilson-Cowan limit cycle at node k and/or l. This rather global dependence is supplemented by the here more important node-by-node dependence: when the amplitudes R_k differ per node, the ratio R_l/R_k in (3) directly affects the value of D_kl, which can, strictly speaking, be entirely independent of the choice of the connectivity matrix C_kl.
Put differently, the structural connectivity at the neural mass level does not necessarily agree with the structural connectivity at the phase dynamics level.
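This discrepancy is easy to illustrate. Taking the amplitude correction in the schematic form D_kl ∝ (R_l/R_k) S′(x) C_kl suggested by (3) (prefactors omitted; the operating point x is here, for simplicity, assumed identical at all nodes), node-dependent amplitudes render D asymmetric even when the structural matrix C is symmetric:

```python
import numpy as np

def S_prime(x):
    # derivative of the logistic sigmoid S(x) = 1 / (1 + exp(-x))
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

N = 6
C = np.ones((N, N)) - np.eye(N)                   # symmetric structural connectivity
R = np.where(np.arange(N) < N // 2, 0.05, 0.4)    # bimodal amplitude distribution
x0 = 0.5                                          # common operating point (assumption)

# amplitude-corrected coupling (schematic form, prefactors omitted)
D = C * np.outer(1.0 / R, R) * S_prime(x0)        # D[k, l] ~ (R_l / R_k) * S'(x0) * C[k, l]

print(np.allclose(C, C.T), np.allclose(D, D.T))   # True False
```

The ratio D_kl/D_lk equals (R_l/R_k)², so the mismatch between the two connectivity levels grows quadratically with the amplitude disparity between nodes.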
Given our interest in amplitude dependency, we finally add a note about "large" amplitudes. In line with Appendix Eq. A.7, including larger amplitudes yields a slight modification of the phase dynamics, which we here abbreviate as

φ̇_k = ω_k + h ∑_l D̃_kl sin(φ_l − φ_k + α_kl).   (6)

Interestingly, the presence of large amplitudes yields, apart from slightly different coupling coefficients D̃_kl, phase shifts α_kl that translate to finite transmission delays. Prior studies that incorporate transmission delays into phase oscillators have revealed elaborate synchronization behaviors (Zanette, 2000; Jeong et al., 2002). The more complex dynamics due to α_kl suggests the notion of frustration, whereby the interaction functions require some finite phase offset in order to vanish (Acebron et al., 2005). For a more detailed discussion we refer to a recent review by Breakspear et al. (2010). Note that for our analytical estimates we always consider the case in which Eqs. (2) and (3) apply to good approximation.

Simulations
More recently, several research groups started investigating the relationship between structural and functional connectivity, suggesting that functional connectivity may indeed resemble aspects of structural connectivity, at least to some extent (Lebeau and Whittington, 2005; Ingram et al., 2006; Honey et al., 2007, 2009, 2010; Voss and Schiff, 2009; D'Angelo et al., 2010). In most studies, a fixed structural architecture was implemented based on, for instance, the cortical structure of the cat (Zhou et al., 2007) or the macaque neo-cortex (Honey et al., 2007). Yet it is unclear how variations in the network properties at the structural level, or fixed network properties with variations by means of (node-dependent) amplitudes, may affect the synchronization strength and more global network characteristics at the functional level.
Synchronization was quantified via the phase locking index or phase uniformity r, defined as (Mardia and Jupp, 2000)

r = | (1/N) ∑_{k=1}^{N} e^{iφ_k} |.

This index agrees with the so-called Kuramoto order parameter and reflects the degree of alignment of the different phases in the network (not of the relative phases).
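The phase uniformity is a one-liner over the phases; the two extreme cases make its meaning explicit:

```python
import numpy as np

def phase_uniformity(phi):
    # Kuramoto order parameter / phase uniformity r = |<exp(i * phi)>|
    return np.abs(np.exp(1j * np.asarray(phi)).mean())

# fully locked phases -> r = 1
locked = np.full(100, 0.7)
# maximally dispersed phases (evenly spread over the circle) -> r = 0
dispersed = np.linspace(0, 2 * np.pi, 100, endpoint=False)

print(phase_uniformity(locked), phase_uniformity(dispersed))
```

Note that r is computed from the individual phases, not from pair-wise relative phases; the latter enter only via the phase locking index that defines the functional networks.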

Small-world network
The model for generating small-world networks employed here was introduced by Watts and Strogatz (1998) to generate graphs with high clustering and low path length (high efficiency). Starting from an ordered network on a ring lattice, where nodes are only connected to a small number of direct neighbors, connections are subsequently rewired to a random (distant) node with a certain probability. The introduction of a few random connections in an ordered network drastically increases the synchronizability of the network (Watts and Strogatz, 1998; Barahona and Pecora, 2002; Motter et al., 2005; Zhou and Kurths, 2006; Stam and Reijneveld, 2007; Wu et al., 2008; Chen et al., 2009). We used a network with an average degree of 10 and a rewiring probability of 0.2. An example of a C_kl matrix is given in Figure 5 below.
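A compact sketch of the Watts-Strogatz construction with these parameters is given below; the function name is ours, and the edge count can drop marginally below n·k/2 when a rewired edge happens to coincide with a lattice edge placed later.

```python
import numpy as np

def watts_strogatz(n, k, p, rng):
    """Ring lattice of n nodes, each linked to its k nearest neighbors (k even),
    with every edge rewired to a random node with probability p."""
    A = np.zeros((n, n), dtype=int)
    for u in range(n):
        for j in range(1, k // 2 + 1):
            v = (u + j) % n
            if rng.random() < p:
                # rewire: choose a new target, avoiding self-loops and doubles
                candidates = np.flatnonzero((A[u] == 0) & (np.arange(n) != u))
                v = rng.choice(candidates)
            A[u, v] = A[v, u] = 1
    return A

rng = np.random.default_rng(3)
A = watts_strogatz(66, 10, 0.2, rng)
print(A.sum() / 66)   # average degree stays close to k = 10
```

With p = 0.2, roughly one fifth of the lattice edges are replaced by shortcuts, which is enough to drop the average path length while retaining high clustering.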

Hagmann network
Empirical networks are unlikely to have an organization that can be exactly described by one of the theoretical network models.
To study a network that more realistically represents anatomical connections in the human brain, we repeated our simulations on a network that was based on axonal pathways obtained by diffusion spectrum imaging. This dataset has been used to identify the so-called "structural core" of anatomical connections in the human cerebral cortex as described by Hagmann et al. (2008), and it is accessible via http://www.connectomeviewer.org/viewer/datasets. To reduce the size of the network and, by this, accelerate the simulations, the original 998 regions were assigned to a 66-node parcellation scheme and averaged over all five subjects, as was also done in the original study (Hagmann et al., 2008). The resulting weighted, undirected network was subsequently thresholded to obtain a binary network with an average degree of 10. This network served as our connectivity matrix C_kl; see Figure 2 and also Figure 6 below.
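The thresholding step can be sketched as follows. The helper name and the keep-the-strongest-edges rule are our assumptions, since the text only states the target average degree; the random weight matrix merely stands in for the Hagmann data.

```python
import numpy as np

def threshold_to_degree(W, target_mean_degree):
    """Binarize a weighted undirected network by keeping the strongest
    edges until the requested average degree is reached."""
    n = W.shape[0]
    iu = np.triu_indices(n, k=1)                       # upper-triangle edge list
    n_keep = int(round(target_mean_degree * n / 2))    # edges = mean degree * n / 2
    keep = np.argsort(W[iu])[::-1][:n_keep]            # strongest edges first
    A = np.zeros_like(W, dtype=int)
    rows, cols = iu[0][keep], iu[1][keep]
    A[rows, cols] = A[cols, rows] = 1
    return A

rng = np.random.default_rng(4)
n = 66
W = rng.random((n, n))
W = np.triu(W, 1) + np.triu(W, 1).T    # random symmetric weights, zero diagonal
A = threshold_to_degree(W, 10)
print(A.sum() / n)
```

For n = 66 and a target mean degree of 10, exactly 330 of the 2145 possible edges are retained.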

Results
The changes in synchronization r as a function of the overall coupling strength h are summarized in Figure 3. The first thing to notice is that, at a critical h, the Wilson-Cowan model shows a brisk increase in r after which maximal synchronization is reached. Increasing h further beyond a second critical value breaks the synchronization down, as the individual Wilson-Cowan oscillators leave the stable limit cycle regime when their inputs exceed a certain value (Schuster and Wagner, 1990a). That is, the neural masses at the different nodes stop oscillating altogether if the coupling is too strong. Of course, this does not apply to the Kuramoto model since, by construction, its phases keep oscillating. In consequence, r keeps increasing with h and asymptotically reaches maximal synchronization (see the bottom row's panels in Figure 3).
The different choices of P_k intervals resulted in altered synchronization curves. This was most apparent for the [−0.8, −0.7] and [0.7, 0.8] intervals (blue solid lines in Figure 3, third row's panels), for which the amplitudes lie furthest apart. In general, different P_k intervals caused a shift in the critical h, with networks with larger amplitudes reaching maximum synchronization at lower coupling strengths than oscillators with smaller amplitudes. Interestingly, the cases with bimodal amplitude distributions (dashed lines) were less synchronizable than their unimodal counterparts.

For the Kuramoto model, the natural frequencies ω_k were randomly drawn from a Cauchy-Lorentz distribution with width γ = 0.5, and the initial φ_k values at time t = 0 were drawn from a uniform distribution over the interval [0, 2π). The initial E_k and I_k values for the Wilson-Cowan oscillators were uniformly chosen from the interval [0, 1].
By default, the constant input values P_k were drawn from a uniform distribution with −0.25 ≤ P_k ≤ 0.25 for every node k. In order to tackle amplitude effects, however, we also looked at the case in which (selected) nodes displayed oscillations with clearly different amplitudes than others. For this we selected four different intervals from which P_k was drawn: −0.25 ≤ P_k ≤ −0.20; 0.20 ≤ P_k ≤ 0.25; −0.8 ≤ P_k ≤ −0.7; and 0.7 ≤ P_k ≤ 0.8. Simulations were performed using either a single interval or a combination of two intervals, for which the first 50% of the nodes were assigned a P_k from the first interval and the second 50% from the second interval. These combinations were between similar ranges, hence: −0.25 ≤ P_k ≤ −0.20 with 0.20 ≤ P_k ≤ 0.25, and −0.8 ≤ P_k ≤ −0.7 with 0.7 ≤ P_k ≤ 0.8. As shown in the final part of the Appendix, the stationary amplitude at network node k either vanishes, i.e., R_k,stationary = 0, or it takes a finite value that explicitly depends on the input P_k. Given this dependency, systematically varying the input P_k could be used to create different scenarios of amplitude effects which, in particular if selected nodes received significantly different input than others, potentially caused pronounced, qualitative differences between C_kl and D_kl. In these cases, the simulations of (4) and (5) were expected to disagree.
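The input assignment itself is straightforward to sketch; the helper name is ours, and splitting the network into first and second halves follows the 50/50 scheme described above.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 66

def draw_inputs(interval_a, interval_b=None, rng=rng, n=N):
    """Draw constant inputs P_k from one interval, or assign the first half of
    the nodes to interval_a and the second half to interval_b (bimodal case)."""
    if interval_b is None:
        return rng.uniform(*interval_a, n)
    half = n // 2
    return np.concatenate([rng.uniform(*interval_a, half),
                           rng.uniform(*interval_b, n - half)])

P_uni = draw_inputs((-0.25, 0.25))             # default, unimodal inputs
P_bi = draw_inputs((-0.8, -0.7), (0.7, 0.8))   # bimodal case with distant intervals
print(P_bi[:3].round(2), P_bi[-3:].round(2))
```

Via the stationary amplitude's dependence on P_k, the bimodal assignment produces two groups of nodes with clearly different amplitudes R_k, and hence two blocks of clearly different ratios R_l/R_k in the corrected connectivity.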
The connectivity matrices C_kl were chosen as either a fully connected isotropic network, a network with small-world topology generated by the Watts-Strogatz model (Watts and Strogatz, 1998), or an anatomical network reported by Hagmann et al. (2008). For all these connectivities we estimated the functional networks via the phase locking between nodes.

Fully connected homogeneous network
The original Kuramoto network comprises a fully connected homogeneous network; see above. C_kl in this case is an N × N matrix containing ones everywhere except for the diagonal, where all values were set to zero, i.e., we did not allow for self-connections. We note that discarding the diagonal elements is, strictly speaking, not necessary for the phase dynamics (2) or (6), as the coupling via the sine of the relative phase vanishes there, i.e., by construction (or symmetry) there are no self-connections. This argument, however, does not apply to the network (1) or (5); hence we always set C_kk = 0.
Although the Kuramoto network is usually studied for large networks, we chose a network of 66 nodes in order to allow for a better comparison with the Hagmann dataset.

Daffertshofer and van Wijk
Amplitude influences phase connectivity. Frontiers in Neuroinformatics, www.frontiersin.org

An example of this phenomenon is the case of a fully connected network that reaches global synchronization for each of the [−0.8, −0.7] and [0.7, 0.8] P_k intervals separately but appears unable to fully synchronize for a combination of the two. A closer look at the functional synchronization patterns between the individual nodes of this network revealed that two distinct clusters emerged, corresponding to the bimodal inputs and thus to the amplitude distribution (Figure 4).

The large difference between the input intervals, however, prevented them from synchronizing with one another. It is important to note that, if amplitude effects were not taken into account, a full synchronization of the network would have been found. With the current parameter settings, no full global synchronization could be achieved in either the small-world or the Hagmann network. However, partial synchronization patterns could be observed that corresponded neither with the structural connectivity nor with the distribution of amplitudes (Figures 5 and 6). These patterns rapidly emerged and disappeared with varying h. Although the match with the amplitude distribution was not as clear-cut as in the case of the fully connected network (Figure 4), a similar clustering could be observed, by which the functional connectivities turned out to differ not only quantitatively but also qualitatively from the underlying structural connectivity; a fact that would be missed when relying on sole phase oscillators, which show such partial synchronization patterns only in the close vicinity of the critical coupling strength.

Discussion and Conclusion
The introduction of network analysis to neuroscience has paved new ways for the study of neural network organizations. Particular focus has been on the search for complex networks since many of these networks, especially in the neuroinformatics context, are known for their efficiency when transferring and integrating information from local, specialized brain areas, even when they are distant (Sporns and Zwi, 2004).

Figure 5 (caption): In contrast to Figure 4, we here chose the inputs from a bimodal distribution, here the intervals [−0.8, −0.7] and [0.7, 0.8], which causes the amplitudes to differ between the first and second half of the nodes and, consequently, the functional connectivity matrix to disagree with C_kl. Similar to the homogeneous case in Figure 4, the ratios R_l/R_k largely prescribe the functional connectivity, here, however, mixed with the small-world structure C_kl; see, e.g., the pronounced synchrony along the diagonal that is absent in the patterns of amplitude ratios. In addition, dependent on the overall coupling strength h, pronounced clusters of synchronized nodes appear; the corresponding coupling values, h = 25-45, agree with the synchronization regime of this small-world network displayed in Figure 3, middle column, third row, blue dashed line.

If the structural connectivity is isotropic, then the amplitude distribution largely (if not fully) prescribes the functional connectivity pattern, which thus clearly disagrees with the structural connectivity. In consequence, the current example revealed two strongly synchronized local clusters.

We have shown that, even if the local dynamics at every node of a network can be described as a phase dynamics in the form of a Kuramoto network, the connectivity matrix at this level of phases does not necessarily agree with the connectivity at the level of neural mass models describing the firing rates of local neural populations. The connectivity at the phase dynamics level has to be corrected by its amplitude dependency.
This phase level is indeed closely related to the empirically assessed functional connectivity matrix, as the latter, as said, is commonly defined through locking patterns of phases. If relying on Kuramoto-like approximations, the connectivity matrix has to be corrected via the relation (3), which may include a non-trivial amplitude dependency. Especially when the amplitudes differ from node to node, the connectivity at the level of phases can qualitatively differ from the structural connectivity at the level of neural masses or mean firing rates. That is, structural and functional connectivity may differ simply because of the latter's amplitude dependency.
In consequence, phase dynamics and, hence, synchrony patterns should always be analyzed in conjunction with the corresponding amplitude changes. Patterns of global synchrony (phase) may depend on local synchrony (amplitude). This may have profound impacts when linking, for instance, M/EEG studies to neural modeling. Amplitude there translates to (spectral) power, which typically differs between distinct behavioral states or due to pathology. Incorporating these amplitude changes will certainly help to understand how structural and functional network organizations in the cortex, in particular, and in the central nervous system, in general, may relate to one another.

Acknowledgments
We thank the Netherlands Organisation for Scientific Research for financial support (NWO grant # 021-002-047).
Over the years, small-world structural networks have been found for C. elegans (Watts and Strogatz, 1998), the cat cortex, and the macaque (visual) cortex (Sporns and Zwi, 2004). In humans, anatomical connectivity can be estimated in vivo, indirectly via cross-correlation analysis of cortical thickness in structural MRI (He et al., 2007; Chen et al., 2008) and more directly using tractography based on diffusion tensor imaging and diffusion spectrum imaging (Hagmann et al., 2008; Gong et al., 2009). Similar to the structural, i.e., anatomical, connections, functional connections have also been found to display characteristics of complex networks, especially when looking at human functional networks in the resting state, using either fMRI (Salvador et al., 2005; Achard et al., 2006; Van den Heuvel et al., 2008; Ferrarini et al., 2009) or M/EEG (Stam, 2004; Bassett et al., 2006; Stam and Reijneveld, 2007; Bullmore and Sporns, 2009). Changes in these resting state networks appear to relate to neurological and psychiatric diseases like Alzheimer's disease (He et al., 2008; Supekar et al., 2008; de Haan et al., 2009), schizophrenia (Bassett et al., 2008; Liu et al., 2008; Rubinov et al., 2009), ADHD in children (Wang et al., 2009), (removal of) brain tumors (Bartolomei et al., 2006; Bosma et al., 2009), and epileptic seizures (Kramer et al., 2008; Schindler et al., 2008; Ponten et al., 2009), but also to aging (Achard and Bullmore, 2007; Meunier et al., 2009; Micheloyannis et al., 2009) and to different sleep stages (Ferri et al., 2008; Dimitriadis et al., 2009), as well as to foot movements (De Vico Fallani et al., 2008) and finger tapping (Bassett et al., 2006). Do these functional networks precisely match their underlying structural counterparts?
In general, the networks do not agree, especially when the functional networks are solely defined via (phase) synchronization patterns, which is common practice when studying electrophysiological signals, for instance, M/EEG.

Appendix

To show the link between the network of Wilson-Cowan models (1) and the Kuramoto network of phase oscillators (2), we adopt Schuster and Wagner's derivation (Schuster and Wagner, 1990). In contrast to their description of two coupled oscillators, however, we explicitly account for a network structure containing N nodes. The strategy is to consider the Wilson-Cowan model in the oscillatory regime, i.e., in the presence of a stable limit cycle (Figure A1), which is first "mean"-centered, simplifying the expansion of the sigmoid function S. Then, the oscillator is averaged over one cycle, assuming that its amplitude and phase change slowly as compared to the oscillator's frequency. That is, time-dependent amplitude and phase are fixed, the system is integrated over one period to remove all harmonic oscillations, and, subsequently, amplitude and phase are again considered to be time-dependent (Guckenheimer and Holmes, 1990); this procedure is also referred to as a combination of the rotating wave approximation and the slowly varying amplitude approximation (Haken, 1974). The averaging immediately results in the oscillator network that, when assuming weak coupling and small amplitudes, resembles the Kuramoto network.

Let (E_k^(0), I_k^(0)) denote a stationary solution of (1). In principle this can be any solution, but here we identify (E_k^(0), I_k^(0)) with the unstable fixed point (an unstable node, to be precise) within the stable limit cycle (see the intersection point of the nullclines in Figure A1). We investigate the deviation from this solution by means of δE_k = E_k − E_k^(0) and δI_k = I_k − I_k^(0). This "mean"-centering allows for expanding the sigmoid function to the M-th order.
More explicitly, let δE_k(t) = R_k(t) cos(Ωt + φ_k(t)), and δI_k accordingly, where R_k and φ_k are the time-dependent amplitude and phase, respectively, of the network node k, and Ω is a yet unknown (mean) frequency. As said, we assume that amplitude and phase change slowly with respect to Ω and average the system (A.5) over a cycle t ∈ [0, 2π/Ω). This averaging yields the phase dynamics given in the main text, with S′ and S‴ referring to the first and third derivative of the sigmoid function S, respectively (see Figure A2), and with the frequency Ω determined by the averaging. As a last approximation, we consider the case in which all amplitudes R_k are sufficiently small so that their quadratic and higher orders can be ignored. We note that the R_k are the amplitudes of the limit cycles describing (δE_k, δI_k), which do not agree with the mean "activities" of the Wilson-Cowan oscillators, as they are shifted by (E_k^(0), I_k^(0)).