Bumps in Small-World Networks

We consider a network of coupled excitatory and inhibitory theta neurons which is capable of supporting stable spatially-localized “bump” solutions. We randomly add long-range and simultaneously remove short-range connections within the network to form a small-world network and investigate the effects of this rewiring on the existence and stability of the bump solution. We consider two limits in which continuum equations can be derived; bump solutions are fixed points of these equations. We can thus use standard numerical bifurcation analysis to determine the stability of these bumps and to follow them as parameters (such as rewiring probabilities) are varied. We find that under some rewiring schemes bumps are quite robust, whereas in other schemes they can become unstable via Hopf bifurcation or even be destroyed in saddle-node bifurcations.


INTRODUCTION
Spatially-localized "bumps" of activity in neuronal networks have been studied for many years, as they are thought to play a role in short term memory (Camperi and Wang, 1998; Compte et al., 2000; Bressloff, 2012; Wimmer et al., 2014) and the head direction system (Redish et al., 1996; Zhang, 1996), among other phenomena. Some models of bump formation have used a firing rate description (Wilson and Cowan, 1973; Amari, 1977; Laing and Troy, 2003; Owen et al., 2007) while others have considered networks of spiking neurons (Gutkin et al., 2001; Laing and Chow, 2001; Wang, 2001). The simplest models typically have "Mexican-hat" connectivity in a single population of neurons, where nearby neurons are excitatorily coupled and more distant ones are inhibitorily coupled (Ermentrout, 1998; Bressloff, 2012). However, more realistic models consider both excitatory and inhibitory neurons with non-negative connectivity within and between populations (Pinto and Ermentrout, 2001; Blomquist et al., 2005). Almost all previous models have considered homogeneous and isotropic networks, which typically support a continuous family of reflection-symmetric bumps, parameterized by their position in the network. Some exceptions are (Turner, 2009, 2014), in which a spatially-inhomogeneous coupling function is used, and (Thul et al., 2016), in which a spatially-varying random firing threshold is imposed.
In this paper we further investigate the effects of breaking the spatial homogeneity of neural networks which support bump solutions, by randomly adding long-range connections and simultaneously removing short-range connections in a particular formulation of small-world networks (Song and Wang, 2014). Small-world networks (Watts and Strogatz, 1998) have been much studied and there is evidence for the existence of small-worldness in several brain networks (Bullmore and Sporns, 2009). In particular, we are interested in determining how sensitive networks which support bumps are to this type of random rewiring of connections, and thus how precisely networks must be constructed in order to support bumps.
We will consider networks of heterogeneous excitatory and inhibitory theta neurons, the theta neuron being the canonical model for a Type I neuron, for which the onset of firing is through a saddle-node on an invariant circle bifurcation (Ermentrout and Kopell, 1986; Ermentrout, 1996). In several limits such networks are amenable to the use of the Ott/Antonsen ansatz (Ott and Antonsen, 2008, 2009), and we will build on previous work using this ansatz in the study of networks of heterogeneous theta neurons (Luke et al., 2013; Laing, 2014a, 2015; So et al., 2014). We present the model in Section 2.2 and then consider two limiting cases: an infinite number of neurons (Section 2.3) and an infinite ensemble of finite networks with the same connectivity (Section 2.4). Results are given in Section 3 and we conclude in Section 4. The Appendix contains some mathematical manipulations relating to Section 2.4.

MATERIALS AND METHODS

All-to-All Coupled Network
First consider an all-to-all coupled network of N heterogeneous theta neurons whose dynamics are given by

dθ_i/dt = 1 − cos θ_i + (1 + cos θ_i)(I_i + g r)    (1)

τ dr/dt = (1/N) Σ_{j=1}^N P_n(θ_j) − r    (2)

where θ_i ∈ [0, 2π) is the phase of the ith neuron, P_n(θ) = a_n (1 − cos θ)^n, n ∈ ℕ⁺, and a_n is a normalization factor such that ∫_0^{2π} P_n(θ) dθ = 2π. The function P_n is meant to mimic the action potential generated when a neuron fires, i.e., when its phase increases through π; n controls the "sharpness" of this function. The I_i are input currents randomly chosen from some distribution, g is the strength of connectivity within the network (positive for excitatory coupling and negative for inhibitory), and τ is a time constant governing the synaptic dynamics. The variable r is driven up by spiking activity and decays exponentially to zero in the absence of activity, on a timescale τ.
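As an illustrative aside, Equations (1) and (2) can be simulated directly. The following sketch uses forward-Euler integration and Lorentzian-distributed input currents; the function name and all parameter values are illustrative assumptions, not values taken from this paper.

```python
import numpy as np
from math import comb

def simulate_theta_network(N=500, g=1.5, tau=1.0, n=2, I0=-0.2, delta=0.05,
                           T=100.0, dt=0.01, seed=0):
    """Euler simulation of the all-to-all network, Equations (1)-(2).
    Parameter values here are illustrative only."""
    rng = np.random.default_rng(seed)
    # Lorentzian currents via inverse-CDF transform: center I0, width delta
    I = I0 + delta * np.tan(np.pi * (rng.random(N) - 0.5))
    # normalization a_n so that P_n integrates to 2*pi over [0, 2*pi)
    a_n = 2.0**n / comb(2 * n, n)
    theta = rng.uniform(0, 2 * np.pi, N)
    r = 0.0
    for _ in range(int(T / dt)):
        P = a_n * (1.0 - np.cos(theta))**n            # pulse-like firing function
        drive = I + g * r
        dtheta = 1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * drive
        r += dt * (P.mean() - r) / tau                # synaptic variable, Eq. (2)
        theta = (theta + dt * dtheta) % (2 * np.pi)   # phase equation, Eq. (1)
    return theta, r
```

Sampling the Lorentzian via I_0 + Δ tan(π(u − 1/2)) avoids any dependence on external statistics libraries.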
The model (1)-(2) with τ = 0 (i.e., instantaneous synapses) was studied by Luke et al. (2013), who found multistability and oscillatory behavior. The case of τ > 0 was considered in Laing (unpublished), and similar forms of synaptic dynamics have been considered elsewhere (Börgers and Kopell, 2005; Ermentrout, 2006; Coombes and Byrne, unpublished). The model presented below results from generalizing Equations (1) and (2) in several ways. Firstly, we consider two populations of neurons, one excitatory and one inhibitory. Thus, we will have two sets of variables, one for each population. Such a pair of interacting populations was previously considered by Luke et al. (2014), Börgers and Kopell (2005), Coombes and Byrne (unpublished), and Laing (unpublished). Secondly, we consider a spatially-extended network, in which both the excitatory and inhibitory neurons lie on a ring, and are (initially) coupled to a fixed number of neurons either side of them. Networks with similar structure have been studied by many authors (Redish et al., 1996; Compte et al., 2000; Gutkin et al., 2001; Laing and Chow, 2001; Laing, 2014a, 2015).

Model
We consider a network of 2N theta neurons, N excitatory and N inhibitory. Within each population the neurons are arranged in a ring, and there are synaptic connections between and within populations, whose strength depends on the distance between neurons, as in Laing and Chow (2002) and Gutkin et al. (2001) (in the networks we will consider, connection strengths are either 1 or 0, i.e., neurons are either connected or not connected). Inhibitory synapses act on a timescale τ_i, whereas the excitatory ones act on a timescale τ. θ_i ∈ [0, 2π) is the phase of the ith excitatory neuron and φ_i ∈ [0, 2π) is the phase of the ith inhibitory one. The equations are

dθ_i/dt = 1 − cos θ_i + (1 + cos θ_i)(I_i + g_EE v_i − g_EI y_i)    (3)

dφ_i/dt = 1 − cos φ_i + (1 + cos φ_i)(J_i + g_IE u_i − g_II z_i)    (4)

τ dv_i/dt = r_i − v_i    (5)

τ du_i/dt = q_i − u_i    (6)

τ_i dy_i/dt = s_i − y_i    (7)

τ_i dz_i/dt = w_i − z_i    (8)

r_i = [1/(2M_EE + 1)] Σ_{j=i−M_EE}^{i+M_EE} P_n(θ_j)    (9)

q_i = [1/(2M_IE + 1)] Σ_{j=i−M_IE}^{i+M_IE} P_n(θ_j)    (10)

s_i = [1/(2M_EI + 1)] Σ_{j=i−M_EI}^{i+M_EI} P_n(φ_j)    (11)

w_i = [1/(2M_II + 1)] Σ_{j=i−M_II}^{i+M_II} P_n(φ_j)    (12)

where P_n is as in Section 2.1 and indices are taken modulo N. The positive integers M_IE, M_EE, M_EI, and M_II give the width of connectivity from excitatory to inhibitory, excitatory to excitatory, inhibitory to excitatory, and inhibitory to inhibitory populations, respectively. The non-negative quantities g_EE, g_EI, g_IE and g_II give the overall connection strengths within and between the two populations (excitatory to excitatory, inhibitory to excitatory, excitatory to inhibitory, and inhibitory to inhibitory, respectively). The variable v_i (when multiplied by g_EE) gives the excitatory input to the ith excitatory neuron; its dynamics are driven by r_i, which depends on the activity of the excitatory neurons with indices between i − M_EE and i + M_EE. Similarly, u_i (when multiplied by g_IE) gives the excitatory input to the ith inhibitory neuron, and is driven by q_i, which depends on the activity of the excitatory neurons with indices between i − M_IE and i + M_IE. g_EI y_i is the inhibitory input to the ith excitatory neuron, driven by s_i, which depends on the activity of the inhibitory neurons with indices between i − M_EI and i + M_EI.
Lastly, g II z i is the inhibitory input to the ith inhibitory neuron, driven by w i , which depends on the activity of the inhibitory neurons with indices between i − M II and i + M II . For simplicity, and motivated by the results in Pinto and Ermentrout (2001), we assume that the inhibitory synapses act instantaneously, i.e. τ i = 0, and that there are no connections within the inhibitory population, i.e. g II = 0. Thus (8) and (12) become irrelevant and from (7) we have that y i = s i in (3).
The networks are made heterogeneous by randomly choosing the currents I_i from the Lorentzian

h(I) = (Δ/π) / [(I − I_0)² + Δ²]    (13)

and the currents J_i from the Lorentzian

g(J) = (Δ/π) / [(J − J_0)² + Δ²]    (14)

I_0 and J_0 are the centers of these distributions, and for simplicity we assume that both have the same width, Δ. The heterogeneity of the neurons (i.e., the positive value of Δ) is not necessary in order for the network to support bumps, but it is necessary for the Ott/Antonsen ansatz, used extensively below, to be valid. Networks of identical phase oscillators are known to show non-generic behavior which can be studied using the Watanabe/Strogatz ansatz (Watanabe and Strogatz, 1993, 1994). We want to avoid non-generic behavior, and having a heterogeneous network is also more realistic. For typical parameter values we see the behavior shown in Figures 1, 2, i.e., a stable stationary bump in which the active neurons are spatially localized. While these bumps may look superficially like "chimera" states in a ring of oscillators (Abrams and Strogatz, 2004, 2006; Laing, 2009; Panaggio and Abrams, 2015) they are different in one important aspect. Chimera states in the references above occur in networks for which the dynamics depend on only phase differences. Thus these systems are invariant with respect to adding the same constant to all oscillator phases, and can be studied in a rotating coordinate frame in which the synchronous oscillators have zero frequency, i.e., only relative frequencies are meaningful. In contrast, networks of theta neurons like those studied here are not invariant with respect to adding the same constant to all oscillator phases. The actual value of phase matters, and the neurons with zero frequency in Figure 2 have zero frequency simply because their input is not large enough to cause them to fire.
We now want to introduce rewiring parameters in such a way that on average, the number of connections is preserved as the networks are rewired. This is different from other formulations of small-world networks in which additional edges are added (Newman and Watts, 1999; Medvedev, 2014; but see Puljic and Kozma, 2008 for an example in which the number of connections to a node is precisely conserved). The reason for doing this is to keep the balance of excitation and inhibition constant. If we were to add additional connections, for example, within the excitatory population, the results seen might just be a result of increasing the number of connections, rather than their spatial arrangement. We are interested in the effects of rewiring connections from short range to long range, and thus use the form suggested in Song and Wang (2014). We replace Equations (9) to (11) by

r_i = [1/(2M_EE + 1)] Σ_{j=1}^N A^EE_ij P_n(θ_j),  q_i = [1/(2M_IE + 1)] Σ_{j=1}^N A^IE_ij P_n(θ_j),  s_i = [1/(2M_EI + 1)] Σ_{j=1}^N A^EI_ij P_n(φ_j)    (15)

where the entries of the random connectivity matrices are chosen independently with

Prob(A^IE_ij = 1) = 1 − p_1 + (2M_IE + 1)p_1/N if |i − j| ≤ M_IE, and (2M_IE + 1)p_1/N otherwise    (16)

Prob(A^EE_ij = 1) = 1 − p_2 + (2M_EE + 1)p_2/N if |i − j| ≤ M_EE, and (2M_EE + 1)p_2/N otherwise    (17)

Prob(A^EI_ij = 1) = 1 − p_3 + (2M_EI + 1)p_3/N if |i − j| ≤ M_EI, and (2M_EI + 1)p_3/N otherwise    (18)

where |i − j| refers to the shortest distance between neurons i and j, measured on the ring. When p_1 = p_2 = p_3 = 0, Equation (15) reverts to Equations (9) to (11). Note that when p_1 = 1, the probability of A^IE_ij being 1 is independent of i and j, and that the expected number of nonzero entries in a row of A^IE (i.e., the expected number of connections from the excitatory population to an inhibitory neuron) is independent of p_1. Similar statements apply for the other two matrices and their parameters p_2 and p_3. Typical variation of A^IE with p_1 is shown in Figure 3 and it is clear that increasing p_1 interpolates between purely local connections (p_1 = 0) and uniform random connectivity (p_1 = 1).
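The rewiring scheme can be sketched numerically as follows. This is a hypothetical helper, assuming the reconstructed form of Equation (16): a "local" entry (with |i − j| ≤ M on the ring) is present with probability 1 − p + (2M + 1)p/N and any other entry with probability (2M + 1)p/N, which preserves the expected row sum 2M + 1 for every p and gives a uniform connection probability at p = 1.

```python
import numpy as np

def rewired_adjacency(N, M, p, rng):
    """Random 0/1 adjacency matrix with expected row sum (2M+1) for all p,
    following the reconstructed form of Equation (16)."""
    i = np.arange(N)
    dist = np.abs(i[:, None] - i[None, :])
    dist = np.minimum(dist, N - dist)          # shortest distance on the ring
    prob = np.where(dist <= M,
                    1 - p + (2 * M + 1) * p / N,   # local connections
                    (2 * M + 1) * p / N)           # long-range connections
    return (rng.random((N, N)) < prob).astype(int)
```

At p = 0 every row has exactly 2M + 1 ones (the purely local band); at p = 1 the matrix is a uniform random graph with the same expected degree.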
We could simply simulate Equations (3)-(6) with Equation (15) for particular values of p 1 , p 2 , and p 3 but we would like to gain a deeper understanding of the dynamics of such a network. The first approach is to take the continuum limit in which the number of neurons in each network goes to infinity, in a particular way.

Continuum Limit
We take the continuum limit: N → ∞ with M_IE/N → σ_IE, M_EE/N → σ_EE and M_EI/N → σ_EI, and set the circumference of the ring of neurons to be 1. In this limit the sums (Equation 15) are replaced by integrals (more specifically, convolutions) with the connectivity kernels

G_IE(x, p_1) = 1 − p_1 + 2σ_IE p_1 if |x| ≤ σ_IE, and 2σ_IE p_1 otherwise    (19)

G_EE(x, p_2) = 1 − p_2 + 2σ_EE p_2 if |x| ≤ σ_EE, and 2σ_EE p_2 otherwise    (20)

G_EI(x, p_3) = 1 − p_3 + 2σ_EI p_3 if |x| ≤ σ_EI, and 2σ_EI p_3 otherwise    (21)

where G_IE(x, p_1) is the probability that a point in the excitatory population is connected to a point in the inhibitory population a distance x away, and similarly for the other two kernels. The effect of varying p_j, j = 1, 2, 3, on one of the functions in Equations (19)-(21) is shown in Figure 4. Taking G_IE for example, we see that ∫_{−1/2}^{1/2} G_IE(x, p_1) dx = 2σ_IE, independent of p_1, i.e., the expected total number of connections is preserved, and similarly for the other two functions.
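The kernel normalization can be checked numerically. The code below assumes the plateau-plus-background form reconstructed above (local value 1 − p + 2σp, background value 2σp) and verifies that the integral over the ring equals 2σ independently of p.

```python
import numpy as np

def G(x, p, sigma):
    """Continuum connectivity kernel (assumed plateau-plus-background form):
    1 - p + 2*sigma*p within ring distance sigma, 2*sigma*p elsewhere."""
    d = np.minimum(np.abs(x), 1.0 - np.abs(x))   # shortest distance on the ring
    return np.where(d <= sigma, 1.0 - p + 2.0 * sigma * p, 2.0 * sigma * p)

# Riemann-sum check that the normalization is independent of p
dx = 1e-5
xs = np.arange(-0.5, 0.5, dx)
for p in (0.0, 0.3, 1.0):
    integral = G(xs, p, 0.05).sum() * dx
    assert abs(integral - 0.1) < 1e-3            # integral = 2*sigma for all p
```

The check mirrors the statement in the text: rewiring redistributes connections in space without changing their expected total number.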
Taking the continuum limit of Equations (3)-(6) we describe the dynamics of the θ_i and φ_i in terms of probability densities F_E(θ, x, I, t) and F_I(φ, x, J, t), respectively, where x and t are (continuous) space and time, and I and J are random variables with densities h(I) and g(J) respectively. F_E satisfies the continuity equation (Luke et al., 2013)

∂F_E/∂t + ∂/∂θ {F_E [1 − cos θ + (1 + cos θ)(I + g_EE v(x, t) − g_EI s(x, t))]} = 0    (22)

and similarly F_I satisfies

∂F_I/∂t + ∂/∂φ {F_I [1 − cos φ + (1 + cos φ)(J + g_IE u(x, t))]} = 0    (23)

where, for example, s(x, t) = ∫_0^1 G_EI(|x − y|, p_3) ∫ ∫_0^{2π} F_I(φ, y, J, t) a_n (1 − cos φ)^n dφ dJ dy. The forms of Equations (22) and (23) mean that they are amenable to the use of the Ott/Antonsen ansatz (Ott and Antonsen, 2008, 2009). This ansatz states that if the neurons are not identical (i.e., Δ > 0 for the networks studied here), solutions of the continuity equations [Equations (22) and (23)] decay exponentially onto a lower-dimensional manifold on which the θ and φ dependence of F_E and F_I, respectively, have a particular form. This form is a Fourier series in θ (or φ) in which the nth coefficient is some function to the nth power [see Equation (A8), for example]. Thus, we can restrict Equations (22) and (23) to this manifold, thereby simplifying the dynamics. The standard Kuramoto order parameter for an all-to-all coupled network with phases {θ_j} is the expected value of e^{iθ_j} (Strogatz, 2000). For the network studied here we can define the analogous spatially-dependent order parameters for the excitatory and inhibitory networks as

z_E(x, t) = ∫ ∫_0^{2π} F_E(θ, x, I, t) e^{iθ} dθ dI    (24)

and

z_I(x, t) = ∫ ∫_0^{2π} F_I(φ, x, J, t) e^{iφ} dφ dJ    (25)

respectively. For fixed x and t, z_E(x, t) is a complex number with a phase and a magnitude. The phase gives the most likely value of θ and the magnitude governs the "sharpness" of the probability distribution of θ (at that x and t), and similarly for z_I(x, t) and φ (Laing, 2014a, 2015). We can also determine from z_E and z_I the instantaneous firing rate of each population (see Section 3.1 and Montbrió et al., 2015). Performing manipulations as in Laing (2014a, 2015), Luke et al. (2013), and So et al. (2014) we obtain the continuum limit of Equations (3)-(6): evolution equations for z_E and z_I,

∂z_E(x, t)/∂t = (1/2){−i[z_E − 1]² + [z_E + 1]²[−Δ + i(I_0 + g_EE v(x, t) − g_EI s(x, t))]}    (31)

∂z_I(x, t)/∂t = (1/2){−i[z_I − 1]² + [z_I + 1]²[−Δ + i(J_0 + g_IE u(x, t))]}    (32)

together with Equations (24) and (25), where

τ ∂v(x, t)/∂t = r(x, t) − v(x, t),  τ ∂u(x, t)/∂t = q(x, t) − u(x, t)

and

r(x, t) = ∫_0^1 G_EE(|x − y|, p_2) H(z_E(y, t); n) dy    (33)

q(x, t) = ∫_0^1 G_IE(|x − y|, p_1) H(z_E(y, t); n) dy    (34)

s(x, t) = ∫_0^1 G_EI(|x − y|, p_3) H(z_I(y, t); n) dy    (35)

where

H(z; n) = a_n [C_0 + Σ_{q=1}^n C_q (z^q + z̄^q)]    (36)

the C_q being the coefficients in the Fourier expansion of (1 − cos θ)^n, and where by |x − y| in Equations (33)-(35) we mean the shortest distance between x and y given that they are both points on a circle, i.e., |x − y| → min (|x − y|, 1 − |x − y|).
The advantage of this continuum formulation is that bumps like that in Figure 1 are fixed points of Equations (31) and (32) and Equations (24) and (25). Once these equations have been spatially discretized, we can find their fixed points using Newton's method, and determine the stability of these fixed points by finding the eigenvalues of the linearization around them. We can also follow these fixed points as parameters are varied, detecting (local) bifurcations (Laing, 2014b). The results of varying p_1, p_2 and p_3 independently are shown in Section 3.1.
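The fixed-point-plus-eigenvalues workflow can be sketched generically. The code below is not the discretized bump system; it is a minimal Newton solver with a finite-difference Jacobian, applied to a toy two-dimensional vector field, purely to illustrate the procedure (find a zero of the right-hand side, then examine the eigenvalues of the Jacobian there).

```python
import numpy as np

def newton_fixed_point(f, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Find x with f(x) = 0 by Newton's method using a finite-difference
    Jacobian; return x and the Jacobian for linear stability analysis
    (eigenvalues with positive real part indicate instability)."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        fx = f(x)
        J = np.empty((x.size, x.size))
        for j in range(x.size):                # column-by-column FD Jacobian
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (f(xp) - fx) / eps
        step = np.linalg.solve(J, fx)
        x -= step
        if np.linalg.norm(step) < tol:
            break
    return x, J

# toy usage: fixed point of dx/dt = f(x) with f(x) = (x0 - x0^3, -x1)
f = lambda x: np.array([x[0] - x[0]**3, -x[1]])
xstar, J = newton_fixed_point(f, np.array([0.9, 0.1]))
eigs = np.linalg.eigvals(J)
# the fixed point (1, 0) is linearly stable: all eigenvalues have negative real part
```

In the paper's setting f would be the spatially-discretized right-hand side of Equations (31)-(35) with (24) and (25), and the same eigenvalue test detects the Hopf and saddle-node bifurcations reported in Section 3.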

Infinite Ensembles
We now consider the case where N is fixed and finite, and so are the matrices A IE , A EE and A EI , but we average over an infinite ensemble of networks with these connectivities, where each member of the ensemble has a different (but consistent) realization of the random currents I i and J i (Barlev et al., 2011;Laing et al., 2012). This procedure results in 4N ordinary differential equations (ODEs), 2N of them for complex quantities and the other 2N for real quantities. Thus, there is no reduction of dimension from the original system (Equations 3-6), but as in Section 2.3, bump states will be fixed points of these ODEs.
Letting the number of members in the ensemble go to infinity, we describe the state of the excitatory network by the probability density function f^E(θ_1, ..., θ_N; I_1, ..., I_N; t) and that of the inhibitory one by f^I(φ_1, ..., φ_N; J_1, ..., J_N; t), which satisfy the continuity equations

∂f^E/∂t + Σ_{j=1}^N ∂/∂θ_j [f^E dθ_j/dt] = 0    (40)

and

∂f^I/∂t + Σ_{j=1}^N ∂/∂φ_j [f^I dφ_j/dt] = 0    (41)

where dθ_j/dt and dφ_j/dt are given by Equations (3) and (4). Performing the manipulations in the Appendix we obtain

dz^E_j/dt = (1/2){−i[z^E_j − 1]² + [z^E_j + 1]²[−Δ + i(I_0 + g_EE v_j − g_EI s_j)]}    (42)

dz^I_j/dt = (1/2){−i[z^I_j − 1]² + [z^I_j + 1]²[−Δ + i(J_0 + g_IE u_j)]}    (43)

for j = 1, 2, ..., N, where

τ dv_i/dt = r_i − v_i    (44)

τ du_i/dt = q_i − u_i    (45)

r_i = [1/(2M_EE + 1)] Σ_{j=1}^N A^EE_ij H(z^E_j; n)    (46)

q_i = [1/(2M_IE + 1)] Σ_{j=1}^N A^IE_ij H(z^E_j; n)    (47)

s_i = [1/(2M_EI + 1)] Σ_{j=1}^N A^EI_ij H(z^I_j; n)    (48)

for i = 1, 2, ..., N. Equations (42)-(48) form a complete description of the expected behavior of a network with connectivities given by the matrices A^IE, A^EE and A^EI. Note the similarities with Equations (31)-(35) and Equations (24) and (25). As mentioned above, the advantage of this formulation is that states like that in Figure 1 will be fixed points of Equations (42)-(48), for the specified connectivities.
Recalling that the matrices A^IE, A^EE and A^EI depend on the parameters p_1, p_2 and p_3 respectively, we now investigate how solutions of Equations (42)-(48) depend on these parameters. One difficulty in trying to vary, say, p_1, is that the entries of A^IE do not depend continuously on p_1. Indeed, as presented, one should recalculate A^IE each time p_1 is changed. In order to generate results comparable with those from Section 2.3 we introduce a consistent family of matrices, following Medvedev (2014). Consider A^IE (similar procedures apply for the other two matrices) and define an N × N matrix r, each entry of which is independently and randomly chosen from a uniform distribution on the interval (0, 1). The matrix r is now considered to be fixed, and we define A^IE(p_1) as follows:

[A^IE(p_1)]_ij = Θ(1 − p_1 + (2M_IE + 1)p_1/N − r_ij) if |i − j| ≤ M_IE, and Θ((2M_IE + 1)p_1/N − r_ij) otherwise    (49)

where Θ is the Heaviside step function and the indices are taken modulo N. Comparing this with Equation (16) we see that for a fixed p_1, generating a new r and using Equation (49) is equivalent to generating A^IE using Equation (16). The reason for using Equation (49) is that since the r_ij are chosen once and then fixed, an entry in A^IE will switch from 0 to 1 (or vice versa) at most once as p_1 is varied monotonically in the interval [0, 1].
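A sketch of this construction, assuming the reconstructed form of Equation (49) (the fixed random numbers r_ij are thresholded against the same probabilities as in Equation 16):

```python
import numpy as np

def adjacency_family(N, M, rng):
    """Return a function p -> A(p) built from a single fixed matrix of
    uniform random numbers, following the reconstructed Equation (49):
    entry (i, j) is 1 iff its threshold exceeds r_ij, so each entry flips
    at most once as p is varied monotonically."""
    r = rng.random((N, N))                     # chosen once, then fixed
    i = np.arange(N)
    dist = np.abs(i[:, None] - i[None, :])
    dist = np.minimum(dist, N - dist)          # shortest distance on the ring
    local = dist <= M

    def A(p):
        thresh = np.where(local,
                          1 - p + (2 * M + 1) * p / N,
                          (2 * M + 1) * p / N)
        return (r < thresh).astype(int)

    return A
```

The monotone single-flip property follows because the local threshold decreases with p while the long-range threshold increases, so local entries can only switch off and long-range entries can only switch on.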
The effects of quasistatically increasing p 1 and p 3 for Equations (42)-(48) are shown in Section 3.2.

Results for Continuum Limit
For the system Equations (31) and (32) and Equations (24) and (25) we discretize the spatial domain into 1024 evenly spaced points and approximate the integrals in Equations (33)-(35) with Riemann sums. We numerically integrate the spatially-discretized evolution equations in time, using appropriate initial conditions, until a steady state is reached. This steady state is then continued using pseudo-arclength continuation, and the stability of the solutions found is determined by examining the eigenvalues of the Jacobian evaluated at them (Laing, 2014b). The increment between successive values of the p_i found during continuation is not fixed, and the numerical results found were interpolated to a uniform grid for plotting in Figures 5, 7, 8. We consider varying p_1, p_2 and p_3 independently, keeping the other two parameters fixed at zero. The results of varying p_1 are shown in Figure 5, where we plot the firing rate of the two populations, derived as Re(w_i)/π where w_i = (1 − z̄_i)/(1 + z̄_i) for i = I, E, as in Montbrió et al. (2015), where the z_i are fixed points of Equations (31) and (32). We see an increase and then decrease in bump width as p_1 is increased. There is also a pair of supercritical Hopf bifurcations, between which the bump is unstable (it is only weakly unstable, with the rightmost eigenvalue of the Jacobian having a maximal real part of 0.015 in this interval). At the leftmost Hopf bifurcation the Jacobian has eigenvalues ±1.8191i and at the rightmost it has eigenvalues ±1.7972i, with all other eigenvalues having negative real parts. One notable aspect is the increase in firing rate of the inhibitory population "outside" the bump as p_1 is increased, such that when p_1 = 1 the firing rate in this population is spatially homogeneous. This is to be expected, as there are no inhibitory-to-inhibitory connections, and when p_1 = 1 all inhibitory neurons receive the same input from the excitatory population.
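The conversion from order parameter to firing rate used in Figure 5 is compact enough to state in code; the function below implements Re(w)/π with w = (1 − z̄)/(1 + z̄), as in Montbrió et al. (2015).

```python
import numpy as np

def firing_rate(z):
    """Instantaneous firing rate from the complex order parameter z:
    rate = Re(w)/pi with w = (1 - conj(z)) / (1 + conj(z))."""
    w = (1.0 - np.conj(z)) / (1.0 + np.conj(z))
    return np.real(w) / np.pi

# a fully incoherent population (z = 0) fires at rate 1/pi
assert np.isclose(firing_rate(np.array([0.0 + 0.0j]))[0], 1.0 / np.pi)
```

As the population becomes more coherent near θ = 0 (z real and close to 1), the rate decreases toward zero, consistent with the quiescent neurons outside the bump.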
Increasing p 2 while keeping p 1 = p 3 = 0 we find that the bump undergoes a Hopf bifurcation (Jacobian has eigenvalues ±0.3404i) and then is destroyed in a saddle-node bifurcation at p 2 ≈ 0.48, as shown in Figure 6. The behavior of the bumps for 0 ≤ p 2 ≤ 0.48 is shown in Figure 7.
Varying p 3 we obtain Figure 8, where there are no bifurcations as p 3 is increased all the way to 1, corresponding to the case where all excitatory neurons feel the same inhibition, just a weighted mean of the output from the inhibitory population. We again see an increase and then slight decrease in bump width as p 3 is increased.
While a Hopf bifurcation of a bump may seem undesirable from a neurocomputational point of view, it should be kept in mind that oscillations are an essential phenomenon in many different neural networks, and they are widely studied (Ashwin et al., 2016).
We have only varied one of p 1 , p 2 and p 3 , keeping the other two probabilities at zero. A clearer picture of the system's behavior could be obtained by simultaneously varying two, or all three, of these probabilities. We leave this as future work, but mention that for the special case p 1 = p 2 = p 3 = p, the bump persists and is stable up to p ≈ 0.49, where it undergoes a saddle-node bifurcation (not shown).

Results for Infinite Ensemble
This section refers to Equations (42)-(48). In Figure 9 we show the results of slowly increasing p 1 , while keeping p 2 = p 3 = 0. We initially set p 1 = 0 and integrated Equations (42)-(48) to a steady state, using initial conditions that give a bump solution. We then increased p 1 by 0.01 and integrated Equations (42)-(48) again for 10,000 time units, using as an initial condition the final state of the previous integration. We continued this process up to p 1 = 1. The firing rate for the jth excitatory neuron is Re(w j )/π where w j = (1 −z E j )/(1 +z E j ), and similarly for an inhibitory neuron. Comparing Figure 9 with Figure 5 we see the same behavior, the main difference being that the bump now moves in an unpredictable way around the domain as p 1 is increased. This is due to the system no longer being translationally invariant, and the bump moving to a position in which it is stable (Thul et al., 2016). Unlike the situation shown in Figure 5 we did not observe any Hopf bifurcations, for this realization of the A IE . Presumably this is also a result of breaking the translational invariance and the weakly unstable nature of the bump shown in Figure 5 between the Hopf bifurcations.
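The quasistatic sweep described above can be sketched as follows; `step_ode` is an assumed user-supplied integrator interface (one time step of Equations 42-48, say), not code from the paper, and the toy usage substitutes a scalar ODE so the sketch is self-contained.

```python
import numpy as np

def quasistatic_sweep(step_ode, state0, p_values, T=10_000.0, dt=0.05):
    """Quasistatic parameter sweep: at each p, integrate to (near) steady
    state, starting from the final state at the previous p.  step_ode(state,
    p, dt) is an assumed one-step integrator interface."""
    state = state0.copy()
    history = []
    for p in p_values:
        for _ in range(int(T / dt)):
            state = step_ode(state, p, dt)     # relax at fixed p
        history.append(state.copy())           # record the attained state
    return history

# toy usage with dx/dt = p - x, whose steady state is x* = p
step = lambda x, p, dt: x + dt * (p - x)
hist = quasistatic_sweep(step, np.array([0.0]),
                         np.arange(0, 1.01, 0.25), T=50, dt=0.05)
```

Reusing the final state of each stage as the next initial condition is what lets a stable branch be tracked continuously, as in the p_1 and p_3 sweeps of Figures 9 and 10.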
Repeating this process as p_3 is varied with p_1 = p_2 = 0 we obtain the results in Figure 10. Comparing with Figure 8 we see very good agreement, although the bump does move considerably for small p_3, as in Figure 9. Varying p_2 with p_1 = p_3 = 0 we obtain similar results to those in Figure 7 (not shown). Typical behavior of Equations (42)-(48) with p_2 = 0.3 (i.e., beyond the Hopf bifurcation shown in Figure 7) is shown in Figure 11, where the oscillations are clearly seen.

Results for Original Network
To verify the results obtained above we ran the full network Equations (3)-(6) but using the connectivity Equation (49) (and similar constructions for A EE and A EI ) to calculate Equation (15). The frequency was measured directly from simulations. Varying p 1 we obtain the results in Figure 12; again, no oscillatory behavior associated with a Hopf bifurcation was observed and the results are similar to those in Figure 9. Varying p 3 we obtained Figure 13 (compare with Figure 10). Figure 14 shows oscillatory behavior at p 2 = 0.3, p 1 = p 3 = 0 (the same parameter values as used in Figure 11).
Note that the results in Figures 9-14 are each for a single realization of a typical (parameterized) small-world network. To gain insight into general small-world networks it would be of interest to study the statistics of the behavior of such networks.

DISCUSSION
We have considered the effects of randomly adding long-range and simultaneously removing short-range connections in a network of model theta neurons which is capable of supporting spatially localized bump solutions. Such rewiring makes the networks small-world, at least for small values of the rewiring probabilities. By using theta neurons we are able to use the Ott/Antonsen ansatz to derive descriptions of the networks in two limits: an infinite number of neurons, and an infinite ensemble of finite networks, each with the same connectivity. The usefulness of this is that the bumps of interest are fixed points of the dynamical equations derived in these ways, and can thus be found, their stability determined, and followed as parameters are varied using standard dynamical systems techniques.
For the parameters chosen we found bumps to be surprisingly robust: in several cases a rewiring probability could be taken from 0 to 1 without destroying a bump. However, rewiring connections within the excitatory population (increasing p 2 ) was found to destabilize a bump through a Hopf bifurcation and later destroy the unstable bump in a saddle-node bifurcation. Simulations of the full network were used to verify our results.
The network studied has many parameters: the spatial spread of local couplings, the timescale of excitatory synapses, the connection strengths within and between populations, and the distributions of heterogeneous input currents. These were all set so that the network without rewiring supported a stable bump solution, but we have not investigated the effects of varying any of these parameters. However, even without considering rewiring, Equations (31)-(35) and Equations (24) and (25) provide a framework for investigating the effects of varying these parameters on the existence and stability of bump solutions, since these continuum equations are derived directly from networks of spiking neurons, unlike many neural field models.

APPENDIX
Mathematical Details Relating to Section 2.4

In the limit of an infinite ensemble we have

r_i = [1/(2M_EE + 1)] Σ_{j=1}^N A^EE_ij ∫ ∫_0^{2π} P_n(θ_j) f^E_j(θ_j, I_j, t) dθ_j dI_j    (A1)

q_i = [1/(2M_IE + 1)] Σ_{j=1}^N A^IE_ij ∫ ∫_0^{2π} P_n(θ_j) f^E_j(θ_j, I_j, t) dθ_j dI_j    (A2)

s_i = [1/(2M_EI + 1)] Σ_{j=1}^N A^EI_ij ∫ ∫_0^{2π} P_n(φ_j) f^I_j(φ_j, J_j, t) dφ_j dJ_j    (A3)

where f^E_j(θ_j, I_j, t) is the marginal distribution for θ_j, given by

f^E_j(θ_j, I_j, t) = ∫···∫ f^E({θ}; {I}; t) Π_{k≠j} dθ_k dI_k    (A4)

and similarly

f^I_j(φ_j, J_j, t) = ∫···∫ f^I({φ}; {J}; t) Π_{k≠j} dφ_k dJ_k    (A5)

Multiplying the continuity equation (40) by Π_{k≠j} dθ_k dI_k and integrating [and doing the same to Equation (41)] we obtain

∂f^E_j/∂t + ∂/∂θ_j [f^E_j dθ_j/dt] = 0    (A6)

and

∂f^I_j/∂t + ∂/∂φ_j [f^I_j dφ_j/dt] = 0    (A7)

Following Ott and Antonsen (2008) we write

f^E_j(θ_j, I_j, t) = [h(I_j)/(2π)] {1 + Σ_{n=1}^∞ [α^E_j(I_j, t)]^n e^{inθ_j} + c.c.}    (A8)

and

f^I_j(φ_j, J_j, t) = [g(J_j)/(2π)] {1 + Σ_{n=1}^∞ [α^I_j(J_j, t)]^n e^{inφ_j} + c.c.}    (A9)

for some functions α^E_j(I_j, t) and α^I_j(J_j, t), where "c.c." means the complex conjugate of the previous term. Substituting Equation (A8) into Equation (A6) and Equation (A9) into Equation (A7) we find that

∂α^E_j/∂t = −i{(I_j + g_EE v_j − g_EI s_j − 1)/2 + (I_j + g_EE v_j − g_EI s_j + 1)α^E_j + [(I_j + g_EE v_j − g_EI s_j − 1)/2](α^E_j)²}    (A10)

and

∂α^I_j/∂t = −i{(J_j + g_IE u_j − 1)/2 + (J_j + g_IE u_j + 1)α^I_j + [(J_j + g_IE u_j − 1)/2](α^I_j)²}    (A11)

Substituting Equations (A8) and (A9) into Equations (A1)-(A3) we obtain

r_i = [1/(2M_EE + 1)] Σ_{j=1}^N A^EE_ij ∫ h(I_j) H(α^E_j(I_j, t); n) dI_j    (A12)

q_i = [1/(2M_IE + 1)] Σ_{j=1}^N A^IE_ij ∫ h(I_j) H(α^E_j(I_j, t); n) dI_j    (A13)

s_i = [1/(2M_EI + 1)] Σ_{j=1}^N A^EI_ij ∫ g(J_j) H(α^I_j(J_j, t); n) dJ_j    (A14)

where H is given by Equation (36). Using standard properties of the Lorentzian one can perform the integrals in Equations (A12)-(A14), and defining z^E_j(t) ≡ ᾱ^E_j(I_0 + iΔ, t) and z^I_j(t) ≡ ᾱ^I_j(J_0 + iΔ, t) we obtain Equations (46)-(48). Evaluating Equation (A10) at I_j = I_0 + iΔ and Equation (A11) at J_j = J_0 + iΔ we obtain Equations (42) and (43).