Network Model of Spontaneous Activity Exhibiting Synchronous Transitions Between Up and Down States

Both in vivo and in vitro recordings indicate that neuronal membrane potentials can make spontaneous transitions between distinct up and down states. At the network level, populations of neurons have been observed to make these transitions synchronously. Although synaptic activity and intrinsic neuron properties play an important role, the precise nature of the processes responsible for these phenomena is not known. Using a computational model, we explore the interplay between intrinsic neuronal properties and synaptic fluctuations. Model neurons of the integrate-and-fire type were extended by adding a nonlinear membrane current. Networks of these neurons exhibit large amplitude synchronous spontaneous fluctuations that make the neurons jump between up and down states, thereby producing bimodal membrane potential distributions. The effect of sensory stimulation on network responses depends on whether the stimulus is applied during an up state or deeply inside a down state. External noise can be varied to modulate the network continuously between two extreme regimes in which it remains permanently in either the up or the down state.


INTRODUCTION
Neural activity in the absence of sensory stimulation can be structured (Arieli et al., 1996) with, in some cases, the membrane potential making spontaneous transitions between two different levels called up and down states (Metherate and Ashe, 1993; Steriade et al., 1993a,b,c; Wilson and Groves, 1981). These transitions have been observed in a variety of systems and conditions: during slow-wave sleep (Steriade et al., 1993a,b,c), in the primary visual cortex of anesthetized animals (Anderson et al., 2000; Lampl et al., 1999), in the somatosensory cortex of unanesthetized animals during quiet wakefulness (Petersen et al., 2003), and in slices from ferrets (Sanchez-Vives and McCormick, 2000) and mice (Cossart et al., 2003).
A hallmark of this subthreshold activity is a bimodal distribution of the membrane potential, with peaks at the mean potentials of the depolarized and hyperpolarized states. However, there are considerable differences in the degree of regularity of the transitions observed in different experiments. In slow-wave sleep and in some slices (Sanchez-Vives and McCormick, 2000), these are rather regular whereas they exhibit an irregular pattern in experiments done with anesthetized animals (Lampl et al., 1999).
Another characteristic of the up-down dynamics is that the transitions occur synchronously (Lampl et al., 1999; Stern et al., 1998), although the degree of synchrony depends on the particular experiment. In slow-wave sleep, there is a high degree of long-range synchrony (Amzica and Steriade, 1995; Volgushev et al., 2006), whereas recordings from the visual cortex of anesthetized animals show weaker and shorter-range synchrony (Lampl et al., 1999).
Transitions between up and down states can also be evoked by sensory stimulation (Anderson et al., 2000; Haider et al., 2007; Petersen et al., 2003; Sachdev et al., 2004). An interesting result of these experiments is that sensory-evoked activity patterns are similar to those produced spontaneously (Petersen et al., 2003). Similarly, in thalamocortical slices from mice, the cortical response to stimulation of the thalamic fibers is comparable to the spontaneous activity in the slice (MacLean et al., 2005). Studies in rats and cats report another interesting feature: the response to a stimulus depends on the state of the spontaneous fluctuations (Petersen et al., 2003; Sachdev et al., 2004; Haider et al., 2007). The effect appears to be species dependent; in rats, if a sensory stimulus is applied when the recorded neuron is in a down state, responses are stronger than if it is applied during an up state (Petersen et al., 2003; Sachdev et al., 2004). In contrast, in cats, the stronger response occurs during the up state (Haider et al., 2007).
The origin of the spontaneous transitions has been claimed to lie both in the intrinsic properties of neurons (Bazhenov et al., 2002; Crunelli et al., 2005; Mao et al., 2001; Sanchez-Vives and McCormick, 2000) and in their synaptic inputs (Cossart et al., 2003; Metherate and Ashe, 1993; Sanchez-Vives and McCormick, 2000; Seamans et al., 2003; Wilson and Kawaguchi, 1996). It seems plausible that their particular temporal structure results from interactions between these two components. Previous modeling studies have included intrinsic properties and synaptic currents in a fairly biophysically detailed fashion (Bazhenov et al., 2002; Compte et al., 2003; Hill and Tononi, 2005; Kang et al., 2004; Timofeev et al., 2000). However, the very detailed description of neurons and networks in these models somewhat obscures how the interaction between intrinsic properties and synaptic currents gives rise to large and synchronous membrane fluctuations.
Here, we use a reduced model to investigate the interplay between synaptic activity and an intrinsic neuronal property and to study network responses to sensory stimulation. Our goal is to understand the conditions under which up- and down-state transitions emerge in a network of model neurons when plausible assumptions are made. Instead of postulating the existence of a specific current or set of currents, we assume the existence of a nonlinear feature in the intrinsic membrane currents of the neurons that interacts with synaptic currents. Aside from this nonlinearity, the neuron model is of the usual integrate-and-fire (IF) type. The simplicity of the model allows us to isolate the mechanisms responsible for transitions and to reach an understanding of their roles and interactions. The model produces synchronous spontaneous transitions between two distinct membrane potential states and generates responses to sensory stimulation. These responses depend on the state of the network at the time of the application of the stimulus. The termination of the up state occurs through dominant inhibition. External noise can be used to induce a variety of regimes, from networks that remain in a silent down state to active networks similar to a perpetual up state.

The model
We consider a network of IF neurons, with the addition of a nonlinear membrane current, receiving synaptic input composed of slow and fast excitatory and inhibitory conductances. The network consists of random connections with finite range.
Below its threshold value, the membrane potential V of each model neuron obeys the equation

τ_m dV/dt = −g_L (V − V_L) − g_a (V − V_a) + I_syn,E + I_syn,I + I_noise + I_stim(t) + I_nl.   (1)

Here τ_m is the membrane time constant, g_L is the leak conductance, and V_L is the leak reversal potential. We measure all conductances in units of the leak conductance of excitatory neurons; that is, g_L = 1 for excitatory neurons by definition, and all other conductances are expressed relative to this value. The adaptation current, which is the second term on the right side of Equation (1), is included only for excitatory neurons. Its conductance g_a obeys the equation

τ_a dg_a/dt = −g_a,   (2)

and it is augmented by an amount g_a → g_a + ∆g_a whenever the neuron fires an action potential. I_syn,E and I_syn,I are the excitatory and inhibitory synaptic currents, I_noise represents an external noise, and I_stim(t) stands for the current produced by sensory stimulation. I_nl describes a nonlinear property of the neuron (see below). The potential V(t) obeys Equation (1) until it reaches the spike-generation threshold V_th. At that point, an action potential is discharged, and the potential V(t) is reset to V_reset, where it is held for a refractory time τ_ref.
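As an illustration, the subthreshold dynamics of a single excitatory neuron can be sketched with a forward-Euler step. The parameter values follow the Methods; the sign conventions for the synaptic, noise, and stimulation currents (all expressed in units of g_L·mV) and the form of the cubic nonlinearity are our reading of the text, not a verbatim reproduction of the authors' code.

```python
# Illustrative Euler integration of Equation (1) for one excitatory neuron.
tau_m, g_L, V_L = 20.0, 1.0, -68.0        # ms, leak units, mV
V_th, V_reset, tau_ref = -45.0, -55.0, 5.0
tau_a, dg_a, V_a = 100.0, 0.14, -80.0     # adaptation parameters
dt = 0.1                                  # integration step (ms)

def nonlinear_current(V, c=0.03, V1=-72.0, V2=-58.0, V3=-44.0):
    """Cubic current I_nl with zero crossings at V1 < V2 < V3 (the tilde-V's)."""
    return -c * (V - V1) * (V - V2) * (V - V3)

def step(V, g_a, I_syn=0.0, I_noise=0.0, I_stim=0.0, refractory=0.0):
    """One Euler step; currents in units of g_L * mV. Returns (V, g_a, spiked, refractory)."""
    g_a -= dt * g_a / tau_a                       # adaptation conductance decays
    if refractory > 0.0:                          # hold the potential during the refractory period
        return V, g_a, False, refractory - dt
    dV = (-g_L * (V - V_L) - g_a * (V - V_a)
          + I_syn + I_noise + I_stim + nonlinear_current(V)) / tau_m
    V += dt * dV
    if V >= V_th:                                 # spike: reset and increment adaptation
        return V_reset, g_a + dg_a, True, tau_ref
    return V, g_a, False, 0.0
```

With no input, V relaxes to the lower fixed point (slightly below Ṽ1 because of the leak); a sufficiently strong steady depolarizing current carries it past the unstable fixed point and up to threshold.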
Four synaptic currents, AMPA, NMDA, GABA_A, and GABA_B (Metherate and Ashe, 1993), are used in the model, with I_syn,X = −g_X (V − V_X) for each receptor type X. When a neuron fires an action potential, the synaptic conductances of its postsynaptic targets are modified by g_X → g_X + ∆g_X, where ∆g_X is the unitary synaptic conductance for X = AMPA, NMDA, GABA_A, GABA_B. Otherwise, the synaptic conductances decay exponentially,

τ_X dg_X/dt = −g_X,   (3)

with synaptic time constant τ_X. Nonlinearities characterizing the NMDA and GABA_B receptors are not included, because the emphasis is on their timescales, not their voltage dependences. We assume that the neurons have a bistable character in the network (Figure 1A, solid line), but this does not necessarily imply that isolated neurons exhibit bistability. Although intrinsic currents may contribute to this phenomenon, bistability can arise from an interplay between intrinsic and network-generated currents. For example, bistability can be obtained by combining a voltage-dependent intrinsic current (Figure 1A, dashed line) and a linear synaptic or modulatory current (Figure 1A, dotted line). An instantiation of this mechanism, in which the nonlinearity was given by a transient Ca2+ current, has been studied previously (Crunelli et al., 2005). In a more complex example, bistability arises from the dynamics of the extracellular K+ concentration (Frohlich et al., 2006). Here, we assume that such a combination of currents can be described by the term

I_nl = −c (V − Ṽ1)(V − Ṽ2)(V − Ṽ3),   (4)

where Ṽ1 < Ṽ2 < Ṽ3 and c is a parameter that determines the strength of the current. This current is illustrated in Figure 1A (solid line) and, as discussed above, it can be interpreted as the sum of a nonlinear current that does not produce bistability (dashed line) and a linear contribution (dotted line) that causes the sum to show bistability, that is, multiple zero crossings.
The increase in the magnitude of this current at potentials larger than about −45 mV or at very hyperpolarized potentials is not relevant because the model neuron never operates in these ranges.
In the absence of other currents, I_nl induces three fixed points, at the values V1, V2, and V3, which are related but not equal (due to the leak current) to Ṽ1, Ṽ2, and Ṽ3. In the absence of fluctuating currents, the neuron will fire only if V(t) stays in the region above the unstable fixed point at V2 (this requires V_reset to be above this fixed point) and if V_th is less than the upper stable fixed point at V3. If the threshold satisfies V_th > V3, the membrane potential will remain stuck at the value V3. On the other hand, if V is in the region below the unstable fixed point at V2, it will be attracted to the quiescent fixed point at V1. In the network we study, fluctuations produced by both the synaptic currents and the external noise source I_noise allow the neuron to fire even if its threshold is above the upper fixed point.
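The location and stability of these fixed points can be checked numerically by scanning the right-hand side of the voltage equation with only the leak and nonlinear currents retained. This is a sketch using the central parameter values from the Methods; with these numbers the fixed points come out near −71.8, −55.9, and −46.5 mV, i.e., shifted from the Ṽ's by the leak, as stated above.

```python
import numpy as np

# Locate the fixed points V1 < V2 < V3 of dV/dt = (-g_L (V - V_L) + I_nl(V)) / tau_m.
g_L, V_L, c = 1.0, -68.0, 0.03
Vt1, Vt2, Vt3 = -72.0, -58.0, -44.0       # zero crossings of I_nl alone (tilde-V's)

def drift(V):
    I_nl = -c * (V - Vt1) * (V - Vt2) * (V - Vt3)
    return -g_L * (V - V_L) + I_nl

V = np.linspace(-80.0, -40.0, 4001)
f = drift(V)
crossings = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
fixed_points = [(V[i] + V[i + 1]) / 2 for i in crossings]
# The drift decreasing through zero means a stable fixed point; increasing means unstable.
stable = [f[i] > 0 > f[i + 1] for i in crossings]
```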
Most neuron parameters within the network are distributed stochastically. Because the relationship between the neuron parameters V_reset, V_th, and V_i (or, equivalently, Ṽ_i) for i = 1, 2, 3 is different for each neuron, most neurons transition from one state to the other with some regularity, but others tend to remain either silent or firing most of the time. Figure 1B shows the ranges of Ṽ3 (black segment) and V_th (red segment) used in the network. There is a small bias toward neurons with V_th > Ṽ3. Similarly, Figure 1C shows the ranges used for Ṽ2 (black segment) and V_reset (red segment).
Each neuron receives independent noise I_noise consisting of two Poisson trains, one excitatory and one inhibitory. The noise model has four parameters: two unitary conductances (∆g_syn,E and ∆g_syn,I) and two rates. This noise is filtered according to Equation (3) through synapses with slow synaptic time constants (i.e., τ_NMDA and τ_GABA_B).
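A minimal sketch of this noise source, assuming a forward-Euler discretization: each train delivers Poisson spike counts per time bin, and each count increments a conductance that decays with the corresponding slow time constant. The rates and unitary conductances are taken from the Methods; the stationary means come out near ∆g·ν·τ (≈0.6 and ≈0.87 in leak units).

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.1e-3                        # s (0.1 ms bins)
tau_E, tau_I = 100e-3, 200e-3      # s; the slow NMDA-like and GABA_B-like time constants
dg_E, dg_I = 0.09, 0.179           # unitary noise conductances (leak units)
nu_E, nu_I = 66.66, 24.31          # Hz

def noise_conductances(n_steps):
    """Filtered Poisson noise conductances, one excitatory and one inhibitory trace."""
    counts_E = rng.poisson(nu_E * dt, n_steps)   # spike counts per bin
    counts_I = rng.poisson(nu_I * dt, n_steps)
    decay_E, decay_I = 1.0 - dt / tau_E, 1.0 - dt / tau_I
    g_E, g_I = np.empty(n_steps), np.empty(n_steps)
    gE = gI = 0.0
    for t in range(n_steps):
        gE = gE * decay_E + dg_E * counts_E[t]   # exponential filtering, Equation (3)
        gI = gI * decay_I + dg_I * counts_I[t]
        g_E[t], g_I[t] = gE, gI
    return g_E, g_I
```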
We implemented sensory stimulation by applying a pulse of excitatory conductance to a subpopulation of the excitatory neurons in the network. Minimal stimulation was defined as the smallest pulse conductance required to evoke an up state from a down state with high probability.
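This calibration can be sketched as a bisection on the pulse conductance. Here `evokes_up_state` is a hypothetical stand-in for a full network simulation (its sigmoidal success probability and parameters are purely illustrative, not from the paper); in practice it would run the network with a pulse of size g and report whether an up state followed.

```python
import numpy as np

rng = np.random.default_rng(1)

def evokes_up_state(g, g_half=1.05, slope=20.0):
    """Toy stand-in for a network run: success probability rises sigmoidally with g."""
    p = 1.0 / (1.0 + np.exp(-slope * (g - g_half)))
    return rng.random() < p

def minimal_stimulation(p_target=0.9, trials=200, g_lo=0.0, g_hi=3.0, iters=20):
    """Bisect on the pulse conductance until the empirical success rate brackets p_target."""
    for _ in range(iters):
        g = 0.5 * (g_lo + g_hi)
        rate = np.mean([evokes_up_state(g) for _ in range(trials)])
        if rate < p_target:
            g_lo = g        # too weak: raise the lower bracket
        else:
            g_hi = g        # strong enough: lower the upper bracket
    return 0.5 * (g_lo + g_hi)
```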

Parameter values and simulations
Most of the results presented were obtained for fixed values of the model parameters; the results presented in Figure 4A (see figure caption) and the analysis of the network with zero adaptation conductance are exceptions. Otherwise, only the noise term was varied to observe how it affects network activity.
The network contains 4000 neurons of which 17% are inhibitory and the rest excitatory. Each neuron is connected with a probability of 2% to other neurons contained within a disk centered about its location and containing about 31% of the total number of neurons. This results in each neuron, on average, connecting to 25 other neurons. The network size is 50 × 80, with periodic boundary conditions.
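Under these assumptions (neurons on a 50 × 80 lattice with toroidal distances, and a disk radius chosen so the disk covers about 31% of the area), the connectivity just described can be sketched as follows; the radius formula is our reconstruction of "a disk containing about 31% of the neurons".

```python
import numpy as np

rng = np.random.default_rng(2)

Lx, Ly, p = 50, 80, 0.02
N = Lx * Ly                                    # 4000 neurons
xs, ys = np.meshgrid(np.arange(Lx), np.arange(Ly), indexing="ij")
pos = np.column_stack([xs.ravel(), ys.ravel()])

# Disk radius such that pi * r^2 is ~31% of the torus area.
r = np.sqrt(0.31 * N / np.pi)

def neighbors(i):
    """Indices within distance r of neuron i, using periodic (toroidal) distances."""
    d = np.abs(pos - pos[i])
    d = np.minimum(d, [Lx, Ly] - d)            # wrap around the torus
    dist2 = (d ** 2).sum(axis=1)
    mask = dist2 <= r ** 2
    mask[i] = False                            # no self-connection
    return np.flatnonzero(mask)

def outgoing(i):
    """Connect to each neuron in the disk independently with probability p (~25 targets)."""
    nb = neighbors(i)
    return nb[rng.random(nb.size) < p]
```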
All the neurons have a membrane time constant of 20 ms and a refractory time τ_ref = 5 ms. Other passive properties are distributed uniformly, and we use a ± notation to indicate the interval within which each parameter falls uniformly. The membrane threshold V_th takes values in the interval −45 ± 2 mV, the reset potential V_reset in the interval −55 ± 1 mV, and the leak potential V_L in the interval −68 ± 1 mV. The parameters of the nonlinear current, with conductance measured in units of the leak, are c = 0.03 mV^−2, and the Ṽ's were chosen as Ṽ1 = −72 ± 2 mV, Ṽ2 = −58 ± 2 mV, and Ṽ3 = −44 ± 2 mV.
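The ± notation translates directly into uniform sampling. A sketch for the excitatory population (N_E = 3320 being 83% of 4000); note that with these ranges the ordering Ṽ1 < Ṽ2 < Ṽ3 holds for every neuron:

```python
import numpy as np

rng = np.random.default_rng(3)

def uniform_pm(center, half_width, n):
    """Draw n values uniformly from [center - half_width, center + half_width]."""
    return rng.uniform(center - half_width, center + half_width, n)

N_E = 3320                          # excitatory neurons (83% of the 4000-neuron network)
V_th    = uniform_pm(-45.0, 2.0, N_E)
V_reset = uniform_pm(-55.0, 1.0, N_E)
V_L     = uniform_pm(-68.0, 1.0, N_E)
Vt1     = uniform_pm(-72.0, 2.0, N_E)   # tilde-V1
Vt2     = uniform_pm(-58.0, 2.0, N_E)   # tilde-V2
Vt3     = uniform_pm(-44.0, 2.0, N_E)   # tilde-V3
```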
All excitatory synapses include both AMPA and NMDA components. GABA_A receptors were assigned to 55% and GABA_B receptors to 45% of the inhibitory synapses. The synaptic time constants are τ_AMPA = 2 ms, τ_NMDA = 100 ms, τ_GABA_A = 10 ms, and τ_GABA_B = 200 ms. Recall that all conductances are measured in units of the leak conductance of excitatory neurons. For excitatory neurons, ∆g_E,AMPA = 0.27, ∆g_E,NMDA = 0.0495, ∆g_E,GABA_A = 0.84, and ∆g_E,GABA_B = 0.1848. For inhibitory neurons, ∆g_I,AMPA = ∆g_I,NMDA = 0.05, ∆g_I,GABA_A = 0.017, ∆g_I,GABA_B = 0.017, and g_I,L = 1.4. In addition, for excitatory neurons, ∆g_a = 0.14, V_a = −80 mV, and τ_a = 100 ms. The inhibitory reversal potentials V_GABA_B and V_GABA_A fall uniformly within the intervals −90 ± 2 mV and −80 ± 2 mV, respectively. V_AMPA and V_NMDA are both set to zero.
The parameters of the noise model were varied to study how network behavior is modulated by noise. We started with a network characterized by the following values: ∆g_syn,E = 0.09 and ∆g_syn,I = 0.179 for the conductances, and ν_syn,E = 66.66 Hz and ν_syn,I = 24.31 Hz for the rates. Other networks were obtained by multiplying the inhibitory noise conductance ∆g_syn,I by factors given in the Results.
The network was stimulated by applying conductance pulses to 17% of the excitatory neurons (either in a localized or in a distributed way) for 10 ms. The size of the pulse for minimal stimulation is g_min ≈ 1–1.1. The result of this calibration can be observed in Figure 9.
For individual neurons, the transition from one state to the other was defined to occur at V = −60 mV, where V is the membrane potential of the neuron. This value separates the two peaks of the membrane potential distribution (see Figure 3A).
At the network level, the down-to-up transition was taken to occur at the point where the average membrane potential equals the mean of its minimum value in the down state and its peak value in the up state; a similar criterion was used to define the up-to-down transition.
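For a single neuron, the −60 mV criterion reduces to thresholding the trace and reading off the durations of the supra-threshold segments. A small sketch (bin size and threshold from the Methods):

```python
import numpy as np

def up_state_durations(V, threshold=-60.0, dt=0.1):
    """Durations (ms) of up states in a membrane-potential trace sampled every dt ms."""
    up = V > threshold
    flips = np.flatnonzero(np.diff(up.astype(int)))   # indices where the state changes
    edges = np.concatenate([[0], flips + 1, [len(V)]])
    durations = []
    for a, b in zip(edges[:-1], edges[1:]):
        if up[a]:                                     # keep only the up segments
            durations.append((b - a) * dt)
    return durations
```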
Simulation times typically ranged from a few seconds to 25 seconds, and in some cases up to 100 seconds. Time was divided into bins of size ∆t = 0.1 ms. The simulations were performed using code written in C and run under the Linux operating system.

Spontaneous activity
The network has a variety of activity regimes depending on the values of the model parameters. The set of parameter values given in the Methods defines a network that generates spontaneous up states at a rather regular frequency of approximately 0.6 Hz (Figure 2). These transitions are seen most easily in global quantities such as the population rate (Figures 2A and 2B) and the average membrane potential (Figure 2E); the latter can be used as a surrogate for the local field potential. The phenomenon is quite robust, and the appearance of a signal in global quantities implies that a large population of neurons transitions between up and down states synchronously (Figure 2D). However, the up states are not identical, nor are the times that the network spends in these states always the same. This indicates that the state of the network at the onset of these up states is variable.
Traces of the membrane potential of individual neurons ( Figure 2D) show less regular up-down dynamics than global quantities. Even when the synchrony is evident in the average membrane potential, there is some variability in the timing of the transitions for different neurons. In these respects, this example resembles the observations by Lampl et al. (1999) in primary visual cortex that the correlations of the membrane potential of pairs of nearby neurons are weaker than those observed, for example, in slow-wave sleep, and that even the degree of subthreshold synchrony exhibited by a given pair can change with time. However, the model can support more correlated populations. Figure 4A presents an example in which the distribution of firing thresholds and some of the unitary conductances were changed to obtain more synchronous transitions.
The four neurons shown in Figure 2D were selected to illustrate the different membrane potential distributions displayed in Figure 3A. These distributions are all bimodal, but the weights of the two peaks differ. For the neurons shown in the two upper panels of Figure 2D, corresponding to the upper two panels of Figure 3A, the two peaks are comparable, whereas the other two neurons, shown in the lower panels of these figures, remain in the down or the up state most of the time.
Figures 3B and C present histograms of up-state durations for two neurons with bimodal membrane potential distributions. The distribution of up-state durations across the entire network is shown in Figure 3D. Although bimodal neurons have distributions concentrated around a preferred duration, as in Stern et al. (1998), the data taken over the whole network have a broader distribution, with a tail reaching durations of a few seconds (Figure 3D). Cossart et al. (2003) observed an even longer tail, including durations of about 10 seconds. Although we have not tried to reproduce this observation, it is conceivable that a proper choice of the distribution of neuron properties could generate a subpopulation of neurons with longer up states.
To illustrate the evolution of the synaptic conductances, we plotted the network average of the inhibitory conductance against the corresponding average of the excitatory conductance (Figure 4B). In this plot, time advances counter-clockwise along the lines. The first second of the simulation is included in this figure, producing the initial transient seen as the line departing from the origin. After this transient, the plot consists of a series of ellipses, each describing the evolution of the synaptic conductances during one transition to the up state and back to the down state. The excitatory conductance is the first to grow, followed by the inhibitory one, until the latter becomes strong enough to cause the excitatory conductance to decrease.
The transition from the down to the up state results from the interaction between the nonlinear property of the neurons and synaptic activity in the population. When the network is in the down state, most, but not all, of the neurons are silent. The activity of the small number of active neurons (plus possible current fluctuations coming from the noise) propagates through the network, causing neurons to transition to the up state. Eventually, a large number of neurons make this transition, and the population rate increases. During the transition to the up state, the excitatory conductances are the first to increase, but they are soon followed by the inhibitory ones (Figure 4B). After some time, inhibition becomes strong enough to destabilize the up state of individual neurons, and eventually the network returns to the down state. Most of the inhibitory neurons do not fire in the down state. Rather, the network is maintained in this state by a lack of excitation and by the effective bistability of the neurons.
Transitions from the up to the down state can also be interpreted in terms of an oscillatory property of networks of normal IF neurons. When our network is in the up state, it behaves like a network of IF neurons kept depolarized at a potential approximately equal to the average potential of the up state. It has been shown that synaptic delays introduce an oscillating mode in such networks (Brunel and Wang, 2003). For normal IF neurons, the population rate oscillates in complete cycles, with inhibition following excitation, at a frequency determined by the synaptic time constants. In our model, the neurons start to fall into the down state as the network approaches the negative phase of the oscillation, so the cycle is interrupted (Destexhe et al., 1999). Whereas the time the network stays in the up state is mainly determined by the mechanism just described, the time it spends in the down state is set by different factors: the number of neurons firing during the down state, the distribution of neuron parameters, and the connectivity of the network. The termination mechanism described above does not require neuronal adaptation. We checked this by removing all adaptation while keeping the values of the synaptic conductances given in the Methods. In the absence of adaptation, the transition to an up state starts with a rise of the excitatory conductance followed by the inhibitory one; when the inhibition becomes sufficiently strong, the network returns to a down state. Adaptation was included in the model for the sake of biological realism, but it is not an essential element of the up-down dynamics of the model.

Noise modulation
The dynamics of up-down transitions can be modulated by changing the relative strength of the excitatory and inhibitory components of the noise. We limited this analysis to changes of the inhibitory unitary conductance of the noise, ∆g_syn,I, leaving the other three parameters of the noise model fixed. Changing the excitatory unitary conductance yields qualitatively similar results. Taking the network in Figures 2 and 3 as the starting point, we first look at the effect of reducing ∆g_syn,I. Figure 5 presents a network in which this conductance was reduced by 15%. The top trace is the population rate during 25 seconds, and the rest of the figure expands the time interval between 5 and 10 seconds. Another 5-second interval (20–25 seconds) is shown in Figure 6. Decreasing the inhibitory component of the noise makes the network transitions more irregular. For example, the network makes a single transition to the down state during the 5–10 second interval, but it exhibits four up states of different durations during the 20–25 second interval.
If the inhibitory noise conductance is decreased even more, the system eventually reaches a regime in which neurons either fire tonically or become inactive. Setting ∆g_syn,I to 50% of its value in the regular network produces the active network shown in Figure 7. This network is asynchronous, the average membrane potential is below the mean potential of the up state (Figure 7B), and a subpopulation of neurons fires continuously while the others tend to stay in the down state most of the time. Three of the neurons described in Figure 2 have greatly reduced activity, while the other has become more active (compare the traces in Figure 7C with those in Figure 2D). Another relevant feature of this regime is that the average inhibitory synaptic conductance is larger than the excitatory one (Figure 7D). These two features, namely the existence of a population of silent neurons and the dominance of inhibition, have been observed in a recent experiment during the activated state characteristic of cortical networks in awake animals (Rudolph et al., 2007).

Figure 5. A more irregular network I. The inhibitory conductance of the noise model was decreased by 15% with respect to that of the regular network in Figures 2 and 3. The traces are shown with the same conventions as in Figure 2, and the four neurons in (D) are the same as those in Figure 2D. The expanded box and panels B–E show the interval between 5 and 10 seconds of the simulation. A synchronous transition occurs after t = 8 seconds.
The transition from the regular network to the active network shown in Figure 7 is reminiscent of the transition from slow-wave sleep oscillations to the activated state (Timofeev et al., 2001), controlled by neuromodulators (Steriade and McCarley, 1990). In particular, release of acetylcholine reduces or blocks potassium conductances (McCormick, 1992), leading to a greater excitability of cortical neurons. This issue has been considered in biophysically detailed models by blocking resting potassium conductances (Bazhenov et al., 2002) or reducing other potassium conductances (Compte et al., 2003). The fact that the network becomes dominated by inhibition and splits into firing and silent populations after the transition to the tonically active state was not apparent in those models. Although we have controlled the network dynamics through changes in the noise parameters, similar results could be obtained if neuronal excitability were increased by other means.

Figure 7 caption (fragment): ...as in Figures 2, 5, and 6. (D) Evolution of the average conductances. The average inhibitory conductance is now larger than the excitatory one, in agreement with experimental observations (Rudolph et al., 2007).

Figure 8. A rather silent network. The inhibitory conductance of the noise was increased by 10%. Neurons still make transitions between up and down states, but the synchrony is lost. (A) Average firing rate. (B) Raster. (C) Average membrane potential. (D) Sample membrane potential traces for the same four neurons shown in previous figures. The bottom trace (brown), which corresponds to the neuron that stayed mostly in the up state in the more active networks (brown traces in Figures 2, 5 and 7), now makes transitions and develops a bimodal potential (panel G). (E) Histogram of up-state durations; these are shorter than in more active networks. (F) Evolution of the average conductances, which are much smaller than in the previous networks. There is no longer any structure in the conductance plane.
If ∆g_syn,I is made 10% larger than its value in the regular network, the network becomes more tied to the down state (Figure 8). Although there is still some spiking and individual neurons still transition between the two states, the coherence is lost, as is evident in the trace of the average membrane potential (Figure 8C) and in the values of the average synaptic conductances, which are about a factor of 10 smaller than in the initial network (Figure 8F). In the following section, we address the excitability of this network. As we will see, stimulating the system while it is in this regime evokes up states similar to those generated spontaneously in the regular network.

Sensory stimulation
Sensory stimulation can evoke responses similar to the up state seen during spontaneous activity (Petersen et al., 2003). Up states can also be evoked in slices by stimulating thalamic fibers (MacLean et al., 2005). The activity patterns produced in this way have several similarities to those generated spontaneously. The response of barrel cortex neurons to sensory stimulation was seen to depend on whether the stimulus is applied during an up or a down state of the recorded neuron (Petersen et al., 2003). Electrical stimulation of the thalamus gives a similar result (Sachdev et al., 2004). To study these issues within our model, we first considered whether stimulation of the silent network of Figure 8 can evoke up states with properties similar to those seen in the spontaneous activity of the more regular network of Figure 2. In the second part of our analysis, we stimulated the regular network during either an up or a down state and compared the spiking responses. For this purpose, up and down states are defined using the average membrane potential, our surrogate for the local field potential, a global quantity that characterizes the state of the network well. Notice that this procedure differs from what is normally done in experiments, where the stimulus is applied during the up or down state of the recorded neuron rather than of the network (Petersen et al., 2003). If the synchrony of the transitions is strong, there should not be much difference between these two procedures. However, if it is not, as in Lampl et al. (1999), it seems more sensible to stimulate during up or down states defined at the population level because, in this way, the time of application is correlated with a specific network state.
We first stimulated the silent network of Figure 8, which is in a regime corresponding to a down state, by applying minimal conductance pulses every 2 seconds. This evoked up states most of the time (Figure 9). During the 25 seconds of this simulation, the stimulus failed to evoke an up state only once (at t = 8 seconds in Figures 9A–C), and even in this case the trace of the average potential (Figure 9C) shows that many neurons in the network made a transition to that state. It is likely that the network transition was not completed because the stimulus failed to propagate and recruit a sufficient number of neurons. Because the state of the network is different each time a pulse is applied, the temporal profiles of global quantities are variable. A notable difference from the spontaneous up state is the existence of two peaks in the population rate (Figure 9A). Presumably, the first peak is due to the response of the neurons receiving direct stimulation, and the time between the peaks corresponds to the time needed for the evoked activity to propagate through the network until a substantial number of neurons also responds to the stimulus. A response with two peaks is also present in experiments (see Figure 5 of Petersen et al., 2003).
There is considerable similarity between the regular network (Figure 2) and the stimulated silent network (Figure 9), even though the stimulation period is only roughly equal to the average spontaneous up-down period and the spontaneously generated up states are not strictly periodic. To facilitate comparison, the black dots shown with the potential distributions (Figure 9E) and the red dots in the conductance plane (Figure 9F) are results from Figure 2.
In the example of Figure 9, the period of the stimulation was long enough to allow the network to recover back to the silent (down) state. If the stimulation frequency is increased, the second pulse can occur while the network is still in a state close to the up state evoked by the first pulse, and the response can change dramatically. The effect of stimulation frequency on the generation of up states is described in Figure 10. The traces and rastergram at the top correspond to a single pulse applied at t = 2 s to the silent network shown in Figure 8. The next four rows present the results of stimulating at different frequencies; pulses have been applied every 1.3 seconds, 1.4 seconds, 100 ms, and 50 ms (from top to bottom). In the first case, the second pulse fails to evoke an up state because the network has fallen into the down state and there is little excitation. Although a subpopulation of neurons fires most of the time, there is a delay before the activity spreads to enough neurons to produce a synchronous transition to the up state. The trace of the average potential (Figure 10, right column) shows that, although many neurons made the transition, the excitation did not extend to a large portion of the network. In the example of the third row of Figure 10, the second pulse is applied after 1.4 seconds, and the extra 0.1 seconds provides enough time for the network to gather sufficient excitation to produce a second up state. Even so, it takes a rather long time for the activity to spread across the network and evoke a global up state. Had we applied the second pulse a little later (e.g., after 1.5 seconds), the transition would have been faster (data not shown). The third pulse in this example comes too soon after the preceding up state, so its effect on both the firing and the subthreshold responses is small, and again it fails to evoke a synchronous transition.
An example at an even lower frequency has already been seen in Figure 9 where, as we discussed, up states are evoked with high probability. On the other hand, as the frequency becomes higher, the second pulse arrives on the decaying phase of the up state and its effect is minute. As an example, we show the responses for a stimulation frequency of 10 Hz (fourth row of Figure 10). The effect of each pulse is small, but the frequency is high enough that the effects of consecutive pulses accumulate, and up states are evoked sooner than in the previous examples. In the final example, a train of pulses at 20 Hz is applied for more than 2 s (Figure 10, bottom panel). At this frequency, the increase of the average potential evoked by each pulse roughly compensates for its decay, and the network stays in a depolarized state intermediate between the down and the up states.
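The balance between per-pulse depolarization and inter-pulse decay can be illustrated with a minimal sketch. The leaky integrator below is a stand-in for the network-average potential, not the paper's actual network model; the amplitude and time-constant values are hypothetical:

```python
import numpy as np

def average_potential(pulse_interval_s, amp=2.0, tau=0.1,
                      t_end=2.0, dt=0.001):
    """Leaky integrator standing in for the network-average potential.

    Each stimulus pulse adds `amp` (mV); between pulses the deviation
    from rest decays with time constant `tau` (s).  All values are
    illustrative, not fitted to the paper's model.
    """
    t = np.arange(0.0, t_end, dt)
    v = np.zeros_like(t)
    next_pulse = 0.0
    for i in range(1, len(t)):
        v[i] = v[i - 1] * (1.0 - dt / tau)   # leak toward rest (0 mV)
        if t[i] >= next_pulse:
            v[i] += amp                       # stimulus pulse arrives
            next_pulse += pulse_interval_s
    return v

v_slow = average_potential(pulse_interval_s=1.0)   # pulses decay fully
v_fast = average_potential(pulse_interval_s=0.05)  # 20 Hz: decay balanced

# At 20 Hz the depolarization accumulates to an intermediate plateau
# instead of returning to rest between pulses.
print(v_slow[-1], v_fast[-1])
```

At long intervals the trace returns to rest before each pulse, whereas at 20 Hz it settles on a sustained plateau, mirroring the intermediate depolarized state described above.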
The previous discussion shows that the response to sensory stimulation depends on how deeply into the up or down state the network is at the time of the application of a pulse. After a transition from an up to a down state, the network has to recover before another up state can be evoked. This recovery occurs through the neurons that are able to continue firing most of the time. Too close to the previous up state there is still some inhibition that prevents these neurons from firing, but after some time the network arrives in its down state (where there is no appreciable inhibition), and the active neurons increase their firing and put the network into a more responsive state.
The response to sensory stimulation is much larger if a pulse arrives when the network is in the excitable phase of its down state than when it is in an evoked up state. One may wonder whether the same is true when the stimulation is applied to a network capable of generating spontaneous transitions, such as the regular network described in Figures 2 and 3. The result of this analysis is shown qualitatively in Figures 11 and 12 and more quantitatively in Figure 12D. The left column of Figure 11 shows three spontaneous up states. To exhibit the dependence of the response on the spontaneous fluctuations, we stimulated during the second up state (at t = 3.4 s) and compared the response with the responses to stimulation in two down states, at t = 2.8 and 4.0 s.
In the right column of Figure 11, we see that stimulating during the spontaneous up state has little effect. As the traces of the population rate and the average membrane potential indicate, the effect of the stimulation is localized in time. Shortly after the stimulation, these traces continue their temporal course without undergoing any relevant change, and the third spontaneous up state occurs at roughly its expected time.

The stimulus has a very different effect when it is applied during a down state (Figure 12A-C). In the two cases presented here (stimulus applied during the first (left column) and the second (right column) down states), a new up state is evoked and the next spontaneous up state is pushed forward in time. The increase in the number of spikes is clearly larger for stimulation during the down states than during the up states. Some experimental observations in rats seem to indicate that the spiking response is higher in absolute terms as well (Petersen et al., 2003), although in cats the opposite result is obtained (Haider et al., 2007). We studied this issue in our model by plotting the number of spikes produced by individual neurons under the conditions used in Figure 11 (right column) against the number of spikes produced under stimulation in one of the two down states. The result of this test is shown in Figure 12D. While stimulation during the first down state (Figure 12, left column) agrees with the experimental observation in rats (Petersen et al., 2003), exhibiting a much larger response for stimulation in the down state, stimulation during the second down state (Figure 12, right column) reveals a more balanced situation. The explanation for this difference is again the different state of the network at these two points. At t = 4 s, the average potential is almost at its lowest point in the second down state (Figure 11, left column) and the network is almost as hyperpolarized as it ever is during spontaneous behavior.
In contrast, at t = 2.8 s the network is already evolving naturally toward the next spontaneous up state, a fact that is clearly seen in the average potential although it is less evident in the population activity (Figure 11, left column). At this point the network is ready to fire but is not yet doing so and, as a result, the arrival of the stimulus has a strong impact. This is also why the peak of the average potential is reached sooner in this case.
In contrast with the case of evoked up states, the network with spontaneous fluctuations has an excitable phase located at the beginning of the up state. When the response to stimulation in this region (e.g., at t = 3.2 s in Figure 11) is compared with the response to stimulation in a down state (e.g., at t = 4 s), one finds that, in absolute terms, the response in the up state is higher than that in the down state.
We now ask which response is stronger when the stimulation time is chosen at random. Because both the up and the down states have an excitable phase and a less excitable phase, the answer depends on their relative durations. Given the variability observed in the time course of the average membrane potential (present even in our regular network), a careful analysis is required. We ran repeated simulations of the regular network, stimulating at different times. The stimulation period was 50 ms and the longest simulation had a duration of 25 s. After this, a set of neurons (either the whole network, those receiving the stimulus directly, or a set of randomly chosen neurons) was selected, and for each neuron a stimulation time in an up state and another in a down state were chosen, also at random. For the regular network, the response to stimulation in the up state was larger. For example, when this test was done over the whole network, the spiking response (total number of spikes) during the up states was about 1.55 times larger than during the down states.
This result is in agreement with experimental observations in the cat (Haider et al., 2007), and it holds regardless of the degree of localization of the stimulus. However, because a change in the values of the conductances and other model parameters can change the regularity of the transitions and the relative size of the excitable phases of the up and down states, the model could, in principle, exhibit regimes where the response to stimulation in the down state is stronger.
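The bookkeeping behind this comparison can be sketched as follows. The spike times and the 1.55 figure above come from the full network simulation; the synthetic spike data, stimulation onsets, and counts in this sketch are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_spike_count(spike_times, stim_times, window=0.1):
    """Mean number of spikes in a `window` (s) after each stimulation."""
    counts = [np.sum((spike_times >= t) & (spike_times < t + window))
              for t in stim_times]
    return float(np.mean(counts))

# Hypothetical stimulation onsets, classified by the network state they
# fell in; in the real analysis these come from the simulation itself.
stims_up = np.array([1.0, 5.0])
stims_down = np.array([3.0, 7.0])

# Synthetic spikes: 15 per up-state stimulus, 10 per down-state stimulus.
spikes = np.concatenate(
    [t + rng.uniform(0.0, 0.1, size=15) for t in stims_up]
    + [t + rng.uniform(0.0, 0.1, size=10) for t in stims_down])

ratio = (mean_spike_count(spikes, stims_up)
         / mean_spike_count(spikes, stims_down))
print(ratio)  # 1.5 for these synthetic counts
```

A ratio above 1 indicates a stronger spiking response to up-state stimulation, as found for the regular network.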

DISCUSSION
We have presented a simple model that reproduces some of the most important properties of the up-down dynamics observed in cortical networks. The model has two main features: the interaction of a nonlinear intrinsic neuronal property with synaptic activity, and heterogeneity in the neuron parameters. The first provides two stable states and fluctuations that facilitate transitions between them, whereas the second generates a subpopulation of neurons that spontaneously reactivates the network after it returns to a down state. Along with a regime exhibiting spontaneous synchronous transitions between up and down states, the model has irregular, active, and inactive states, and the network can transition between them under the control of some of the model parameters. The response of the network to a stimulus depends on its state at the moment of stimulation, with a higher response occurring when the network is in the down state.
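The single-neuron bistability at the heart of the model can be caricatured in one variable. The sketch below uses a generic cubic nonlinearity as a stand-in for the paper's nonlinear membrane current; the fixed points, pulse amplitude, and time constants are all hypothetical:

```python
# Hypothetical one-variable caricature of a bistable membrane (mV, ms).
# The cubic term stands in for the paper's nonlinear membrane current;
# none of these numbers are taken from the actual model.
V_DOWN, V_MID, V_UP = -75.0, -60.0, -50.0

def dvdt(v, i_ext=0.0, k=0.002):
    """Stable fixed points at V_DOWN and V_UP, unstable one at V_MID."""
    return -k * (v - V_DOWN) * (v - V_MID) * (v - V_UP) + i_ext

def simulate(v0, pulse=(0.0, 0.0, 0.0), t_end=500.0, dt=0.05):
    """Forward-Euler integration; `pulse` = (t_on, t_off, amplitude)."""
    t_on, t_off, amp = pulse
    v = v0
    for step in range(int(t_end / dt)):
        t = step * dt
        v += dt * dvdt(v, amp if t_on <= t < t_off else 0.0)
    return v

v_rest = simulate(-74.0)                             # settles in the down state
v_flip = simulate(-74.0, pulse=(100.0, 150.0, 2.5))  # pulse switches to up
print(round(v_rest), round(v_flip))  # -75 -50
```

Without input the membrane relaxes to the down state; a brief suprathreshold pulse pushes it past the unstable middle point, after which it settles in the up state, which is the essence of the bistability the nonlinear current provides.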

The up-down dynamics
In our model, the transition from the down to the up network state occurs because of the activity of neurons that remain in their up state most of the time. A similar phenomenon occurs in more detailed biophysical models (Compte et al., 2003; Hill and Tononi, 2005; Kang et al., 2004). In Kang et al. (2004), the activity of a subpopulation of pacemaker neurons is based on the I_h current in combination with a low-threshold Ca2+ current. In Hill and Tononi (2005), an I_h current is used in combination with a persistent sodium current, which activates some neurons and leads the whole network into the active state. Other modeling studies proposed a mechanism based on the presence of spike-independent minis during the inactivated network state, which can add up to produce a transition from the down to the up state (Bazhenov et al., 2002; Timofeev et al., 2000).
The mechanism for the termination of the up state in our model is different from those proposed in other modeling studies. In our model, the up state is terminated by a network oscillatory mechanism in which the inhibition following excitation destabilizes the up state, causing the network to return to a down state. The time scale of this process, which determines the average duration of the up states, depends on the synaptic time constants and can be controlled by the type of synaptic receptors used in the model. For example, the frequency of the slow oscillation in the regular regime increases to about 4 Hz, namely into the delta range, if only fast excitation and inhibition (AMPA and GABA_A) are included (data not shown).
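How the synaptic time constant sets the up-state duration can be seen in a rate-model caricature, not the spiking network itself. Here the excitatory rate is held constant while inhibition builds toward it with a synaptic time constant; the threshold and both time scales are illustrative assumptions:

```python
# Caricature of up-state termination by slowly accumulating inhibition.
# While the network is 'up' the excitatory rate is held at 1 and the
# inhibition i integrates toward it with time constant tau_i; the up
# state ends when i crosses theta.  All numbers are illustrative only.

def up_state_duration(tau_i_ms, theta=0.8, dt=0.01):
    """Return the up-state duration in ms; analytically it equals
    tau_i * ln(1 / (1 - theta))."""
    i, t = 0.0, 0.0
    while i < theta:
        i += dt * (1.0 - i) / tau_i_ms   # di/dt = (e - i)/tau_i with e = 1
        t += dt
    return t

fast = up_state_duration(tau_i_ms=10.0)    # fast (AMPA/GABA_A-like) scale
slow = up_state_duration(tau_i_ms=100.0)   # slow (e.g., GABA_B-like) scale
print(fast, slow)  # slower synapses give proportionally longer up states
```

The duration scales linearly with the inhibitory time constant, consistent with the observation that restricting the model to fast receptors shortens the up states and raises the oscillation frequency.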

Response to sensory stimulation
Large fluctuations of the membrane potential can affect the response to sensory stimulation. In rat barrel cortex, if the stimulus occurs while the potential is in an up state, both subthreshold and spiking responses are suppressed relative to the response to a stimulus arriving during a down state. Possible sources of this phenomenon include network and neuronal factors. The strong network activity during the up state increases the conductance, leading to shunting of the EPSPs. Short-term depression could also have a suppressive role because it acts during the up state (a model of the up-down dynamics based on synaptic short-term depression was proposed in Holcman and Tsodyks, 2006). In addition, differences in the strength of the driving forces between the two states and in the value of the threshold for action potentials could also contribute to the different responses (Sachdev et al., 2004).
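The shunting and driving-force arguments can be combined in a back-of-the-envelope estimate. The formula below is the standard steady-state deflection for a weak synapse on a passive membrane; the conductance and voltage values are hypothetical, chosen only to contrast the two states:

```python
# Back-of-the-envelope shunting estimate: the steady-state deflection
# produced by a small synaptic conductance is dV ~ g_syn*(E_syn - V)/g_total.
# All numbers (nS, mV) are illustrative, not taken from the model.

def epsp_amplitude(g_syn, e_syn, v_m, g_total):
    """Approximate EPSP peak for a weak synapse on a passive membrane."""
    return g_syn * (e_syn - v_m) / g_total

# Down state: low total conductance, hyperpolarized membrane.
epsp_down = epsp_amplitude(g_syn=1.0, e_syn=0.0, v_m=-75.0, g_total=10.0)
# Up state: synaptic bombardment both raises g_total (shunting) and
# depolarizes the membrane (smaller driving force).
epsp_up = epsp_amplitude(g_syn=1.0, e_syn=0.0, v_m=-60.0, g_total=30.0)
print(epsp_down, epsp_up)  # 7.5 2.0
```

With these numbers the same synaptic conductance produces a deflection several times smaller in the up state, because the two suppressive factors (higher total conductance and reduced driving force) multiply.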
In cats, the response in the up state is the stronger (Haider et al., 2007), and the model reproduces this phenomenon. In the model, the bias in the strength of the response toward either the down or the up state is due to the difference in the relative sizes of the excitable phases of those states. In turn, this difference depends on the strength of the synaptic conductances. We have studied this issue only in our regular network, finding that, in this regard, it predicts a response similar to the findings in cats. It is an open question whether the model with different parameter values can also explain the findings in rat barrel cortex, or whether it is necessary to include effects such as short-term depression, which was not considered in the model.

CONCLUSIONS
In summary, a network model built from integrate-and-fire neurons augmented with a nonlinear membrane current and connected sparsely through slow and fast excitatory and inhibitory synaptic conductances can capture much of the phenomenology of up and down states in cortical slices and in vivo recordings. The model suggests that examining the bistable properties that arise when network effects interact with intrinsic conductances would be an interesting way to explore experimentally what appears to be an important element of up-down state transitions.