Emerging phenomena in neural networks with dynamic synapses and their computational implications

In this paper we review our research on the effect and computational role of dynamical synapses in feed-forward and recurrent neural networks. Among other results, we report the appearance of a new class of dynamical memories resulting from the destabilization of learned memory attractors. This has important consequences for dynamic information processing, allowing the system to sequentially access the information stored in the memories under changing stimuli. Although the storage capacity of stable memories also decreases, our study demonstrated the positive effect of synaptic facilitation in recovering the maximum storage capacity and in enlarging the capacity of the system for memory recall in noisy conditions. Possibly, this new dynamical behavior can be associated with the voltage transitions between up and down states observed in cortical areas of the brain. We investigated the conditions under which the permanence times in the up state are power-law distributed, which is a sign of criticality, and concluded that the experimentally observed large variability of permanence times could be explained as the result of noisy dynamic synapses with large recovery times. Finally, we report how short-term synaptic processes can transmit weak signals over more than one frequency range in noisy neural networks, displaying a kind of stochastic multi-resonance. This effect is due to the competition between activity-dependent synaptic fluctuations (due to dynamic synapses) and the existence of a neuron firing threshold that adapts to the incoming mean synaptic input.


INTRODUCTION
In recent decades many experimental studies have reported that the transmission of information through synapses is strongly influenced by recent presynaptic activity, in such a way that the postsynaptic response can decrease (synaptic depression) or increase (synaptic facilitation) on short time scales under repeated stimulation (Abbott et al., 1997; Tsodyks and Markram, 1997). In cortical synapses it was found that, after induction of long-term potentiation (LTP), the temporal synaptic response was not uniformly increased. Instead, the amplitude of the initial postsynaptic potential was potentiated, whereas the steady-state synaptic response was unaffected by LTP (Markram and Tsodyks, 1996).
From a biophysical point of view it is well accepted that short-term synaptic plasticity, including synaptic depression and facilitation, has its origin in the complex dynamics of release, transmission, and recycling of neurotransmitter vesicles at the synaptic boutons (Pieribone et al., 1995). In fact, synaptic depression occurs when the arrival of presynaptic action potentials (APs) at high frequency does not allow efficient recovery, on short time scales, of the available neurotransmitter vesicles to be released near the cell membrane (Zucker, 1989; Pieribone et al., 1995). This causes a decrease of the postsynaptic response to successive APs. Other possible mechanisms responsible for synaptic depression have been described, including feedback activation of presynaptic receptors and postsynaptic processes such as receptor desensitization (Zucker and Regehr, 2002). On the other hand, synaptic facilitation is a consequence of the residual cytosolic calcium that remains inside the synaptic boutons after the arrival of the first APs, which favors the release of more neurotransmitter vesicles for the next arriving AP (Bertram et al., 1996). This increase in released neurotransmitter causes a potentiation of the postsynaptic response, that is, synaptic facilitation. Clearly, strong facilitation causes a fast depletion of the available vesicles, so in the end it also induces a strong depressing effect. Other possible mechanisms responsible for short-term synaptic plasticity include, for instance, glial-neuronal interactions (Zucker and Regehr, 2002).
In two seminal papers (Tsodyks and Markram, 1997; Abbott et al., 1997), a simple phenomenological model based on these biophysical principles was proposed, which nicely fits the evoked postsynaptic responses observed in cortical neurons. The model is characterized by three variables x_j(t), y_j(t), z_j(t) that follow the dynamics

dx_j(t)/dt = z_j(t)/τ_rec − U_j(t) x_j(t) δ(t − t_j^sp),
dy_j(t)/dt = −y_j(t)/τ_in + U_j(t) x_j(t) δ(t − t_j^sp),      (1)
dz_j(t)/dt = y_j(t)/τ_in − z_j(t)/τ_rec,

where y_j(t) is the fraction of neurotransmitter released into the synaptic cleft after the arrival of an AP at time t_j^sp, x_j(t) is the fraction of neurotransmitter that has recovered near the cell membrane after the previous arrival of an AP, and z_j(t) is the fraction of inactive neurotransmitter. The model assumes conservation of the total amount of neurotransmitter resources in time, so that x_j(t) + y_j(t) + z_j(t) = 1. The released neurotransmitter inactivates with time constant τ_in and the inactive neurotransmitter recovers with time constant τ_rec. The synaptic current received by a postsynaptic neuron from its neighbors is then defined as I_i(t) = Σ_j A_ij y_j(t), where A_ij represents the maximum synaptic current evoked in the postsynaptic neuron i by an AP from presynaptic neuron j, which in cortical neurons is around 40 pA.
For constant release probability U_j, the model describes the basic mechanism of synaptic depression. The model is completed to account for synaptic facilitation by considering that U_j(t) transiently increases with each arriving AP, as a consequence of the residual cytosolic calcium that remains after the arrival of consecutive APs, and relaxes back to its baseline value U, following the dynamics

dU_j(t)/dt = [U − U_j(t)]/τ_fac + U [1 − U_j(t)] δ(t − t_j^sp).      (2)

Short-term synaptic plasticity has profound consequences for information transmission by individual neurons as well as for network functioning and behavior. Previous works have shown this on both feed-forward and recurrent networks. For instance, in feed-forward networks activity-dependent synapses act as non-linear filters in supervised learning paradigms (Natschläger et al., 2001), being able to extract statistically significant features from noisy and variable temporal patterns (Liaw and Berger, 1996). For recurrent networks, several studies revealed that populations of excitatory neurons with depressing synapses exhibit complex regimes of activity (Senn et al., 1996; Tsodyks et al., 1998, 2000; Bressloff, 1999; Kistler and van Hemmen, 1999), such as short intervals of highly synchronous activity (population bursts) intermittent with long periods of asynchronous activity, as is observed in neurons throughout the cortex. Related to this, it was proposed (Senn et al., 1996, 1998) that synaptic depression may serve as a mechanism for rhythmic activity and central pattern generation. Also, recent studies on rate models have reported the importance of dynamic synapses in the emergence of persistent activity after removal of a stimulus, which is the basis of so-called working memories (Barak and Tsodyks, 2007); in particular, the relevant role of synaptic facilitation, mediated by residual calcium, as the main mechanism responsible for the appearance of working memories has also been reported (Mongillo et al., 2008).
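As a concrete illustration, Equations (1, 2) can be integrated numerically. The following is a minimal Python sketch using simple Euler stepping; the function name and all parameter values are illustrative assumptions, not taken from the cited references:

```python
import numpy as np

def tm_synapse(spike_times, tau_rec=800.0, tau_in=3.0, tau_fac=500.0,
               U=0.1, A=40.0, dt=0.1, t_max=1000.0):
    """Euler integration of the Tsodyks-Markram model with facilitation.

    x, y, z: recovered, active and inactive neurotransmitter fractions.
    u: release probability, transiently increased by each presynaptic AP.
    Returns the time axis (ms) and the postsynaptic current A*y(t) (pA).
    """
    n = int(t_max / dt)
    t = np.arange(n) * dt
    x, y, z, u = 1.0, 0.0, 0.0, U
    spikes = set(np.round(np.asarray(spike_times) / dt).astype(int))
    current = np.empty(n)
    for k in range(n):
        if k in spikes:
            released = u * x          # AP arrival: release u*x of the resources
            x -= released
            y += released
            u += U * (1.0 - u)        # facilitation: favors release for later APs
        # continuous relaxation between spikes (conserves x + y + z = 1)
        dx = z / tau_rec
        dy = -y / tau_in
        dz = y / tau_in - z / tau_rec
        du = (U - u) / tau_fac
        x, y, z, u = x + dt * dx, y + dt * dy, z + dt * dz, u + dt * du
        current[k] = A * y
    return t, current
```

Feeding the function a regular spike train shows the characteristic interplay of the two mechanisms: the first responses grow (facilitation) and later ones shrink (depletion of x).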
All these phenomena have stimulated much research to elucidate the effect and possible functional role of short-term synaptic plasticity. In this paper we review our own efforts in this research field over the last decade. In particular, we have demonstrated both theoretically and numerically the appearance of different non-equilibrium phases in attractor networks as a consequence of the underlying noisy activity in the network and of the existence of synaptic plasticity (see section 2). The emergent phenomenology in such networks includes a high sensitivity of the network to changing stimuli and a new phase in which dynamical attractors, or dynamical memories, appear, with the possibility of regular and chaotic behavior and rapid "switching" between different memories (Cortes et al., 2004, 2006; Torres et al., 2005, 2008; Marro et al., 2007). The origin of these new phases, and of the extraordinary sensitivity of the system to varying inputs even in the memory phase, is precisely the "fatigue" of the synapses under heavy presynaptic activity, which competes with different sources of noise and induces a destabilization of the otherwise stable memory attractors. One of the main consequences of this behavior is the strong influence of short-term synaptic plasticity on the storage capacity of such networks (Mejias and Torres, 2009), as we explain in section 3.
The switching behavior is characterized by a typical time scale during which each memory is retained. The distribution of these retention times depends in a complex way on the parameters of the dynamical synapse model and is the result of a phase transition. We have investigated the conditions for the appearance of power-law behavior in the probability distribution of the permanence times in the up state, which is a sign of criticality (see section 4). This dynamical behavior has been associated (Holcman and Tsodyks, 2006) with the empirically observed transitions between states of high activity (up states) and low activity (down states) in the mammalian cortex (Steriade et al., 1993a,b).
The enhanced sensitivity of neural networks with dynamic synapses to external stimuli could provide a mechanism to detect relevant information in weak, noisy external signals. This can be viewed as a form of stochastic resonance (SR), the general phenomenon by which noise enhances the detection of weak signals by a non-linear dynamical system. Recent experiments in auditory cortex have shown that synaptic depression improves the detection of weak signals through SR over a larger noise range (Yasuda et al., 2008). In a feed-forward network model of spiking neurons, we have modeled these experimental findings (Torres et al., 2011). We demonstrated theoretically and numerically that, in fact, short-term synaptic plasticity together with non-linear neuron excitability induces a new type of SR in which there are multiple noise levels at which weak signals can be detected by the neuron. We have called this novel phenomenon bimodal stochastic resonance, or stochastic multiresonance (see section 5), and, very recently, we have shown that this intriguing phenomenon occurs not only in feed-forward neural networks but also in recurrent attractor networks (Pinamonti et al., 2012).

APPEARANCE OF DYNAMICAL MEMORIES
In this section we review our work on the appearance of dynamical memories in attractor neural networks with dynamical synapses, as originally reported in (Torres et al., 2002, 2008; Mejias and Torres, 2009). For simplicity, and in order to obtain straightforward mean-field derivations, we consider the case of a network of N binary neurons (Hopfield, 1982; Amit, 1989). However, we emphasize that the same qualitative behavior emerges in networks of integrate-and-fire (IF) neurons.
Each neuron in the network, whose state is s_i = 1, 0 depending on whether the neuron fires an AP or not, receives at time t from its neighboring neurons a total synaptic current, or local field, given by

h_i(t) = Σ_j ω_ij(t) s_j(t),      (3)

where ω_ij(t) is the synaptic current received by the postsynaptic neuron i from the presynaptic neuron j when the latter fires an AP (s_j(t) = 1). If the synaptic current to neuron i, h_i(t), is larger than some neuron threshold value θ_i, neuron i fires an AP with a probability that depends on the intrinsic noise present in the network. The noise is commonly modeled as a thermal bath at temperature T. We assume parallel dynamics (Little dynamics) using the probabilistic rule

P{s_i(t + 1) = σ} = σ Φ_i(t) + (1 − σ)[1 − Φ_i(t)],  Φ_i(t) ≡ ½ {1 + tanh[2β(h_i(t) − θ_i)]},      (4)

with σ = 1, 0 and β ≡ 1/T.
To account for short-term synaptic plasticity in the network we consider

ω_ij(t) = ω_ij D_j(t) F_j(t),      (5)

where D_j(t) and F_j(t) are dynamical variables representing the synaptic depression and synaptic facilitation mechanisms, respectively. The constants ω_ij denote static maximal synaptic conductances that contain information concerning a number P of random patterns of neural activity, or memories, ξ^μ ≡ {ξ_i^μ = 1, 0; i = 1, ..., N; μ = 1, ..., P}, previously learned and stored in the network. Such static memories can be achieved in actual neural systems by LTP or depression of the synapses due to network stimulation with these memories. For concreteness, we assume here that these weights are the result of a Hebbian-like learning process that takes place on a time scale that is long compared with the dynamical time scales of the neurons and the dynamical synapses. The Hebbian learning takes the form

ω_ij = [1/(N a(1 − a))] Σ_μ (ξ_i^μ − a)(ξ_j^μ − a),      (6)

also known as the covariance learning rule, with a = ⟨ξ_i^μ⟩ representing the mean level of activity in the patterns. It is well known that a recurrent neural network with synapses (Equation 6) acts as an associative memory (Amit, 1989). That is, the stored patterns ξ^μ become local minima of the free energy and, within the basin of attraction of each memory, the neural dynamics (Equation 4) drives the network activity toward this memory. Thus, appropriate stimulation of (a subset of) the neurons that are active in a stored pattern initiates a memory recall process in which the network converges to the memory state.
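The covariance learning rule above is straightforward to implement. A minimal sketch (the function name and array layout are our own choices):

```python
import numpy as np

def covariance_weights(patterns, a):
    """Hebbian covariance rule: w_ij = (1/(N a (1-a))) sum_mu (xi_i - a)(xi_j - a)."""
    P, N = patterns.shape              # patterns: (P, N) array of 0/1 memories
    dev = patterns - a                 # deviations from the mean activity level a
    w = dev.T @ dev / (N * a * (1.0 - a))
    np.fill_diagonal(w, 0.0)           # no self-connections
    return w
```

The resulting weight matrix is symmetric with zero diagonal, as expected for a Hebbian covariance rule.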
To model the dynamics of the synaptic depression D_j(t) and facilitation F_j(t), we simplify the phenomenological model of dynamic synapses described by Equations (1, 2), taking into account that in actual neural systems such as the cortex τ_in ≪ τ_rec, which implies that y_j(t) ≈ 0 most of the time and only at the exact time at which an AP arrives takes the non-zero value y_j(t^sp) = x_j(t^sp) U_j(t^sp). Thus, the synaptic current evoked in the postsynaptic neuron i by a presynaptic neuron j every time the latter fires is approximately ω_ij x_j(t) U_j(t)/U, which has the form given by Equation (5) with D_j(t) = x_j(t) and F_j(t) = U_j(t)/U. We set U = 1, without loss of generality, in order to have D_j(t) = F_j(t) = 1 ∀ j, t for τ_rec, τ_fac ≪ 1, which corresponds to the well-known limit of static synapses without depressing and facilitating mechanisms. In this limit, in fact, one recovers the classical Amari-Hopfield model of associative memory (Amari, 1972; Hopfield, 1982) when one chooses the neuron thresholds as θ_i = ½ Σ_j ω_ij. It is important to point out that, due to the discrete nature of the probabilistic neuron dynamics (Equation 4) together with the approximation τ_in ≪ τ_rec, only discrete versions of the dynamics for x_j(t) and U_j(t) are needed here, namely

x_j(t + 1) = x_j(t) + [1 − x_j(t)]/τ_rec − U_j(t) x_j(t) s_j(t),      (7)
U_j(t + 1) = U_j(t) + [U − U_j(t)]/τ_fac + U [1 − U_j(t)] s_j(t).      (8)

Equations (4-8) completely define the dynamics of the network. Note that in the limit τ_rec, τ_fac → 0 the model reduces to the standard Amari-Hopfield model with static synapses. To numerically and analytically study the emergent behavior of this attractor neural network with dynamical synapses, it is useful to measure the degree of correlation between the current network state s ≡ {s_i; i = 1, ..., N} and each one of the stored patterns ξ^μ by means of the overlap function

m^μ(t) ≡ [1/(N a(1 − a))] Σ_i (ξ_i^μ − a) s_i(t).      (9)

Monte Carlo simulations of the network storing a small number of random patterns (loading parameter α ≡ P/N → 0), each pattern having 50% active neurons (a = 0.5), with no facilitation (U_j(t) = 1) and an intermediate value of τ_rec, are shown in Figures 1A,B.
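Putting the network dynamics together, a Monte Carlo simulation of this model takes only a few lines. The following Python sketch uses purely depressing synapses (U = 1, no facilitation), a single stored pattern, the standard tanh form of the parallel-update firing rule for binary networks, and illustrative parameter values:

```python
import numpy as np

def run_network(N=500, T=0.05, tau_rec=50.0, U=1.0, steps=500, seed=0):
    """Parallel (Little) dynamics of a binary attractor network with
    depressing synapses; returns the overlap time series m(t)."""
    rng = np.random.default_rng(seed)
    a = 0.5
    xi = (rng.random(N) < a).astype(float)            # one stored 0/1 pattern
    dev = xi - a
    w = np.outer(dev, dev) / (N * a * (1.0 - a))      # covariance rule, P = 1
    np.fill_diagonal(w, 0.0)
    theta = 0.5 * w.sum(axis=1)                       # thresholds of the static limit
    beta = 1.0 / T
    s = xi.copy()                                     # start on the pattern
    x = np.ones(N)                                    # depression variables D_j = x_j
    overlaps = np.empty(steps)
    for t in range(steps):
        h = w @ (x * s)                               # local fields with w_ij * x_j
        p_fire = 0.5 * (1.0 + np.tanh(2.0 * beta * (h - theta)))
        s = (rng.random(N) < p_fire).astype(float)    # probabilistic parallel update
        x += (1.0 - x) / tau_rec - U * x * s          # discrete depression dynamics
        overlaps[t] = dev @ s / (N * a * (1.0 - a))   # overlap with the pattern
    return overlaps
```

For small τ_rec the overlap stays pinned near 1 (static-synapse retrieval), while for intermediate τ_rec and low T the trace switches between pattern and anti-pattern values, as in Figure 1.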
The simulations show a new phase in which dynamical memories, characterized by quasi-periodic switching of the network activity between pattern (ξ^μ) and anti-pattern (1 − ξ^μ) configurations, appear. For lower values of τ_rec the network reduces to the attractor network with static synapses and shows the emergence of the traditional ferromagnetic or associative-memory phase at relatively low T, where the network activity reaches a steady state that is highly correlated with one of the stored patterns, and a paramagnetic or no-memory phase at high T, where the network activity reaches a highly fluctuating disordered steady state.
Figure 1C shows simulation results for a network with P = 10 patterns and a = 0.1, demonstrating that the switching behavior is also obtained for a relatively large number of patterns and sparse network activity. Figure 2B shows that the switching behavior is not an artifact of the binary neuron dynamics, as it is also obtained in a more realistic network of spiking integrate-and-fire neurons.
All time constants, such as τ_rec or τ_fac, are given in units of Monte Carlo steps (MCS), a temporal unit that in actual systems can be associated, for instance, with the duration of the refractory period and is therefore of the order of 5 ms.
In the limit N → ∞ (thermodynamic limit) and α → 0 (finite number of patterns), the emergent behavior of the model can be studied analytically within a standard mean-field approach [see (Torres et al., 2008) for details]. The dynamics of the system is then described by a 6P-dimensional discrete map

v(t + 1) = F[v(t)],      (10)

where v(t) is the vector of order parameters and F is a 6P-dimensional non-linear function. The order parameters are averages of the microscopic dynamical variables over the sites that are, respectively, active and quiescent in a given pattern μ, with c_i(t) being m_i(t) ≡ ⟨s_i(t)⟩, x_i(t), and U_i(t), respectively. Local stability analysis of the fixed-point solutions of the dynamics (Equation 10) shows that, similarly to the standard Amari-Hopfield model and in agreement with the Monte Carlo simulations described above, the stored memories ξ^μ are stable attractors in some regions of the space of the relevant parameters, such as T, U, τ_rec, and τ_fac. Varying these parameters, however, there are critical values at which the memories destabilize and an oscillatory regime, in which the network visits the different memories, can emerge. These critical values are depicted in Figures 2A,C,D in the form of transition lines between phases or dynamical behaviors of the system. For instance, for purely depressing synapses (τ_fac = 0, U_j(t) = 1) there is a critical monotonic line τ*_rec(T^−1), as in a second-order phase transition, separating the no-memory phase from the oscillatory phase (solid line in Figure 2A), where oscillations start to appear with small amplitude as in a supercritical Hopf bifurcation. There is also a transition line τ**_rec(T^−1), likewise monotonic, between the oscillatory phase and the memory phase, which occurs sharply as in a first-order phase transition (dashed line in Figure 2A). When facilitation is included the picture is more complex, although similar critical and sharp transition lines appear separating the same phases.

FIGURE 2 | (A) Phase diagram (τ_rec, β ≡ T^−1) of an attractor binary neural network with depressing synapses for α = 0. A new phase in which dynamical memories appear, with the network activity switching between the different memory attractors, emerges between the traditional memory and no-memory phases that characterize the behavior of attractor neural networks with static synapses. (B) The emergent behavior depicted in (A) is robust when a more realistic attractor network of IF neurons and more stored patterns (5 in this simulation) are considered. From top to bottom, the behavior of the network activity is depicted for τ_rec = 0, 300, 800, and 3000 ms, respectively. For some level of noise the network activity passes from the memory phase to the dynamical phase, and from this to the no-memory phase, as τ_rec is increased. (C) Phase diagram (T, τ_fac) for τ_rec = 3 and U = 0.1 of an attractor binary neural network with short-term depression and facilitation mechanisms in the synapses and α = 0. (D) Phase diagram (τ_rec, τ_fac) for T = 0.1 and U = 0.1 in the same system as in (C). Both (C,D) depict the appearance of the same memory, oscillatory, and no-memory phases as in the case of depressing synapses. The transition lines between the different phases, however, here show a clear non-linear and non-monotonic dependence on the relevant parameters, a consequence of the non-trivial competition between the depression and facilitation mechanisms. This is most remarkable in (C), where for a given level of noise, namely T = 0.22 (horizontal dotted line), increasing the facilitation time constant τ_fac drives the activity of the network from a no-memory state to a memory state, from this to a no-memory state again, and finally from the latter to an oscillatory regime.
Now, however, the lines separating the different phases are non-monotonic and highly non-linear, which reflects the competition between the a priori opposite mechanisms of depression and facilitation, as depicted in Figures 2C,D. In fact, among other features, synaptic depression induces fatigue at the synapses, which destabilizes the attractors, whereas synaptic facilitation allows fast access to the memory attractors while shortening the time spent in them (Torres et al., 2008). As in Figure 1, in all the phase diagrams of Figure 2, τ_rec and τ_fac are given in MCS units (see above), with a value for that temporal unit of around the typical duration of the refractory period in actual neurons (∼5 ms). The attractor behavior of the recurrent neural network has the important property of completing a memory from partial or noisy stimulus information. In this section we have seen that memories that are stable with static synapses become metastable with dynamical synapses, inducing a switching behavior among memory patterns in the presence of noise. In this manner, dynamic synapses provide the associative memory with a natural mechanism to dissociate from one memory in order to associate with a new memory pattern. In contrast, with static synapses the network would stay in the stable memory state forever, preventing the recall of new memories. Thus, for certain ranges of the parameters, dynamic synapses change stable memories into metastable memories.
Frontiers in Computational Neuroscience www.frontiersin.org April 2013 | Volume 7 | Article 30

STORAGE CAPACITY
It is important to analyze how short-term synaptic plasticity affects the maximum number of patterns of neural activity that the system is able to store and efficiently recall, that is, the so-called maximum storage capacity. In a recent paper (Mejias and Torres, 2009) we addressed this important issue using a standard mean-field approach in the model described by Equations (3-8), with P = αN random activity patterns stored, α > 0 and N → ∞, a = 1/2, and in the absence of noise (T = 0). In fact, for very low temperature (T ≪ 1), after conveniently redefining the overlaps one can follow standard techniques (Hertz et al., 1991) to see that the steady state of the system is described by a set of mean-field equations involving the overlap M of the network activity with the pattern being retrieved, the spin-glass order parameter q, and the pattern interference parameter r = (1/α) Σ_{ν>1} (M^ν)²; the interference of the non-condensed patterns enters as an effective Gaussian noise ζ (Mejias and Torres, 2009). The average (1/N) Σ_i appearing in Equation (13) then becomes an average over the distribution P(ζ). Using standard techniques in the limit T = 0 (Hertz et al., 1991), the resulting set of three mean-field equations reduces to a single mean-field equation, Equation (14), which gives the maximum number of patterns that the system is able to store and retrieve. Its solutions are written in terms of a rescaled variable y, proportional to the overlap M of the current network state with the pattern being retrieved and normalized by the interference noise, which depends on α, r, and the synaptic parameters (see the mathematical details in Mejias and Torres, 2009). Equation (14) has a trivial solution y = 0 (M = 0). Non-zero solutions (with non-zero overlap M) exist for α below some critical value, which defines the maximum storage capacity α_c of the system. A complete study of the system by means of Monte Carlo simulations (in a network with N = 3000 neurons) has demonstrated the validity of this mean-field result, as depicted in Figure 3A. The figure shows the behavior of α_c obtained from Equation (14) (solid lines) as some relevant parameters of the synapse dynamics are varied, compared with the maximum storage capacity obtained from the Monte Carlo simulations (symbols).
The most remarkable feature is that, in the absence of facilitation, the storage capacity decreases when the level of depression increases (that is, for large release probability U or large recovery time τ_rec); see the black curves in the top and middle panels of Figure 3A. This decrease is caused by the loss of stability of the memory fixed points of the network due to depression. Facilitation (see the dark and light gray curves) allows the network to recover the maximal storage capacity of static synapses, which is the well-known limit α_c ≈ 0.14 (dotted horizontal line), in the presence of some degree of synaptic depression. In general, the competition between synaptic depression and facilitation induces a complex non-linear and non-monotonic behavior of α_c as the synaptic dynamics parameters are varied, as shown in the different panels of Figure 3B. Large values of α_c appear for moderate values of U and τ_rec, and large values of τ_fac. These values qualitatively agree with those described for facilitating synapses in some cortical areas, where U is lower than in the case of depressing synapses and τ_rec is several times smaller than τ_fac. Note that facilitation or depression never increases the storage capacity of the network above the maximum value α_c ≈ 0.14.
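As a numerical sanity check of the static-synapse limit α_c ≈ 0.14, retrieval can be measured directly at T = 0 with deterministic parallel updates. A sketch (network size, step counts, and the sweep strategy are illustrative assumptions):

```python
import numpy as np

def retrieval_overlap(N, P, seed=0, steps=30):
    """Overlap with pattern 0 after deterministic (T = 0) parallel updates,
    starting from the pattern itself; static synapses, covariance rule."""
    rng = np.random.default_rng(seed)
    a = 0.5
    xi = (rng.random((P, N)) < a).astype(float)
    dev = xi - a
    w = dev.T @ dev / (N * a * (1.0 - a))
    np.fill_diagonal(w, 0.0)
    theta = 0.5 * w.sum(axis=1)
    s = xi[0].copy()
    for _ in range(steps):
        s = (w @ s > theta).astype(float)   # T = 0 limit of the firing rule
    return abs(dev[0] @ s / (N * a * (1.0 - a)))
```

Sweeping P at fixed N until the final overlap drops gives a rough numerical estimate of α_c: well below capacity (e.g., α = 0.02) retrieval is essentially perfect, while far above it (e.g., α = 0.5) the memory is lost.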

CRITICALITY IN UP-DOWN TRANSITIONS
In a recent paper (Holcman and Tsodyks, 2006), the emergent dynamic memories described in section 2, which result from short-term plasticity, were related to the voltage transitions observed in cortex between a high-activity state (the up state) and a low-activity state (the down state). These transitions have been observed in simultaneous individual single-neuron recordings as well as in local field measurements.
FIGURE 3 | Dependence of the maximum storage capacity α_c, obtained from Equation (14), on different combinations of the relevant parameters. This corresponds, from top to bottom, to the surfaces α_c(U, τ_fac) for τ_rec = 2, α_c(U, τ_fac) for τ_rec = 50, and α_c(τ_rec, τ_fac) for U = 0.02. In all panels, τ_rec and τ_fac are given in MCS units, which can be associated with a value of 5 ms if one assumes that an MCS corresponds to the duration of the refractory period in actual neurons.

Using a simple but biologically plausible neuron and synapse model, similar to the models described in sections 1 and 2, we have theoretically studied the conditions for the emergence of this intriguing behavior, as well as its temporal features (Mejias et al., 2010). The model consists of a simple stochastic bistable rate model which mimics the average dynamics of a population of interconnected excitatory neurons. The neural activity is summarized by a single activity variable ν(t), whose dynamics follows the stochastic mean-field equation

τ_ν dν(t)/dt = −ν(t) + ν_m S[J x(t) ν(t) − θ],      (15)
where τ_ν is the time constant for the neuron dynamics, ν_m is the maximum synaptic input to the neuron population, J is the (static) synaptic strength, and θ is the neuron threshold. The function S[X] is a sigmoidal function which models the excitability of the neurons in the population. The synaptic input from other neurons is modulated by a short-term dynamic synaptic process x(t), which satisfies the stochastic mean-field equation

dx(t)/dt = [1 − x(t)]/τ_r − U x(t) ν(t) + D η(t),      (16)

where η(t) is a Gaussian white noise of zero mean and unit variance. The parameters τ_r, U, and D are, respectively, the recovery time constant of the stochastic short-term synaptic plasticity mechanism, a parameter related to the reliability of synaptic transmission (the average release probability in the population), and the amplitude of this synaptic noise. Each term on the rhs of Equation (16) has a clear interpretation: the first accounts for the slow recovery of the neurotransmitter resources, the second represents the decrease of available neurotransmitter due to the level of activity in the population, and the third is a noise term that accounts for all the possible sources of noise affecting the transmission of information at the synapses of the population and that remain at the mesoscopic level.
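The coupled activity and synapse equations can be integrated with a simple Euler-Maruyama scheme. A minimal sketch, where the sigmoid choice and all parameter values are illustrative assumptions (not those of Mejias et al., 2010), and whether up-down transitions actually appear depends on the parameter region, as in Figure 4E:

```python
import numpy as np

def simulate_rate_model(t_max=2000.0, dt=0.05, tau_nu=10.0, tau_r=400.0,
                        nu_m=100.0, J=1.0, U=0.05, D=0.05, theta=20.0, seed=0):
    """Euler-Maruyama integration of the coupled activity/synapse equations."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    nu, x = 1.0, 1.0
    trace = np.empty(n)
    sq_dt = np.sqrt(dt)
    for k in range(n):
        S = 1.0 / (1.0 + np.exp(-(J * x * nu - theta)))    # sigmoidal excitability
        nu += dt * (-nu + nu_m * S) / tau_nu               # activity dynamics
        x += dt * ((1.0 - x) / tau_r - U * x * nu) \
             + D * sq_dt * rng.standard_normal()           # synaptic resources + noise
        x = min(max(x, 0.0), 1.0)                          # keep the fraction in [0, 1]
        trace[k] = nu
    return trace
```

Thresholding the resulting trace then allows one to classify the activity into up and down states and to measure their permanence times.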
A complete analysis of this model, both theoretical and by numerical simulations, shows the appearance of complex transitions between high-activity (up) and low-activity (down) neural states driven by the synaptic noise in x(t), with permanence times in the up state distributed according to a power law for some range of the synaptic dynamics parameters. The main results of this study are summarized in Figure 4. In Figure 4A, a typical time series of the mean neural activity ν(t) of the system is depicted, in the regime in which irregular up-down transitions occur. In Figure 4B, the histogram of ν(t) for this time series shows a clear bimodal shape corresponding to the only two possible states for ν(t). Figure 4C shows how the parameters τ_r and D, which control the stochastic dynamics of x(t), are also relevant for the appearance of power-law distributions P(T) of the permanence time T in the up or down state. As outlined in (Mejias et al., 2010), the dynamics can be approximately described in an adiabatic approximation, in which the neuron dynamics is subject to an effective potential that depends parametrically on the synaptic variable. Figure 4D shows how this potential changes for different values of the mean synaptic depression x.
For relatively small x (orange and brown lines) all the synapses in the population have a strong degree of depression and the population has a small level of activity; that is, the global minimum of the potential function is the low-activity state (the down state). On the other hand, when the synapses are only weakly depressed and x takes relatively large values (dark and light green lines), the neuron activity level is high and the potential function has its global minimum at a high-activity state (the up state). For intermediate values of x (black line) the potential becomes bistable. Figure 4E shows the complete phase diagram of the system and illustrates the regions in the parameter space (D, τ_r) where the different behaviors emerge. In phase (P) no transition between a high-activity state and a low-activity state occurs. In phase (E) the permanence times of the up and down states are exponentially distributed. Phase (C) is characterized by the emergence of power-law distributions P(T), and is therefore the most intriguing phase, since it could be associated with a critical state. Finally, phase (S) is characterized by a highly fluctuating behavior of both ν(t) and x(t). In fact, ν(t) behaves as a slave variable of x(t) and therefore presents the dynamical features of the dynamics (Equation 16), which has some similarities with colored noise for small U. Indeed, for U = 0, after the change of variable z(t) = x(t) − 1, the dynamics (Equation 16) transforms into that of an Ornstein-Uhlenbeck (OU) process (van Kampen, 1990). From these studies we can conclude that the experimentally observed large fluctuations of the up and down permanence times in the cortex can be explained as the result of sufficiently noisy dynamical synapses (large D) with sufficiently large recovery times (large τ_r). Static synapses (τ_r = 0) or dynamical synapses in the absence of noise (D = 0) cannot account for this behavior, and only exponential distributions P(T) emerge in those cases.
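Extracting the permanence-time distribution from a simulated or recorded activity trace is simple. A sketch (the thresholding convention is an assumption; any crossing criterion with hysteresis would work similarly):

```python
import numpy as np

def permanence_times(trace, threshold, dt=1.0):
    """Durations of consecutive runs above (up) and below (down) a threshold."""
    up = trace > threshold
    flips = np.flatnonzero(np.diff(up.astype(int)) != 0) + 1   # state-change indices
    bounds = np.concatenate(([0], flips, [len(trace)]))
    durations = np.diff(bounds) * dt                           # run lengths in time units
    states = up[bounds[:-1]]                                   # state of each run
    return durations[states], durations[~states]               # (up times, down times)
```

Plotting the histogram of the up times on log-log axes then reveals whether P(T) is closer to an exponential or to a power law.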

STOCHASTIC MULTIRESONANCE
In section 2 we mentioned that short-term synaptic plasticity induces the appearance of dynamic memories as a consequence of the destabilization of memory attractors due to synaptic fatigue. The synaptic fatigue, in turn, is due to strong neurotransmitter vesicle depletion as a consequence of high-frequency presynaptic activity and large neurotransmitter recovery times. We also concluded that this induces a high sensitivity of the system to external stimuli, even when the stimulus is very weak and in the presence of noise. The source of the noise can be the neural dynamics as well as the synaptic transmission. It is the combination of non-linear dynamics and noise that causes the enhanced sensitivity to external stimuli. This general phenomenon is the so-called stochastic resonance (SR) (Benzi et al., 1981; Longtin et al., 1991).
In a set of recent papers (Torres et al., 2011) we have studied the emergence of SR in feed-forward neural networks with dynamic synapses. We considered a postsynaptic neuron which receives signals from a population of N presynaptic neurons through dynamic synapses modeled by Equations (1, 2). Each of these presynaptic neurons fires a train of Poisson-distributed APs with a given frequency f_n. In addition, the postsynaptic neuron receives a weak signal S(t), which we can assume to be sinusoidal. We also assume a stationary regime, in which the dynamic synapses have reached their asymptotic values

u_∞ = U(1 + τ_fac f_n)/(1 + U τ_fac f_n)   and   x_∞ = 1/(1 + u_∞ τ_rec f_n).

If all the presynaptic neurons fire independently, the total synaptic current is a noisy quantity with mean Ī_N and variance σ²_N, both determined by I_p ≡ A u_∞ x_∞, with A the synaptic strength. To explore the possibility of SR, we vary the firing frequency f_n of the presynaptic population. The reason for this choice is that varying f_n changes the output variance σ²_N, and f_n can also be relatively easily controlled in an experiment.
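The asymptotic values above are easy to evaluate numerically. A minimal sketch (units and the default parameter values are illustrative assumptions):

```python
import numpy as np

def steady_state(f_n, U=0.1, tau_rec=0.5, tau_fac=0.5):
    """Asymptotic release probability u_inf and available resources x_inf
    for a Poisson presynaptic train of rate f_n (time constants in seconds)."""
    f_n = np.asarray(f_n, dtype=float)
    u_inf = U * (1.0 + tau_fac * f_n) / (1.0 + U * tau_fac * f_n)
    x_inf = 1.0 / (1.0 + u_inf * tau_rec * f_n)
    return u_inf, x_inf
```

With these default values, u_∞ grows from U toward 1 as f_n increases (facilitation), while x_∞ decays toward 0 (depression), so their product, and hence I_p = A u_∞ x_∞, is maximal at intermediate frequencies.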
To quantify the amount of signal present in the output rate we use the standard input-output cross-correlation or power norm (Collins et al., 1995), C₀ = ⟨S(t) ν(t)⟩, where ⟨·⟩ denotes an average over a time interval t and ν(t) is the firing rate of the postsynaptic neuron. The behavior of C₀ as a function of f_n for static synapses is depicted in Figure 5A, which clearly shows a resonance peak at a certain nonzero input frequency f_n. The output of the postsynaptic neuron at the positions in the frequency domain labeled "a," "b," and "c" is illustrated in Figure 5B and compared with the weak input signal. This shows how stochastic resonance emerges in this system. For a low firing frequency in the presynaptic population (case "a"), the generated current is so small that the postsynaptic neuron displays only sub-threshold behavior weakly correlated with S(t).
For very large f_n (case "c"), both Ī_N and σ²_N are large and the postsynaptic neuron fires all the time, so it cannot detect the temporal features of S(t). However, there is an optimal value of f_n at which the postsynaptic firing is strongly correlated with S(t); in fact, the neuron fires several APs each time a maximum of S(t) occurs (case "b").
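The power norm can be estimated directly from sampled traces. A minimal sketch, where the rate traces are synthetic stand-ins for the model output (the exact normalization used in the original papers may differ):

```python
import numpy as np

def power_norm(S, nu):
    """Zero-lag input-output cross-correlation C0 = <S(t) nu(t)> between the
    (mean-subtracted) weak input signal S and the postsynaptic firing rate nu."""
    S = np.asarray(S) - np.mean(S)
    nu = np.asarray(nu) - np.mean(nu)
    return float(np.mean(S * nu))

t = np.linspace(0.0, 10.0, 10_000)
S = 0.1 * np.sin(2.0 * np.pi * t)                        # weak sinusoidal input
rng = np.random.default_rng(1)
nu_corr = 5.0 + 20.0 * S + rng.normal(0.0, 0.5, t.size)  # rate locked to S (case "b")
nu_flat = 5.0 + rng.normal(0.0, 0.5, t.size)             # rate ignoring S (cases "a"/"c")
print(power_norm(S, nu_corr), power_norm(S, nu_flat))
```

A rate that follows S(t) yields a clearly positive C₀, while a rate that ignores the signal gives C₀ near zero, which is exactly what distinguishes the resonance peak from the flanks in Figure 5A.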
This behavior changes dramatically when dynamic synapses are considered, as depicted in Figures 5C,D. In fact, for dynamic synapses there are two frequencies at which resonance occurs. That is, short-term synaptic plasticity induces the appearance of stochastic multi-resonances (SMR). Interestingly, the position of the peaks is set by the parameters of the synapse dynamics. For instance, Figure 5C shows how, for a fixed value of facilitation, increasing depression (increasing τ_rec) moves the second resonance peak toward lower values of f_n while the position of the first resonance peak remains unchanged. On the other hand, for a given value of depression, increasing the facilitation time constant τ_fac moves the first resonance peak while the position of the second resonance peak is unaltered (see Figure 5D). This clearly demonstrates that in actual neural systems synapses with different levels of depression and facilitation can control signal processing at different frequencies.
The appearance of SMR in neural media with dynamic synapses is quite robust: SMR also appears when the postsynaptic neuron is modeled with different types of spiking mechanisms, such as the FitzHugh-Nagumo (FHN) model or the integrate-and-fire (IF) model with adaptive threshold dynamics. SMR also appears with more realistic stochastic dynamic synapses and more realistic weak signals, such as a train of inputs with small amplitude and short duration distributed in time according to a rate-modulated Poisson process.
The physical mechanism behind the appearance of SMR is the non-monotonic dependence of the synaptic current fluctuations on f_n, due to the dynamic synapses, together with an adaptive threshold in the postsynaptic neuron that tracks the incoming synaptic current. In this way, the distance in voltage between the mean postsynaptic subthreshold voltage and the firing threshold remains constant, or decreases very slowly, for increasing presynaptic frequencies. This implies the existence of two values of f_n at which the current fluctuations are strong enough to induce firing in the postsynaptic neuron [see Mejias and Torres (2011) for more details].
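The adaptive-threshold ingredient can be caricatured in a few lines: a threshold that slowly relaxes toward the running mean of the input, so that firing is driven by fluctuations around the mean rather than by the mean itself. This is a hypothetical toy, not the IF model of the original studies, and all parameter values are illustrative assumptions:

```python
import numpy as np

def fires(I, theta_fixed=1.0, tau_theta=50.0, dt=0.1):
    """Count 'spikes' for a fixed threshold vs. a threshold that adapts toward
    the running mean input: with adaptation only fluctuations above the mean fire."""
    theta = I[0]
    n_fixed = n_adapt = 0
    for i in I:
        if i > theta_fixed:
            n_fixed += 1
        if i > theta + 0.5:                      # adaptive: fire on excursions above mean
            n_adapt += 1
        theta += dt * (i - theta) / tau_theta    # theta slowly tracks the mean input
    return n_fixed, n_adapt

rng = np.random.default_rng(2)
I_low = rng.normal(0.5, 0.2, 50_000)   # low mean input, small fluctuations
I_high = rng.normal(5.0, 0.2, 50_000)  # high mean input, same fluctuations
n_low = fires(I_low)
n_high = fires(I_high)
print(n_low, n_high)
```

With a fixed threshold the high-mean input saturates the neuron (it fires on every sample), destroying any sensitivity to temporal structure, whereas the adaptive threshold keeps firing sparse and fluctuation-driven in both regimes, which is the property that allows a second resonance at large f_n.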
In light of these findings, we have reinterpreted recent SR data from psychophysical experiments on the human blink reflex (Yasuda et al., 2008). In these experiments the neurons responsible for the blink reflex receive inputs from neurons in the auditory cortex, which are assumed to be uncorrelated due to the action of an external source of white noise. The subject received in addition a weak signal in the form of a periodic small air puff into the eyes. The authors measured the correlation between the air-puff signal and the blink reflex; their results are plotted in Figure 6A (dark gray square error-bar symbols). They used a feed-forward neural network with a postsynaptic IF neuron with a fixed threshold to interpret their findings (light-gray dashed line). With this model, only the high-frequency correlation points can be fitted. Using instead an FHN model or an IF neuron with adaptive threshold dynamics, we were able to fit all experimental data points (black solid line). The SMR is also observed with more realistic rate-modulated weak Poisson pulses (light-gray filled circles) instead of the sinusoidal input (black solid line). Both model predictions are consistent with the SMR observed in these experiments. In Figure 6B we summarize the conditions that neurons and synapses must satisfy for the emergence of SMR in a feed-forward neural network.

RELATION WITH OTHER WORKS
The occurrence of non-fixed-point behavior in recurrent neural networks due to dynamic synapses has also been reported by others (Senn et al., 1996; Tsodyks et al., 1998; Dror and Tsodyks, 2000). These studies differ from ours in that they assume continuous deterministic neuron dynamics (instead of binary stochastic dynamics, as in our work). The oscillations observed in these networks do not show the rapid switching behavior that we observe and seem unrelated to the metastability that we have found in our work.
In addition, it has been reported that oscillations in the firing rate can be chaotic (Senn et al., 1996; Dror and Tsodyks, 2000) and present intermittent behavior that resembles observed EEG patterns. The chaotic regime in these continuous models seems unrelated to the existence of fixed-point behavior and is most likely understood as a generic feature of non-linear dynamical systems. It is worth noting that for each neuron the effect of dynamic synapses is modeled through a single variable x_i that multiplies the synaptic strength w_ij for all synapses that connect to neuron i. There is one depression variable per neuron, not per connection. As a result, one can obtain the same network behavior by interpreting x_i as implementing a dynamic firing threshold (Horn and Usher, 1989) instead of a dynamic synapse.
The switching behavior that we described in this paper is somewhat similar to that of a neural network of chaotic neurons that displays self-organized chaotic transitions between memories (Tsuda et al., 1987; Tsuda, 1992).
The interpretation of the switching behavior as up/down cortical transitions is controversial, because similar cortical oscillations can be generated without synaptic dynamics, with the up state terminated by hyperpolarizing potassium ionic currents (Compte et al., 2003). However, a very recent study has focused on the interplay between synaptic depression and these inhibitory currents and concludes that synaptic depression is relevant for maintaining the up state (Benita et al., 2012). The reason for this counterintuitive behavior is that synaptic depression decreases the firing rate in the up state, which in turn decreases the effect of the hyperpolarizing potassium currents and, as a consequence, prolongs the up state.
Also related is a recent study on the effect of dynamic synapses on the emergence of a coherent periodic rhythm within the up state, which results in the phenomenon of stochastic amplification (Hidalgo et al., 2012). It has been shown that this rhythm is an emergent or collective phenomenon, given that individual neurons in the up state are not locked to it.
The relation between dynamic synapses and storage capacity has also been studied by others. For very sparse stored patterns (a ≪ 1) it has been shown that storage capacity decreases with synaptic depression (Bibitchkov et al., 2002), in agreement with our findings. On the other hand, it has been reported that the basins of attraction of the memories are enlarged by synaptic depression (Matsumoto et al., 2007), and enlarged even more when synaptic facilitation is taken into account (Mejias and Torres, 2009). Otsubo et al. (2011) reported a theoretical and numerical study on the role of short-term depression on memory storage capacity in the presence of noise, showing that noise reduces the storage capacity (as is also the case for static synapses). Mejias et al. (2012) showed the important role of facilitation in enlarging the regions for memory retrieval, even in the presence of high noise.
In the last decade there has been some discussion about whether neural systems, or even the brain as a whole, can work in a critical state, in the sense of self-organized criticality (Beggs and Plenz, 2003; Tagliazucchi et al., 2012). As we stated in section 4, the combination of colored synaptic noise and short-term depression can cause power-law distributed permanence times in the up and down states, which is a signature of criticality. The emergence of critical phenomena as a consequence of dynamic synapses has also been explored by others (Levina et al., 2007, 2009; Bonachela et al., 2010; Millman et al., 2010).
Finally, it is worth mentioning a recent work that has investigated the formation of spatio-temporal structures in an excitatory neural network with depressing synapses (Kilpatrick and Bressloff, 2010). As a result of dynamic synapses, robust complex spatio-temporal structures, including different types of travelling waves, appear in such a system.

CONCLUSIONS
It is well known that during the transmission of information synapses show a high variability with diverse origins, such as the stochastic release and transmission of neurotransmitter vesicles, variations in the glutamate concentration across synapses, and the spatial heterogeneity of the synaptic response in the dendritic tree (Franks et al., 2003). The cooperative effect of all these mechanisms is a noisy postsynaptic response which depends on past presynaptic activity. The strength of the postsynaptic response can decrease or increase, and this behavior can be modeled with dynamical synapses.
In a large number of papers we have studied the effect of dynamical synapses in recurrent and feed-forward networks, the results of which we have summarized in this paper. The main findings are the following:

Dynamic memories: Classical neural networks of the Hopfield type, with symmetric connectivity, display attractor dynamics. This means that these networks act as memories. A specific set of memories can be stored as attractors by Hebbian learning. The attractors are asymptotically stable states. The effect of synaptic depression in these networks is to make the attractors lose stability. Oscillatory modes appear in which the network rapidly switches between memories. In this regime the permanence time in a memory can take any positive value, and it becomes infinite in the regime where memories are stable. Thus, the recurrent network with dynamical synapses implements a form of dynamical memory.

Input sensitivity: The classical Hopfield network is relatively insensitive to external stimuli once it has converged to one of its stable memories. Synaptic depression improves the sensitivity to external stimuli because it destabilizes the memories. In addition, synaptic facilitation further improves the sensitivity of the attractor network to external stimuli.

Storage capacity: The storage capacity of the attractor neural network, i.e., the maximum number of memories that can be stored, is proportional to the number of neurons N and scales as P_max = αN with α ≈ 0.138. Synaptic depression causes a decrease of the maximum storage capacity, but facilitation allows the network to recover, under some conditions, the capacity it has with static synapses.

Up and down states: The emergence of dynamic memories has been related to the well-known up-down transitions observed in local-field recordings in the cortex. We demonstrated that the observed distributions of permanence times can be explained by stochastic synaptic dynamics. Scale-free permanence time distributions could signal a critical state in the brain.

Stochastic multiresonance: Whereas static synapses in a stochastic network give rise to a single stochastic resonance peak, dynamical synapses produce a double resonance. This phenomenon is robust for different types of neurons and input signals. Thus, dynamic synapses may explain recently observed SMR in psychophysical experiments. SMR also seems to occur in recurrent neural networks with dynamic synapses, as recently reported (Pinamonti et al., 2012). That work demonstrates the relevant role of short-term synaptic plasticity in the appearance of SMR in recurrent networks, although the exact underlying mechanism is slightly different from the one described here for feed-forward networks.
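The Hopfield baseline invoked in the summary above can be sketched in a few lines. This is a minimal illustration of static Hebbian attractor recall, not the dynamic-synapse model of this review; network size, pattern count, and corruption level are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 200, 3
xi = rng.choice([-1, 1], size=(P, N))         # P random binary patterns
W = (xi.T @ xi).astype(float) / N             # Hebbian couplings (symmetric)
np.fill_diagonal(W, 0.0)                      # no self-connections

s = xi[0].copy()
flip = rng.choice(N, size=20, replace=False)  # corrupt 10% of the cue
s[flip] *= -1

for _ in range(20):                           # deterministic synchronous updates
    s = np.where(W @ s >= 0.0, 1, -1)

overlap = float(s @ xi[0]) / N                # m = 1 means perfect recall
print(overlap)
```

With static synapses the corrupted cue converges to the stored pattern and stays there indefinitely; the review's point is that adding a depression variable to these couplings destabilizes such fixed points and turns them into the switching dynamic memories described above.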
It is important to point out that although the phenomenology reported in this review has been obtained using different models, all the reported phenomena can also be derived in a single model consisting of a network of binary neurons with dynamic synapses, as described in section 1. The phenomena reported in sections 2 and 3 have in fact been obtained using this model, and the phenomenon of stochastic multiresonance (section 5) has recently been reported in such a model by Pinamonti et al. (2012). The results on critical up and down states reported in section 4 have been obtained in a mean-field model that can be derived from the same binary model by assuming, in addition, sparse neural activity and sparse connectivity, which increases the stochasticity of the synaptic transmission throughout the whole network.
In addition, our studies show that the reported phenomena are robust against changes in model details, such as replacing the binary neurons by graded-response or integrate-and-fire neurons.