HYPOTHESIS AND THEORY article

Front. Comput. Neurosci., 08 October 2018
Volume 12 - 2018 | https://doi.org/10.3389/fncom.2018.00081

Theoretical Principles of Multiscale Spatiotemporal Control of Neuronal Networks: A Complex Systems Perspective

Nima Dehghani1,2

  • 1Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, United States
  • 2Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA, United States

Success in the fine control of the nervous system depends on a deeper understanding of how neural circuits control behavior. There is, however, a wide gap between the components of neural circuits and behavior. We advance the idea that a suitable approach for narrowing this gap has to be based on a multiscale information-theoretic description of the system. We evaluate the possibility that brain-wide complex neural computations can be dissected into a hierarchy of computational motifs that rely on smaller circuit modules interacting at multiple scales. In doing so, we draw attention to the importance of formalizing the goals of stimulation in terms of neural computations so that the possible implementations are matched in scale to the underlying circuit modules.

1. Overview

In this theoretical perspective article, we propose the need for a multiscale information-theoretic framework for better controlling an adaptive complex system such as the mammalian nervous system. We start with a brief historical overview of neural stimulation, from Galvani's pioneering work to modern opto-electrical methods. Pointing to control obstacles, we explain why an understanding of the information processing levels of neuronal networks is crucial to the proper design of a control paradigm. We then emphasize the importance of scale-interdependence and examine the shortcomings of network control that ignores it. This interpretation lays out the need for attention to both the dynamic nature of computation and the robustness of function in the face of structural variability, two features that illustrate the multiscale information processing nature of neuronal networks. We advance the idea that, within this framework, better control can be achieved by targeting the aggregate computational output rather than attempting to control the system at its finest scales.

2. Control Obstacles and Failures of Neural Stimulation

In 1780, Luigi Galvani discovered that an electrical spark causes the twitching of a dead frog's legs. This discovery was pivotal to the birth of bioelectromagnetism and led to the idea of controlling action and behavior with electricity (Whittaker, 1910; Bresadola, 1998). A century after Galvani's famous experiment, Gustav Fritsch and Eduard Hitzig showed that, in dogs, in vivo motor cortex stimulation causes limb movement (Fritsch and Hitzig, 2009). Electrical neuromodulation as a therapeutic tool has since been recognized as a viable and concrete path to controlling neurological diseases (Gildenberg, 2005). Recognition that damaged cells of the substantia nigra are implicated in Parkinson's disease (Langston et al., 1983), a better understanding of the operational principles of basal ganglia-thalamocortical loops (Alexander et al., 1991), and the effective reduction of tremor by lesioning the subthalamic nucleus in a monkey model of Parkinson's (Bergman et al., 1990) provided the path to tackle the disease. Following these discoveries, and a century after Fritsch and Hitzig's experiments, the first deep brain stimulation (DBS) operation was performed in Parkinsonian patients in 1987, with positive clinical effects reported a few years later (Limousin et al., 1995; Kringelbach et al., 2007).

However, despite the technological advancements in electrical stimulation, control of behavior has been hampered by insufficient precision and a lack of understanding of the network response. In the case of macro-stimulation (such as TMS—transcranial magnetic stimulation, tDCS—transcranial direct current stimulation, and DBS), much of the effort is focused on improved targeting of a smaller area. The hope is that, via fine-tuned targeting, one would eventually achieve the proper level of control. Nonetheless, electrical micro-stimulation has not shown much promise in precisely controlling the output of circuits. For example, microstimulation of MT (middle temporal visual area) direction-selective neurons shows variable behavioral efficacy across single trials, where this variability has been ascribed to attention gating (Moran and Desimone, 1985) or spatial feature-selective gating (Motter, 1994; Seidemann et al., 1998). A major issue with electrical micro-stimulation is the imprecision in targeting a specific cell group (excitatory vs. inhibitory, or a given group of inhibitory cells) as well as the nebulous temporal control of the stimulation effect. These imprecisions turn neural control into an art rather than an exact science. Fine-tuning DBS to select good stimulus parameters is done mostly by trial and error (Wilson and Moehlis, 2014). Additionally, the timescale over which TMS effects may last can vary over orders of magnitude. Moreover, the effects of TMS or tCS (direct or alternating transcranial current stimulation) can easily spread to spatial neighbors of the actual target (Davis and Koningsbruggen, 2013).

The discovery of optogenetics (the use of light to control neurons) in cultured mammalian neurons (Zemelman et al., 2002) triggered the next wave of neuromodulatory attempts through temporally precise stimulation of individual excitatory and/or inhibitory neurons with millisecond resolution (Boyden et al., 2005; Zhang et al., 2007). An ever-growing list of studies relying on optogenetic stimulation has since followed, including those targeting Parkinson's (Kravitz et al., 2010), aimed at behavioral conditioning (Tsai et al., 2009) or fear conditioning (Haubensak et al., 2010), targeting deep structures (brainstem) (Lin et al., 2013), inducing fast (γ) rhythms (Cardin et al., 2009), modulating sensory processing (Shusterman et al., 2011), and attempting to control network disorders (hippocampal seizures) (Krook-Magnuson et al., 2013). From a device engineering perspective, the major challenges of optogenetics include geometrical and mechanical design issues, light delivery stability and precision, optimization of light/power efficiency, and heat dissipation (Williams and Denison, 2013; Goncalves et al., 2017; Zhao, 2017). Other issues, especially for applicability in clinical settings, relate to safe opsin molecular engineering, safe opsin delivery, and optical stimulation techniques (Jarvis and Schultz, 2015). For these reasons, optogenetics is still not a viable option for clinical purposes (Jarvis and Schultz, 2015; Goncalves et al., 2017). The main biophysical challenges, however, are the photoelectric artifact (Becquerel effect) and photothermal effects (Kozai and Vazquez, 2015). Specifically, simultaneous optical stimulation and electrical recording leads to photoelectric artifacts that could be incorrectly interpreted as light-induced rhythmic activity (Kozai and Vazquez, 2015; Zhao, 2017).

Many of these optogenetic challenges will be resolved with engineering advances. However, attempts at control through optogenetics have faced other serious challenges related to the underlying information processing of the systems under study. The biggest hurdle is our limited system-level understanding of brain function (Jarvis and Schultz, 2015). The observed variability of the evoked behavior following transient inactivation of the motor cortex in rats and of nucleus interfacialis (Nif) in songbirds (Otchy et al., 2015) has seriously challenged the assumption that optogenetics can effectively overcome nonspecific cell targeting and temporal imprecision. As proposed, ignoring the indirect effects of downstream circuits (Otchy et al., 2015) and disregarding the interconnectedness of the complex circuitry are among the key reasons that even temporally precise, cell-specific control of individual elements (i.e., neurons) of the network cannot yield precise control of network function. The macroscopic behavior of the system (such as the network balance of excitation/inhibition) is insensitive to the computational state of individual neurons (Dehghani et al., 2016). This insensitivity arises not because the functional symmetry of individual elements transcends to the total state (Anderson, 1972), but because interconnectedness renders many fine-scale details irrelevant to the large-scale behavior of the system (Goldenfeld and Kadanoff, 1999). Thus, the fine scale is precisely where attempts at precise control of the system will fail. This failure is rooted in the breakdown of the constructionist view and its emphasis on the independence of behavior at microscopic and macroscopic scales (Anderson, 1972). In what follows, we emphasize the need for a paradigm shift to understand and control the system in a multiscale information-theoretic framework.

3. Levels of Information Processing

Success in augmenting or altering the nervous system's behavior is rooted in a deep understanding of the complexity of this computational system. Here, by complexity we explicitly refer to (a) the existence of multiple scales of structure/dynamics, (b) the fact that, although the structural elements of coarser scales can be physically reduced to those of the finer scales, the macro-micro dynamical elements are neither fully independent nor completely coherent, and (c) the scale-dependence of the number of possible states of the system (i.e., the amount of information—in bits—needed to define the system). Early attempts to describe the nervous system at multiple scales led to the formulation of a tri-level hypothesis of information processing. This approach divides the description of the system into computational, algorithmic, and implementational subsets after David Marr (Marr and Poggio, 1976; Marr, 1982), or into semantic, syntactic, and physical categories according to Pylyshyn (1984). While this view has had a tremendous impact in neuroscience and philosophy/cognitive science (specifically in the field of visual neuroscience), the missing links across the tri-levels have been the target of criticism and a source of observed failures in describing neural computation at the system level and its connection to behavior. Interestingly, even though Marr initiated this classification of levels, his own studies were always focused on only one level at a time. In fact, Marr's own quest was initially more focused on the fine-scale level (Marr, 1969, 1970, 1971), but he later became skeptical (Marr, 1975) and switched to the top, i.e., computational, level (McClamrock, 1991; Rolls, 2011). The main issue with this approach is its reliance on the separation of scales, a notion deeply in conflict with the nature of complex systems (Bar-Yam, 1997). The behavior of systems harboring hierarchical levels of sub-assemblies is defined by the interaction of the sub-assemblies at higher levels, not by the details within a given sub-assembly (Simon, 1962, 1969). Surprisingly, a recent reversal of interest in the implementational level (mainly due to advances in microscopy and computational tools) has led to a revival of the constructionist view. This approach suggests that by taking into account the adjacent biophysical details of cell-type categories, their placement in microcircuitry, and their afferent and efferent projections, one could achieve proper control/alteration of neural information processing systems.

The recent surge of interest in the fine-level details of both structure (Briggman et al., 2011; Kleinfeld et al., 2011) and simulation (Markram et al., 2015) parallels Marr's initial emphasis on fine implementational details. This approach has drawn criticism from advocates of his later emphasis on the computational level. The underlying assumption is that through a combined study of in vivo physiology and network anatomy (Bock et al., 2011), one can build functional connectomics (Seung, 2011) and decipher the behavior of the system. In small systems lacking the hierarchical architecture of complexity, separation of scales is justifiable and such an approach may lead to a good understanding of the system's behavior (such as Drosophila motion detection; Takemura et al., 2013), or yield relatively good control of simple behaviors (such as optogenetic control of simple motor behavior in C. elegans; Leifer et al., 2011). The opposing view – mainly driven by systems neuroscience investigations of the mammalian neocortex – argues that understanding the underlying details will not resolve the issues relevant to the neural computation of interest (Carandini, 2012). This vantage point suggests that the repetition of canonical computations such as linear filtering (Movshon et al., 1978; Rust et al., 2006) or divisive normalization (Heeger, 1992; Zoccolan et al., 2005; Solomon et al., 2006) serves as a building block of computation and is the cornerstone of information processing (Carandini and Heeger, 2011), leading to progressively more complex representations in the hierarchy of sensory processing cortical areas (Kouh and Poggio, 2008). It is suggested that such canonical computations could themselves be embedded in canonical circuits (Douglas et al., 1989; Douglas and Martin, 2004). The canonical circuit is thus portrayed as the structural component of the computational unit (Douglas and Martin, 2007), where slight modifications of either the hardware or the software can shape a rich repertoire of network output (Harris and Shepherd, 2015; Miller, 2016). Thus, an argument for straying away from the implementational level is rooted in the computational identity of different circuits (across species) in the face of the multiplicity of their physical implementations (Tank, 1989; Priebe and Ferster, 2012).
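
As a toy illustration of one such canonical computation, the sketch below implements a basic divisive normalization stage in the spirit of Carandini and Heeger (2011): each unit's driven response is divided by the pooled activity of the population. The exponent, semi-saturation constant, and input values are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0):
    """Normalize non-negative feedforward drives by the pooled population activity.

    A minimal sketch of divisive normalization: response_i = drive_i^n / (sigma^n + sum_j drive_j^n).
    sigma (semi-saturation) and n (exponent) are illustrative choices.
    """
    drive = np.asarray(drive, dtype=float)
    pooled = sigma**n + np.sum(drive**n)
    return drive**n / pooled

# Relative response pattern is preserved while absolute responses saturate:
weak = divisive_normalization([1.0, 0.5, 0.2])
strong = divisive_normalization([10.0, 5.0, 2.0])
print("weak stimulus  ->", np.round(weak, 3))
print("strong stimulus->", np.round(strong, 3))
```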

We suggest that for fine-scale information (connectomes or biophysically detailed simulations) to provide any significant insight into the nature of computation, it should be further examined within a scale-dependent framework, considering:

The observation scale: Although one can dismiss the detailed connectivity profile as unnecessary while treating the overall statistical properties as important, the proper choice of scale remains crucial.

The structure-function relationship: Physical details of the circuit affect the dynamics, suggesting that structure constrains the computational function. However, constraining the space of possible computations does not provide an exact characterization of the performed computations, because there is no one-to-one mapping between structure and function.

Plasticity: The ever-changing structure of the network implies a robustness in computational function despite changes in fine details of connectivity.

Subcellular control elements: These elements (whether they are gap junctions, ion channels, dendritic spines, etc.) provide mechanisms for changing function without an apparent change of structure at the scale of the microcircuitry.

We have to recognize that our system of interest harbors plasticity and homeostasis, among the other fundamental features outlined above (e.g., its hierarchy). These two aspects demand that our manipulations of the system be dynamic in nature in order to achieve the intended effect over a sustained period of time despite changing conditions (plasticity) and overall constraints (homeostasis).

4. What Does Network Control Entail?

In the search for proper spatiotemporal dynamics of control, some may resort to modern interpretations of network control. It is essential to recognize the shortcomings of such an admixture of the computational and implementational levels. Although network structure determines certain properties of network dynamics, such as limit-cycle oscillator synchrony (Strogatz, 2001) or the likelihood of reliable dynamical attractors (Klemm and Bornholdt, 2005), not all dynamics can be captured by the network structure alone. Some studies have treated the connectivity graph of a complex network as equivalent to its dynamical nature, and have drawn the conclusion that identifying the driver nodes is sufficient for understanding the strategy for controlling the network (Liu et al., 2011). By reducing the dynamics to structural connectivity, they conclude that a large fraction of driver nodes (80%) is needed to control biological systems. Even though they point to the difficulty of controlling sparse inhomogeneous networks in comparison to dense homogeneous ones (Liu et al., 2011), their assumption of reducing dynamics to network structure and their definitions of control and of the numerical methods for identifying driver nodes have been criticized on the grounds of (a) the evidence that a few inputs can reprogram biological networks (Müller and Schuppert, 2011), (b) the trade-off between phase-space nonlocality of the control trajectory and nonlocality of the control inputs (Sun and Motter, 2013), and (c) the fact that node dynamics—not degree distributions—define the nature of controllability (Cowan et al., 2012).
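
To make the structural notion of driver nodes concrete, the sketch below estimates a minimum driver-node set for a directed graph via maximum matching, in the spirit of Liu et al. (2011). It assumes the networkx library is available; the function name and the random test graph are our own illustrative choices, not taken from the article.

```python
import networkx as nx

def minimum_driver_nodes(G):
    """Estimate a minimum driver-node set of a directed network via maximum matching.

    Nodes whose "in" copy is left unmatched in the bipartite representation
    must receive an independent control signal (structural controllability).
    """
    B = nx.Graph()
    out_nodes = [("out", u) for u in G.nodes]
    in_nodes = [("in", v) for v in G.nodes]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from(in_nodes, bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges)
    matching = nx.bipartite.maximum_matching(B, top_nodes=out_nodes)
    matched_in = {matching[u] for u in out_nodes if u in matching}
    drivers = [v for tag, v in in_nodes if (tag, v) not in matched_in]
    return drivers if drivers else list(G.nodes)[:1]  # at least one driver is needed

G = nx.gnp_random_graph(100, 0.03, seed=42, directed=True)  # sparse random digraph
drivers = minimum_driver_nodes(G)
print(f"driver-node fraction: {len(drivers) / G.number_of_nodes():.2f}")
```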

Another approach advocating structural control has been based on the assumption that an active node can simultaneously activate all of its connected neighbors, but no further than that (Nacher and Akutsu, 2013). This assumption is neither valid in the nervous system nor relevant to biological networks composed of elements with a variety of time constants and delayed communication. Additionally, not only the internal structure but also the connectivity pattern to the drivers and the depth of the network are essential to the emergent dynamics that follow stimulation. If the network has a high ratio of external to internal connections, it will be largely influenced by the outside, and its individual nodes will behave more independently of one another. In contrast, if within-system links are strong, the system moves toward synchrony (Chinellato et al., 2015). Likewise, the depth of the system is crucial to its response to incoming stimuli. Systems with shallow depth are easier to force to behave in the desired way, as we can directly probe and influence the effects of external nodes on internal ones. A successful implementation of this principle is the first experimental evidence for the importance of individual neurons in C. elegans locomotion (Yan et al., 2017). Using the wiring diagram of a shallow network (based on the only available full connectome, i.e., that of C. elegans; White et al., 1986; Varshney et al., 2011), it was predicted and confirmed that, within a given class of motor neurons, only ablation of a subset should affect locomotion (Yan et al., 2017). However, it was also noted that the connectome alone cannot distinguish between different behaviors even with the same sets of input/output nodes. This is not surprising, since, as we discuss in section 5, one of the characteristics of a given network, even with fixed wiring, is its ability to manifest functional variability.

In contrast to shallow networks, deep networks are harder to control, and along each step we have to tweak the internal vs. external nodes to achieve the desired outcome. One of the main practical challenges is that even controlling a "static" large network would require significant energy (Sun and Motter, 2013; Pasqualetti et al., 2014). The requirement for high energy is due to the nonlocality of control trajectories in the phase space and its trade-off with the nonlocality of the control inputs in the network itself (Sun and Motter, 2013). As a result, if the number of control nodes is held constant, the energy required to drive the network scales exponentially with the number of network nodes (Pasqualetti et al., 2014). This required energy decreases exponentially if the number of network nodes is held constant while the number of driver nodes increases (Yan et al., 2015). In addition, even if in certain cases network dynamics could be linearly approximated locally, two standing issues would remain that are not easily resolvable: first, control trajectories follow a nonlinear mode and, second, the local linear dynamics do not explain major global properties of the network, such as the basins of attraction (Cornelius et al., 2013; Sun and Motter, 2013). Even if controllability of static networks could be achieved, as soon as dynamics enter the game (in time-varying systems), the energy required to control the system makes control infeasible (Yan et al., 2015). Overall, in high-dimensional systems that are governed by nonlinear dynamics, have dissipative properties (trajectories are confined to only a limited part of the permissible phase space), and in which feedback imposes constraints on controllability, the mere identification of driver nodes and quantification of node variables are not sufficient to control the network (Motter, 2015; Gates and Rocha, 2016). Thus, the control of complex networks requires knowledge of both the structure and the dynamics across multiple scales of the system. To achieve multiscale control, and to avoid being obstructed by the inherent nonlinearity of the network dynamics, neuroengineers have to resort to designing systems that are capable of providing compensatory perturbations in order to harness the nonlinear dynamics. This form of systematic compensatory perturbation has been shown to be an effective tool for controlling networks that manifest nonlinear dynamics (Cornelius et al., 2013).
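
As a rough illustration of these energy arguments, the sketch below computes the minimum control energy of a small stable linear network from its controllability Gramian and shows how the energy falls as more driver nodes are added. The linear model, its parameters, and the choice of driven nodes are illustrative assumptions rather than a model from the cited studies.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For a stable linear network dx/dt = A x + B u, the controllability Gramian W
# solves A W + W A^T + B B^T = 0, and the minimum energy needed to steer the
# state from the origin to x_f is x_f^T W^{-1} x_f.
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n)) / np.sqrt(n) - 2.0 * np.eye(n)  # random, well-damped network
x_f = rng.standard_normal(n)
x_f /= np.linalg.norm(x_f)                                      # unit-norm target direction

for n_drivers in (4, 10, 20):
    B = np.eye(n)[:, :n_drivers]                  # inject input at the first k nodes
    W = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    energy = x_f @ np.linalg.solve(W, x_f)        # minimum control energy
    print(f"{n_drivers:2d} driver nodes -> minimum energy {energy:.1f}")
```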

5. Dynamical Nature of Computation and Structure/Function Variability

Alterations in network behavior can originate from constant structural reshaping of the network (addition or deletion of links), from modified intrinsic properties of the network nodes (i.e., neurons), or from changes in the possible states of the interactions among the network nodes (i.e., synaptic weights). These different modes of reconfiguring the network behavior set the dynamics of information flow and the active behavior of the organism. A functional translation of these attributes leads to a few rules, where a) one neuron can be involved in more than one behavior, b) one behavior can involve several circuits, and c) one neuromodulator can alter multiple circuits. Naturally, the involvement of these different modes in reconfiguring the network behavior changes according to the size of the system and the repertoire of its functional states. In invertebrates with a very limited set of neurons, it is possible to pinpoint one or several neurons as the key, or even unique, elements involved in highly reliable functions. For example, it has been suggested that a single neuron in C. elegans can be involved in a highly reliable function (de Bono and Maricq, 2005). In vertebrates (with larger brains), where routing and coordination of information flow become increasingly more important, local or global effects of neuromodulators and oscillatory rhythms play an essential role (Kopell et al., 2014; Le Van Quyen et al., 2016). One can conclude that precision in controlling the system depends on its complexity profile (i.e., the amount of information necessary to represent the system as a function of scale). As the number of neurons/circuits and the diversity of network components increase, the reliability of functional dependence on individual components decreases. This increased complexity provides robustness in the larger-scale dynamics while forgoing the details. Assessing the variability of structure/function thus becomes an essential aspect of directing stimuli at the right scale.
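
As a toy illustration of a complexity profile, the sketch below coarse-grains a synthetic binary population into progressively larger blocks and sums the per-block entropies as a crude proxy for the information needed to describe the activity at each scale. The synthetic assemblies, block sizes, and entropy proxy are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(values):
    """Empirical entropy (bits) of a 1-D sequence of discrete values."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def complexity_profile_proxy(activity, block_sizes):
    """Sum of per-block entropies after coarse-graining units into blocks of size k.
    Ignores inter-block correlations, so it is only a rough upper bound on the
    information needed to describe the system at each scale."""
    n_units, _ = activity.shape
    profile = {}
    for k in block_sizes:
        n_blocks = n_units // k
        coarse = activity[: n_blocks * k].reshape(n_blocks, k, -1).mean(axis=1)
        profile[k] = round(sum(entropy_bits(np.round(row, 2)) for row in coarse), 1)
    return profile

# Toy population: 128 binary neurons organized into 8 correlated assemblies.
n, T, m = 128, 5000, 8
assembly_drive = rng.random((m, T)) < 0.2                 # shared assembly drive
spikes = np.repeat(assembly_drive, n // m, axis=0)        # neurons follow their assembly
spikes = spikes ^ (rng.random((n, T)) < 0.05)             # private noise flips
print(complexity_profile_proxy(spikes.astype(float), block_sizes=[1, 4, 16, 64]))
```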

While the observed variability can be externally or intrinsically driven (Renart and Machens, 2014), in order to achieve functional adaptability in the light of external variability, the nervous system benefits from adopting stochastic responses (Ernst and Banks, 2002), where a given population activity can be construed as a likely sample from a posterior distribution (Hoyer and Hyvärinen, 2003). Intrinsic variability in the nervous system originates at different levels, from ion channels (Faisal et al., 2008) to synapses (Destexhe and Rudolph-Lilith, 2012), single cells (Rabinovich et al., 2008; Litwin-Kumar and Doiron, 2012), and the population level (Boerlin et al., 2013). As a result, in driving the system, instead of aiming for a pre-determined response, the target should be to induce population activity such that it could be interpreted as one stochastic instance of the likely probability distribution. This supposed probability distribution over the set of responses should depend on a multidimensional parameter space, where robustness and flexibility are attained through overlapping redundant functions. This form of redundancy is in stark contrast to engineered systems, where multiple identical copies are used to guarantee robustness (Marder and Goaillard, 2006). Instead, in biological systems (and specifically in the nervous system), robustness is achieved because (a) variability at fine scales, such as single-cell variability, exists in parallel with a robust representation at the population level (Rabinovich et al., 2008; Litwin-Kumar and Doiron, 2012; Dehghani et al., 2016), (b) networks with different configurations (of the underlying parameters) manifest robust responses to neuromodulators (Prinz et al., 2004), and (c) structurally different circuits respond reliably to external perturbation (Marder, 2011). The existence of distinct stable basins of attraction, despite the permissible extensive variability at fine scales, is an essential characteristic of complex systems. Such systems' response to perturbation depends on the scale and the extent of the stimuli. In a sense, tolerance for small errors in complex systems comes at the price of intolerance to large errors (Laughlin, 2014; Dehghani et al., 2016). As a result, when intrinsic variability is reduced due to excessively increased coupling, the abnormally high ensemble correlation dramatically reduces the repertoire of macroscopic states, a situation which is the hallmark of loss of complexity in organs with networks of excitable cells. Cardiac arrhythmia (Pincus and Goldberger, 1994; Pincus and Singer, 1996) and seizures (Dehghani et al., 2016) are examples of such loss of complexity.
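
A minimal sketch of the sampling interpretation mentioned above (in the spirit of Hoyer and Hyvärinen, 2003): trial-to-trial variability of a population response is read as samples from a posterior over a stimulus feature, so the spread of responses tracks posterior uncertainty. The Gaussian prior/likelihood and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
prior_mu, prior_var = 0.0, 4.0   # Gaussian prior over a stimulus feature
obs_var = 1.0                    # Gaussian likelihood (observation noise)

def gaussian_posterior(observation):
    """Posterior mean and variance for a conjugate Gaussian prior/likelihood."""
    var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    mu = var * (prior_mu / prior_var + observation / obs_var)
    return mu, var

mu, var = gaussian_posterior(observation=2.5)
samples = rng.normal(mu, np.sqrt(var), size=1000)  # "trial-to-trial population responses"
print(f"posterior mean {mu:.2f} vs sample mean {samples.mean():.2f}")
print(f"posterior var  {var:.2f} vs sample var  {samples.var():.2f}")
```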

6. Are Complex Systems With Fine-Scale Variability and Robust Macroscopic Features Predictable?

In simple chaotic systems, the lack of predictability is due to sensitivity to initial conditions. In contrast, in complex adaptive systems, information exchange across scales is the main obstacle to proper control of the system, and the lack of predictability is due to the relevant interactions and the novel information created by these interactions. Ignoring the multiscale levels of information processing, one may jump to the conclusion that for proper control we need detailed knowledge of the individual elements, the initial conditions, and the interactions among elements. Since we know that this is impractical if not impossible, can we do better? To properly control a system through stimulation, one has to have proper knowledge of the interactions among the subunits of the system. However, only a controlled trajectory of the macroscopic states is both desirable and achievable. From a dynamical systems point of view, pushing the system at the right time and the right scale is the key to fine control. Given the scale-dependent interactions of the nervous system, we should ask: a) are complex systems with many scales inherently unpredictable? and b) if the answer is no, then how do we manipulate the system in a predictable way? Surprisingly, the answer to the first question is "no, complex systems are not unpredictable." However, predictability requires a few specific conditions and particulars. The existence of nonlinear feedback creates either a fast or a slow "wait-time of divergence"; therefore, up to some point, prediction holds, and it rapidly deteriorates afterwards. This is similar to the behavior of some bifurcating systems. In fact, many systems that are considered to be periodic are inherently chaotic. A prime example is the solar system, whose very slow divergence time (4 million years) means that chaotic behavior is observed only when the entire solar system is examined over 100 million years (Sussman and Wisdom, 1992). Therefore, to navigate the system along the aimed trajectory, we have to constantly adjust and push the system such that it does not deviate from the desired path after the implementation of the last stimulus. This strategy can be effective only if the system does not have low-dimensional chaotic behavior. Otherwise, any simple perturbation could lead to unrecoverable massive perturbation. In the case of ensemble activity (such as cortical processing, which involves a much larger set of neurons than the simple nervous system of invertebrates such as C. elegans), this notion becomes highly relevant, since low-dimensional neural trajectories provide a surprisingly accurate portrait of the circuit dynamics (Rabinovich et al., 2008) and dimensionality reduction methods can be effectively used for population decoding (Cunningham and Yu, 2014).
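
The "wait-time of divergence" can be made concrete with a minimal numerical sketch: two trajectories of a chaotic system, started a tiny distance apart, stay predictably close for a finite horizon set by the exponential separation rate and then diverge. The Lorenz system, integration scheme, and thresholds below are illustrative assumptions, not taken from the article.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system (coarse but adequate for a demo)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def divergence_time(d0=1e-9, tolerance=1.0, dt=0.01, t_max=200.0):
    """Time until two trajectories, initially d0 apart, separate by `tolerance`."""
    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([d0, 0.0, 0.0])
    t = 0.0
    while t < t_max:
        a, b = lorenz_step(a, dt), lorenz_step(b, dt)
        t += dt
        if np.linalg.norm(a - b) > tolerance:
            return t
    return t_max

print(f"prediction horizon ~ {divergence_time():.1f} time units")
```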

In non-chaotic systems, our ability to modify the system is limited (Shinbrot et al., 1993). If the system's behavior is periodic or multiply periodic (nested periodicity), we are stuck with the system's intrinsic periodic dynamics. We can slightly change the orbit with small perturbations, or we can only induce perturbations that switch the dominant periodicity from one of the existing orbits to another. This limitation also means that the achieved effect is robust because of the existing orbits, but it will not permit large alterations of the system. In contrast, the existence of chaos can be helpful in controlling a chaotic system. Since systems with a chaotic attractor have an infinite number of unstable periodic orbits, one can exploit this property and push the system toward one of the already existing unstable periodic orbits (Ott et al., 1990). However, if the system is high-dimensional and evolves chaotically on a low-dimensional attractor, this method no longer works as such: one would require an excessive amount of information extracted from the data and a very long history of the dynamics in order to achieve control properly (Auerbach et al., 1992). Interestingly, though, the situation is not hopeless, as one can exploit the low dimensionality of the system dynamics and, through feedback control and the repeated application of tiny perturbations, control the system (Auerbach et al., 1992). Here we advance the idea that recurrent feedback in neuronal networks might indeed be the evolutionary mechanism developed specifically to deal with the low-dimensional dynamics of a high-dimensional system, providing intrinsic control for response reliability. This method has been extended to the control of excitable biological systems in the past. It has been shown that irregular (chaos-timed) delivery of electrical stimulation can push cardiac arrhythmia back to a low-order periodic regime (Garfinkel et al., 1992). In the hippocampal slice (CA3), the same method of entraining spontaneous burst discharges proved to be much more effective than periodic control (Schiff et al., 1994). Other methods, such as periodic (Lima and Pettini, 1990; Azevedo and Rezende, 1991) or stochastic (Fahy and Hamann, 1992) stimulation, can also affect chaotic systems, but it is hard to predict their effect on networks with many layers.
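
A minimal sketch of this small-perturbation strategy (in the spirit of Ott et al., 1990), applied to the logistic map: tiny, state-dependent tweaks of the parameter stabilize an otherwise-unstable fixed point once the chaotic orbit wanders near it. The map and all parameter values are illustrative assumptions.

```python
import numpy as np

r0 = 3.9                      # nominal parameter (chaotic regime of x -> r x (1 - x))
x_star = 1.0 - 1.0 / r0       # unstable fixed point of the map
lam = 2.0 - r0                # df/dx evaluated at the fixed point
g = (r0 - 1.0) / r0**2        # df/dr evaluated at the fixed point
max_dr = 0.1                  # cap on the allowed parameter perturbation

x, trajectory = 0.3, []
for n in range(2000):
    dr_needed = -lam * (x - x_star) / g          # linearized correction toward x_star
    control_on = n >= 500 and abs(dr_needed) <= max_dr   # act only when a tiny tweak suffices
    r = r0 + (dr_needed if control_on else 0.0)
    x = r * x * (1.0 - x)
    trajectory.append(x)

print("uncontrolled spread:", np.std(trajectory[:500]))
print("controlled spread:  ", np.std(trajectory[1500:]))
```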

7. Precision and Reliability in Control of Neuronal Networks

Perturbations due to noise and the presence of variability are the key challenges in the control of chaotic (recurrent) neural networks. Recurrent networks operating at high gain, even without any stimulation, show complex patterns and chaotic dynamics that are very sensitive to noise (Sompolinsky et al., 1988). In the high-gain regime, small changes in the synaptic strengths of recurrent networks can easily lead to chaotic instability, lack of robust output, and high sensitivity to noise (Sompolinsky et al., 1988; van Vreeswijk and Sompolinsky, 1996; Banerjee et al., 2008). This behavior stands in contradiction to biological networks, which show robustness in the presence of high synaptic plasticity (Chance and Abbott, 2000). As a result, while recurrent networks are potentially computationally powerful systems, their unreliable behavior in the face of noise and synaptic plasticity renders them ineffective models of their true biological counterparts. Despite these challenges, some solutions have been proposed to tame the chaos in firing-rate recurrent networks. By tuning the weights of the recurrent connections, i.e., by minimizing the error between a desired trajectory and the current one, robustness can be achieved (Liu and Buonomano, 2009). This tuning leads to a regime that exhibits chaotic dynamics and locally stable non-periodic trajectories, allowing reproducible behavior (Laje and Buonomano, 2013). The stable trajectories act as dynamic attractors, enabling the network to resist deviations in response to perturbation. Unfortunately, this much-desired property of robustness is very hard to achieve in spiking recurrent networks, where the irregular firing states manifest intense chaos. Both experimental (rat barrel cortex under anesthesia) (London et al., 2010) and theoretical (Monteforte and Wolf, 2010, 2012) studies show that a single spike can rapidly decorrelate the microstate of the network. Surprisingly, state perturbations decay very quickly, and single-spike perturbations only lead to minor changes in the population firing rates (Monteforte and Wolf, 2012). Yet these complex but unstable trajectories always lead to exponential state separation and, as a result, such networks cannot reliably map the same input to nearby neighborhoods of the population trajectory. In addition, in vivo non-anesthetized neurons have higher conductance (Destexhe et al., 2003) than in the anesthetized state, where the spike-induced stability has been observed experimentally (London et al., 2010). Therefore, it is highly likely that the perturbation-induced instability is much more intense in the presence of the higher intrinsic noise of non-anesthetized cortex.
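
A minimal sketch of the high-gain transition described above, in the spirit of the classic random rate network of Sompolinsky et al. (1988): below the critical gain activity decays, while above it the network settles into ongoing, irregular fluctuations. Network size, gain values, and integration step are illustrative assumptions.

```python
import numpy as np

def simulate_rate_network(g, n=500, t_steps=4000, dt=0.05, tau=1.0, seed=0):
    """Forward-Euler simulation of tau dx/dt = -x + J tanh(x), with random J of variance g^2/N."""
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((n, n)) / np.sqrt(n)
    x = 0.5 * rng.standard_normal(n)
    for _ in range(t_steps):
        x += dt / tau * (-x + J @ np.tanh(x))
    return x

for g in (0.5, 1.5):
    x_final = simulate_rate_network(g)
    # Low gain: activity decays toward zero. High gain: sustained chaotic fluctuations.
    print(f"g = {g}: final firing-rate spread = {np.std(np.tanh(x_final)):.3f}")
```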

In the face of complex network topology and synaptic plasticity, the reliability of continuous nonlinear networks and their controlled perturbation remains an unsolved challenge. Sensitivity to the input, insensitivity to perturbations, and robustness despite changes in initial conditions are the requirements for stable yet useful computation (Laje and Buonomano, 2013). These attributes are much easier to achieve in the idealized classical model of neural state space (Hopfield, 1982), where the stability of computation arises from convergence to steady-state patterns acting as fixed-point attractors. Yet higher cognition requires "efficient computation," "time delay feedback," the capacity to "retain information," and "contextual" computation (Dehghani and Wimmer, 2018). These desired properties are more in tune with the characteristics of reservoir computing (rather than static attractors), namely the "separation property," the "approximation property," and "fading memory" (Dehghani and Wimmer, 2018). However, the transient dynamics of such liquid-state-like machines (Maass et al., 2002) require a sequence of successive metastable ("saddle") states, i.e., stable heteroclinic channels (Rabinovich et al., 2008), to manifest robust computation in the presence of noise and perturbation. The necessary condition for transient stability in a high-dimensional system with asymmetric connections is the formation of a heteroclinic sequence linking saddle points (Rabinovich et al., 2001, 2008). The successive metastable states ensure that all the trajectories in the neighborhood of these saddle points tend to keep the flow of computation in the desired channel. This property can be harnessed to ensure robustness of controllability despite the inherent variability of transient dynamics. To achieve proper control, instead of resorting to manipulating the target behavior at its end-point, it is more efficient to apply minimal yet repeated stimuli that minimize the deviation of trajectories between the metastable states. While full control of continuous nonlinear networks remains out of reach, targeted control at multiple scales seems to be a good remedy and the place where neuroengineers should focus.
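
A minimal echo-state-style sketch of the "fading memory" property mentioned above: two copies of the same reservoir, started from different states but driven by the same input, typically converge onto the same trajectory, i.e., the computation is input-sensitive yet insensitive to initial conditions. The reservoir size, spectral radius, and input are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps = 300, 400
W = rng.standard_normal((n, n)) / np.sqrt(n)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9
w_in = rng.standard_normal(n)

def reservoir_step(state, u):
    """One update of a tanh reservoir driven by scalar input u."""
    return np.tanh(W @ state + w_in * u)

u_seq = np.sin(0.1 * np.arange(steps))            # shared input drive
a, b = rng.standard_normal(n), rng.standard_normal(n)   # two different initial states
for t, u in enumerate(u_seq):
    a, b = reservoir_step(a, u), reservoir_step(b, u)
    if t % 100 == 0:
        print(f"t={t:3d}  distance between state copies = {np.linalg.norm(a - b):.4f}")
```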

The limitations on proper control of abstract and simple neural network models may lead to the conclusion that infinite precision is required to fully describe or control the computation of analog neuronal networks with unbounded precision. Given that infinite precision would translate into nontrivial nonlinearity, one could infer that proper control of natural neuronal networks would be hard or impossible to achieve. Yet the nervous system mysteriously manifests routine, robust macroscopic behavior. This puzzling dilemma hints at some possible solutions. Surprisingly, it has been shown that linear precision is sufficient to describe (up to some limits) analog computation (Siegelmann and Sontag, 1994; Ben-Hur et al., 2002). This linearity would map to N bits of weights and N bits of activation values of driver neurons for up to the Nth step of the computation. Following this principle, the "analog shift map," a dynamical system with unbounded (analog) precision, was designed and shown to be computationally equivalent to the recurrent neural network, yet its tuning only required linear precision (Siegelmann, 1995). Therefore, it seems likely that the nervous system reaches reliability by bounding its analog precision to modular computation, i.e., computation at scale. This multiscale property of neural computation means that internal control is achieved by decomposing tasks that need many cycles of computation into smaller modules in order to keep the tuning linearity in place. Equivalently, to achieve control during perturbation, one needs to pay attention to the computational limits at each scale, where no computational procedure of N steps could be mapped to more than N bits of weights and N bits of activation. Any attempt to control the system at higher precision will not change the result, and lesser precision will fail to achieve control. Perhaps this computational principle is also among the reasons that primitive nervous systems are mostly formed as diffuse nerve nets, while more advanced nervous systems harbor a multiscale structure to fully benefit from context-dependent information processing (Dehghani, 2017). In engineering neural control systems, one has to pay close attention to the precision achievable at each scale, its indifference to smaller values, and its saturation as a function of the computation time within that scale.
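
A toy illustration of the "linear precision suffices" idea cited above (in the spirit of Siegelmann and Sontag, 1994), using the plain binary shift map rather than the full analog shift map: the symbol emitted at step n depends only on the first n bits of the initial condition, so N bits of precision suffice for N steps of the computation. The map and the truncation scheme are our own illustrative choices.

```python
from fractions import Fraction

def symbols(x, steps):
    """Symbolic output of the binary shift map x -> 2x mod 1 for `steps` iterations."""
    out = []
    for _ in range(steps):
        out.append(0 if x < Fraction(1, 2) else 1)
        x = (2 * x) % 1
    return out

def truncate(x, bits):
    """Keep only the first `bits` binary digits of x in [0, 1)."""
    return Fraction(int(x * 2**bits), 2**bits)

x0 = Fraction(3141592653589793, 10**16)   # an "analog" initial condition
N = 20
# N bits of the initial condition reproduce the first N computation steps exactly:
print(symbols(x0, N) == symbols(truncate(x0, N), N))
```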

8. Concluding Remarks

In linking the structure and dynamics of the neuronal network, mapping the space of possible interactions becomes important and challenging. Since all the possible interactions at a fine scale occupy a vast multidimensional space, in order to have robust behavior the system is likely to rely on a more limited set of probable interactions between local densities (i.e., ensemble activity). As the scale grows, the set of possible interactions decreases, yet the outcome of such interactions better matches the behavioral (macroscopic) outputs of the system. Relevant parameters are those permissible sets of interactions whose probability of occurrence increases with scale. Functional state transitions depend on the spatial variation and interaction of the ensemble activity densities. As a result, the complexity profile, defined as the amount of information necessary to represent a system as a function of scale, gives us the number of possible states of the system at a particular scale (Bar-Yam, 1997). Therefore, the finer the computational scale of the system, the more information is needed to describe it. It is through these mechanisms that the system can maintain its intrinsic dynamical balance yet manifest responsiveness across multiple time scales (Dehghani et al., 2016) and provide stereotypical macroscopic spatiotemporal patterns in the light of microscopic variability (Le Van Quyen et al., 2016). This viewpoint defies the blind big-data approach of incorporating more details in the hope of yielding a better model and more accurate predictions (Bar-Yam, 2016). Even with the assumption that at some point in the future we can map the connectivity and the activity of all the elements of the neuronal network, we are still in need of a formalism that shows how the output is sensitive to fine-scale perturbations, and how the coarse scale reflects the redundancy and synergy of the aggregate activity of finer scales (Daniels et al., 2016). In order to enhance the effectiveness of targeting multiscale neural systems, alterations of biophysically based features should match the level of the desired computation. In targeting a stream of computation, the link between the computation and the architecture is what defines the optimal solution for maximizing the efficiency of the stimulation. In summary, to better control the system, we have to focus on information transfer across multiple scales. Only with this approach can engineering advancements in precise opto-electric stimulation open ways to alter the system in the desired way.

Author Contributions

ND conceived and performed the research and wrote the manuscript.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The author wishes to thank Luis Seoane for helpful comments.

References

Alexander, G. E., Crutcher, M. D., and DeLong, M. R. (1991). “Basal ganglia-thalamocortical circuits: Parallel substrates for motor, oculomotor, prefrontal and limbic functions,” in The Prefrontal Its Structure, Function and Cortex Pathology, Vol. 85, eds H. B. M. Uylings, C. G. Van Eden, J. P. C. De Bruin, M. A. Corner, and M. G. P. Feenstra (Elsevier), 119–146. doi: 10.1016/S0079-6123(08)62678-3

Anderson, P. W. (1972). More Is Different. Science 177, 393–396. doi: 10.1126/science.177.4047.393

Auerbach, D., Grebogi, C., Ott, E., and Yorke, J. A. (1992). Controlling chaos in high dimensional systems. Phys. Rev. Lett. 69, 3479–3482. doi: 10.1103/physrevlett.69.3479

Azevedo, A., and Rezende, S. M. (1991). Controlling chaos in spin-wave instabilities. Phys. Rev. Lett. 66, 1342–1345.

Banerjee, A., Seriès, P., and Pouget, A. (2008). Dynamical constraints on using precise spike timing to compute in recurrent cortical networks. Neural Comput. 20, 974–993. doi: 10.1162/neco.2008.05-06-206

Bar-Yam, Y. (1997). Dynamics of Complex Systems. Boston, MA: Addison-Wesley.

Bar-Yam, Y. (2016). From big data to important information. Complexity. 21, 73–98. doi: 10.1002/cplx.21785

Ben-Hur, A., Siegelmann, H. T., and Fishman, S. (2002). A theory of complexity for continuous time systems. J. Complex. 18, 51–86. doi: 10.1006/jcom.2001.0581

Bergman, H., Wichmann, T., and DeLong, M. (1990). Reversal of experimental parkinsonism by lesions of the subthalamic nucleus. Science 249, 1436–1438. doi: 10.1126/science.2402638

Bock, D. D., Lee, W. C. A., Kerlin, A. M., Andermann, M. L., Hood, G., Wetzel, A. W., et al. (2011). Network anatomy and in vivo physiology of visual cortical neurons. Nature 471, 177–182. doi: 10.1038/nature09802

Boerlin, M., Machens, C. K., and Denève, S. (2013). Predictive Coding of Dynamical Variables in Balanced Spiking Networks. PLoS Comput. Biol. 9:e1003258. doi: 10.1371/journal.pcbi.1003258

Boyden, E. S., Zhang, F., Bamberg, E., Nagel, G., and Deisseroth, K. (2005). Millisecond-timescale genetically targeted optical control of neural activity. Nat. Neurosci. 8, 1263–1268. doi: 10.1038/nn1525

Bresadola, M. (1998). Medicine and science in the life of Luigi Galvani (1737–1798). Brain Res. Bull. 46, 367–380. doi: 10.1016/s0361-9230(98)00023-9

Briggman, K. L., Helmstaedter, M., and Denk, W. (2011). Wiring specificity in the direction-selectivity circuit of the retina. Nature 471, 183–188. doi: 10.1038/nature09818

Carandini, M. (2012). From circuits to behavior: a bridge too far? Nat. Neurosci. 15, 507–509. doi: 10.1038/nn.3043

Carandini, M., and Heeger, D. J. (2011). Normalization as a canonical neural computation. Nat. Rev. Neurosci. 13, 51–62. doi: 10.1038/nrn3136

Cardin, J. A., Carlén, M., Meletis, K., Knoblich, U., Zhang, F., Deisseroth, K., et al. (2009). Driving fast-spiking cells induces gamma rhythm and controls sensory responses. Nature 459, 663–667. doi: 10.1038/nature08002

Chance, F. S., and Abbott, L. F. (2000). Divisive inhibition in recurrent networks. Network 11, 119–129. doi: 10.1088/0954-898X/11/2/301

Chinellato, D. D., Epstein, I. R., Braha, D., Bar-Yam, Y., and de Aguiar, M. A. M. (2015). Dynamical response of networks under external perturbations: exact results. J. Stat. Phys. 159, 221–230. doi: 10.1007/s10955-015-1189-x

Cornelius, S. P., Kath, W. L., and Motter, A. E. (2013). Realistic control of network dynamics. Nat. Commun. 4:1942. doi: 10.1038/ncomms2939

Cowan, N. J., Chastain, E. J., Vilhena, D. A., Freudenberg, J. S., and Bergstrom, C. T. (2012). Nodal dynamics, not degree distributions, determine the structural controllability of complex networks. PLoS ONE 7:e38398. doi: 10.1371/journal.pone.0038398

Cunningham, J. P., and Yu, B. M. (2014). Dimensionality reduction for large-scale neural recordings. Nat. Neurosci. 17, 1500–1509. doi: 10.1038/nn.3776

Daniels, B. C., Ellison, C. J., Krakauer, D. C., and Flack, J. C. (2016). Quantifying collectivity. Curr. Opin. Neurobiol. 37, 106–113. doi: 10.1016/j.conb.2016.01.012

Davis, N. J., and Koningsbruggen, M. V. (2013). “non-invasive” brain stimulation is not non-invasive. Front. Syst. Neurosci. 7:76. doi: 10.3389/fnsys.2013.00076

de Bono, M., and Maricq, A. V. (2005). Neuronal substrates of complex behaviors in C. elegans. Annu. Rev. Neurosci. 28, 451–501. doi: 10.1146/annurev.neuro.27.070203.144259

Dehghani, N. (2017). Design of the Artificial: lessons from the biological roots of general intelligence. arXiv [Preprint]. arXiv:1703.02245.

Dehghani, N., Peyrache, A., Telenczuk, B., Lee Van Quyen, M., Halgren, E., Cash, S. S., et al. (2016). Dynamic balance of excitation and inhibition in human and monkey neocortex. Sci. Rep. 6:23176. doi: 10.1038/srep23176

Dehghani, N., and Wimmer, R. D. (2018). A computational perspective of the role of Thalamus in cognition. arXiv [Preprint]. arXiv: 1803.00997.

Destexhe, A., Rudolph, M., and Paré, D. (2003). The high-conductance state of neocortical neurons in vivo. Nat. Rev. Neurosci. 4:739. doi: 10.1038/nrn1198

Destexhe, A., and Rudolph-Lilith, M. (2012). Neuronal Noise. Springer Science + Business Media. Available online at: https://www.springer.com/us/book/9780387790190

Douglas, R. J., and Martin, K. A. (2004). Neuronal circuits of the neocortex. Annu. Rev. Neurosci. 27, 419–451. doi: 10.1146/annurev.neuro.27.070203.144152

Douglas, R. J., and Martin, K. A. (2007). Mapping the matrix: the ways of neocortex. Neuron 56, 226–238. doi: 10.1016/j.neuron.2007.10.017

Douglas, R. J., Martin, K. A., and Whitteridge, D. (1989). A Canonical Microcircuit for Neocortex. Neural Comput. 1, 480–488. doi: 10.1162/neco.1989.1.4.480

Ernst, M. O., and Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429–433. doi: 10.1038/415429a

Fahy, S., and Hamann, D. R. (1992). Transition from chaotic to nonchaotic behavior in randomly driven systems. Phys. Rev. Lett. 69, 761–764. doi: 10.1103/physrevlett.69.761

Faisal, A. A., Selen, L. P., and Wolpert, D. M. (2008). Noise in the nervous system. Nat. Rev. Neurosci. 9, 292–303. doi: 10.1038/nrn2258

Fritsch, G., and Hitzig, E. (2009). Electric excitability of the cerebrum (Über die elektrische Erregbarkeit des Grosshirns). Epilep. Behav. 15, 123–130. doi: 10.1016/j.yebeh.2009.03.001

Garfinkel, A., Spano, M. L., Ditto, W., and Weiss, J. (1992). Controlling cardiac chaos. Science 257, 1230–1235. doi: 10.1126/science.1519060

Gates, A. J., and Rocha, L. M. (2016). Control of complex networks requires both structure and dynamics. Sci. Rep. 6:24456. doi: 10.1038/srep24456

Gildenberg, P. L. (2005). Evolution of Neuromodulation. Stereotact. Funct. Neurosurg. 83, 71–79. doi: 10.1159/000086865

Goldenfeld, N., and Kadanoff, L. P. (1999). Simple lessons from complexity. Science 284, 87–89. doi: 10.1126/science.284.5411.87

Goncalves, S. B., Ribeiro, J. F., Silva, A. F., Costa, R. M., and Correia, J. H. (2017). Design and manufacturing challenges of optogenetic neural interfaces: a review. J. Neural Eng. 14, 041001. doi: 10.1088/1741-2552/aa7004

Harris, K. D., and Shepherd, G. M. (2015). The neocortical circuit: themes and variations. Nat. Neurosci. 18, 170–181. doi: 10.1038/nn.3917

Haubensak, W., Kunwar, P. S., Cai, H., Ciocchi, S., Wall, N. R., Ponnusamy, R., et al. (2010). Genetic dissection of an amygdala microcircuit that gates conditioned fear. Nature 468, 270–276. doi: 10.1038/nature09553

Heeger, D. J. (1992). Normalization of cell responses in cat striate cortex. Vis. Neurosci. 9, 181–97. doi: 10.1017/S0952523800009640

Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. U.S.A. 79, 2554–2558. doi: 10.1073/pnas.79.8.2554

Hoyer, P. O., and Hyvärinen, A. (2003). “Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior,” in Advances in Neural Information Processing Systems 15 eds S. Becker, S. Thrun and K. Obermayer (Cambridge, MA: MIT Press), 293–300.

Jarvis, S., and Schultz, S. R. (2015). Prospects for optogenetic augmentation of brain function. Front. Syst. Neurosci. 9:157. doi: 10.3389/fnsys.2015.00157

Kleinfeld, D., Bharioke, A., Blinder, P., Bock, D. D., Briggman, K. L., Chklovskii, D. B., et al. (2011). Large-scale automated histology in the pursuit of connectomes. J. Neurosci. 31, 16125–16138. doi: 10.1523/jneurosci.4077-11.2011

Klemm, K., and Bornholdt, S. (2005). Topology of biological networks and reliability of information processing. Proc. Natl. Acad. Sci. U.S.A. 102, 18414–18419. doi: 10.1073/pnas.0509132102

Kopell, N. J., Gritton, H. J., Whittington, M. A., and Kramer, M. A. (2014). Beyond the connectome: the dynome. Neuron 83, 1319–1328. doi: 10.1016/j.neuron.2014.08.016

Kouh, M., and Poggio, T. (2008). A canonical neural circuit for cortical nonlinear operations. Neural Comput. 20, 1427–1451. doi: 10.1162/neco.2008.02-07-466

Kozai, T. D., and Vazquez, A. L. (2015). Photoelectric artefact from optogenetics and imaging on microelectrodes and bioelectronics: new challenges and opportunities. J. Mater. Chem. B 3, 4965–4978. doi: 10.1039/C5TB00108K

Kravitz, A. V., Freeze, B. S., Parker, P. R., Kay, K., Thwin, M. T., Deisseroth, K., et al. (2010). Regulation of parkinsonian motor behaviours by optogenetic control of basal ganglia circuitry. Nature 466, 622–626. doi: 10.1038/nature09159

Kringelbach, M. L., Jenkinson, N., Owen, S. L., and Aziz, T. Z. (2007). Translational principles of deep brain stimulation. Nat. Rev. Neurosci. 8, 623–635. doi: 10.1038/nrn2196

Krook-Magnuson, E., Armstrong, C., Oijala, M., and Soltesz, I. (2013). On-demand optogenetic control of spontaneous seizures in temporal lobe epilepsy. Nat. Commun. 4:1376. doi: 10.1038/ncomms2376

Laje, R., and Buonomano, D. V. (2013). Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16:925. doi: 10.1038/nn.3405

Langston, J. W., Ballard, P., Tetrud, J., and Irwin, I. (1983). Chronic Parkinsonism in humans due to a product of meperidine-analog synthesis. Science 219, 979–980. doi: 10.1126/science.6823561

Laughlin, R. B. (2014). Physics, emergence, and the connectome. Neuron 83, 1253–1255. doi: 10.1016/j.neuron.2014.08.006

Le Van Quyen, M., Muller, L. E., Telenczuk, B., Halgren, E., Cash, S., Hatsopoulos, N. G., et al. (2016). High-frequency oscillations in human and monkey neocortex during the wake–sleep cycle. Proc. Natl. Acad. Sci. U.S.A. 113, 9363–9368. doi: 10.1073/pnas.1523583113

Leifer, A. M., Fang-Yen, C., Gershow, M., Alkema, M. J., and Samuel, A. D. (2011). Optogenetic manipulation of neural activity in freely moving Caenorhabditis elegans. Nat. Methods 8, 147–152. doi: 10.1038/nmeth.1554

Lima, R., and Pettini, M. (1990). Suppression of chaos by resonant parametric perturbations. Phys. Rev. A 41, 726–733. doi: 10.1103/physreva.41.726

Limousin, P., Pollak, P., Benazzouz, A., Hoffmann, D., Bas, J. F., Perret, J., et al. (1995). Effect on parkinsonian signs and symptoms of bilateral subthalamic nucleus stimulation. Lancet 345, 91–95. doi: 10.1016/s0140-6736(95)90062-4

Lin, J. Y., Knutsen, P. M., Muller, A., Kleinfeld, D., and Tsien, R. Y. (2013). ReaChR: a red-shifted variant of channelrhodopsin enables deep transcranial optogenetic excitation. Nat. Neurosci. 16, 1499–1508. doi: 10.1038/nn.3502

Litwin-Kumar, A., and Doiron, B. (2012). Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat. Neurosci. 15, 1498–1505. doi: 10.1038/nn.3220

Liu, J. K., and Buonomano, D. V. (2009). Embedding multiple trajectories in simulated recurrent neural networks in a self-organizing manner. J. Neurosci. 29, 13172–13181. doi: 10.1523/JNEUROSCI.2358-09.2009

Liu, Y. Y., Slotine, J. J., and Barabási, A. L. (2011). Controllability of complex networks. Nature 473, 167–173. doi: 10.1038/nature10011

London, M., Roth, A., Beeren, L., Häusser, M., and Latham, P. E. (2010). Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature 466:123. doi: 10.1038/nature09086

Maass, W., Natschläger, T., and Markram, H. (2002). Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560. doi: 10.1162/089976602760407955

Marder, E. (2011). Variability, compensation, and modulation in neurons and circuits. Proc. Natl. Acad. Sci. U.S.A. 108(Suppl. 3), 15542–15548. doi: 10.1073/pnas.1010674108

Marder, E., and Goaillard, J. M. (2006). Variability, compensation and homeostasis in neuron and network function. Nat. Rev. Neurosci. 7, 563–574. doi: 10.1038/nrn1949

Markram, H., Muller, E., Ramaswamy, S., Reimann, M. W., Abdellah, M., Sanchez, C. A., et al. (2015). Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456–492. doi: 10.1016/j.cell.2015.09.029

Marr, D. (1969). A theory of cerebellar cortex. J. Physiol. 202, 437–470. doi: 10.1113/jphysiol.1969.sp008820

Marr, D. (1970). A theory for cerebral neocortex. Proc. R. Soc. B Biol. Sci. 176, 161–234.

Marr, D. (1971). Simple memory: a theory for archicortex. Philos. Trans. R. Soc. B Biol. Sci. 262, 23–81.

Marr, D. (1975). Approaches to biological information processing. Science 190, 875–876.

Marr, D. (1982). Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information. San Francisco, CA: W.H. Freeman.

Marr, D., and Poggio, T. (1976). From Understanding Computation to Understanding Neural Circuitry (A.I. Memo 357). Technical Report, Massachusetts Institute of Technology.

McClamrock, R. (1991). Marr's three levels: a re-evaluation. Minds Mach. 1, 185–196.

Miller, K. D. (2016). Canonical computations of cerebral cortex. Curr. Opin. Neurobiol. 37, 75–84. doi: 10.1016/j.conb.2016.01.008

Monteforte, M., and Wolf, F. (2010). Dynamical entropy production in spiking neuron networks in the balanced state. Phys. Rev. Lett. 105:268104. doi: 10.1103/PhysRevLett.105.268104

Monteforte, M., and Wolf, F. (2012). Dynamic flux tubes form reservoirs of stability in neuronal circuits. Phys. Rev. X 2:041007. doi: 10.1103/PhysRevX.2.041007

Moran, J., and Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science 229, 782–784.

Motter, A. E. (2015). Networkcontrology. Chaos 25:097621. doi: 10.1063/1.4931570

Motter, B. C. (1994). Neural correlates of feature selective memory and pop-out in extrastriate area V4. J. Neurosci. 14, 2190–2199.

Movshon, J. A., Thompson, I., and Tolhurst, D. (1978). Spatial summation in the receptive fields of simple cells in the cat's striate cortex. J. Physiol. 283, 53–77.

Müller, F. J., and Schuppert, A. (2011). Few inputs can reprogram biological networks. Nature 478:E4. doi: 10.1038/nature10543

Nacher, J. C., and Akutsu, T. (2013). Structural controllability of unidirectional bipartite networks. Sci. Rep. 3:1647. doi: 10.1038/srep01647

Otchy, T. M., Wolff, S. B. E., Rhee, J. Y., Pehlevan, C., Kawai, R., Kempf, A., et al. (2015). Acute off-target effects of neural circuit manipulations. Nature 528, 358–363. doi: 10.1038/nature16442

Ott, E., Grebogi, C., and Yorke, J. A. (1990). Controlling chaos. Phys. Rev. Lett. 64, 1196–1199.

Pasqualetti, F., Zampieri, S., and Bullo, F. (2014). Controllability metrics, limitations and algorithms for complex networks. IEEE Trans. Control Netw. Syst. 1, 40–52. doi: 10.1109/TCNS.2014.2310254

Pincus, S. M., and Goldberger, A. L. (1994). Physiological time-series analysis: what does regularity quantify? Am. J. Physiol. 266, H1643–H1656.

Pincus, S. M., and Singer, B. H. (1996). Randomness and degrees of irregularity. Proc. Natl. Acad. Sci. U.S.A. 93, 2083–2088.

Priebe, N. J., and Ferster, D. (2012). Mechanisms of neuronal computation in mammalian visual cortex. Neuron 75, 194–208. doi: 10.1016/j.neuron.2012.06.011

Prinz, A. A., Bucher, D., and Marder, E. (2004). Similar network activity from disparate circuit parameters. Nat. Neurosci. 7, 1345–1352. doi: 10.1038/nn1352

Pylyshyn, Z. W. (1984). Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, MA: MIT Press.

Rabinovich, M., Huerta, R., and Laurent, G. (2008). Transient dynamics for neural processing. Science 321, 48–50. doi: 10.1126/science.1155564

Rabinovich, M., Volkovskii, A., Lecanda, P., Huerta, R., Abarbanel, H. D., and Laurent, G. (2001). Dynamical encoding by networks of competing neuron groups: winnerless competition. Phys. Rev. Lett. 87:068102. doi: 10.1103/PhysRevLett.87.068102

Renart, A., and Machens, C. K. (2014). Variability in neural activity and behavior. Curr. Opin. Neurobiol. 25, 211–220. doi: 10.1016/j.conb.2014.02.013

Rolls, E. T. (2011). David Marr's Vision: floreat computational neuroscience. Brain 134, 913–916. doi: 10.1093/brain/awr013

Rust, N. C., Mante, V., Simoncelli, E. P., and Movshon, J. A. (2006). How MT cells analyze the motion of visual patterns. Nat. Neurosci. 9, 1421–1431. doi: 10.1038/nn1786

Schiff, S. J., Jerger, K., Duong, D. H., Chang, T., Spano, M. L., and Ditto, W. L. (1994). Controlling chaos in the brain. Nature 370, 615–620.

Seidemann, E., Zohary, E., and Newsome, W. T. (1998). Temporal gating of neural signals during performance of a visual discrimination task. Nature 394, 72–75.

Seung, H. S. (2011). Neuroscience: towards functional connectomics. Nature 471, 170–172. doi: 10.1038/471170a

Shinbrot, T., Grebogi, C., Yorke, J. A., and Ott, E. (1993). Using small perturbations to control chaos. Nature 363, 411–417.

Shusterman, R., Smear, M. C., Koulakov, A. A., and Rinberg, D. (2011). Precise olfactory responses tile the sniff cycle. Nat. Neurosci. 14, 1039–1044. doi: 10.1038/nn.2877

Siegelmann, H. T. (1995). Computation beyond the Turing limit. Science 268, 545–548.

Siegelmann, H. T., and Sontag, E. D. (1994). Analog computation via neural networks. Theor. Comput. Sci. 131, 331–360.

Simon, H. A. (1962). The architecture of complexity. Proc. Am. Philos. Soc. 106, 467–482.

Simon, H. A. (1969). The Sciences of the Artificial. Cambridge, MA: MIT Press.

Solomon, S. G., Lee, B. B., and Sun, H. (2006). Suppressive surrounds and contrast gain in magnocellular-pathway retinal ganglion cells of macaque. J. Neurosci. 26, 8715–8726. doi: 10.1523/jneurosci.0821-06.2006

Sompolinsky, H., Crisanti, A., and Sommers, H. J. (1988). Chaos in random neural networks. Phys. Rev. Lett. 61, 259–262.

Strogatz, S. H. (2001). Exploring complex networks. Nature 410, 268–276. doi: 10.1038/35065725

Sun, J., and Motter, A. E. (2013). Controllability transition and nonlocality in network control. Phys. Rev. Lett. 110:208701. doi: 10.1103/physrevlett.110.208701

Sussman, G. J., and Wisdom, J. (1992). Chaotic evolution of the solar system. Science 257, 56–62.

Takemura, S.-y., Bharioke, A., Lu, Z., Nern, A., Vitaladevuni, S., Rivlin, P. K., et al. (2013). A visual motion detection circuit suggested by Drosophila connectomics. Nature 500, 175–181. doi: 10.1038/nature12450

Tank, D. (1989). What details of neural circuits matter? Semin. Neurosci. 1, 67–79.

Tsai, H. C., Zhang, F., Adamantidis, A., Stuber, G. D., Bonci, A., de Lecea, L., et al. (2009). Phasic firing in dopaminergic neurons is sufficient for behavioral conditioning. Science 324, 1080–1084. doi: 10.1126/science.1168878

van Vreeswijk, C., and Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724–1726.

Varshney, L. R., Chen, B. L., Paniagua, E., Hall, D. H., and Chklovskii, D. B. (2011). Structural properties of the Caenorhabditis elegans neuronal network. PLoS Comput. Biol. 7:e1001066. doi: 10.1371/journal.pcbi.1001066

White, J. G., Southgate, E., Thomson, J. N., and Brenner, S. (1986). The structure of the nervous system of the nematode Caenorhabditis elegans. Philos. Trans. R. Soc. Lond. B Biol. Sci. 314, 1–340. doi: 10.1098/rstb.1986.0056

Whittaker, E. T. (1910). A History of the Theories of Aether and Electricity From the Age of Descartes to the Close of the Nineteenth Century. Smithsonian Institution.

Williams, J. C., and Denison, T. (2013). From optogenetic technologies to neuromodulation therapies. Sci. Transl. Med. 5:177ps6. doi: 10.1126/scitranslmed.3003100

Wilson, D., and Moehlis, J. (2014). Locally optimal extracellular stimulation for chaotic desynchronization of neural populations. J. Comput. Neurosci. 37, 243–257. doi: 10.1007/s10827-014-0499-3

Yan, G., Tsekenis, G., Barzel, B., Slotine, J.-J., Liu, Y.-Y., and Barabási, A.-L. (2015). Spectrum of controlling and observing complex networks. Nat. Phys. 11:779. doi: 10.1038/nphys3422

Yan, G., Vértes, P. E., Towlson, E. K., Chew, Y. L., Walker, D. S., Schafer, W. R., et al. (2017). Network control principles predict neuron function in the Caenorhabditis elegans connectome. Nature 550:519. doi: 10.1038/nature24056

Zemelman, B. V., Lee, G. A., Ng, M., and Miesenböck, G. (2002). Selective photostimulation of genetically ChARGed neurons. Neuron 33, 15–22. doi: 10.1016/s0896-6273(01)00574-8

Zhang, F., Wang, L. P., Brauner, M., Liewald, J. F., Kay, K., Watzke, N., et al. (2007). Multimodal fast optical interrogation of neural circuitry. Nature 446, 633–639. doi: 10.1038/nature05744

Zhao, H. (2017). Recent progress of development of optogenetic implantable neural probes. Int. J. Mol. Sci. 18:E1751. doi: 10.3390/ijms18081751

Zoccolan, D., Cox, D. D., and DiCarlo, J. J. (2005). Multiple object response normalization in monkey inferotemporal cortex. J. Neurosci. 25, 8150–8164. doi: 10.1523/jneurosci.2058-05.2005

Keywords: complexity, information theory, dynamical systems, networks, control theory

Citation: Dehghani N (2018) Theoretical Principles of Multiscale Spatiotemporal Control of Neuronal Networks: A Complex Systems Perspective. Front. Comput. Neurosci. 12:81. doi: 10.3389/fncom.2018.00081

Received: 01 April 2018; Accepted: 11 September 2018;
Published: 08 October 2018.

Edited by: Ali Ghazizadeh, Sharif University of Technology, Iran

Reviewed by: Jian K. Liu, University of Leicester, United Kingdom; Da-Hui Wang, Beijing Normal University, China

Copyright © 2018 Dehghani. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Nima Dehghani, nima.dehghani@mit.edu
