On the Importance of Being Flexible: Dynamic Brain Networks and Their Potential Functional Significances

In this theoretical review, we begin by discussing brains and minds from a dynamical systems perspective, and then go on to describe methods for characterizing the flexibility of dynamic networks. We discuss how varying degrees and kinds of flexibility may be adaptive (or maladaptive) in different contexts, specifically focusing on measures related to either more disjoint or cohesive dynamics. While disjointed flexibility may be useful for assessing neural entropy, cohesive flexibility may potentially serve as a proxy for self-organized criticality as a fundamental property enabling adaptive behavior in complex systems. Particular attention is given to recent studies in which flexibility methods have been used to investigate neurological and cognitive maturation, as well as the breakdown of conscious processing under varying levels of anesthesia. We further discuss how these findings and methods might be contextualized within the Free Energy Principle with respect to the fundamentals of brain organization and biological functioning more generally, and describe potential methodological advances from this paradigm. Finally, with relevance to computational psychiatry, we propose a research program for obtaining a better understanding of ways that dynamic networks may relate to different forms of psychological flexibility, which may be the single most important factor for ensuring human flourishing.


INTRODUCTION
In what follows we consider dynamic perspectives on the brain and mind from methodological, neurological, and psychological views. The topics covered range from the highly intuitive to the highly technical. While we have endeavored to provide sufficiently detailed treatments to convey important connections (please see Glossary), some parts will be highly challenging for people without background in those domains. However, readers can safely engage with these discussions in an "a la carte" manner, focusing on material of particular interest, with an 'impressionistic' understanding of technical portions being sufficient for following the overall trajectory of the paper through idea-space. While we believe there is a deep complementarity between these various points of view, we have constructed this paper with some degree of modular organization for the sake of efficient/flexible communication.
In what follows, this flexibility/modularity also applies to particular claims and theoretical commitments. For example, while at points a strong case is made for replacing standard models of representation-based cognition with "enactive" dynamical systems, these arguments are independent of (but potentially highly relevant to) the dynamic network methods we describe. Further, while we emphasize the dynamic point of view here, we also acknowledge that brains could be understood as hybrid 'architectures' with both representational and nonrepresentational sub-systems (Friston et al., 2017a; Parr and Friston, 2018). [Alternatively, one could instead think of these quasi-representational elements of nervous systems as highly-connected causal structures that are particularly effective at bringing order to the overall mental economy/ecosystem]. By the end of these explorations, we hope to have provided an overview of some of the myriad ways in which dynamical perspectives may provide insights into the nature(s) of flexibility as a core organizing principle for understanding the cognitive abilities of brains, and perhaps complex adaptive systems more generally.

BRAINS AND MINDS AS DYNAMICAL SYSTEMS
Finding meaningful patterns in neuronal activity is the primary basis of neuroimaging studies. A major challenge arises from the observation that interactions are highly context-specific. This means that neuronal events involve high degrees of freedom, thereby "living" in high dimensions and being extremely hard to predict. The challenge in neuroimaging involves precisely estimating the unknown causes of neuronal activity (what we do not know) from neuroimaging data sets possessing many attributes (what we know). Thus, a common goal in neuroimaging is to develop and fit the model that best explains the causes of the observed data, i.e., neuronal activity.
One possible way of understanding and modeling neuronal activity is in terms of its hypothesized computational bases. On this view, neuronal activity unfolds in the context of highly fixed structures such as domain-specific modules with highly specific information transmission properties (Arbib et al., 2003; Sporns and Betzel, 2016; Yan and Hricko, 2017; Damicelli et al., 2019; Mattar and Bassett, 2019). Even single neurons may be understood as modules, supposedly representing their respective parameters, e.g., state variables. In this neural network account, we attempt to obtain explanatory purchase by mapping out the topologies of functional connections maintained between modules. Dynamical modeling, in contrast, has evolving dynamics as its target of explanation, rather than computational inferences based on static connectivity (Friston, 2011; Zarghami and Friston, 2020). This paradigm considers the brain as a dynamical system (as opposed to a modular, information processing computer). 1 This dynamical renaissance in understanding nervous systems has been compellingly described by Favela (2020). Although the methods themselves are not particularly new (e.g., differential equations), unfolding developments make these approaches central to the defense of the "dynamical hypothesis" in neuroscience. An account of the brain under the "dynamical hypothesis" explains neuronal activity using concepts such as "fixed point attractors," "limit cycles," and "phase transitions." The concept of an attractor is particularly important, referring to regions of state space toward which a system's trajectories move. High-dimensional systems like the brain tend to be computationally challenging to assess by virtue of having multiple phase transitions and attractors. Dynamical Systems Theory (DST) is uniquely qualified to deal with these high-dimensional processes characteristic of neurobiological systems.
A dynamical system is an abstract description of a physical entity, whose state at any given time can be specified by a set of variables together with a rule for how those variables evolve. One possible example is as follows:

dx/dt = f(x, t, u, β) + ω (1)

Such a system consists of a time derivative of the state x as a function of the present state x, a controlled input (u), parameters (β), and stochastic forcing (ω), i.e., random influences that can only be modeled probabilistically. The function f is dynamic in the sense that it constitutes a vector field with values at every point in space and time. This configuration allows us to predict future states by applying this dynamical rule for system evolution to the present state. Dynamical systems are specifically qualified to account for high-dimensional systems like the brain (with high degrees of freedom) by virtue of accounting for non-linearity. Dynamical Systems Theory uses differential equations to describe system evolution over time, where variables are treated as continuous, accounting for the highly situated, contextual nature of action in complex systems. An example is work on motor control through dissipative structures (Kugler et al., 1980), in which muscles involved in action are treated as coordinative structures of single units (see also Heylighen, 2016). This explanation is consistent with (1) Bernstein's problem of how to explain the regulation of the many biokinematic degrees of freedom with minimal recourse to an "intelligent regulator" and (2) operation by the same organizational principles as other non-equilibrium thermodynamic systems. Importantly, this kind of work obviates the need for appealing to computational representations in many contexts. Haken et al. (1985) offered a model of self-organization of perceptual-motor coordination such that different states are treated as dynamical patterns, as opposed to computations.
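To make Equation (1) concrete, the following minimal sketch numerically integrates a dynamical rule of this form with Euler-Maruyama stepping. The function `simulate` and the damped linear example are illustrative assumptions introduced here, not a model from the literature discussed in this paper.

```python
import numpy as np

def simulate(f, x0, u, beta, dt=0.01, n_steps=1000, noise_std=0.0, seed=0):
    """Euler-Maruyama integration of dx/dt = f(x, t, u, beta) + omega."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for step in range(n_steps):
        t = step * dt
        omega = rng.normal(0.0, noise_std, size=x.shape)  # stochastic forcing
        x = x + dt * f(x, t, u, beta) + np.sqrt(dt) * omega
        trajectory.append(x.copy())
    return np.array(trajectory)

# Example: a damped linear system dx/dt = -beta * x + u, which relaxes
# toward the fixed-point attractor x* = u / beta (here, x* = 2).
traj = simulate(lambda x, t, u, b: -b * x + u, x0=[1.0], u=2.0, beta=1.0,
                n_steps=5000, noise_std=0.0)
```

Applying the dynamical rule repeatedly from an initial condition yields a trajectory, which in this simple deterministic case converges to the attractor; setting `noise_std > 0` illustrates the role of the stochastic term ω.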
A well-established and highly instructive model in dynamical systems theory is the Watt governor, which uses mechanical negative feedback to regulate the output of steam engines. Such feedback mechanisms can also be used to explain aspects of cognitive activity without invoking computations or representations: "cognitive systems may in fact be dynamical systems, and cognition the behavior of some (noncomputational) dynamical system" (Van Gelder, 1995, p. 358, italics in the original).
In this spirit, Hipólito et al. (2021a) recently advanced a model of action that does not require an optimal control system. Their model uses proprioceptive predictions to replace not only forward and inverse models with a generative model, but also obviates the need for motor plans (considered to be unrealistic due to the required specificity of such plans and the many degrees of freedom of neuromuscular systems). Instead, perceptual-motor coordination is treated as coordinative structures of single units that operate by the same organizational principles as other non-equilibrium thermodynamic systems; more precisely, self-organized coordination in which different states are treated as dynamical patterns (without appealing to computational representations for explanatory power).
The "dynamical hypothesis" has also been applied to understanding brains as complex adaptive systems without appeal to representations or functional/computational principles. While there can be good reason to adopt pluralism with respect to combining structural and dynamical approaches for epistemic purposes, instrumental pluralism does not suffice to ground an ontology or ontological commitments. To put it more precisely, while it is possible to combine epistemic tools to grant understanding, the ontological characterisation/understanding of neural activity differs between the computational/representational accounts (such as neural networks, deep learning, etc.), on the one hand, and Dynamical Systems Theory, on the other. To encapsulate the traditional computational/representationalist account: "any system that is going to behave intelligently in the world must contain representations that reflect the structure of the world" (Poldrack, 2020, p. 1; see also Davis and Poldrack, 2013), holding the general assumption that "the mind is (1) an information processing system, (2) a representational device, and (3) (in some sense) a computer" (Bechtel and Graham, 1998, p. xiii). Dynamical Systems Theory, on the contrary, rejects the analogy of the mind/brain with a computer and the existence of neural representations altogether: "rather than computers, cognitive systems may be dynamical systems; rather than computation, cognitive processes may be state-space evolution within these very different kinds of systems" (Van Gelder, 1995, p. 346).
A notable example of this kind of dynamical handling is work on neural mass models, in which cortical activity is explained via groups of action potentials (i.e., 'masses') that synchronize with other groups of neurons (Freeman, 1975; Freeman and Kozma, 2010). Another example is the model advanced by Hipólito et al. (2021b) in which neuronal circuits and organization do away with traditional, fixed modular processors. As will be described in greater detail below, this technique leverages the formalisms of Markov blankets (i.e., probabilistically-defined system boundaries based on conditional dependence/independence relationships) as well as active inference (i.e., a normative model of intelligent behavior) to analyze neuronal dynamics at multiple scales, ranging from single neurons, to brain regions, to brain-wide networks. This treatment is based upon canonical microcircuitry characterized in empirical studies of dynamic effective connectivity, with potentially far-reaching practical applications for neuroimaging. This connection between macro- and mesoscale dynamics with microscale processes is especially relevant when considered within the framework of variational Bayes and information geometry (Ramstead et al., 2020), with further quantitative support obtainable via mathematical formalisms from Renormalisation Group theory (we will return to this in the last section). Zarghami and Friston (2020) developed a model of neural dynamics that generates trajectories in the parametric space of effective connectivity modes (i.e., states of connectivity). This approach allows for more detailed characterization of functional brain architectures by extending the powerful technique of spectral Dynamic Causal Modelling (DCM) (Friston et al., 2003; Daunizeau et al., 2011; Razi and Friston, 2016). Effective connectivity is -by definition- model-based. It develops hypothesis-driven generative models to explain empirical observations of neuronal activity (Friston, 2011).
The effective connectivity paradigm is grounded in a view of brains in which continuously-expressed patterns of transiently coordinated activity emerge and dissolve in response to internal and external perturbations. Importantly, the emergence and evolution of such metastable coordination dynamics (in self-organizing complex systems, such as the brain) is inherently non-linear, context-sensitive, and thereby flexibly adaptable. It is through this dynamical perspective that we interpret the mesoscale neural phenomena described below (see also Atasoy et al., 2019; Wens et al., 2019).
The goal of generative modeling of neural phenomena is to explain how the brain generates neuroimaging data. This approach begins from the assumption that neuronal dynamics are generated by patterns of intrinsic (within region) and extrinsic (between region) connectivity, which continuously change through time. The generative modeling of these itinerant brain states is motivated by empirical studies of dynamic functional connectivity (Vidaurre et al., 2017), theoretically grounded in models in which macroscopic (slowly-evolving) dynamical modes (Haken, 1983; Jirsa et al., 1994) visit a succession of unstable fixed points in a parameter space for directed connectivity.
This technique provides the main component for developing a dynamic model: its parameters, configured according to a Markov process. This specification determines how the connectivity parameter space will visit a succession of unstable fixed points (Rabinovich et al., 2012). Any natural system can be described as a dynamical system. By defining an initial condition and applying a dynamical rule over several iterations, we obtain a set of trajectories that allow us to see the behavior of a system as approaching or avoiding certain points. This means that the behavior of a system can be seen as tending to visit stable fixed points and avoiding other points called repellers.
For example, in a moving pendulum a repeller is the point at which the pendulum would remain upright. Pendulums do not tend to stay upright, as they are attracted to a fixed, stable point: their resting state in the downward direction. We can thereby say that the behavior of a pendulum can be explained as moving away from the repeller point (upward) and toward a stable, fixed point (downward). This can be plotted as an orbit or trajectory cycling between two points (between repeller and attractor points). Zarghami and Friston (2020), in their generative model of neural connectivity, have two assumptions as their point of departure. First, connectivity patterns are assumed to trace out a heteroclinic orbit. A heteroclinic orbit is a kind of cycle in which, as time evolves, a typical trajectory stays for increasingly longer periods of time near a solution, which can be an equilibrium point, a periodic orbit, or a chaotic invariant set (in the case of effective connectivity below, it is equilibria). Notably, an interesting property of heteroclinic cycles is that they are robust under perturbations (external forces applied to the system's activity).
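The pendulum example above can be simulated directly. The following sketch (with assumed, illustrative parameter values) integrates a damped pendulum released just off its upright repeller, showing that the trajectory is pushed away from the unstable point and settles at the downward attractor.

```python
import numpy as np

def damped_pendulum(theta0, omega0=0.0, g_over_l=9.8, gamma=0.5,
                    dt=0.001, n_steps=30000):
    """Explicit-Euler simulation of theta'' = -(g/L) sin(theta) - gamma * theta'.
    theta = 0 is the downward attractor; theta = pi is the upright repeller."""
    theta, omega = theta0, omega0
    for _ in range(n_steps):
        domega = -g_over_l * np.sin(theta) - gamma * omega
        theta += dt * omega
        omega += dt * domega
    return theta, omega

# Released slightly off the upright repeller, the pendulum falls away from
# theta = pi and, with damping, settles at the stable fixed point theta = 0.
theta_final, omega_final = damped_pendulum(theta0=np.pi - 0.01)
```

Because damping continuously removes energy, the trajectory cannot return to the repeller; plotting theta over time would trace the orbit between the two points described in the text.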
A Stable Heteroclinic Cycle (SHC) (Figure 1) describes the activity of the brain as trajectories visiting a discrete number of unstable fixed points in a winnerless competition among brain activity states (Afraimovich et al., 2008; Deco and Jirsa, 2012; Friston et al., 2014). Supposing neural activity to be a heteroclinic cycle means that it is asymptotically stable, i.e., approaching trajectories spend increasingly long periods of time in the neighborhood of successive equilibria.
The second assumption is that transitions from one unstable fixed point to the next are relatively fast with respect to the duration over which effective connectivity remains in the neighborhood of a fixed-point attractor. This assumption leverages two further features of the SHC: (i) the origination of structural stability through self-organization, and (ii) long passages of time spent in the vicinity of saddles 2 in the presence of moderate noise with high-dimensional connectivity (with large numbers of degrees of freedom) (Rabinovich and Varona, 2018). In short, the two assumptions, in light of treating effective connectivity as heteroclinic events, are that wandering sets will (a) spend increasingly more time near stable points and (b) transition quickly from one unstable point to another. Zarghami and Friston (2020) concluded that small random perturbations are unlikely to alter the global structure of heteroclinic events (recall that heteroclinic cycles are robust under perturbations), but rather render greater stochasticity in the duration of these events. This is consistent with evidence from EEG microstates as short time-periods of stable scalp potential fields (Michel and Koenig, 2018). Further, in agreement with Stone and Holmes (1990), while weak additive noise does not essentially alter the structure of phase space solutions, it induces radical changes leading to "a selection of timescales" in the evolution of dynamic interactions and their emergent/synergistic properties. Effective connectivity timescales might thereby be useful in informing canonical models of metastability for both normally functioning and pathological brains. In turn, the study of such diverse nervous systems and their associated properties can further elucidate the functional role of noise and manifold instabilities in inducing potentially altered heteroclinic structures and time-scales.
This means that mapping the patterns of activity of effective connectivity would allow us to identify patterns typically associated with well-adjusted or healthy neuropsychology, and, conversely, patterns associated with psychopathology. This is relevant because it takes us one step beyond the topological structures of the brain typically mapped out by functional connectivity. Many functional principles cannot be reduced to particular neural structures, but instead require focusing on emergent patterns of activity that both influence and are influenced by multiple systems and their various combinations of dynamic interactions. An example of this, with respect to our discussion (below) of neural systems enabling flexible adaptation, is that many of these areas have been found to have less myelinated internal connectivity, potentially affording more slowly evolving dynamics as they integrate and influence other systems with which they couple (Haueis, 2021).
Although other paradigms, e.g., neural networks (Koch, 1999; Smith et al., 2020), note the importance of time-varying processes in neuronal activity, the question is the extent to which they depart from views in which brains are fundamentally understood as dynamical systems. As we have seen, the "dynamical hypothesis" attempts to explain neuronal activity in terms of concepts such as "fixed point attractors," "limit cycles," and "phase transitions." These conceptualizations and methodological approaches are what make dynamical systems theory uniquely qualified to explain the non-linear and context-sensitive evolution of neurobiological systems. Below we will explore particular analysis techniques that may be particularly apt for describing the flexibly adaptive character of brains as dynamical systems.

FLEXIBILITY IN BRAINS AND MINDS
How do functional aspects of brains emerge from the network properties of nervous systems? How do these functions and processes vary across and within individuals? Do greater tendencies toward exploration in behavior and cognition correlate with individuals having brains with greater (or more dynamic) degrees of interconnectivity (Beaty et al., 2016;Kenett et al., 2018)? Do more creative cognitive styles correlate with greater capacities for flexibly transitioning between metastable functional networks (Váša et al., 2015;Wens et al., 2019)? Could broader ranges of preferences or behaviors correspond to having a greater diversity of dynamics in functional networks (Beaty et al., 2015;Atasoy et al., 2017;Huang et al., 2020)? In which ways does the dynamic character of brain connectivity influence the development of people, personalities, and their capacities for ongoing learning/evolution through time and experience?
Here we attempt to describe how these questions might be addressed with methods for characterizing the dynamic properties of brains. We mostly focus on a measure of "network flexibility" as introduced by Bassett et al. (2011), which assesses the degree to which brain networks dynamically reconfigure themselves over time. To calculate network flexibility, the community structure of a brain is estimated at successive time windows in a measurement session. The degree to which nodes change their community allegiance across these time windows corresponds to network flexibility. Intuitively, brains in which nodes change their communities more often have greater overall flexibility. Similarly, individual nodes or groups of nodes can be assessed with respect to the frequency with which they change community allegiances. In this way, we can look at flexibility on multiple scales, so characterizing both global and regional dynamics (which may differ substantially), as well as their interrelations.

FIGURE 1 | Reprinted from Wikipedia (CC): A SHC is a set in the phase space of a dynamical system that consists of a circle of equilibrium points and connecting heteroclinic connections. In this image, the Y-axis indicates varying trajectories, and the X-axis indicates phase with respect to particular orbits, with the red arrow indicating the completion of one revolution.
The basic technique for estimating network flexibility involves dividing up a time series into a series of epochs, where the number of divisions depends on the sensitivity of the measure and estimated temporal granularity for phenomena of interest. Across these epochs, data is clustered into groups of correlated nodes known as cliques, or communities, or modules, which are assumed to reflect functional subnetworks over which relatively higher degrees of communication takes place. The modeler then determines the proportion of times that nodes switch community-allegiances, providing a number between 0 and 1, where 0 reflects minimally flexible nodes that never change cliques, and 1 reflects maximally flexible nodes that always change cliques. These values can then be averaged over the whole system to determine the entire network's flexibility. Although in this paper we will focus on neuroscience applications, one of the many exciting things about this technique is that it can be applied to any time series data with a graph structure, which means any time-varying dataset.
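As a concrete illustration of the procedure just described, the following minimal sketch computes per-node and whole-network flexibility from an assumed array of community labels across time windows. The community labels themselves would come from a separate community-detection step (e.g., multilayer modularity maximization), which is not shown here.

```python
import numpy as np

def node_flexibility(communities):
    """communities: (n_windows, n_nodes) array of community labels per
    time window. A node's flexibility is the fraction of consecutive
    window pairs in which its community allegiance changes, yielding a
    value between 0 (never switches) and 1 (always switches)."""
    communities = np.asarray(communities)
    changes = communities[1:] != communities[:-1]   # (n_windows-1, n_nodes)
    return changes.mean(axis=0)

# Toy example with 4 windows and 3 nodes: node 0 never switches (0.0),
# node 1 switches every window (1.0), node 2 switches once (1/3).
labels = np.array([[1, 1, 1],
                   [1, 2, 1],
                   [1, 1, 2],
                   [1, 2, 2]])
flex = node_flexibility(labels)   # per-node flexibility
network_flex = flex.mean()        # whole-network flexibility
```

Averaging node values gives the whole-network flexibility described in the text; the same function applies to any time-varying graph partition, not only neuroimaging data.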
In Bassett et al. (2011), overall brain network flexibility was found to predict the rate at which sequences were learned. In another study, flexibility in the striatum correlated with enhanced reinforcement learning of visual cues and outcomes (Gerraty et al., 2018). Whole-brain flexibility has been further shown to correlate with working memory as assessed by N-back tasks (Braun et al., 2015). An additional study replicated these N-back findings, and found correlations between whole-brain flexibility and number of hours slept, as well as performance on a relational reasoning and planning task (Pedersen et al., 2018). These findings may be further consistent with a study in which variability in flexibility was explained by fatigue (Betzel et al., 2017). Betzel et al. (2017) also found that positive mood was correlated with reduced flexibility in the dorsal attention network (DAN), which is notable in highlighting that flexibility is not necessarily something that is strictly beneficial in all contexts. For example, Chai et al. (2016) found that language comprehension involved a relatively stable set of regions in the left hemisphere, but with a more flexible periphery of right-lateralized regions. As will be discussed in greater detail below, adaptive functioning may involve a kind of synergistic division of labor where systems are tuned to exhibit varying degrees of flexible responsiveness, which might reflect optimization for particular functional roles. Fascinatingly, this sort of "divide-and-conquer" approach to achieving both stability and plasticity may not only apply to networks of effective connectivity within brains, but also to emergent dynamic modules in the metabolism of individual cells (Jaeger and Monk, 2021).
Along these lines, a recent study found that excessive flexibility in the visual areas of infants negatively correlated with the rate at which subsequent developmental milestones were surpassed (Yin et al., 2020). Such a pattern might be expected if efficient perception is reflected by more stable intramodule dynamics. However, progressing through developmental stages was generally characterized by increased flexibility in somatomotor areas as well as higher-order brain regions including the temporoparietal junction, anterior cingulate, and anterior insula. This "flexible club" was largely distinct (60% non-overlap) from areas identified as functional hubs with high betweenness centrality, and also distinct from a "diverse club" of areas with high participation coefficients. These more flexible areas were also characterized by relatively weak (but more variable) connection strengths, which was suggested to "enable the system to function within many difficult-to-reach states, reflecting a capacity to adapt to novel situations." Interestingly, flexibility in these areas has also been found to correlate with intelligence-related phenomena such as "need for cognition" and creative achievement (He et al., 2019). These flexible areas appear to constitute a combination of hubs from the default mode network involved in creative and imaginative cognition (Hassabis and Maguire, 2009) and 'modeling' of self and other (Saxe et al., 2006;Davey and Harrison, 2018), frontoparietal control/attention network (Huang et al., 2020), as well as salience-determining network for high-level action selection (Rueter et al., 2018;Toschi et al., 2018). These areas appear to facilitate overall metastability in the brain (Wens et al., 2019), with analogous mechanisms being observable across a potentially surprising range of organic systems (Hanson, 2021).
While this "flexible club" was largely similar across age groups ranging from infants to adults, there were also notable differences (Yin et al., 2020) (Figure 2). From 0 to 24 months of age, infants gained flexibility in frontal and premotor areas, consistent with the gradual emergence of intentional control processes. Further, in moving from adolescence to adulthood, while frontal and somatomotor areas are included in the adolescent flexible club, these areas lose their relative flexibility in adulthood, consistent with adolescence being a time of intense change. The precuneus showed a more complex pattern of decreasing flexibility up to 3 months of age, followed by subsequent increases with further development. These findings are intriguing in light of the roles of this brain area in conscious visual perception and mental imagery (Utevsky et al., 2014;Ye et al., 2018;Safron, 2020). Perhaps even more intriguingly, the precuneus became relatively less flexible in adulthood, which could (speculatively) be taken to reflect a (potentially adaptive) reduction of psychological flexibility with age.

DIFFERENT FORMS OF FLEXIBILITY: DISJOINTEDNESS VS. COHESIVENESS
Network flexibility is not necessarily strictly beneficial, and may exhibit an inverted U relationship with desirable characteristics (Yerkes and Dodson, 1908; Northoff and Tumati, 2019). Relatively elevated brain network flexibility may be associated with adaptive behaviors, but excessive flexibility may be indicative of systems pushed past an adaptive point of metastability (Carhart-Harris et al., 2014; Atasoy et al., 2019). For instance, flexibility has been shown to be elevated in people with schizophrenia, as well as in their relatives who may also be at increased risks for psychosis (Braun et al., 2016). However, Telesford et al. (2017) proposed two different variants of flexibility analyses with very different properties: disjoint flexibility measures the degree to which nodes tend to migrate separately; cohesive flexibility measures the degree to which nodes migrate together (Figure 3).

FIGURE 2 | Red indicates regions with significantly higher flexibility than the whole brain, and blue indicates regions with significantly lower flexibility than the whole brain. Orange indicates regions with no significant difference in flexibility from the whole brain.
Node disjointedness is calculated by assessing the number of times a node changes communities independently, divided by the number of times a node could have potentially changed communities. Node cohesion is calculated by assessing the proportions of times a node changes communities in conjunction with other nodes in its (previous) community allegiance, summing over all pairwise co-migrations. For example, if a community splits into two new communities that each contain multiple nodes, the average disjointedness will be zero because all nodes migrate as part of a group, but average cohesion will be greater than zero (because each node migrates with at least some other nodes).
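The distinction just drawn can be sketched in code. The implementation below is a simplified illustration under the assumption that a co-migration means two nodes sharing both source and target communities between consecutive windows; the published measures involve additional normalization details not reproduced here.

```python
import numpy as np

def disjoint_and_cohesive(communities):
    """Sketch of the disjoint/cohesive distinction (after Telesford et al.,
    2017). For each pair of consecutive windows, a migrating node is
    'cohesive' if at least one other node moves with it from the same old
    community to the same new community, and 'disjoint' if it migrates
    alone. Returns per-node disjointedness and per-node counts of pairwise
    co-migrations, each divided by the number of possible changes."""
    communities = np.asarray(communities)
    n_windows, n_nodes = communities.shape
    disjoint = np.zeros(n_nodes)
    cohesion = np.zeros(n_nodes)
    for t in range(n_windows - 1):
        old, new = communities[t], communities[t + 1]
        movers = np.where(old != new)[0]
        for i in movers:
            # partners: other movers sharing both source and target community
            partners = [j for j in movers if j != i
                        and old[j] == old[i] and new[j] == new[i]]
            if partners:
                cohesion[i] += len(partners)
            else:
                disjoint[i] += 1
    return disjoint / (n_windows - 1), cohesion / (n_windows - 1)

# Toy example: nodes 0 and 1 migrate together (cohesive), node 3 migrates
# alone (disjoint), node 2 stays put.
labels = np.array([[1, 1, 1, 1],
                   [2, 2, 1, 3]])
disjoint, cohesion = disjoint_and_cohesive(labels)
```

This reproduces the example in the text: when a community splits into groups of multiple nodes, disjointedness stays at zero while cohesion is positive for every migrating node.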
While cohesive flexibility positively correlated with learning rate in Telesford et al. (2017), disjoint flexibility did not show this association, and was speculated to reflect something like a general measure of neural entropy. However, this should not be taken to indicate that disjoint dynamics are necessarily maladaptive. In this particular study (Figure 4), while hierarchically higher areas became less disjoint with training, hierarchically lower areas tended to become more disjoint. Speculatively, this pattern could be taken as indicating predictive processing in which higher-level coherent neural activity successfully suppresses bottom-up information flows (Bastos et al., 2020; Walsh et al., 2020).
While disjoint flexibility could be viewed as a neural entropy measure in terms of independent node switching, entropy has also been associated with adaptive processing and even intelligence (Carhart-Harris et al., 2014; Chang et al., 2018; Herzog et al., 2020; Vivot et al., 2020; Cieri et al., 2021). Thus, disjointedness may also sometimes exhibit an inverted U relationship with psychological functioning. Qualitatively speaking, low-to-moderate levels of disjointed migration may be better than stasis/rigidity, but the non-coherent nature of node migration may tend to indicate a breakdown of adaptive functioning. That is, it is possible that some degree of disjoint dynamics may be required for adaptability, but correspond to non-adaptive dynamics past a certain (and potentially low) threshold for stochasticity. A minimal degree of disjoint flexibility may correspond to adaptive meta-stability, and possibly regimes characterized by "stochastic resonance" (Vázquez-Rodríguez et al., 2017), which have been shown to have many desirable properties such as enhanced abilities to transmit and integrate information. However, excessive disjointedness could potentially indicate a disruption of integrated functioning of the kind associated with disconnection-type psychotic states. Cohesive flexibility, in contrast, might involve a more linear relationship with adaptive functioning, but with potentially maladaptive consequences at extremes (e.g., manic psychosis).

FIGURE 4 | Reprinted with permissions from (Telesford et al., 2017). Patterns of changing disjoint flexibility as associated with learning. Blue indicates regions where disjointedness increased between Day 1 and Day 2 of the motor learning task, and red indicates regions where disjointedness decreased between Day 1 and Day 2 of the motor learning task.

COHESIVE FLEXIBILITY, CRITICALITY, AND CONSCIOUSNESS?
Beyond its implications for psychological functioning, it may be the case that cohesive flexibility represents a hallmark of certain universality classes, or emergent properties that can be found across a wide range of complex systems. Along these lines, flexibility analyses may potentially be used as proxies for self-organized criticality (Bak et al., 1987; Atasoy et al., 2019). Self-organized criticality refers to the tendency of systems to exhibit phase transitions as attracting states. Not only does this self-organization allow for enhanced access to wider regions of phase space, but these "edge of chaos" inter-regimes also balance disordered and ordered dynamics, with sufficient variation to support adaptation, while also providing sufficient stability for the accumulation of structure in (generalized) evolution (Sneppen et al., 1995; Paperin et al., 2011). Such near-critical organization is also essential for inference/learning (Friston et al., 2012; Hoffmann and Payton, 2018), which can itself be considered to be a kind of evolutionary process (Campbell, 2016). These kinds of flexibly adaptive processes may also be essential for realizing consciousness as a dynamic core and integrated world model (Safron, 2020), whose emergent functioning further enhances the ability of systems to flexibly adapt to novel situations. This dynamical systems interpretation of the sources of flexibility could potentially be tested by looking for correlations with putative hallmarks of self-organized criticality such as power-law distributions, increased fractal dimension, and critical slowing down (Friston et al., 2012; Touboul and Destexhe, 2017; Varley et al., 2019).
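As a toy illustration of one such hallmark, critical slowing down can be probed by tracking autocorrelation as a system's restoring forces weaken. The sketch below is our own illustrative example (not an analysis from the cited studies): it simulates an Ornstein-Uhlenbeck process and shows lag-1 autocorrelation rising as the decay parameter approaches zero, i.e., as the system nears a bifurcation.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation, a standard early-warning proxy for critical slowing."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def simulate_ou(decay, n=20000, dt=0.01, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process dx = -decay*x dt + dW."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t - 1] - decay * x[t - 1] * dt + np.sqrt(dt) * rng.standard_normal()
    return x

# As the restoring force weakens (approaching a bifurcation at decay -> 0),
# fluctuations recover more slowly and autocorrelation rises.
ac_far = lag1_autocorr(simulate_ou(decay=5.0))
ac_near = lag1_autocorr(simulate_ou(decay=0.2))
print(ac_far, ac_near)
```

In empirical data the same logic applies in sliding windows: a sustained rise in autocorrelation (or variance) is treated as a signature of approach to criticality.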
With respect to potential associations with consciousness, one fascinating study applied flexibility analyses to investigate dynamic alterations in the modular structure of nervous systems with varying depths of anesthesia (Standage et al., 2020). In this investigation, the anesthetic isoflurane was used to modulate consciousness level in rhesus macaques measured with high-field strength fMRI. In addition to examining disjoint and cohesive flexibility, an additional measure of promiscuity was utilized (Figure 5), calculated as the number of communities that a node participates in over time, divided by the total number of potentially available community allegiances (Sizemore and Bassett, 2018). In contrast to flexibility, promiscuity assesses the degree to which nodes take part in the full range of available communities, and so would speak to the "diverse club" findings described above for Yin et al. (2020).
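To make these definitions concrete, the sketch below implements simplified node-level versions of flexibility, promiscuity, and cohesion strength from a matrix of community assignments over time. The toy data are hypothetical, and the published measures (Telesford et al., 2017; Sizemore and Bassett, 2018) include normalizations we omit here for brevity.

```python
import numpy as np

def flexibility(labels):
    """Fraction of transitions at which each node changes community.
    labels: (n_nodes, n_time) integer community assignments."""
    changes = labels[:, 1:] != labels[:, :-1]
    return changes.mean(axis=1)

def promiscuity(labels):
    """Fraction of all observed communities that each node visits at least once."""
    n_comms = len(np.unique(labels))
    return np.array([len(np.unique(row)) / n_comms for row in labels])

def cohesion_strength(labels):
    """Number of times each node changes community together with at least one
    other node making the same community-to-community move."""
    n_nodes, n_time = labels.shape
    strength = np.zeros(n_nodes)
    for t in range(n_time - 1):
        moved = np.flatnonzero(labels[:, t] != labels[:, t + 1])
        for i in moved:
            partners = [j for j in moved if j != i
                        and labels[j, t] == labels[i, t]
                        and labels[j, t + 1] == labels[i, t + 1]]
            strength[i] += len(partners)
    return strength

# Toy example: nodes 0 and 1 move together (cohesively); node 2 moves alone
# (disjointly) at every step; node 3 never moves.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 1, 0, 1],
                   [1, 1, 1, 1]])
print(flexibility(labels))        # node 2 is the most flexible
print(promiscuity(labels))
print(cohesion_strength(labels))  # only nodes 0 and 1 make cohesive moves
```

Note that node 2 scores highest on flexibility but zero on cohesion, capturing the disjoint/cohesive distinction discussed above.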
Fascinatingly, Standage et al. (2020) observed that deeper sedation correlated with higher mean disjointedness, lower mean promiscuity, and lower mean cohesion strength (Figure 6). These findings potentially suggest that relatively high degrees of promiscuity/diversity might require cohesive dynamics to be realized. These patterns were further associated with greater network fragmentation, as evidenced by a larger number of communities with higher modularity values (interpreted as indicating functional isolation). Four functional networks were identified, corresponding to the cingulate-temporal-parietal, visual-somatomotor, temporal-parietal-prefrontal, and lateral parietal-frontal-cingulate-temporal areas. With greater degrees of sedation, these networks tended to become less distinct from each other, with visual-somatomotor areas constituting a notable exception in terms of maintained separability. Speculatively, this would make some sense given the close engagement of these areas with sense data, and potential (experience-dependent) development of more modular structure with greater locality in information processing.
Integrated Information Theory (IIT) may provide another means of assessing connections between network flexibility, criticality, and consciousness (Tononi et al., 2016; Safron, 2020, 2021). IIT was initially developed as a theory that started from the hypothesis that consciousness involves synergy, or wholes with informational properties that are greater than the sum of their parts (Tononi and Edelman, 1998). The theory was subsequently developed as a general model of emergent causation that analyzes (potentially conscious) systems in terms of their "irreducible self-cause-effect-power," or capacity of present configurations to place informational constraints on their past and future states (Balduzzi and Tononi, 2009). The intuition underlying this modeling approach is that conscious systems are composed of "differences that make a difference" to themselves, or have intrinsic functional significance. Notably, integrated information appears to be maximized by systems that balance integration and differentiation, which is widely considered to be a prerequisite for adaptive complexity, and which has also been associated with connectomic properties stipulated to be necessary for realizing consciousness (Dehaene and Changeux, 2011; Shanahan, 2012; van den Heuvel et al., 2012; Sporns, 2013; Shine, 2019, 2021; Mashour et al., 2020; Safron, 2020, 2021).
While their potential sufficiency for establishing subjective experience is highly debatable, these analyses may nonetheless point to necessary conditions for realizing sufficiently complex (and thereby powerful) processing for realizing consciousness, potentially via cohesive flexibility. While IIT's formal analyses are not computationally tractable for most biological networks (Mayner et al., 2018), a variety of approximations have been developed (Massimini et al., 2012; Tegmark, 2016; Mediano et al., 2019). Based on such estimates, integrated information appears to be maximized by systems that exhibit self-organized criticality (Arsiwalla and Verschure, 2016; Aguilera and Di Paolo, 2021). Further, if consciousness involves the flexible establishment of large-scale integrative modules (or "workspaces"), and if such complexes can only form for networks capable of establishing patterns of effective connectivity with balanced integration/differentiation and order/stochasticity, then we ought to expect valid proxies of integrated information to correlate with such dynamic modularity. Speculatively, valid measures of modularity may provide computationally tractable estimates of integrated information if it is the case that such modules are only likely to self-assemble under conditions that allow for flexibly balanced dynamics. Finally, a recently proposed theory of consciousness has suggested connections between IIT's "self-cause-effect-power" and the capacities of systems to generate themselves according to the general systems theory of the Free Energy Principle (Safron, 2020, 2021), which we will now discuss.

FREE ENERGY MINIMIZATION AS FUNDAMENTAL PRINCIPLE OF BRAIN ORGANIZATION?
We will now review some fundamental principles of neuronal organization before going on to describe additional ways of characterizing complex networks of effective connectivity. What constitutes an appropriate explanation of neuronal assembly formation? It has been suggested that neuronal organization can be understood as the self-organization of boundaries in dynamical systems that minimize free energy (Friston, 2010, 2013). In this section we offer our view as to how the Free Energy Principle (FEP) can be employed within, and to extend, the dynamical approach we have been explicating so far. The FEP formalism explains the autonomous emergence of order in the brain as a dynamic self-assembling process (Parr et al., 2020; Friston et al., 2021).
We find systems that self-organize at every nonequilibrium scale (Friston, 2019). In order to persist, every thermodynamically-open system must self-organize as it exchanges matter and energy with the environment with which it is coupled (Palacios et al., 2017; Lamprecht and Zotin, 2019). That is, persisting systems must self-organize to (temporarily) defy the second law of thermodynamics and keep their (nonequilibrium) steady states from tending toward disorder and dissipation (Friston, 2013; Colombo and Palacios, 2021).

FIGURE 5 | Reprinted with permission from Sizemore and Bassett (2018). This schematic depicts examples of nodes with low (dark blue) or high (pink) flexibility, promiscuity, and cohesion. In the (Left) panel, the pink node is more flexible than the blue node (i.e., changes communities more often). In the (Middle) panel, the pink node has higher promiscuity than the blue node (i.e., visits a greater proportion of available communities across time). In the (Right) panel, the pink nodes have higher cohesion strength than the blue nodes (i.e., the pink nodes move between communities in tandem).

FIGURE 6 | Reprinted with permission from Standage et al. (2020). On the (Left): relationship between isoflurane dose and mean disjointedness (top panel), mean cohesion strength (middle panel), and mean promiscuity (bottom panel) across the whole brain. The brain images directly next to these graphs show regions where the relationship between the dose and measure was significant (in orange), and significant with an additional false discovery rate correction (in yellow). On the (Right): network architecture identified in the study, depicting four networks: 1/red = cingulate-temporal-parietal; 2/yellow = visual-somatomotor; 3/green = temporal-parietal-prefrontal; 4/blue = lateral parietal-frontal-cingulate-temporal.
To avoid this maximally likely outcome of monotonically increasing disorder, systems must exhibit intelligent adaptivity in exchanging matter and energy with their environments.
While some systems are simply subject to being entrained by environmental stochasticity and some dynamical rule, such as pendulums (Oliveira and Melo, 2015; Kirchhoff et al., 2018; Lahav et al., 2018), other systems can interact with their environment in order to achieve more desirable states for survival. Living systems such as cells, organs, and organisms keep surprisal (i.e., cybernetic entropy) at bay by engaging in behavior that translates into stable minimal points (i.e., uncertainty minima) for a set of potential states within bounds that can mean the difference between life and death.

FIGURE 7 | The figure depicts a Markov blanket partitioning of conditionally independent states: internal (blue) and external (cyan); and blanket states, active (red) and sensory (magenta), directly influencing one another. The arrows show that while internal states depend on blanket states (sensory and active states), external states depend on external states and blanket states. This means that internal and external states indirectly influence one another by virtue of the influences between blanket states: i.e., sensory and active states. The Markov blanket formalism is scale free, which means that it can be applied to any scale of the physical world to explain the exchanges and influences amongst open systems. Applied to the brain, the formalism can be applied such that internal states correspond to a single neuron, a cortical microcircuit, a region, or a network (see Hipólito et al., 2021b for a detailed application).
Evidence from dynamic causal modeling (Jafarian et al., 2021) provides compelling reasons to think that synaptic organization conforms with the conditional independence established by a Markov blanket in a dynamical setting (Hipólito et al., 2021b; Parr et al., 2021). A Markov blanket is a statistical tool that can be applied to any system that self-organizes, from particles and moving pendulums to cells, neurons, brains, and organisms. With this formalism it is possible to partition a self-organizing system at every scale in terms of its conditionally independent states, that is, how the environment influences a system and vice versa. While the system of interest (e.g., a neuron) corresponds to the internal states, the environment (e.g., a cortical column of which the neuron is a part) corresponds to the external states.
The Markov blanket formalism aims to determine how internal and external states influence each other. Mathematically we know that internal and external states do not directly influence one another (this is known as conditional independence); they instead indirectly influence one another by virtue of a further set of states: sensory and active states (known as blanket states), as shown in Figure 7. Technically, it is as if a neuron in a cortical column (internal states) engages in predicting the state of the cortical column (external states) by issuing a prediction (active state), where the state of the cortical column (sensory state) directly influences the issuing prediction and vice versa.
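The conditional-independence structure at the heart of the Markov blanket formalism can be illustrated numerically. In this toy linear-Gaussian sketch (purely illustrative; the coefficients are arbitrary), internal states depend on external states only through a sensory (blanket) state, so their correlation effectively vanishes once the blanket is conditioned on:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# A toy linear chain respecting the blanket structure:
# external -> sensory (blanket) -> internal; no direct external -> internal path.
external = rng.standard_normal(n)
sensory = 0.8 * external + 0.6 * rng.standard_normal(n)
internal = 0.7 * sensory + 0.7 * rng.standard_normal(n)

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z (ordinary least squares)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return float(np.corrcoef(rx, ry)[0, 1])

marginal = float(np.corrcoef(internal, external)[0, 1])
conditional = partial_corr(internal, external, sensory)
print(marginal, conditional)  # marginal dependence, conditional independence
```

The marginal correlation is substantial, while the partial correlation given the blanket state is near zero, which is exactly the statistical signature the Markov blanket partition formalizes.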
The scale-free property of the Markov blanket formalism can be applied to every level of self-organization, from single neurons, to cortical columns, to brain regions, to mesoscale networks, to the entire nervous system, and beyond. In this way, it is possible to explain persisting systems as coupled with their multiscale environments at every level of organization. A system (i.e., internal states) can be seen as engaging in the required behavior of model-uncertainty-minimization by which intelligent systems adapt and maintain themselves. In other words, internal states appear to engage in behavior that reduces internal uncertainty or entropy and so avoids system dissipation. The ways in which systems reduce uncertainty or entropy can be explained as if the system were minimizing a singular objective functional of informational free energy (equivalently, maximizing accuracy minus complexity, or evidence with respect to the active inferential models whereby systems promote their existence).
F = D_KL[q(ψ | µ) || p(ψ | m)] − E_q[log p(s | ψ, m)]  (2)

Equation 2 represents free energy minimized with respect to internal and sensory states, corresponding to the difference between complexity and accuracy: that is, between the Kullback-Leibler (KL) divergence of the variational density from the prior, D_KL[q(ψ | µ) || p(ψ | m)], and the expected log-likelihood of sensory states under the approximate posterior, E_q[log p(s | ψ, m)]. This framing of complex adaptive systems allows us to treat neuronal organization as an optimization problem. That is, how does the minimization of free energy ensure that brains optimize neuronal assemblies in ways that reduce entropy/uncertainty? At every self-organizing scale of the brain, activity can be explained in terms of this sort of optimization. Brains, as dynamical systems, can be cast as minimizing variational free energy with respect to the sufficient statistics q̂ of an approximate posterior distribution q(θ | q̂) (Roweis and Ghahramani, 1999; Friston, 2008). Under the Laplace approximation, these sufficient statistics correspond to the mean and covariance of probabilistic densities. The adaptive shaping of neuronal activity thus becomes an optimization problem (as opposed to an inference problem) (Dayan et al., 1995) with respect to implicit generative models governing system evolution. This optimization with respect to (implicit) probabilistic densities corresponds to the minimization of free energy given the prior p, such that free energy minimization, F[p, q], is expressed in terms of complexity minus accuracy, where complexity is the Kullback-Leibler divergence between the approximate posterior (q) and prior (p) distributions. After the negative free energy has been maximized, approximate equalities can be used to estimate the posterior density over unknown model parameters, as well as the log evidence, or (marginal) likelihood of the model. This means that the approximate posterior over parameters is a functional of the approximate posterior for actions inferred for present and future states. This informational free energy corresponds to the inverse probability of predictive responses (as a joint mapping between hidden states and observations, thus specifying a generative model) given an approximate posterior. The free energy gradient (and entailed dynamical fluctuations) is minimal when the average difference between posterior and prior expectations is zero, thus specifying the extent of systems as non-equilibrium steady state distributions.

FIGURE 8 | In this figure we have an illustration of progressively larger scales (and slower dynamics) arising from subordinate levels. In the upper panels, the conditional dependencies among these vector states (i.e., eigenstates) define a particular partition into particles. This partition then equips each particle with a bipartition into blanket and internal states, where blanket states comprise active (red) and sensory (magenta) states. Vector states (i.e., eigenstates) on the bottom can be partitioned into particles (upper panels). Each particle can then be partitioned into internal and blanket states, which involve active (red) and sensory (magenta) states. The behavior of each particle can be summarized either as (slow) eigenstates or mixtures of its blanket states to produce states at the next level or scale (i.e., an ensemble of vector states). Note that the first uses the particular partition to group subsets of states (G), while the second uses eigenstates of the resulting blanket states to reduce dimensionality (R).
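A minimal discrete example (our own, with made-up numbers) illustrates the decomposition described for Equation 2: variational free energy equals complexity minus accuracy, upper-bounds surprisal (negative log evidence), and touches that bound exactly when the approximate posterior equals the true posterior.

```python
import numpy as np

def free_energy(q, prior, lik):
    """Variational free energy for a discrete hidden state:
    complexity KL[q || prior] minus accuracy E_q[log p(s | state)]."""
    complexity = float(np.sum(q * np.log(q / prior)))
    accuracy = float(np.sum(q * np.log(lik)))
    return complexity - accuracy

prior = np.array([0.5, 0.5])        # p(state)
lik = np.array([0.9, 0.2])          # p(observation | state), for a fixed observation
evidence = float(prior @ lik)       # p(observation)
posterior = prior * lik / evidence  # exact Bayesian posterior

F_post = free_energy(posterior, prior, lik)
F_other = free_energy(np.array([0.5, 0.5]), prior, lik)

# F upper-bounds surprisal (-log evidence), with equality when q is the posterior.
print(F_post, -np.log(evidence), F_other)
```

Any other choice of q (here, the uniform density) yields a strictly higher free energy, which is why minimizing F performs approximate Bayesian inference.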

ON PARTICLES AND PARCELS: DYNAMIC CAUSAL MODELING, MARKOV BLANKETS, AND DYNAMIC MODULARITY
Regardless of whether or not one is compelled by the free energy perspective, this research program has generated a number of powerful analytic techniques. Before returning to more commonly known approaches for characterizing dynamic brain networks, we will briefly explore additional methods for assessing emergent modularity, which may lead to advances in both theoretical and applied domains. These analyses could potentially be applied alongside other measures to provide more detailed accounts of the latent processes that generate neuroimaging data. This cross-referencing of methods could further provide enhanced interpretability of datasets, and potentially help to inspire the development of novel analytic tools. Renormalisation Group (RG) theory provides a principled account of dimensionality reduction in complex systems. Applied to the brain as a set of dynamically evolving Markov blankets, it offers formalisms for moving up and down analytical (and perhaps ontological; van Es and Hipolito, 2020) levels depending on our area of interest for scientific description. In doing so, this approach affords understanding neuronal dynamics across a range of (nested) spatial and temporal scales.

FIGURE 9 | In this figure we have a partition of eigenstates (small colored balls) into particles, where each particle displays 6 blanket states (active states in red; sensory states in magenta), and 3 internal states (cyan). Based upon the Jacobian and implicit flow of vector states, an adjacency matrix characterizes the coupling between vector states, which defines the blanket forming matrix (B), and forms a Laplacian (G) that is used to define coupled internal states. The internal and blanket states then form a new particle. The procedure is exhausted when unassigned vector states belong to the Markov blanket of the particles identified previously.
States partitioned by the Markov blanket are multidimensional vector states, or eigenvectors. From these eigenvectors, by means of RG theory, it is then possible to construct new states at a superordinate scale. We proceed by considering the principal eigenvectors of the blanket states and take those as eigenstates for the scale above. The recursive application of a grouping or partition operator (G), followed by dimensionality reduction (R), allows us to define a renormalisation group. Put simply, this method allows us to scale up across levels of emergent/synergistic organization, with subsequent levels characterized by more encompassing scopes of integration, evolving with increasingly slow/stable temporal dynamics. The dimension-reduction operator (R) allows us to eliminate (1) the internal states (as these, by definition, do not contribute to coupling) and (2) fast eigenstates (unstable or fast modes of a dynamical system that more rapidly dissipate). These two simple operations allow us to retain only slow and stable eigenvalues, which we can see as an adiabatic approximation that separates out fast and slow dynamics. This separation rests on the eigenvectors of the Jacobian for each Markov blanket, with eigenvectors separated into those with small (slow) and large negative (fast) eigenvalues. As an adiabatic reduction (Haken, 1983), with its related (center) manifold theorem (Carr, 1981), we can see that dynamics get progressively slower at successive scales, where the intrinsic coupling among eigenstates is constituted by a diagonal matrix of (negative) eigenvalues, so determining the relative decay of multiscale attractors. In short, we can eliminate fast eigenstates and approximate dynamics with the remaining slow eigenstates that capture the dynamics 'that matter' for explaining large-scale system properties.
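The slow/fast separation can be sketched as follows. This is a hypothetical two-dimensional linearized system of our own construction; in real applications the Jacobian would come from an estimated dynamical (e.g., dynamic causal) model.

```python
import numpy as np

# Hypothetical Jacobian of a linearized system dx/dt = J x, mixing one slowly
# and one rapidly decaying mode (eigenvalues with small vs. large negative real parts).
J = np.array([[-0.1, 0.05],
              [ 0.0, -10.0]])

eigvals, eigvecs = np.linalg.eig(J)
order = np.argsort(np.abs(eigvals.real))  # smallest |Re(lambda)| first = slowest decay
slow = eigvals[order[0]].real
fast = eigvals[order[1]].real

# Adiabatic approximation: after a transient of order 1/|fast|, the fast mode has
# dissipated, and the dynamics "that matter" live on the slow eigenstate.
timescale_ratio = abs(fast) / abs(slow)
print(slow, fast, timescale_ratio)
```

Here the fast mode decays two orders of magnitude faster than the slow mode, so discarding it (the R operation) loses almost nothing about the system's long-run evolution.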
Importantly, as shown in Figures 7, 8, these renormalized flows allow us to progress from more chaotic high-amplitude dynamics to more deterministic (and potentially interpretable) dynamics of slow fluctuations that are likely to dominate overall system evolution (Friston et al., 2012, 2014). This simply means that we are adopting a common procedure known as 'dimensionality reduction.' The more complex (i.e., higher dimensional) the system, the harder it is to mathematically track its dynamics. Progressing from high-amplitude chaotic dynamics to more deterministic dynamics also means that things become more interpretable. The Jacobian in Figure 9 summarizes effective connectivity at the smallest scale, so allowing us to investigate intrinsic dynamics at progressively larger scales. Lyapunov exponents are equivalent to the eigenvalues of the Jacobian describing intrinsic coupling. By associating the Jacobian of each particle with Lyapunov exponents, it is possible to score the average exponential rate of divergence or convergence of trajectories in state space (Lyapunov, 1992; Yuan et al., 2011; Pavlos et al., 2012). There is a progressive slowing of intrinsic dynamics as we move up to larger (higher) scales toward critical regimes of instability and slowly fluctuating dynamical modes. As previously discussed with respect to self-organized criticality, the importance of such unstable/metastable regimes may be difficult to overstate, and analysis of Lyapunov exponents for signs of critical slowing provides a powerful means of assessing such adaptive organization. This also brings us to a notable point about the brain: particles that constitute the active (gray) matter, when considered in isolation, show autonomous dynamics that can be cast in terms of stochastic chaos or itinerancy. Autonomous neural dynamics emerge as characteristics associated with intrinsic connectivity (where Lyapunov exponents of the intrinsic coupling describe the rate of decay).
The extrinsic dynamics concern, however, the extent to which one eigenstate influences another. Of crucial interest here are rate constants, or the degree to which an eigenstate of one particle responds to the eigenstate of another. Notably, large extrinsic couplings can also be understood as cross-variance functions, or a kind of implicit form of 'representation.' In conclusion, the application of the RG formalism to states partitioned by Markov blankets allows us to see the emergence of order in neuronal activity as intrinsic connectivity (intrinsic dynamics) and dynamic coupling (extrinsic dynamics). Every state in this multiscale system organizes to keep surprisal at bay. In other words, the action of all complex adaptive systems can be explained as self-organizing to minimize free energy, and so persist through intelligent active inference. Evolutionary predispositions (Badcock et al., 2019), together with stochastic forcing from environmental pressures, canalize dynamics to enact certain trajectories relative to others, corresponding to the emergence of neuronal assemblies capable of functioning as (potentially flexible) attracting manifolds for adaptive functioning.

FIGURE 10 | (A,B) Networks identified as contributing to the high-amplitude co-fluctuations. Positive contributions were indicated by high PC1 scores, with significant contributions made by default mode areas (within DMNa and DMNb) or control areas (within Contb). On the right, asterisks correspond to PC1 scores obtained in a principal component analysis (performed on a correlation matrix not shown in this modified figure). Further details describing the principal component analysis used to obtain these results can be found in Esfahlani et al. (2020). Reprinted with permission from Esfahlani et al. (2020).

FIGURE 11 | Cohesive flexibility and multi-scale psychology. This diagram illustrates how cohesive flexibility may provide a functional bridge between moment-to-moment changes in psychological states, more enduring psychological traits, and the attracting states by which individuals evolve through time as cybernetic (free energy minimizing) systems. At each level of organization, flexible (potentially self-organized critical) dynamic processes allow for intelligent responding, learning, and evolution toward increasing degrees of adaptive complexity.

FUTURE DIRECTIONS FOR UNDERSTANDING FLEXIBILITY IN BRAINS AND MINDS
One limitation of methods for assessing network flexibility (and of connectomics more generally) is an assumption of dichotomous in/out properties with respect to community membership. Theoretically, community allegiance could be made a matter of degree, some of which may also represent differences in kind (Anderson, 1972), with flexibility weighted by degrees of modularity, which the Generalized Louvain algorithm quantifies as Q (for "quality," or extent of preferential inner-connectivity for a community) (Jutla et al., 2011). This algorithm is computationally efficient and empirically valid, and also has face validity in terms of modularity being important for allowing separable optimizations of sub-systems with potentially fewer functional tradeoffs. However, these methods treat modules in overly simplistic ways, and neglect to consider the extent to which nodes can participate in multiple communities to varying degrees, potentially involving functional multiplexing with multiscale organization.
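For reference, the underlying quantity is simple to compute directly. The sketch below evaluates Newman's Q for a toy network; this is our own minimal implementation, not the Generalized Louvain algorithm itself, which additionally optimizes Q over candidate partitions (and over layers in the multilayer case).

```python
import numpy as np

def modularity_Q(A, communities):
    """Newman modularity for an undirected adjacency matrix A and community labels:
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    k = A.sum(axis=1)                 # node degrees
    two_m = A.sum()                   # twice the number of edges
    same = communities[:, None] == communities[None, :]
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two four-node cliques joined by a single bridge edge.
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1

good = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # the natural bipartition
bad = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # an arbitrary alternating split
print(modularity_Q(A, good), modularity_Q(A, bad))
```

The natural partition yields Q near 0.42, while the arbitrary split yields a negative Q, illustrating how Q scores preferential within-community connectivity against a degree-preserving null model.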
There are other, more powerful module detection methods, such as stochastic block models, which use generative modeling to infer community structure (Lee and Wilkinson, 2019). However, these are computationally expensive, and so are less frequently utilized. Infomap is another community-detection method, which estimates communities using random walks, with relative dwell times indicating the degree of modularity for an area (Rosvall and Bergstrom, 2008). This method has correspondences with early versions of Google's PageRank algorithm, and also has connections to percolation methods (Mišić et al., 2015), both of which could be used to reflect overlapping community structures.
Edge-centric time-series are also promising for supporting multiscale/multiplexed accounts. One notable study involving these methods was able to analyze dynamics at the temporal scale of a single fMRI measurement (2,000 ms), and found that transient high-amplitude co-fluctuations in cortical activity contributed to overall patterns of connectivity (Esfahlani et al., 2020). These events occurred at variable intervals (Figure 10), sometimes closely spaced, and sometimes separated by tens of seconds or even minutes. Areas contributing to these high-amplitude, highly-impactful events included default mode and control network areas. Although anatomical regions were not discussed in this study, these networks include the temporoparietal junction, posterior cingulate, precuneus, as well as dorsomedial, ventromedial, premotor, dorsolateral, and temporal cortices. In other words, with the exception of the precuneus, the most impactful areas of the brain on overall dynamics may be centered on the previously described "flexible club" of the brain. More recent work has identified additional clusters that contribute to high-amplitude co-fluctuations; however, the DMN still appears to be a consistent contributor to these events (Betzel et al., 2021). The authors describe these events as indicating "a high-modularity brain state and... a specific mode of brain activity, in which default mode and control networks fluctuate in opposition to sensorimotor and attention systems." Notably, this functional alternation is a hallmark of conscious processing, and potentially workspace dynamics (Safron, 2020, 2021), and is disrupted in neuropsychiatric conditions (Huang et al., 2020). Fascinatingly, these occurrences also corresponded to periods where participants were most readily uniquely identified using connectomic "fingerprinting" methods.
Theoretically, these events could correspond to periods associated with particularly high meaningfulness (Li et al., 2021), with concomitant release of neuromodulators influencing the relative dominance of different functional networks (Shine et al., 2018; Conio et al., 2020).
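The edge-centric construction itself is straightforward. The sketch below, using illustrative synthetic data of our own making, builds edge time series from z-scored regional signals and flags high-amplitude co-fluctuation frames, in the spirit of Esfahlani et al. (2020):

```python
import numpy as np

def cofluctuation_amplitude(ts):
    """Edge-centric time series: z-score each region, take element-wise products
    for every region pair, then summarize each frame by the root-sum-square
    amplitude over edges."""
    z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
    i, j = np.triu_indices(ts.shape[1], k=1)
    edge_ts = z[:, i] * z[:, j]                # shape: (n_frames, n_edges)
    return np.sqrt((edge_ts ** 2).sum(axis=1))

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 10))  # 200 frames, 10 regions of synthetic noise
ts[50] += 3.0                        # inject one frame of strong shared signal

rss = cofluctuation_amplitude(ts)
event_frame = int(rss.argmax())
print(event_frame, float(rss[event_frame] / np.median(rss)))
```

The injected frame stands out sharply against the baseline, mirroring how a small number of high-amplitude frames can dominate estimates of time-averaged functional connectivity.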
Perhaps even more intriguingly, these brain areas associated with particularly high flexibility also tend to emerge later in phylogeny, mature later in ontogeny, and exhibit reduced degrees of structure-function tethering, implying greater degrees of freedom (Oligschläger et al., 2019; Vázquez-Rodríguez et al., 2019; Baum et al., 2020). Theoretically, expansion of these areas may have substantially contributed to the evolution of uniquely human cognition, with its capacity for flexible and creative high-level reasoning (Penn et al., 2008; Gentner, 2010; Buckner and Krienen, 2013; Hofstadter and Sander, 2013). It will be exciting to see whether such hypotheses are supported (or refuted) by future work with comparative neuroanatomy between human and non-human species (van den Heuvel et al., 2019; Changeux et al., 2020; Dumas et al., 2021).
In order to better characterize these kinds of dynamics and their potential functional significance for adaptive functioning, it would be highly desirable to undertake a systematic investigation into the following question: to what extent do different kinds of psychological flexibility correlate with different kinds of neural flexibility? For example, might the extent to which brain networks are capable of reconfiguring themselves due to a change in state [e.g., as induced by pharmacological agents (Doss et al., 2021; Girn et al., 2021)] also impact capacities for change with respect to more enduring traits (e.g., personality structures) (Figure 11)? In which ways might younger individuals exhibit more flexible brain dynamics, and might this relate to cognitive flexibility and more exploratory approaches to searching through hypothesis spaces (Gopnik et al., 2017)? If cohesive flexibility is indeed a hallmark of self-organized criticality as we have previously suggested, then we ought to expect it to manifest across multiple scales, ranging from moment-to-moment dynamic alterations of internal states, to the attractors describing trajectories of entire systems through phase space, and even the overall character of systems as shaped by histories of experience (Safron and DeYoung, 2021). In this way, the conditions under which individuals exhibit different forms of flexible dynamics may have far-reaching implications with respect to the project of furthering computational psychiatry and precision medicine (Friston et al., 2017b). Finally, given the multiple scales over which such phenomena may evolve, analyzing the correlates of psychological flexibility is a near-perfect use case for implementing the previously described multiscale methods for characterizing dynamical systems in terms of their probabilistic boundaries.

CONCLUSION
In this theoretical review, we have considered brains from a dynamical perspective, discussed ways that such systems could be analyzed with a variety of neuroimaging techniques, and considered their potential grounding in relevant mechanistic processes and formal models. We hope this discussion will help generate enthusiasm for adopting a more dynamic perspective in attempting to understand the emergence of mental phenomena from biophysical processes. It may be difficult to overstate the importance of network flexibility for not just basic, but also applied sciences, since psychological flexibility constitutes a general factor for resilience across both clinical and non-clinical contexts (Hinton and Kirmayer, 2017;Hayes, 2019;Davis et al., 2020;Uddin, 2021). Perhaps more fundamentally, there may be few things more impactful than obtaining a better understanding of the factors contributing to the abilities of systems to adapt in a complex, uncertain, and constantly changing world.

AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct, and intellectual contribution to the work, and approved it for publication.
Science Foundation NRT grant 1735095, "Interdisciplinary Training in Complex Networks and Systems." VK would like to thank Richard Betzel and Olaf Sporns for sharing their expertise on dynamic brain networks and network neuroscience. VK would also like to thank Richard Betzel for his feedback on portions of an earlier draft of this paper.