
REVIEW article

Front. Netw. Physiol., 03 October 2025

Sec. Networks in the Brain System

Volume 5 - 2025 | https://doi.org/10.3389/fnetp.2025.1667656

Biological detail and graph structure in network neuroscience

  • 1Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy
  • 2Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy
  • 3Complex Systems Group & GISC, Universidad Rey Juan Carlos, Madrid, Spain

Representing the brain as a complex network typically involves approximations of both biological detail and network structure. Here, we discuss the sort of biological detail that may improve network models of brain activity and, conversely, how standard network structure may be refined to more directly address additional neural properties. It is argued that generalised structures face the same fundamental issues of intrinsicality, universality and functional meaningfulness as standard network models. Ultimately, finding the appropriate level of biological and network detail will require understanding how a given network structure can perform specific functions, but also a better characterisation of neurophysiological stylised facts and of the structure-dynamics-function relationship.

1 Introduction

It is intuitive to represent brain anatomy and the activity that it produces as a network structure, i.e., as a collection of nodes and connecting edges (Bullmore and Sporns, 2009), and ultimately to study the role of this structure in brain dynamics and function (Papo et al., 2014; Papo and Buldú, 2025b; c). A network structure provides a compact, inherently multiscale, characterisation of multi-body systems, possibly preserving their intrinsic properties and symmetries. Moreover, network structure can affect network dynamics and the processes unfolding on it (Boccaletti et al., 2006; Masuda et al., 2017) and can interact in complex ways with local dynamics (Gross and Blasius, 2008). Thus, network structure may to some extent explain brain dynamics and function, and may help predict the system’s behaviour, quantify its evolvability, and, at least in principle, control it (Liu and Barabási, 2016), or steer it to desired states (Gutiérrez et al., 2012; 2020).

Complex network theory is a statistical mechanics approach to graph theory (Albert and Barabási, 2002). In this approach, justified by the sheer number of components (Chow and Karimipanah, 2020), the identity of nodes and links loses importance, at least prima facie, as the network’s properties are statistical in nature. Implicit in a statistical mechanics approach is the fact that seemingly profoundly different physical systems may be characterised by the same collective behaviour and can therefore be grouped into universality classes. The large-scale behaviour of each class can be described in terms of simple effective models specified by an interaction network and a limited number of control parameters, where only a small number of features, viz. symmetries, dimensions, and conservation laws, turn out to be relevant, while microscopic details can be disregarded. But how universal are brain network representations? To what extent and at what scales do brain dynamics and function depend on the specific details of their nodes and links?

We review aspects of neural activity that may be incorporated into neural network modelling and, conversely, network models that may help represent brain structure, dynamics, and function. We then address the following two-fold question: what neural and network properties should be incorporated for a network structure to reproduce known anatomical patterns and dynamical phenomenology and to allow a faithful representation of functional brain activity?

2 Ground level of brain network modelling

In its most general form, a network is a structure N = (V, E), where V is a finite set of nodes or vertices and E ⊆ V × V a set of links or edges, i.e., pairs of nodes. The links can carry a weight, parametrising interaction strength, and a direction. All the information in a network structure is contained in the associated connectivity matrix, encoded into its combinatorial (Bollobás, 1986), topological, and geometric properties (Boccaletti et al., 2006), and its symmetries (Dobson et al., 2022) (see Supplementary Material A1).
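To make these definitions concrete, a directed, weighted toy network (node labels and weights below are hypothetical) and its connectivity matrix can be sketched as:

```python
# Hypothetical 4-node directed, weighted network N = (V, E).
V = ["a", "b", "c", "d"]
E = {("a", "b"): 0.5, ("b", "c"): 1.0, ("c", "a"): 0.2, ("c", "d"): 0.8}

# The connectivity (adjacency) matrix carries all of this information:
idx = {v: i for i, v in enumerate(V)}
A = [[0.0] * len(V) for _ in V]
for (u, v), w in E.items():
    A[idx[u]][idx[v]] = w          # row = source node, column = target node

# Simple combinatorial properties can be read off the matrix, e.g., out-strength:
out_strength = {v: sum(A[idx[v]]) for v in V}
```

Row and column sums, powers of A, and its spectrum are the starting point for the combinatorial and spectral properties mentioned above.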

In real space, the microscopic scale may be identified with neurons, or neuronal masses at various scales, and may contain more or less biological detail. Cortical columns are often treated as the cortical system’s basic dynamical units, coupled through sparse long-range cortical connectivity. Thus, at the system level, neocortical activity is often modelled as an array of weakly-coupled dynamical units, whose behaviour is represented by dynamical attractors of various types (Breakspear and Terry, 2002) (see Supplementary Material A2). In its simplest form, the system’s units are static. The units can also be thought of as dynamical systems (Golubitsky and Stewart, 2002), e.g., spiking neurons, and the resulting system is a discrete-space, continuous-time dynamical system (DeVille and Lerman, 2015). Thus, overall, a neural system can be thought of as a set of dynamical systems, whose state variables evolve, e.g., according to differential equations, and whose interactions are encoded by a graph (Bick et al., 2023). The state of the system can also be defined by the time-varying interplay between its structure and the variables’ dynamics unfolding on it (Ghavasieh and De Domenico, 2022).
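A minimal sketch of such a discrete-space, continuous-time system, under the simplifying assumption of identical linear units with diffusive coupling (the graph and all parameters below are illustrative):

```python
# Identical units coupled through a graph: dx/dt = -L x, with L the graph
# Laplacian. Diffusive coupling drives all units to the mean initial state.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # hypothetical 4-node graph
n, dt = 4, 0.01
A = [[0.0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1.0

x = [1.0, -1.0, 2.0, 0.0]                # initial states of the units
for _ in range(5000):                    # Euler integration of dx/dt = -L x
    dx = [sum(A[i][j] * (x[j] - x[i]) for j in range(n)) for i in range(n)]
    x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
```

Even this trivial local dynamics illustrates how the interaction graph shapes the collective state: the units relax to consensus at a rate set by the Laplacian spectrum.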

Irrespective of the context and the space in which a network structure is defined, the neurophysiology-network representation map often involves drastic simplifications on both sides of the map. For instance, a great number of studies, particularly at macroscopic scales, are predicated upon a simple network structure. A network is said to be simple if it has neither self-edges nor multiple edges between the same pair of nodes (in the same direction for directed networks). In spite of its apparent generality, some known anatomical and dynamical neural stylised facts are not accommodated within the simplified structure used in these studies, and this may in principle limit the ability to account for known phenomenology or to reveal as yet unknown phenomenology.

In the remainder, we consider a ground level of network structure and use its underlying assumptions and corresponding limitations to analyse, on the one hand, the neural aspects of brain activity that are not easily accommodated by such a structure and, on the other hand, the nature of the possible network structures that could better reflect intrinsic properties of brain structure, dynamics, and function.

3 Adding biological detail

Both theoretical and experimentally derived network representations typically drastically simplify details of actual brain anatomical and dynamical structure at all scales, including that of single cells. For instance, standard network representations do not include features of neural activity such as hardware heterogeneity, recurrence, or inhibition or, when modelling long-distance inter-areal pathways, the laminar and anisotropic character of the connections (Markov et al., 2013), in addition to their strength and specificity, or the resistive nature of brain tissue. A number of questions ensue: how much and what sort of detail should be added, and at what scales? How would refining neurophysiological information change brain models?

3.1 Nodal properties

Various aspects of neural activity are in general thought of as reducible to network nodes. The anatomo-functional criteria allowing this reduction are scale-dependent, the most obvious aspects being related to the cell-level structure of the brain. At neuronal scales, such reduction typically involves various simplifying assumptions on synaptic structure and physiology, including assumptions on hardware, viz. on its homogeneity or, more generally, on the homogeneity of the physical units responsible for brain dynamics and function, but also on the way afferent information is integrated to produce cell firing.

3.1.1 Defining meaningful neural units

A network representation requires identifying meaningful neurophysiological units (Korhonen et al., 2021). Though prima facie straightforward, this step is non-trivial, even at the single-neuron scale. Indeed, activity at subneural scales can be related to function at macroscopic scales (Li et al., 2024). Moreover, achieving an appropriate characterisation that captures the essence of a neuron’s information-processing activity requires defining independent electrical processing units explaining its overall input–output behaviour (Koch et al., 1982). Although dendrite arborisations and axon terminals already present a network structure carrying out computationally complex operations (Gidon and Segev, 2012), single neurons are often thought of as simple point-like units, where all synapses have an equal opportunity to influence neuronal output, and the output results from a linear weighted sum of all excitatory and inhibitory synaptic inputs. However, pyramidal cells’ terminal branches of the apical and basal trees constitute sets of independent non-linear subunits (Häusser and Mel, 2003). In general, one can distinguish separate functional compartments in the dendritic tree, the number of which depends on the considered aspect of dendritic function, based on the effects that such compartments and their interactions exert on the neuron’s computational power and synaptic plasticity. The spatial extent of propagation of the dendritic spike will also define the spatial range of plasticity. Functional compartments can be defined at scales even finer than those of thin branches. Specifically, the rules for induction of synaptic plasticity may differ at proximal and distal synapses in a way that is defined by the properties of their respective compartments.
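The functional difference between the point-like and the subunit views can be sketched with a toy comparison; the threshold-like subunit non-linearity and all weights below are illustrative assumptions, not a model from the cited literature:

```python
import math

def point_neuron(weights, inputs):
    """Point-neuron view: a single linear sum of all synaptic inputs."""
    return sum(w * x for w, x in zip(weights, inputs))

def subunit(s):
    """Threshold-like branch non-linearity (an illustrative choice)."""
    return 1.0 / (1.0 + math.exp(-4.0 * (s - 1.5)))

def two_layer_neuron(branches):
    """Two-layer view: each branch is an independent non-linear subunit
    whose outputs are summed at the soma."""
    return sum(subunit(sum(w * x for w, x in branch)) for branch in branches)

# The same four synapses, as (weight, input) pairs, either clustered on one
# branch or dispersed over two branches:
clustered = two_layer_neuron([[(1.0, 1.0), (1.0, 1.0)],
                              [(1.0, 0.0), (1.0, 0.0)]])
dispersed = two_layer_neuron([[(1.0, 1.0), (1.0, 0.0)],
                              [(1.0, 1.0), (1.0, 0.0)]])
# A point neuron cannot distinguish the two arrangements:
same = (point_neuron([1.0] * 4, [1.0, 1.0, 0.0, 0.0])
        == point_neuron([1.0] * 4, [1.0, 0.0, 1.0, 0.0]))
```

With a threshold-like subunit, clustered inputs cross the branch threshold and drive the soma harder than the same inputs dispersed over branches, whereas the linear point neuron is blind to the arrangement.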

The issue is replicated at coarser scales in real space as well as in phase space, as finding meaningful criteria for separation and discretisation becomes more challenging.

3.1.2 Hardware heterogeneity

Both excitatory and inhibitory neurons come in a large number of different types which differentially affect cross-variability, both through their specific connectivity and through their intrinsic properties (Balasubramanian, 2015). However, at a given scale, particularly when considering static network structure, all network nodes are typically assumed to be essentially identical. This approximation may be acceptable at certain scales, but perhaps not at others, particularly at the whole-system level, and may serve certain goals, e.g., estimating information transport via the degree distribution, but may be misleading whenever function is not an emergent property of topology, e.g., at scales at which information processing is done at nodal scales (Sterling and Laughlin, 2015).

An important question is how node heterogeneity, e.g., in excitability or in coupling strength, may affect collective dynamics. Heterogeneity in excitability across units may play a double role: on the one hand, during states of low modulatory drive, it enriches the system’s dynamical repertoire; on the other hand, it acts as an effective homeostatic control mechanism by damping responses to modulatory inputs and limiting firing rate correlations, ultimately decreasing in a context-dependent way the system’s susceptibility to critical dynamical transitions (Hutt et al., 2023; Balbinot et al., 2025).

Neural heterogeneity may also play a role in neural networks’ computations (Gast et al., 2024). If neural systems’ information-processing capabilities are related to the morphological diversity of neurons, a reliable description of neuronal morphology should be key to the characterisation of neural function, although what level of detail would be necessary and sufficient to determine function remains to be established. Note, though, that while morphological information may be thought of as a proxy for function, it does not constitute a necessary or sufficient condition for it.

Finally, an important issue is whether a statistical mechanics approach is possible given the number of qualitatively different pieces of hardware. Microfounding such models would imply a detailed description of the hardware. This may seem to weaken the pillars of the statistical mechanics approach underlying graph-theoretical modelling, entailing a loss of important symmetries (exchangeability, scaling, and universality).

3.1.3 Beyond neurons

An important question is whether brain dynamics can be understood just in terms of classic excitable units, i.e., neurons, or whether other units are needed. For instance, in the human brain, glial cells are approximately as numerous as neurons and are tightly integrated into neural networks (Herculano-Houzel, 2014), but are in general not accounted for in brain network models (Turkheimer et al., 2025). Glial cells play a key role in the development of vascular and neural networks and control homeostatic processes in the mature brain, provide neurons with energy, supply neurons with neurotransmitter precursors and catabolise neurotransmitters (Verkhratsky and Nedergaard, 2018). In particular, astrocytes are key to fundamental processes in brain networks’ building, dynamics and repair, regulate synaptic maturation, maintenance, and extinction, and play an important role in the orchestration of synaptic plasticity (De Pittà and Berry, 2019) and in the restoration of connectivity and synchronisation in dysfunctional circuits, e.g., in cerebellar networks (Kanner et al., 2018). Astrocytes actively communicate with neurons through a process termed gliotransmission (Araque et al., 2014). While their exact mechanisms and functions are poorly understood, gliotransmitters activate neuronal receptors and account for astrocyte-mediated modulation of synaptic transmission and plasticity (Savtchouk and Volterra, 2018), with astrocytes acting as spatio-temporal integrators, decoding information in large arrays of neuronal activity. The relationship between neocortical neurons and astrocytes is a critical factor determining the effects of endogenous and exogenous electric and magnetic field interactions (Martinez-Banaclocha, 2018). For example, while seizure discharges ultimately result from neuronal activity, glia may play an important role in excitation and inflammation during seizure kindling and modulation (Devinsky et al., 2013).
More generally, atypical neuron-glia interactions are implicated in brain pathology, viz. in schizophrenia (Radulescu et al., 2025). Finally, synapse-astrocyte communication may also play a fundamental role in cognitive function, e.g., in working memory (De Pittà and Brunel, 2022).

3.2 Link-related properties

Loss of neurophysiological detail in network modelling is also found at the level of bare connectivity. This is partly due to simplification of the anatomical connectivity structure to accommodate it to that of a simple network and partly to lack of knowledge of the functional mechanisms of neural information transport and computation.

3.2.1 Wire properties

When considering neural systems in real space, links represent brain fibres at all scales, and of interest is how these structures support activity. The amount of current or information conveyed by a link depends on wires’ physical characteristics, such as their diameter and length but also their mechanical and conduction properties (Sterling and Laughlin, 2015). Wire geometry therefore contains important information at time scales ranging from evolutionary to experimental.

An important neural property often not incorporated in graph-theoretical models of brain activity is load, a local measure given by the ratio between flow and capacity. Together with network topology, information on load and its distribution may be crucial for predicting the effects of link failure on network processes and for understanding which links are critical to a given function (Witthaut et al., 2016).
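A toy sketch of this quantity, using shortest-path traffic as a crude proxy for flow (the graph and link capacities below are hypothetical):

```python
from collections import deque
from itertools import combinations

# Toy undirected graph with hypothetical link capacities.
edges = {("A", "B"): 2.0, ("B", "C"): 1.0, ("A", "D"): 2.0,
         ("D", "C"): 2.0, ("C", "E"): 1.0}
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def shortest_paths(src, dst):
    """Enumerate all shortest paths from src to dst (breadth-first search)."""
    best, found = None, []
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue
        if path[-1] == dst:
            best = len(path)
            found.append(path)
            continue
        for nxt in adj[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return found

# Flow proxy: shortest-path traffic crossing each link, split over equal routes.
flow = {e: 0.0 for e in edges}
for s, t in combinations(adj, 2):
    paths = shortest_paths(s, t)
    for p in paths:
        for u, v in zip(p, p[1:]):
            e = (u, v) if (u, v) in edges else (v, u)
            flow[e] += 1.0 / len(paths)

# Load = flow / capacity: the highest-load links are candidates for criticality.
load = {e: flow[e] / cap for e, cap in edges.items()}
```

In this toy example the low-capacity bottleneck link carrying all traffic to the peripheral node ends up with the highest load, even though topology alone would not single it out.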

3.2.2 Delays

Brain networks are embedded in anatomical space, and this leads to time delays due to finite signal propagation speed. Time-continuous delay systems, which in practice exhibit high dimensionality and short-term memory, express a variety of dynamical regimes, ranging from periodic and quasiperiodic oscillations to deterministic chaos (Ikeda and Matsumoto, 1987). Delays can facilitate zero-lag in-phase synchronisation (Ernst et al., 1995; Atay et al., 2004; Fischer et al., 2006) and can both stabilise and destabilise dynamical systems (Schöll and Schuster, 2008). Moreover, delay systems afford simple dynamical systems high-level information-processing capabilities (Appeltant et al., 2011).
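A minimal numerical sketch of delay-facilitated zero-lag synchronisation, assuming two identical phase oscillators with mutually delayed coupling (all parameters illustrative; stability of the in-phase state holds only for suitable coupling and delay values):

```python
import math

# Two mutually delay-coupled phase oscillators (Euler integration):
#   dθ_i/dt = ω + K sin(θ_j(t − τ) − θ_i(t))
omega, K, tau, dt, T = 1.0, 0.5, 0.1, 0.01, 100.0
lag = int(round(tau / dt))              # delay expressed in integration steps

theta = [0.0, 1.0]                      # distinct initial phases
hist = [list(theta)]                    # phase history for the delayed coupling

for step in range(int(T / dt)):
    past = hist[max(0, len(hist) - 1 - lag)]    # θ(t − τ), clamped near t = 0
    theta = [theta[0] + dt * (omega + K * math.sin(past[1] - theta[0])),
             theta[1] + dt * (omega + K * math.sin(past[0] - theta[1]))]
    hist.append(list(theta))

phase_diff = math.sin(theta[0] - theta[1])  # vanishes at zero-lag synchrony
```

Despite each unit only ever seeing the other's past, the phase difference relaxes to zero: the locked state oscillates at a common frequency shifted away from ω by the delayed coupling.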

Distance-dependent conduction delays are a crucial factor shaping brain dynamics and have a significant impact on the architecture of neocortical phase synchronisation networks (Deco et al., 2009; Petkoski et al., 2016; Roberts et al., 2019; Petkoski and Jirsa, 2019; 2022; Williams et al., 2023), inducing qualitative changes in the phase space of spatially-embedded networks (Voges and Perrinet, 2010). While topology can be thought of as a control parameter steering the dynamics through phase transitions, the dynamics is largely shaped by the time delays of heterogeneous connectivity, rather than by changes in topology (Jirsa and Kelso, 2000; Pinder et al., 2024). In the presence of delays, limit-cycle oscillators lead to collective metastable synchronous oscillatory modes at frequencies slower than the oscillators’ natural frequency (Cabral et al., 2022). Time delays also play an important role in neural networks’ pattern formation (Muller et al., 2016; Roberts et al., 2019; Petkoski and Jirsa, 2022). For instance, spontaneous travelling waves, propagating through locally asynchronous states, may be an emergent property of horizontal-fibre time delays (Davis et al., 2021). Moreover, in the presence of conduction delays, spike-timing-dependent plasticity can exert activity-dependent effects on network synchrony in recurrent networks (Lubenov and Siapas, 2009). Finally, conduction delays are essential in long-range communication through coherence in the brain (Bastos et al., 2015).

3.2.3 Activity propagation and flow directionality

According to the law of dynamic polarisation (Ramón y Cajal, 1909), information flows unidirectionally from dendrites to soma to axon. However, for many types of neurons, excitable ionic dendritic currents allow action potentials to travel in the opposite direction (Stuart et al., 1997). Thus, the neuron itself contains an endogenous feedback mechanism. Backpropagating action potentials have many important consequences for dendritic function and synaptic plasticity (Linden, 1999). For example, a somatic action potential can trigger a burst due to its interaction with the dendrites (Häusser and Mel, 2003). Moreover, dendritic geometry, together with channel densities and properties, plays a crucial role in determining both forward propagation and backpropagation of action potentials and dendritic spikes (Vetter et al., 2001). Likewise, synapses can propagate activity centrifugally but also centripetally, distributing input and output over the entire group of dendrites (Pribram, 1999).

3.2.4 Connectivity density and anatomo-functional structure

Both anatomical and dynamical brain networks have long been thought of as highly sparse. However, no general consensus exists over global estimates of brain connectivity. For instance, while estimates of the absolute number of axons suggested that human cortical areas are sparsely connected (Rosen and Halgren, 2022; Hilgetag and Zikopoulos, 2022), cortical areas may be far more connected than previously acknowledged (Markov et al., 2011; Wang and Kennedy, 2016). While in random networks sparsity would ensure that neurons share a negligible proportion of presynaptic neighbours and inputs, and as a result that their activity is in general uncorrelated, this would not be the case in non-trivial, densely connected cortical populations (Pretel et al., 2024).

Densification may induce non-trivial structural transitions, including phase transitions in the scaling of the number of cliques of various orders with the number of network nodes and absence of self-averaging (Lambiotte et al., 2016). Connectivity density may in principle also affect network resilience, although neither anatomical disruption nor decreased connectivity are necessary conditions for functional disruption (Papo and Buldú, 2025a). From a modelling viewpoint, an incorrect density estimate, tantamount to downsampling the system (Wilting and Priesemann, 2018), may ultimately lead to underestimating network size. Near a phase transition, where correlations diverge, this may lead to finite-size effects, which can hide criticality or rare-region effects. Moreover, a correct estimate of connectivity is key to obtaining a faithful representation of the associated dynamic patterns’ dimensionality (Recanatesi et al., 2019). Finally, while strong links may incorporate fundamental features of the system, weak links, often missed, particularly in experimental data analysis, may be needed to identify the system (Zanin et al., 2021; 2022), and failure to include them may lead to incorrect conclusions on network stability and robustness to network dismantling (Csermely, 2004).

3.2.5 Mesoscopic structural principles

Any model of brain cortical structure should incorporate or account for general organisational properties of its anatomy and physiology. For instance, the cerebral cortex exhibits a layered organisation, with the number of layers varying across phylogenetically different cortices. Moreover, various cortical and subcortical regions have a topographic arrangement, whereby spatially adjacent stimuli are represented in adjacent cortical locations, as well as a columnar structure whereby neurons within a vertical column share similar functions and connections and are connected horizontally to constitute functional maps (Hoffman, 1989; Mendoza-Halliday et al., 2024).

In almost all cortical areas, a substantial part of the output targets its area of origin (Douglas and Martin, 2007; Barak, 2017). In recurrent structures, a given neuron can receive input from any other neuron in the network, blurring the concept of upstream or downstream activity, so that its activity is affected by the network’s and not only by exogenous afferent input. Such a structural property enables networks to perform computations at time scales much larger than those of a single stimulus, e.g., working memory, decision-making (Douglas and Martin, 2007), recall through pattern completion (Marr, 1971; Treves and Rolls, 1992), or integration of sensory information with stored knowledge (Singer, 2021).

3.3 The node-link contact area

Perhaps the most overlooked aspect of brain network modelling is the node-link junction. Contact type and location are typically stylised, even at the neuronal level. Furthermore, in large-scale models, the effects of contacts are modelled as flows, and therefore implicitly thought of as excitatory.

3.3.1 Contact area and signal integration

Both anatomically and functionally, the area through which different brain units contact each other is sometimes difficult to characterise, even at the single-neuron level. On the one hand, many neurons do not connect via linear one-to-one connections, but form neurites with collaterals, or branches at distinct segments of the main axon, connecting with multiple synaptic targets or highly branched synaptic termination zones (Spead and Poulain, 2020). On the other hand, action potentials are in general thought to be initiated in a particular subregion of the axon, along which they propagate, promoting neurotransmitter release at synaptic terminals. However, in some cases, neurons may be morphologically and dynamically different, e.g., they may not have a genuine axon, and the cell’s basic functional aspects are undertaken by dendrites (Goaillard et al., 2020). Spikes can also be generated at dendrites, though their functional meaning is still poorly understood (Larkum et al., 2022). Furthermore, dendritic trees are often thought of as spatially extended systems consisting of passive cables, and the spreading of electric current is understood in terms of cable equations; however, the signal integration rules within such a system, how they influence synaptic input processing, interact with different forms of plasticity, and ultimately contribute to the brain’s computational power are still poorly understood (Häusser and Mel, 2003). Moreover, evidence for the role of astrocytes in synaptic integration and processing, and for tripartite synapses, a configuration wherein astrocytes and neurons communicate bidirectionally (Perea et al., 2009), further complicates the contact area’s functional structure at single-neuron scales. Finally, contact areas are more complex to delineate at meso- and macroscopic scales, where both node contours and link definitions require context-dependent assumptions (Korhonen et al., 2021).
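The passive-cable picture can be sketched with a discretised compartmental chain, where clamping one end reveals the characteristic exponential voltage attenuation (the coupling and leak conductances below are illustrative, not fitted to any neuron):

```python
# Discretised passive cable: each compartment leaks current (g) and couples
# to its neighbours (c); the first compartment is voltage-clamped.
N, c, g = 50, 1.0, 0.1
V = [0.0] * N
V[0] = 1.0                         # clamped (current-injected) end
for _ in range(5000):              # Jacobi relaxation to the steady state
    new = V[:]
    for i in range(1, N - 1):
        new[i] = c * (V[i - 1] + V[i + 1]) / (2 * c + g)
    new[-1] = c * V[-2] / (c + g)  # sealed distal end
    V = new

ratio = V[10] / V[9]               # ≈ constant decay factor per compartment
```

Away from the boundaries the steady-state voltage decays geometrically, the discrete analogue of the exponential attenuation with length constant λ in the continuous cable equation.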

3.3.2 Inhibition

A key aspect of neural activity whose relationship with network structure remains difficult to incorporate is inhibition. Inhibition plays important roles at essentially all neural scales (see Supplementary Material A3). At the single-neuron scale, inhibitory inputs from distinct sources target specific dendritic subdomains, from distal to proximal dendritic regions (Markram et al., 2004; Jadi et al., 2012). This region-specific targeting plays a key role in controlling dendritic processes (Larkum et al., 1999; Isaacson and Scanziani, 2011), in synchronising their activity (Vierling-Claassen et al., 2010), and in regulating plasticity (Sjöström et al., 2008). Moreover, while excitation and inhibition are not symmetric in the way they compete for spike generation, inhibitory synapses are associated with high information transfer between spike trains, which is usually exclusively ascribed to excitatory synapses. At meso- and macroscopic scales, inhibition plays a crucial role in the synchronisation of neural systems (van Vreeswijk et al., 1994). Inhibitory control of excitatory loops (Bonifazi et al., 2009) constitutes a generic organisational principle of cortical functioning, which stabilises brain activity (Griffith, 1963). For instance, inhibitory feedback can decorrelate a network (Tetzlaff et al., 2012). Moreover, inhibitory neurons have been proposed to play an important role in controlling the cortical microconnectome (Kajiwara et al., 2021). On the other hand, while evidence suggests that excitatory neurons form networks with non-trivial structure, whose fine-scale specificity is determined by inhibitory cell type and connectivity (Yoshimura and Callaway, 2005), inhibitory interneuron connectivity tends to be locally all-to-all (Fino and Yuste, 2011).

Neurons’ collective dynamical regime known as the asynchronous state (Renart et al., 2010), characterised by sporadic, relatively uncorrelated firing with high temporal variability, results from the interplay between excitatory and inhibitory forces (van Vreeswijk and Sompolinsky, 1996; 1998). Notably, such a balance relies on the role of glial cells, particularly astrocytes (Turkheimer et al., 2025).
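The scaling argument behind the balanced state can be sketched numerically: with synaptic weights scaled as 1/√K, the mean excitatory and inhibitory drives cancel while their fluctuations remain of order one, independently of the number of inputs K (a toy sketch, not the full spiking model of the cited work):

```python
import math, random, statistics

# K excitatory and K inhibitory inputs, each active with probability 0.5,
# with synaptic weights scaled as 1/sqrt(K).
random.seed(0)

def net_input_stats(K, trials=2000):
    J = 1.0 / math.sqrt(K)
    samples = []
    for _ in range(trials):
        exc = sum(J for _ in range(K) if random.random() < 0.5)
        inh = sum(J for _ in range(K) if random.random() < 0.5)
        samples.append(exc - inh)        # net drive: means cancel...
    return statistics.mean(samples), statistics.stdev(samples)

small = net_input_stats(100)
large = net_input_stats(1600)            # ...fluctuations stay O(1) as K grows
```

In both cases the net drive hovers around zero with a standard deviation near 1/√2, so firing is driven by fluctuations rather than by the mean input, the hallmark of the balanced regime.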

Inhibition also constitutes an important ingredient for high-precision computation. The maintenance of an excitatory/inhibitory balance may allow cortical neurons to construct high-dimensional population codes and learn complex functions of their inputs through a spatially-extended mechanism far more precise than local Poisson rate codes (Denève and Machens, 2016).

How does inhibition affect network-related properties and the processes taking place on the network structure? First, inhibition plays an important role in routing (Wang and Yang, 2018). Second, it may affect network structure via plasticity mechanisms. In particular, interneurons contribute to the induction of long-term plasticity at excitatory synapses (Wigstrom and Gustafsson, 1985); conversely, excitatory transmission modulates inhibitory synaptic plasticity (Belan and Kostyuk, 2002). Through this modulation of plasticity, inhibition, inhibitory plasticity, and inhibitory connectivity play important functional roles (Pulvermuller et al., 2021). For instance, inhibition controls the duration of sharp-wave ripples in hippocampal recurrent networks, which mediate learning (Vancura et al., 2023), while inhibitory plasticity supports replay generalisation in the hippocampus (Liao et al., 2024). Furthermore, inhibitory connectivity determines the shape of excitatory plasticity networks (Mongillo et al., 2018). On the other hand, while neural structure heterogeneity may locally affect the excitation/inhibition balance, the balanced state may be recovered through homeostatic mechanisms, which may themselves be regulated by inhibitory mechanisms (Pretel et al., 2024). Likewise, it has recently been shown that networks adapt to chronic alterations of excitatory-inhibitory composition by balancing connectivity between these activities (Sukenik et al., 2021).

3.4 Multiscale and field-related properties

Up until now, we have considered neural mechanisms that can be mapped onto particular regions of a network structure. However, other important neural mechanisms are not easily mapped onto local network structure. Arguably the two most prominent are the mechanisms underlying learning and adaptation, and neuromodulation.

3.4.1 Learning, plasticity, and adaptation

So far, the focus has been on spatially local static or steady-state properties of neural activity. However, neural populations are able to change their properties in order to learn and adapt to new challenges from the environment. For instance, white matter is plastic and myelin can be modified in an activity-dependent way (Talidou et al., 2022). This has important implications for a number of neural parameters, e.g., delays, as white matter governs transmission speeds along axons (Pajevic et al., 2023). One important mechanism of brain plasticity is represented by Hebbian learning, whereby the strength of connections between neurons increases when they are simultaneously activated (Hebb, 1949). Hebbian learning alone would lead to dynamic instability and runaway excitation (Markram et al., 1997), and ultimately to complete circuit synchronisation (Zenke et al., 2017). Dynamic stability can be achieved in various ways, e.g., via homeostatic plasticity, through which neurons control their own excitability, ultimately regulating spike rates or stabilising network dynamics at various time scales (Turrigiano et al., 1998; Cirelli, 2017). Homeostasis can be implemented by various neurophysiological mechanisms, e.g., synaptic scaling or efficacy redistribution (Turrigiano et al., 1998), membrane excitability adaptation (Davis, 2006; Pozo and Goda, 2010), or neuron-glial interactions (de Pittà et al., 2016). Synaptic plasticity may occur not only at synapses active during induction, but also at synapses not active during induction. While these two mechanisms operate on the same time scales, they have different computational properties and functional roles: the former mediates associative modifications of synaptic weights, while the latter counteracts runaway excitation associated with Hebbian plasticity and balances synaptic changes (Chistiakova et al., 2015).
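The interplay between Hebbian instability and homeostatic control can be sketched with a toy rate model (the parameters and the specific multiplicative scaling rule below are illustrative assumptions):

```python
import random

# Toy rate model: pure Hebbian growth is unstable, while adding multiplicative
# synaptic scaling toward a target output rate keeps the weights bounded.
random.seed(1)
eta, target, tau_h = 0.05, 1.0, 20.0

def step(w, scaling):
    x = [random.random(), random.random()]           # presynaptic rates
    y = sum(wi * xi for wi, xi in zip(w, x))         # postsynaptic rate
    w = [wi + eta * xi * y for wi, xi in zip(w, x)]  # Hebbian: Δw ∝ x·y
    if scaling:                                      # homeostatic scaling
        w = [wi * (1.0 + (target - y) / tau_h) for wi in w]
    return w

w_hebb, w_home = [0.5, 0.5], [0.5, 0.5]
for _ in range(2000):
    w_hebb = step(w_hebb, scaling=False)   # runaway growth
    w_home = step(w_home, scaling=True)    # weights settle near a fixed point
```

The scaling term multiplicatively shrinks all weights whenever the output exceeds the target rate, so the associative (Hebbian) drive and the homeostatic drive settle into a stable equilibrium instead of diverging.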

Synaptic strength adjustment is only one among various possible homeostatic regulation mechanisms. A critical role in learning may also be played by suprathreshold activation of neurons and their integration. Neuronal activity is determined by excitatory and inhibitory synaptic input strength but also by intrinsic firing properties, which are regulated by the balance of inward and outward voltage-dependent conductances, respectively stabilising average neuronal firing rates and controlling shifts between synaptic input and firing rate (Turrigiano et al., 1998).

Plasticity has been associated with the generation of complex dynamical regimes in recurrent neural networks. For example, synaptic facilitation and depression promote regular and irregular network dynamics (Tsodyks et al., 1998). Plasticity at inhibitory synapses can stabilise irregular dynamics (Vogels et al., 2011), while synaptic plasticity based either on activity strength (de Arcangelis et al., 2006; Levina et al., 2007, 2009) or on spike timing (Meisel and Gross, 2009; Rubinov et al., 2011) can induce critical fluctuations and phase transitions from random subcritical to ordered supercritical dynamics (Rubinov et al., 2011). Although plasticity is often thought of as a purely local phenomenon, best understood as pertaining to the node-link contact area, there are still considerable knowledge gaps regarding the spatial and temporal scales at which Hebbian, homeostatic and other plasticity mechanisms actually interact, as well as regarding their exact functional role (Wen and Turrigiano, 2024).
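
As a concrete example of the facilitation and depression dynamics mentioned above, a discrete-time sketch in the spirit of the Tsodyks-Markram phenomenological synapse can be written as follows; parameter values are illustrative, not fitted to data:

```python
# Minimal sketch of Tsodyks-Markram-style short-term synaptic dynamics:
# a fraction u of the available resources x is released at each presynaptic
# spike; x recovers with time constant tau_rec (depression) and u decays
# toward a baseline U with time constant tau_fac (facilitation).
# All parameters are illustrative.

import math

def tm_synapse(spike_times, U=0.1, tau_rec=0.5, tau_fac=1.0):
    """Return the effective release u*x at each spike (relative efficacy)."""
    x, u = 1.0, U
    last_t, releases = None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)   # resource recovery
            u = U + (u - U) * math.exp(-dt / tau_fac)       # facilitation decay
        u = u + U * (1.0 - u)    # facilitation jump at the spike
        release = u * x
        x -= release             # depression: resources are consumed
        last_t = t
        releases.append(release)
    return releases

# A fast spike train first facilitates, then depresses the synapse.
eff = tm_synapse([0.0, 0.05, 0.10, 0.15, 0.20, 0.25])
print([round(e, 3) for e in eff])
```

With these values the efficacy rises from the first to the second spike and then declines, so a single mechanism produces both regular-ising and irregular-ising tendencies depending on the input statistics.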

3.4.2 Neuromodulation

Neuronal activity is regulated at various spatial and temporal scales by numerous chemical messengers, including neurotransmitters, neuromodulators and hormones. These systems are often thought of as pointwise, as they originate in well-defined brainstem and forebrain nuclei, and their effect is studied as a generic perturbation of the neuronal network. However, one should distinguish between the quasi-pointwise structure of neuromodulatory nuclei and the network-like structure of neuromodulation’s consequences: these chemical messengers exert their effects through complex networks of diverging and converging pathways. For instance, different transmitters can act through the same network. Moreover, the effects of transmitters often depend on the presence of other transmitters and are characterised by higher-order functional phenomena such as metamodulation, whereby a modulator’s action is gated by that of another modulator (Katz and Edwards, 1999). Of interest are then not only the effects of each of these chemicals on the topological properties of the neural network, but also those of the complex higher-order network of neuromodulators. How do global network dynamics and the functional space result from the multidimensional input space of transmitters? Should neuromodulation be thought of as an extrinsic structure? Does it have a network structure of its own, or should it be considered as a modulator of a system it is not part of? If so, how should this interaction be modelled?

Neuromodulators have long been known to shape neural circuits (Bargmann, 2012). More specifically, it has been proposed that neuromodulatory systems enable the brain to flexibly shift network topology (Shine et al., 2019; 2021) in a state and activity-dependent way (Ito and Schuman, 2008; Sakurai and Katz, 2009). However, whether and how various neuromodulatory systems interact with plasticity mechanisms to facilitate brain function is poorly understood. In particular, on what type of network, what network property, how and at what scales do neuromodulators act?

4 Fine-tuning network structure

In the previous section, we examined some neural properties that are seldom included in neural network modelling, particularly at system-level scales. There is no clear picture of the information lost by network models simplifying brain structure and dynamics and, conversely, of the extent to which such network representations, and the phenomenology that they may produce, are robust to detail simplification.

Here we examine network structures that could explicitly incorporate and account for key neural properties. Understanding the brain as a networked system has at least two important conceptual aspects. Equipping a system with a network structure comes with a number of assumptions and corresponding limitations. The conditions for reducibility to network structure, including discretisability, intrinsicality and structure preservation, have been discussed at length elsewhere (Korhonen et al., 2021; Papo and Buldú, 2024). We provisionally assume that the system can adequately be described as a networked system at least at some level, but that the network structure used to model such a system may fail to incorporate important aspects of its anatomy, dynamics and physiology. This implies that network structure is “relevant” to some important aspect of the system, in particular to its dynamics and function. Conversely, understanding the brain as a particular network structure has implications for the system’s dynamics, for the processes taking place on it, and ultimately for its function. For instance, the choice of a particular class of connectivity metrics induces a corresponding change in the combinatorial, topological and geometrical properties of the associated network structure and phase space geometry and, therefore, a different phenomenology and physics (del Genio et al., 2016).

The following questions are addressed: how can network structure be modified to incorporate network detail? What may be the phenomenological consequences of such changes in network structure? Can we account for more neural phenomenology by changing network structure? Is there experimental evidence for a given generalised structure? How robust is the system’s behaviour with respect to changes in its basic structure? Taking a simple, undirected, unweighted and static network structure as the ground level, there are essentially three ways in which unaccounted-for neural properties can be addressed: 1) considering different properties of the original structure, e.g., properties of the connectivity matrix; 2) understanding the system as a network structure with different allowed constituent properties; 3) understanding the system as a qualitatively different network-based structure.

4.1 Roads less travelled in standard network neuroscience

4.1.1 Degrees of freedom

The general neuroscience problem of defining relevant neural units and relevant degrees of freedom is replicated when equipping the system with a network structure. At a network level, the discretisation process may in principle be predicated upon various properties, the identification of truly functional constituent units in real space being only one of them.

In real space representations, the system’s degrees of freedom are most often identified with nodes, irrespective of the scale at which a network structure is defined, but particularly at system level. On the other hand, in statistical mechanics, the system’s degrees of freedom are identified with links, whereas the number of particles plays the role of volume in classical physical systems (Gabrielli et al., 2019). In the corresponding dual networks, nodes are turned into links and, conversely, links become nodes (Presigny and De Vico Fallani, 2022). While these two networks are in some sense equivalent, a link-based approach may for instance allow defining fine-grained vasculature data at all length scales and therefore also measuring blood flow conductance and current, and inferring pressure differences, for each link (Di Giovanna et al., 2018; Kirst et al., 2020).
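
The node-link duality can be made concrete with a toy sketch: given a graph as an edge list, the dual (line) graph has one node per original link, with two such nodes connected whenever the original links share an endpoint. The example graph and helper function are illustrative:

```python
# Minimal sketch: the "dual" (line-graph) view, in which each link of the
# original graph becomes a node, and two such nodes are connected whenever
# the corresponding original links share an endpoint. Toy graph; pure Python.

from itertools import combinations

def line_graph(edges):
    """Map a graph given as an edge list to its line graph."""
    lg = {frozenset((e, f)) for e, f in combinations(edges, 2)
          if set(e) & set(f)}                  # links sharing an endpoint
    return [tuple(sorted(pair)) for pair in lg]

# A path a-b-c-d: three links, chained through shared endpoints.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
dual = line_graph(edges)
print(sorted(dual))
```

For a path of three links, the dual is itself a path of two links, with the original edges now playing the role of degrees of freedom.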

Another important aspect of network structure is that, however defined, the degrees of freedom can have their own spatio-temporal dynamics in real space (Korhonen et al., 2021). Thus, at the level of dynamics and function, it may be appropriate to think of the system as a fluid structure where both nodes and links may be non-stationary (Solé et al., 2019). Nodes may appear, disappear or merge as a result of physiological or pathological conditions at various spatial and temporal scales, or change their spatial location. For example, at subneural scales, spine motility (Bonhoeffer and Yuste, 2002) may be thought of in terms of moving nodes. Nodes may also constitute local subspaces in the codomain onto which they are projected but may result from non-local subspaces of the domain space. Similarly, the activity of neurons spiking at the same time can be identified with the nodes of a network whose links are the neurons themselves (Curto and Itskov, 2008; Morone and Makse, 2019; Morone et al., 2020).

4.1.2 Directionality and reciprocity

One property characterising neural activity that is seldom included but can readily be accounted for in a standard simple network structure is flow directionality. Directionality may characterise both real and phase space neural activity. In the latter case, brain activity is thought of as a discrete dynamical system, whose trajectories form a directed network in state space, wherein each node, representing a state, is the source of a link pointing to its dynamical successor. Directed networks qualitatively differ from their undirected counterparts in the system’s combinatorics and statistical mechanics (Boguñá and Serrano, 2025), but also in important aspects of dynamical processes such as synchronisation (Muolo et al., 2022), pattern formation (Asllani et al., 2014) and phase transitions (Fruchart et al., 2021). The presence of asymmetric connectivity is associated with the emergence of some important features. On the one hand, spontaneous activity is characterised by time scales and corresponding oscillatory modes different from those emerging from symmetric connectivity (Chen and Bialek, 2024). On the other hand, when perturbed, systems with asymmetric connectivity undergo complex transients, with time scales induced by aspects of the connectivity matrix different from those relevant in the symmetric case (Grela, 2017). Moreover, in the presence of asymmetric interactions, fluctuations can get locally enhanced before propagating through the system, promoting qualitative changes in large-scale collective behaviour in globally ordered systems (Cavagna et al., 2017). This can be explained in terms of frustration, which arises when competing interactions prevent the system from finding a configuration that minimises the total energy, leading to a complex disordered state (Vannimenus and Toulouse, 1977).
Frustrated closed-loop motifs disrupt synchronous dynamics, allowing the coexistence of multiple metastable configurations (Gollo and Breakspear, 2014; Saberi et al., 2022).

More generally, directed links induce a different physics. While symmetric connectivity readily accounts for equilibrium systems, asymmetric coupling matrices are associated with open out-of-equilibrium systems, where detailed balance is broken (Nartallo-Kaluarachchi et al., 2024). Such systems exchange energy with the external environment, allowing effects such as gain and loss, and non-reciprocity (Bowick et al., 2022). In many-body systems, non-reciprocity leads to the dynamical recovery of spontaneously broken continuous symmetries (Fruchart et al., 2021). Conversely, non-reciprocal coupling per se usually implies non-zero energy and information flows (Loos and Klapp, 2020). This has not only theoretical but also methodological implications. For instance, choices associated with links and the way these are constructed, e.g., hybrid reconstruction with space- and time-varying properties, represent not only a technical but also a theoretical challenge, in that they induce spaces with non-trivial geometries and corresponding physics.
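
The transient behaviour associated with asymmetric connectivity can be illustrated with a two-dimensional linear sketch: the two coupling matrices below have the same stable spectrum, but only the asymmetric (non-normal) one transiently amplifies a perturbation before decay. Matrices and integration parameters are illustrative:

```python
# Minimal sketch: transient amplification under asymmetric (non-normal)
# coupling. Both systems below are linearly stable (eigenvalues -1, -1),
# but the asymmetric one transiently amplifies a perturbation before it
# decays, while the symmetric one decays monotonically. Euler integration.

def trajectory_norm(A, x0, dt=0.01, steps=600):
    x = list(x0)
    norms = []
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        x = [xi + dt * di for xi, di in zip(x, dx)]
        norms.append((x[0] ** 2 + x[1] ** 2) ** 0.5)
    return norms

symmetric = [[-1.0, 0.0], [0.0, -1.0]]
feedforward = [[-1.0, 8.0], [0.0, -1.0]]   # same spectrum, strong asymmetry

n_sym = trajectory_norm(symmetric, [0.0, 1.0])
n_asym = trajectory_norm(feedforward, [0.0, 1.0])
print(f"peak norm, symmetric:  {max(n_sym):.2f}")
print(f"peak norm, asymmetric: {max(n_asym):.2f}")
```

Eigenvalues alone would predict monotonic decay in both cases; the transient growth is carried entirely by the asymmetric part of the coupling, which is the point made by the perturbation studies cited above.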

4.1.3 Beyond network topology

Brain modelling typically focuses on combinatorial and topological properties, neglecting neural networks’ physical aspects. The underlying space is treated as a topological space, i.e., a set of objects equipped with a set of neighbourhood relations, for which no metric needs to be defined. It is therefore effectively treated as a non-metric space. In such models, nodes and links are treated as dimensionless entities. However, the physical size of neural material affects network geometry at all scales. The fact that physical wires cannot cross imposes limitations on the system’s structure (Bernal and Mason, 1960). In particular, the path chosen by neural fibres may be characterised by tortuosity as a function of node and link size and density (Dehmamy et al., 2018). Thus, the system’s structure is ultimately determined not only by operators associated with the connectivity matrix, but also by the network’s 3D layout (Cohen and Havlin, 2010). For a given network adjacency matrix, there is an infinite number of configurations differing in node positions and wiring geometry, those that can be mapped into one another through continuous bending without link crossings forming isotopy classes (Liu et al., 2021). The geometry of connectivity may have an important impact on cortical dynamics and function (Knoblauch et al., 2016). For instance, global brain activity patterns may result from excitations of brain geometry’s resonant modes, which may better capture important properties of spontaneous and stimulus-induced activity than connectivity-based models disregarding the neural surface’s geometry (Pang et al., 2023). Thus, methods may be needed which can distinguish between topologically equivalent manifolds with different geometries (Chaudhuri et al., 2019).

The no-crossing condition also affects the system’s mechanical properties. While at low densities the system displays a solid-like response to stress, for high densities it behaves in a gel-like fashion (Dehmamy et al., 2018). The brain’s mechanical properties play a critical role in modulating brain anatomy, dynamics and ultimately function (Goriely et al., 2015). Due to its softness, brain tissue displays a range of mechanical features: it is essentially elastic for small deformations (Chatelin et al., 2010), but inelastic and deformation rate- and time-scale-dependent for large ones (Fallenstein et al., 1969).

Overall, numerous questions are still to be fully addressed: what is the relationship between topology and geometry in anatomical networks? In particular, to what extent does topology determine geometry? How do the geometric constraints on wiring affect brain structure, dynamics, development, evolution, functional efficiency and robustness to various sources of damage?

4.1.4 Learning rules and adaptive networks

One way in which neural populations adapt to environmental challenges is by changing their configuration. At time scales longer than those of sensory-motor processes, this typically involves plasticity mechanisms. At the algorithmic level, homeostatic plasticity mechanisms constitute slow negative feedback loops (Zierenberg et al., 2018). Various studies have incorporated simple plasticity mechanisms into large-scale network models, showing that plasticity may affect network topology (Avalos-Gaytán et al., 2012; 2018) and give rise to rich dynamical phenomena, including intermittency (Skardal et al., 2014), multistability (Chandrasekar et al., 2014), criticality (Magnasco et al., 2009) and explosive synchronisation (Avalos-Gaytán et al., 2018).
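
A minimal sketch of an adaptive network, in the sense discussed above, couples node dynamics to link dynamics: states diffuse over the weighted topology, while weights are reinforced between nodes with similar states and pruned otherwise. The update rules and all parameters are illustrative assumptions, not a published model:

```python
# Minimal sketch of an adaptive network: node states evolve toward the
# weighted average of their neighbours, while link weights grow between
# nodes with similar states and decay otherwise (clipped to [0, 1]). The
# topology-dynamics feedback loop splits the network into two internally
# coherent clusters. All rules and parameters are illustrative.

def adapt(states, steps=400, eps=0.005, eta=0.3):
    n = len(states)
    w = [[1.0 if i != j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(steps):
        # diffusive state update on the current weighted topology
        new = []
        for i in range(n):
            drift = sum(w[i][j] * (states[j] - states[i]) for j in range(n))
            new.append(states[i] + eps * drift)
        states = new
        # homophilic weight adaptation, clipped to [0, 1]
        for i in range(n):
            for j in range(n):
                if i != j:
                    sim = 1.0 - abs(states[i] - states[j])
                    w[i][j] = min(1.0, max(0.0, w[i][j] + eta * (sim - 0.5)))
    return states, w

states, w = adapt([0.0, 0.1, 0.9, 1.0])
print([round(s, 2) for s in states])
print(round(w[0][1], 2), round(w[0][3], 2))  # within- vs between-cluster weight
```

Starting from a fully connected graph, the cross-cluster links die out before the states can merge, after which each cluster synchronises internally: topology and dynamics co-determine the final configuration.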

From a network viewpoint, various questions are still unresolved: on what network aspect (including at what network scale) does plasticity operate? What algorithmic properties do plasticity mechanisms possess? Is the system adaptive, i.e., are there feedback mechanisms connecting topology to dynamics? If so, how do they affect function?

4.2 Beyond ground-level network structure

If the simple network structure fails to incorporate essential properties of neural anatomy and dynamics, its modelling power may be extended by allowing structures with different properties and by appreciating the changes that these may produce. Insofar as the simple network structure can be thought of as a ground level of network structure, new network classes can be obtained by relaxing some of its properties.

4.2.1 Recurrency and feedback loops

A set of anatomical properties of neural circuits generally not incorporated in standard system-level network representations is represented by recurrency. In terms of network properties, this translates into self-loops, i.e., links connecting a node to itself, and cycles, i.e., closed paths with the same starting and ending node (Douglas and Martin, 2007; Fan et al., 2021). Recurrent interactions play a major role in shaping dynamics, for instance leading to chaos (van Vreeswijk and Sompolinsky, 1996; Pernice et al., 2011; 2013). Moreover, feedback loops are an essential ingredient in both dynamics and computation (Alon, 2007; Zañudo and Albert, 2013). For instance, multistability and sustained oscillations respectively require positive and negative feedback loops (Thomas, 1981). Moreover, various neuromorphic devices (Marković et al., 2020) with recurrent connectivity, including liquid and solid state machines, echo-state networks, and general deep neural networks (Maass et al., 2002), as well as physical reservoir computing devices exploiting the dynamics of physical systems (Tanaka et al., 2019), display information-processing capabilities. In such devices, multiple recurrently connected dynamical systems are used to implement nonlinear mappings of input signals into a high-dimensional state space.
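
The reservoir principle mentioned above can be sketched minimally: a fixed random recurrent network nonlinearly maps an input stream into a high-dimensional state space and, with contractive recurrent weights, exhibits fading memory (the echo state property), so that the driven trajectory becomes independent of initial conditions. Network size and weight ranges are illustrative:

```python
# Minimal reservoir sketch: a random recurrent tanh network maps an input
# stream into a high-dimensional state space. With contractive recurrent
# weights the reservoir has fading memory (the "echo state" property):
# two very different initial states converge onto the same input-driven
# trajectory. Weights and parameters are illustrative.

import math
import random

random.seed(0)
N = 20
W = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]
W_in = [random.uniform(-1.0, 1.0) for _ in range(N)]

def step(x, u):
    """One reservoir update: x <- tanh(W x + W_in * u)."""
    return [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + W_in[i] * u)
            for i in range(N)]

inputs = [math.sin(0.3 * t) for t in range(100)]
x_a = [1.0] * N       # two very different initial reservoir states
x_b = [-1.0] * N
for u in inputs:
    x_a, x_b = step(x_a, u), step(x_b, u)

dist = max(abs(a - b) for a, b in zip(x_a, x_b))
print(f"state distance after 100 steps: {dist:.2e}")
```

In an actual reservoir computer, a simple linear readout trained on the reservoir states would complete the device; the recurrent weights themselves are never trained.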

4.2.2 Higher-order network structure

Perhaps the most natural generalisation of network structure consists in changing its combinatorial properties by relaxing the pairwise (dyadic) character of connectivity (Lambiotte et al., 2019; Battiston et al., 2020; 2021; Bianconi, 2021; Bick et al., 2023). This may be done in various ways (See Supplementary Material A4). While in standard networks interactions are associated with links connecting two nodes, graphs can be generalised to include hyperlinks, i.e., links connecting more than two nodes (Ghoshal et al., 2009). Interactions could also in principle involve structures of different orders. Simplicial complexes allow defining interactions across orders (nodes, hyperlinks or simplices) (Giusti et al., 2016). In simplicial complexes, state variables used to describe the dynamical system can be associated with structure at any order. Thus, for instance, the state of a link can influence not only the state of its associated nodes, but also that of the higher-order interaction structures it belongs to, and such a system’s overall dynamics ultimately results from the time-varying interactions across all orders. Moreover, a node may regulate the interaction between two other nodes, either facilitating or inhibiting it (Sun et al., 2023; Niedostatek et al., 2024).
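
The difference between a bare hyperlink and a simplex, and the cross-order structure just described, can be made explicit in a toy sketch: a simplicial complex is closed under taking faces, so each link also belongs to identifiable higher-order simplices. The example complex is illustrative:

```python
# Minimal sketch: a simplicial complex specified by its maximal simplices,
# with the downward closure computed explicitly. Unlike a bare hyperlink,
# a simplex requires all of its faces to be present, which is what lets
# state variables attached to nodes, links and triangles interact across
# orders. Toy complex; pure Python.

from itertools import combinations

def closure(maximal):
    """All faces of all maximal simplices, grouped by dimension."""
    faces = set()
    for s in maximal:
        for k in range(1, len(s) + 1):
            faces.update(frozenset(c) for c in combinations(s, k))
    by_dim = {}
    for f in faces:
        by_dim.setdefault(len(f) - 1, []).append(sorted(f))
    return by_dim

# One filled triangle {a,b,c} glued to an edge {c,d}.
complex_ = closure([("a", "b", "c"), ("c", "d")])
for dim in sorted(complex_):
    print(dim, sorted(complex_[dim]))

# Cofaces of the link {a,b}: the higher-order structures it belongs to.
cofaces = [f for fs in complex_.values() for f in fs
           if {"a", "b"} < set(f)]
print("cofaces of (a,b):", cofaces)
```

The coface query is the structural hook for cross-order dynamics: the state of the link (a,b) can be coupled to that of the triangle (a,b,c) it belongs to, and vice versa.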

Higher-order topology has an important role in determining both dynamical and functional network properties. For instance, the dynamical ordered state has minima corresponding to single homology classes of the simplicial complex (Millán et al., 2020). Furthermore, coupled oscillator networks have fixed points consisting of two clusters of oscillators that become entrained at opposite phases and which can be thought of as configurations with information storage ability. Topology determines the small subset of the fixed points which are stable (Skardal and Arenas, 2020).

What experimental evidence is there for or against the existence of such a structure in neural systems? The structural aspects of higher-level interactions in the network structure of brain dynamics have long been addressed. Early studies suggested that real space neural activity may almost completely be explained in terms of pairwise correlations (Schneidman et al., 2006; Merchan and Nemenman, 2016). However, this experimental result could crucially hinge on the overall size of the considered cell population, and higher-level correlations may be necessary to account for larger populations’ neural activity (Yeh et al., 2010; Ganmor et al., 2011; Giusti et al., 2015; Reimann et al., 2017). Experimental evidence suggests that dynamical correlations between pairs of neurons are more significant when these belong to a higher dimensional structure (Reimann et al., 2017), although recent results suggest that brain activity is dominated by pairwise interactions (Huang et al., 2017; Chung et al., 2025). In phase space, a higher-level structure may be induced by the intersection of the place fields of neurons firing within the same theta frequency cycle. Under certain conditions on the place fields, the homology of the simplicial complex induced by the intersections is equal to that of the underlying space, so that this structure effectively constitutes a faithful internal representation of the stimulus space, ignoring finer phase-modulated spike timing effects (Curto and Itskov, 2008). Place field intersections also induce a metric providing relative distances between cell groups. This yields a faithful geometric representation of the external physical space, largely independent of the specific nature of the place fields.
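
The place-field construction can be sketched with one-dimensional fields: overlapping fields (co-firing cells) induce a simplicial complex whose topology matches that of the covered space. The fields and the Euler-characteristic check below are illustrative:

```python
# Minimal sketch of the place-field construction: place fields are modelled
# as intervals covering a 1D track, and co-firing (overlapping fields)
# induces a simplicial complex. For a connected cover of an interval, the
# resulting complex is connected with no holes, matching the topology of
# the underlying space. Fields are illustrative.

from itertools import combinations

fields = {"c1": (0.0, 0.4), "c2": (0.3, 0.7), "c3": (0.6, 1.0)}

def overlaps(cells):
    lo = max(fields[c][0] for c in cells)
    hi = min(fields[c][1] for c in cells)
    return lo < hi

simplices = [set(cells)
             for k in (2, 3)
             for cells in combinations(fields, k)
             if overlaps(cells)]
print(simplices)

# Euler characteristic chi = V - E + T; chi = 1 matches an interval.
edges = [s for s in simplices if len(s) == 2]
triangles = [s for s in simplices if len(s) == 3]
chi = len(fields) - len(edges) + len(triangles)
print("Euler characteristic:", chi)
```

Only consecutive fields overlap here, so the complex is a path of two edges: connected and contractible, like the track itself, independently of the exact field boundaries.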

On the other hand, the interactions between structures of different dimensions, in principle afforded by a truly simplicial structure, have not yet been investigated in earnest. In a spatially embedded physiological context, this would almost necessarily involve cross-talk between dynamics at different spatial but also temporal scales.

Supposing that neural systems indeed present significant non-dyadic structure, for instance that higher-order dynamical systems do not result from some coordinate transformation of dyadic network dynamical systems (Bick et al., 2023), what is the neurophysiological meaning of such a class of structures? Can computation be performed in such structures? If so, which ones? How is it implemented by neural systems?

But what does this structure say about the mechanisms underlying its emergence and generating the observed phenomenology? On the one hand, it has been pointed out that higher-order structure in the system’s emergent properties does not necessarily require higher-order terms in the underlying dynamical law or in the Hamiltonian, and that even higher-order methods relying on pairwise statistics (e.g., simplicial complexes built from a correlation matrix) may miss significant information present in the joint probability distribution but not in the pairwise marginals (Rosas et al., 2022; Robiglio et al., 2025). On the other hand, observed phenomena are not always a good proxy for the underlying generating mechanisms. In particular, the presence of statistical synergy does not per se imply genuinely non-decomposable interactions, as observable patterns may emerge from additive dynamics and sequences of pairwise interactions, and even if complex collective behaviour can in principle involve irreducibility, it often does not (Ji et al., 2023).

Structural descriptions based on higher-level generalisations face the standard problem in network modelling, i.e., mapping the network structure on appropriate aspects of the system, but, in spite of the restriction on the admissible contiguity laws, have an otherwise rather intuitive meaning, both in real and in phase space. On the other hand, dynamical descriptions are more problematic. For instance, while it is reasonable to assume that neural computation resorts to some form of discrete calculus and that it may integrate information across scales, it is not straightforward to understand neural dynamics and function in terms of standard exterior calculus and co-boundary operators. Furthermore, it is not clear to what extent brain dynamics presents meaningful interactions across structures of different dimensions.

4.2.3 Generalised interaction types

A further network structure generalisation consists in allowing multiple types of interactions between nodes (De Domenico et al., 2013; Boccaletti et al., 2014; 2023; Kivelä et al., 2014; Bianconi, 2018). In this class of structures, nodes may exist at different layers, with a connectivity structure in principle independent at each layer. Intra-layer links belong to the same layer and inter-layer links connect the projections of the nodes at different layers (See Supplementary Material A5). The layers of a multiplex network can account for different interaction phenomena such as information transfer or the ability to synchronise (De Domenico, 2017; Buldú and Porter, 2018). Moreover, at least prima facie, this class of structures appears as a natural representation of interdependencies among different systems (both within and without the brain) and can therefore be used to assess properties such as stability (Bonamassa et al., 2021), robustness and vulnerability (Buldyrev et al., 2010; Gao et al., 2011; De Domenico et al., 2014), or to understand the nature of interactions, e.g., competition (Danziger et al., 2019). Such a structure can highlight the role of connectivity, particularly of connector nodes in the modulation of bare dynamics or of processes unfolding on the network (Aguirre et al., 2013; 2014; Buldú et al., 2016). Not only does the interaction of a given subgraph with other nodes in the network affect whether that subgraph corresponds to a fixed-point support (Morrison and Curto, 2019), but the type of node (peripheral or central) acting as connector between subnetworks affects dynamics and processes in each of them (Aguirre et al., 2013; 2014). Note that this construct is not different from that of a standard network but rather a change in the way the relation matrix is segmented.
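
The observation that a multilayer network is a particular segmentation of a larger relation matrix can be made concrete by building a supra-adjacency matrix for a toy two-layer multiplex; the layer contents and coupling value are illustrative:

```python
# Minimal sketch: a two-layer multiplex encoded as a supra-adjacency
# matrix, i.e., a block matrix with intra-layer connectivity on the
# diagonal blocks and couplings between node replicas on the off-diagonal
# blocks. This makes explicit that the multilayer construct is a
# particular segmentation of a larger relation matrix. Toy connectivity.

def supra_adjacency(layer_a, layer_b, coupling=1.0):
    n = len(layer_a)
    size = 2 * n
    supra = [[0.0] * size for _ in range(size)]
    for i in range(n):
        for j in range(n):
            supra[i][j] = layer_a[i][j]               # intra-layer, layer A
            supra[n + i][n + j] = layer_b[i][j]       # intra-layer, layer B
        supra[i][n + i] = supra[n + i][i] = coupling  # node replicas coupled
    return supra

A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # e.g., an "anatomical" layer
B = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]   # e.g., a "functional" layer
S = supra_adjacency(A, B)
for row in S:
    print(row)
```

Summing the two diagonal blocks would recover the collapsed single-layer representation, which is precisely where the layer-specific information discussed below gets lost.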

Unpacking information that may be hidden in standard collapsed representations (Cardillo et al., 2013; Zanin, 2015) may better account for interdependencies of interacting units within single network units and may reveal structural and dynamical properties of biological networks, for instance synchronisation properties which may be opposite to those operating in isolated networks (Aguirre et al., 2014).

Multilayer networks may naturally account for the layered structure of the cerebral and cerebellar cortices (Huber et al., 2021), but also for interactions between neural populations in the cerebral cortex and separable subsystems such as the neuromodulatory system (Brezina and Weiss, 1997; Brezina, 2010), as well as for the relationship between neural and extrinsic systems, e.g., the heart or the breathing system, as in a network physiology approach (Bashan et al., 2012). A multilayer (and multiplex) interaction structure has also been associated with the interactions between brain regions at different frequency bands, each band corresponding to a different layer of a multiplex/multilayer network, undetected when averaging activity across layers (De Domenico, 2017; Buldú and Porter, 2018). Furthermore, various results point to the possibility of using multilayer brain networks as biomarkers of brain degenerative diseases such as Parkinson’s disease, mild cognitive impairment or Alzheimer’s disease (De Domenico et al., 2016; Echegoyen et al., 2021; see Papo and Buldú, 2025a for a full discussion of network structure’s role in disease).

A subclass of multilayer networks is represented by temporal networks, wherein each layer corresponds to the structure at a particular time step, and layers are connected through unidirectional time-ordered links (Holme and Saramäki, 2012). In temporal networks, nodes are related to each other via causal, or time-respecting, paths (Holme, 2015), and the complex temporal structure of dynamic interactions may lead to history-dependent paths with long-term memory (Scholtes et al., 2014). Higher-order dependencies between nodes imply that causal paths can be more complex than those induced by static and aggregated networks and can affect topological network properties, e.g., node centrality (Scholtes et al., 2016) or community structure (Rosvall et al., 2014; Peixoto and Rosvall, 2017), dynamical processes such as diffusion (Ghosh et al., 2022), and controllability (Zhang et al., 2021; Li et al., 2017).
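
Time-respecting paths can be sketched directly: in a temporal network given as time-stamped contacts, a node is reachable only through contact sequences with non-decreasing times, which makes reachability narrower than in the aggregated static graph. The contacts are illustrative, and the single-pass scheme assumes contacts are processed in time order:

```python
# Minimal sketch: reachability along time-respecting paths in a temporal
# network given as (source, target, time) contacts. Node b can relay
# information from a to c only if the a-b contact precedes the b-c one,
# so reachability is narrower than in the aggregated static graph.

def temporal_reachable(contacts, start, t0=0):
    """Nodes reachable from `start` via time-ordered contact sequences."""
    arrival = {start: t0}
    for u, v, t in sorted(contacts, key=lambda c: c[2]):
        if u in arrival and arrival[u] <= t:
            arrival[v] = min(arrival.get(v, float("inf")), t)
    return set(arrival)

# a->b at t=2, but b->c happened earlier at t=1: c is unreachable from a,
# even though the aggregated graph contains the path a-b-c.
contacts_a = [("a", "b", 2), ("b", "c", 1)]
contacts_b = [("a", "b", 1), ("b", "c", 2)]   # time-ordered chain
print(temporal_reachable(contacts_a, "a"))
print(temporal_reachable(contacts_b, "a"))
```

The two contact sets have identical aggregated graphs but different causal structure, which is the basic reason temporal ordering alters centrality, community structure and controllability.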

At long time scales, brain fluctuations are characterised by non-trivial dynamical and statistical properties such as intermittency, scale invariance and long-range temporal correlations (Novikov et al., 1997; Allegrini et al., 2010; Fraiman and Chialvo, 2012; Papo, 2014). Multilayer temporal networks may capture non-trivial higher-order cross-order interactions, including cross-memory among neural populations, with complex fluctuating dynamics and nucleation or coalescence of neuronal populations (Gallo et al., 2024). However, whether such a symmetry breaking is present in brain activity and its functional meaning is still poorly understood.

4.3 Beyond single networks

Relaxing simple network properties gives rise to generalised structures that remain networks in some sense, but that may be associated with profoundly different phenomenology. However, the brain could be endowed with a structure that does not stem from property relaxation, and that may be qualitatively different from that of a simple network, ultimately changing the very essence of brain networkness.

4.3.1 From single networks to network ensembles and sequences

In essence, most network models of brain activity constitute field theories studying the time evolution of relevant variables measured at each point in time and on a finite number of points in space (Mikaberidze et al., 2025). Insofar as the relevant field variables are inherently fluctuating quantities, it is natural to describe the probability of field states in terms of ensembles incorporating the uncertainty about the system’s state or, equivalently, describing the system’s possible states and their structure.

Rather than focussing on the relational structure to learn about the topological and geometrical network properties of neural systems, a useful representation may highlight the statistical properties of relations. Networked structures are then described by statistical models that specify a probability distribution over a set of graphs, e.g., a probability of observing a given set of relations (Dichio and De Vico Fallani, 2022) and the quantities of interest are the set of properties of such spaces (Kahle, 2014) (See Supplementary Material A6). The frequency with which topological properties appear and their significance are explained in terms of probability distributions. Thus, the system is characterised not only in terms of topological invariants but also of their scaling properties, e.g., with system size or dimension. This framework’s dynamical counterpart is represented by the path-integral approach, where the system’s dynamics is represented by weighted sums of all possible paths the system can take. In a conceptually similar approach, each node can be understood as a superposition of multiple states (Ghavasieh and De Domenico, 2022).
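
The ensemble viewpoint can be sketched with the simplest graph ensemble: sampling G(n, p) random graphs and characterising a topological property by its distribution over the ensemble rather than by a single realisation. The choice of ensemble, property and sample sizes is illustrative:

```python
# Minimal sketch: a network ensemble as a probability distribution over
# graphs. G(n, p) random graphs are sampled and a topological property
# (the triangle count) is characterised by its ensemble statistics rather
# than by a single realisation. Parameters illustrative.

import random
from itertools import combinations

def sample_triangles(n, p, rng):
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if rng.random() < p:
            adj[i].add(j)
            adj[j].add(i)
    return sum(1 for i, j, k in combinations(range(n), 3)
               if j in adj[i] and k in adj[i] and k in adj[j])

rng = random.Random(42)
n, p, samples = 12, 0.3, 500
counts = [sample_triangles(n, p, rng) for _ in range(samples)]
mean = sum(counts) / samples
expected = (n * (n - 1) * (n - 2) / 6) * p ** 3   # C(n,3) * p^3
print(f"ensemble mean: {mean:.1f}, analytic expectation: {expected:.1f}")
```

Any single realisation may deviate substantially from the expectation; it is the distribution of counts over the ensemble, and its scaling with n and p, that characterises the structure in this framework.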

The shift from single networks to network ensembles highlights various aspects corresponding to different cuts into the relevant space. First, the relevant structure is not that of single realisations of a process (or of averaged or steady-state equivalents) or of a specific scale or scale range. These structures induce an effective thermodynamics, whose thermodynamic potentials and their non-analytical points identify corresponding phase transitions (Meshulam and Bialek, 2025). Second, proper representations of brain structure and function may need to incorporate the relationship between such representations. This can be thought of in various ways, e.g., in terms of the minimum and maximum coupling levels, which act as energy levels in Hamiltonian systems (Santos et al., 2019), below and above which topological invariants vanish (Santos and Coutinho-Filho, 2009), or as the limit of a sequence of graphs, e.g., a graphon (Lovász and Szegedy, 2006), effectively treated as a dynamical system (Bick and Sclosa, 2024).

4.3.2 Models of network models

A more fundamental way to understand relationships across scales consists in conceiving of the network structure as an effective field theory of brain structure and dynamics, i.e., a description of a system’s physics at a given scale up to a certain level of accuracy, using a finite number of variables that parametrise unspecified information in a useful way (Georgi, 1993) (See Supplementary Material A7). Indeed, in both anatomical and dynamical brain network representations, nodes and links, which constitute the microscopic scale of a network representation at a given scale, can always be understood as resulting from renormalisation of neurophysiological properties at lower scales and each degree of freedom effectively constitutes a kinetic model of phenomena at lower scales. Particularly at meso- and macroscopic neural scales, each node is then an asymptotically stable invariant subset of the phase space (in the simplest case a fixed point) in a renormalisation flow across scales.

Renormalisation theory shifts the focus from the investigation of the outcomes of a given model to the analysis of models themselves, by relating models of the same system at different scales along a renormalisation trajectory, or by grouping models of different systems that share the same critical behaviour and hence the same large-scale behaviour (See Supplementary Material A7). The renormalisation framework highlights the scale-dependence of interactions in a systematic way and allows investigating the level at which non-random structure emerges, the relationship of such structure with that present at other levels and, ultimately, the possible ways in which spatio-temporal patterns are converted into macroscopic dynamics and function.
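
A single real-space coarse-graining step of the kind invoked here can be sketched as follows. The block partition is supplied by hand in this toy example; in practice it would come from, e.g., a box-covering or community-detection scheme:

```python
def coarse_grain(edges, partition):
    """One renormalisation step: collapse each block of `partition`
    (a dict node -> block label) into a super-node; two super-nodes
    are linked if any edge ran between their blocks."""
    super_edges = set()
    for u, v in edges:
        bu, bv = partition[u], partition[v]
        if bu != bv:
            super_edges.add((min(bu, bv), max(bu, bv)))
    return super_edges

# Toy graph: two triangles joined by a bridge.
edges = {(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)}
blocks = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(coarse_grain(edges, blocks))  # prints {('A', 'B')}
```

Iterating the step with successive partitions traces one trajectory of the renormalisation flow discussed in the text.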

Network sequences induce corresponding spaces with rich non-random structure. At each renormalisation flow stage, one may consider the coarse-grained structure emerging above the level of individual nodes in the system’s hierarchical organisation, whose nodes correspond in some sense to communities, and whose links represent members shared by two communities (Pollner et al., 2006). The presence of structure at various scales (not all of which ought to be functionally meaningful) reflects a property of biological systems, which may present different out-of-equilibrium properties at different scales, or equilibrium properties at certain scales but not at others (Cugliandolo et al., 1997; Egolf, 2000).
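
The coarse-grained structure described above, with communities as nodes and shared members as links (Pollner et al., 2006), can be sketched directly; the communities below are arbitrary toy sets:

```python
from itertools import combinations

def community_graph(communities):
    """Build the coarse-grained graph whose nodes are communities;
    two communities are linked iff they share members, with the
    link weight given by the number of shared members."""
    links = {}
    for (i, ci), (j, cj) in combinations(enumerate(communities), 2):
        shared = len(set(ci) & set(cj))
        if shared:
            links[(i, j)] = shared
    return links

# Three overlapping communities (arbitrary unit labels).
comms = [{1, 2, 3}, {3, 4, 5}, {6, 7}]
print(community_graph(comms))  # prints {(0, 1): 1}
```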

The renormalisation flow can be understood as dynamics on the space of field models, and it is important to understand the extent to which it operates in a functorial, structure-preserving manner, linking different field models and their properties, i.e., how it preserves properties not only of its space components but also of maps between them (Ghrist, 2014) (See Supplementary Material A8).

Note that brain network renormalisation typically dispenses with the treatment of infinities involved in the transition between essentially continuous anatomical or dynamical fields and network structure. This mapping is instead dealt with through discretionary steps, the consequences of which are poorly understood (Korhonen et al., 2021).

4.3.3 Emergence of network structure and function

It is straightforward to understand network ensembles and sequences in terms of structure emergence. The structure’s statistical properties emerge naturally from constrained entropy maximisation, each constraint giving rise to a different model (Radicchi et al., 2020), a reasonable assumption at long, e.g., evolutionary, time scales (See Supplementary Material A9). In this vein, for example, mean-field representations can be thought of as maximum entropy models for the topology of direct interactions, whereas network models can be thought of as maximum entropy models for the topology of paths (Lambiotte et al., 2019). More generally, a system’s organising principles can be thought of as the result of underlying non-equilibrium growth and development mechanisms (Betzel and Bassett, 2017).
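
In the simplest case, constraining only the expected number of edges yields the Erdős–Rényi ensemble G(n, p) as the maximum entropy model. A minimal sampling sketch; the parameters are arbitrary:

```python
import random

def max_entropy_graph(n, expected_edges, seed=0):
    """Maximum-entropy ensemble constrained on the expected edge
    count: every pair is linked independently with the same
    probability p (the Erdos-Renyi G(n, p) model)."""
    p = expected_edges / (n * (n - 1) / 2)  # matches the constraint on average
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

g = max_entropy_graph(40, 100, seed=2)
```

Adding further constraints (e.g., on expected degrees) would change the edge probabilities pair by pair, producing the richer ensembles referred to in the text.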

Renormalisation, perhaps the most general conceptual representation of emergence at any scale, is usually understood as an analytical tool to highlight neural structure (See Supplementary Material A9). In this context, coarse-graining has two contrasting effects: on the one hand, it is necessarily associated with information loss; on the other hand, it reduces noise and increases the strength of relationships, so that structure may emerge far from the micro-scale, where macro-states have stronger dependencies (Hoel et al., 2013). Emergent behaviour can be transient, context-dependent and non-local in real space or time (Varley, 2022), and behaviour at one scale may not be well predicted by behaviour at a finer scale (Wolpert and MacReady, 2000). Moreover, causality may be circular, with large-scale behaviour dictating that at lower scales (Haken, 2006b) (See Supplementary Material A9). Important questions then arise: what makes a neuronal unit reducible to a node? Can coarse-graining allow neglecting hardware heterogeneity, e.g., glial cells? What structure does the renormalisation flow preserve? To what extent is functionally relevant information retained or lost in coarse-graining?

Renormalisation can also be thought of as a genuinely functional neural process. In this sense, network structure emergence can be distinguished from the emergence of function. Function emerges from one particular coarse-graining procedure (which may not necessarily correspond to real space renormalisation) (Bradde and Bialek, 2017). For instance, in the sensory domain, network structure constitutes the nerve covering induced by boundary conditions emerging from dynamical annealed disorder associated with neuronal populations’ receptor fields (Curto, 2017). In this framework, nodes and links are emergent properties, rather than an a priori structure (See Supplementary Material A9). Likewise, geometry can be seen as an emergent property of single neurons’ physiology and of the functional architecture through which these local properties are renormalised. Whether the emerging structure is fundamental or a manifestation of a more primitive, pre-geometric reality (Bianconi et al., 2015) depends on whether or not it has functional value.
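
The nerve construction underlying this picture can be illustrated on toy receptive fields: every subset of fields with a non-empty common intersection contributes a simplex, so nodes and links (and higher simplices) emerge from the cover rather than being posited. The stimulus points and fields below are arbitrary:

```python
from itertools import combinations

def nerve(fields):
    """Nerve of a cover: one (k-1)-simplex for every k-subset of
    receptive fields whose common intersection is non-empty."""
    simplices = []
    for k in range(1, len(fields) + 1):
        for idx in combinations(range(len(fields)), k):
            common = set.intersection(*(fields[i] for i in idx))
            if common:
                simplices.append(idx)
    return simplices

# Three receptive fields over a discretised stimulus space.
fields = [{1, 2, 3}, {2, 3, 4}, {3, 5}]
print(nerve(fields))
# prints [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
```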

The corresponding questions are: how does behaviour emerge from the brain’s spatio-temporal dynamics? If renormalisation represents how function emerges, to what extent do appropriate representations depend on the specific renormalisation process?

5 Pathways for network neuroscience

In addressing possible future avenues for a network understanding of the brain, great emphasis is typically put on improving experimental techniques (for instance, electron microscopy reconstruction is expected to significantly improve accuracy and scope with respect to traditional electrophysiological techniques and may help constrain computational models (Litwin-Kumar and Turaga, 2019)), on mathematical and physical modelling, and on data analysis techniques (Goodfellow et al., 2022). However, advances may also come from better knowledge of fundamental aspects of brain functioning and from changes in some conceptual aspects of the network-brain association at the heart of network neuroscience.

Here we discuss three main conceptual axes along which network neuroscience may evolve: (i) the use of function to gauge brain models; (ii) how network theory should help in advancing neuroscientific knowledge and conceptual apparatus; (iii) how phenomenology is explained.

5.1 Gauging structure through function

One of the most fundamental endeavours of network neuroscience is to understand how network structure is related to brain function (Ito et al., 2020). Thus, network structure can be associated with some fitness for specific tasks. Not all observed structure has functional meaning, and not all observed features optimise function; some may instead be byproducts of the way the network evolved (Solé and Valverde, 2020). Yet without a proper theory of function it is in general arduous to explain observed anatomical and dynamical structure, and generative models are underdetermined both at experimental and at longer time scales (Doyle and Csete, 2011). Brain networks and the dynamical processes occurring on them are to a large extent the result of evolutionary, learning and adaptation processes, through which the brain solves computational problems necessary for survival, and which in turn arbitrate trade-offs among available resources. Classical statistical physics approaches do not incorporate the notion of function, partly because large non-biological disordered systems such as glasses do not arise through evolutionary processes (Advani et al., 2013).

The relationship between structural properties, e.g., topology, and function may suggest features essential to appropriate phenomenological models. For instance, if function and functional dynamics are respectively associated with some structural universality class and topological phase transitions, i.e., qualitative changes in topology, then this should be accounted for and a corresponding physiological mechanism should be found. While altering the microscopic scale affects the resulting physics, the question is not only whether the associated phenomenology constitutes a good descriptor of brain dynamics and function but also whether there are elements suggesting its plausibility.

The relationship between brain structure, dynamics and function is in many ways a complex one. First, the properties of a networked dynamical system do not trivially stem from either local dynamics or network structure alone, but from the interaction of the two (Curto and Morrison, 2019). Second, while deterministic macroscopic order can govern function, e.g., learning, such structure can arise in ways that are independent of the details of network heterogeneity (Advani et al., 2013). Third, while tens of neurons may be sufficient to identify the network’s dominant variability modes (Williamson et al., 2016), a given network’s function can vary in a context-dependent way (Biswas and Fitzgerald, 2022), although different structures tend to be optimal for different tasks: for instance, information flow and response diversity are optimised by different circuits (Ghavasieh and De Domenico, 2024). Conversely, different networks can give rise to similar function. Overall, how network structure contributes to neural networks’ dynamical and functional properties such as sloppiness and degeneracy (Gutenkunst et al., 2007; Machta et al., 2013) is still poorly understood (See Supplementary Material A10). Fourth, structural complexity does not necessarily lead to functional complexity, e.g., to heterogeneous responses to perturbations. Functional heterogeneity is in general a genuine emergent property, which cannot be deduced from the system’s structural properties, e.g., structural heterogeneity, or from its dynamical properties (Ghavasieh and De Domenico, 2022), and which can give rise to rather non-trivial phase space configurations (Stadler and Stadler, 2006) (See Supplementary Material A10). Ultimately, which aspects and properties of network structure are necessary in a brain model and, as a result, which methods should be summoned to represent them (Giusti et al., 2015; Curto, 2017) depend on the properties of such a mapping.

Finally, while a network’s function can help understanding both its structure and complex dynamics (Chen et al., 2006; Lau et al., 2007; Sterling and Laughlin, 2015), function itself may not always be obvious a priori and may have a non-trivial relationship with bare dynamics (Papo, 2019a). The general dearth of neural, particularly functional, stylised facts induces a circularity in functional brain networks: the incomplete knowledge of the algorithmic and implementation aspects of neural computation, even at single-neuron scales (Moore et al., 2024), biases the segmentation at microscopic scales, giving rise to network structure that is not necessarily functionally meaningful. Sometimes models contain mechanisms incorporating properties which merely appear plausible. For instance, network structure dynamics could be understood as emerging from a non-equilibrium dynamics similar to that of the network-geometry-with-flavour growth model, where the flavour parameter may appear a good candidate model for neuromodulation (Bianconi and Rahmede, 2015; Bianconi, 2015). Models also often aim at replicating some of the system’s ostensible generic statistical or dynamical properties, e.g., its scaling of fluctuations. However, these can often arise in rather different ways (Morrell et al., 2021), and their functional properties are in general not directly tested but only inferred from prior knowledge.

5.1.1 Universality

In some sense, understanding how robust network structure is with respect to both biological detail and network specification is a question germane to the issues of neural functional equivalence and switching. Indeed, universality constitutes a form of robustness (Lesne, 2008). Moreover, from a functional viewpoint, an important question is the extent to which function is robust to changes in structure. A nested question relates to the scale-dependence of such a relationship, i.e., the scales at which the structure-function map induces qualitative changes.

A fundamental issue in the determination of network structure’s role in the brain is the extent to which dynamical emergence is a property of network structure, e.g., topology, independently of the specific properties of node dynamics. On the one hand, emergent dynamics is not necessarily inherited from intrinsically oscillating nodes or induced by the characteristics of forcing stimuli, but may arise from the coupling structure (Morrison et al., 2024). This is for instance the case of threshold-linear networks (Morrison and Curto, 2019). On the other hand, empirically observed fluctuation scaling properties can be achieved by imposing specific nodal properties, e.g., a particular type of neuron excitability (Buendía et al., 2021). This could for instance be implemented by a neural apparatus in which the global coupling strength is normalised by the average coupling strength per node, so that the dynamics is invariant under scaling of the adjacency matrix, decoupling the role of the network from the specific properties of the nodes (Nishikawa and Motter, 2016).
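
The normalisation just described can be sketched numerically: dividing the global coupling by the average coupling strength per node makes the effective coupling matrix identical for A and any uniform rescaling cA. The toy matrix below is arbitrary:

```python
def normalised_coupling(A, g=1.0):
    """Divide the global coupling g by the network's average
    coupling strength per node, making the effective coupling
    invariant under A -> c * A for any c > 0."""
    n = len(A)
    mean_strength = sum(sum(row) for row in A) / n
    return [[g * a / mean_strength for a in row] for row in A]

# A weighted toy coupling matrix and a uniformly rescaled copy.
A = [[0, 1, 2],
     [1, 0, 1],
     [2, 1, 0]]
A3 = [[3 * a for a in row] for row in A]
NA, NA3 = normalised_coupling(A), normalised_coupling(A3)
# NA and NA3 coincide up to floating-point error.
```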

In a statistical physics sense, universality reflects the fact that many systems, possibly differing in their microscopic properties, can nonetheless be classified into a small number of universality classes defined by their scaling exponents, which quantify a system’s relationship between different scales (See Supplementary Material A7). Universal relations arise when the changes caused by modification of microscopic parameters are effectively summarised by a small number of phenomenological parameters (Goldenfeld et al., 1989). Complex systems such as the brain may exhibit instability of renormalisation, i.e., may fail to converge to a stable fixed point within a topological class (comprising systems or states sharing the same fundamental topological properties), even for stationary combinatorics (Martens and Winckler, 2016).

While the temporal structure of avalanches shows signs of universality (Friedman et al., 2012), one important question is how network properties may contribute to its emergence. In correlated inhomogeneous structures, universal behaviour is comparable to that characterising continuous field theories of systems with non-integer dimension, and the relevant control parameter for universal behaviour on inhomogeneous structures is the spectral dimension (Millán et al., 2021).
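
The spectral dimension d_s governs the scaling P(t) ~ t^(-d_s/2) of a random walker's return probability, which gives a concrete way to estimate it. A minimal sketch using the exactly known return probability of the one-dimensional lattice, P(2t) = C(2t, t)/4^t, chosen purely because its spectral dimension is known (d_s = 1):

```python
import math

def spectral_dimension(t1=64, t2=512):
    """Estimate d_s from the scaling P(t) ~ t**(-d_s / 2) of the
    return probability, here on the 1D lattice where
    P(2t) = C(2t, t) / 4**t is known in closed form."""
    def log_return_prob(t):
        return math.log(math.comb(2 * t, t)) - 2 * t * math.log(2)
    slope = ((log_return_prob(t2) - log_return_prob(t1))
             / (math.log(t2) - math.log(t1)))
    return -2 * slope  # estimate of d_s; exact value for the line is 1

ds = spectral_dimension()  # close to 1.0
```

For an empirical network, the same fit would be applied to simulated random-walk return probabilities or to the graph Laplacian's eigenvalue density.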

5.2 A neuro-inspired network science

What does network structure tell us about fundamental properties of brain dynamics and function? Can we express how efficiently the brain carries out its functions or how it can withstand environmental challenges, possibly changing as a result of them, in terms of network structure?

For many systems it is natural to relate properties such as robustness and efficiency to the topological properties of their network structure (Ma et al., 2009; Estrada et al., 2012; Faci-Lázaro et al., 2022). However, there is no guarantee that the way efficiency and robustness (or, equivalently, resilience and vulnerability) are usually defined (Kitano, 2002; 2007; Lesne, 2008; Liu et al., 2022; Schwarze et al., 2024) is actually a good indicator of functional robustness (Papo and Buldú, 2025a). Furthermore, there is little knowledge of the topological properties that may covary with functional robustness, or of the relationship between robustness, degeneracy and evolvability in the brain (Wagner, 2008; Masel and Trotter, 2010; Whitacre and Bender, 2010; Whitacre, 2012). Future research should quantify properties such as robustness and efficiency in a way that is functionally meaningful (Levit-Binnun and Golland, 2012; Papo and Buldú, 2025a). This will imply a conceptual effort and perhaps the adoption of meaningful metaphors.

5.3 From causal to topological explanations

A fundamental question not often addressed is whether network properties can be used to explain neural function.

Causal explanations are thought of as essential to the scientific method (Livneh, 2023). They account for an observed process or performed function in terms of chains of causal factors or interactions bound by spatio-temporal continuity and statistical relevance (Van Fraassen, 1977). However, in systems with a great number of non-linearly and non-locally interacting units, such as the brain, causal chains may be difficult both to observe and to define, as global parameters emerging from the intrinsic interactions among the individual parts of the system may in turn govern their behaviour (Haken, 2006a; b). Alternative types of explanation are often thought of as satisfying accounts of observed phenomena. For instance, mechanistic explanations aim at highlighting neurophysiological mechanisms, i.e., “entities and activities organised in such a way that they are responsible for the phenomenon” (Illari and Williamson, 2012).

Results from research fields ranging from condensed matter physics (Thouless et al., 1982; Bowick and Giomi, 2009), to quantum computing (Collins, 2006) and data mining (Rasetti and Merelli, 2015) indicate that complex systems’ phenomenology can be explained in topological terms (Huneman, 2010; Kostić, 2018; Tozzi and Papo, 2020). Topological descriptions may help establish under which conditions network structure is relevant, and help predict and act upon brain activity (Papo, 2019b). While various factors, including function and energetics, may account for structure and dynamics, a still rather poorly understood question is the extent to which structure, particularly topology, explains or constrains function, or rather constitutes a mere by-product of it.

6 Concluding remarks

We discussed, on the one hand, ways to add detail to neural structure, which is typically drastically simplified in standard network neuroscience models of brain anatomy and dynamics, and, on the other hand, possible alternative network structure classes and the potential benefits these may offer in terms of the ability to account for known neural phenomenology or to reveal as yet unknown phenomenology, distinguishing between computing different properties of a standard network structure and changing the structure itself.

In essence we addressed two dual questions: to what extent does adding biological detail qualitatively change network models of brain activity? How universal is network structure? Appropriate phenomenological descriptions of a system always contain a universal part and a few detail-sensitive constants (Goldenfeld et al., 1989). The quest for the appropriate level of detail characterises the study of most complex biological systems. In some sense, understanding the brain as a networked system boils down to determining whether a statistical mechanics approach makes sense and at which scales details matter (Cavagna et al., 2018). In network neuroscience, it is important to understand to what extent network structure not explicitly incorporating the important neural properties mentioned hitherto nonetheless recovers good approximations of brain structure, dynamics and ultimately function.

What can generalised structures change with respect to standard network models? Over and above different phenomenology, generalising network structure can provide different ways to conceive of brain anatomy, dynamics and function and, more fundamentally, to explain neurophysiological phenomena. However, generalised structures face the same fundamental issues related to intrinsicality, universality (intended as robustness to changes in neurophysiological detail), and functional meaningfulness as standard network models: is structure intrinsic? If so, how does it allow the system to carry out the functions it is assigned? What aspect of the neural system’s network structure is functionally meaningful? To what extent is such a structure universal? How can we decide whether a given structure is a mere extrinsic description or can be thought of as part of the system’s intrinsic modus operandi? Whether there exists an appropriate structure representing a given dynamical system may itself be a question of context (Bick et al., 2023). Answering these fundamental questions will require incorporating function, but also a better characterisation of neurophysiological stylised facts and of the structure-dynamics-function relationship.

Finally, throughout, we mainly discussed the extent to which a network representation reflects the way the system may work, rather than how such a structure allows investigating it theoretically or experimentally. The method used to investigate a system (e.g., the process used to explore a network) and the functions that the system actually implements are somehow intertwined and often equated, and so are a given structure’s information content and the dynamical aspects that this structure supports. For instance, a given space parametrisation may be expedient in a particular context, but may not reflect the system’s underlying functional geometry, affording an extrinsic embedding-dependent view of the true underlying space (Pennec, 2006; Lenglet et al., 2006). In the space underlying a given representation, the allowed operations may also not reflect the computations performed by the system. Likewise, while a given structure may be associated with a certain amount of information, that does not entail that such information is actually transferred or computed or that it is functionally relevant.

Author contributions

DP: Writing – review and editing, Writing – original draft, Conceptualization. JB: Conceptualization, Writing – review and editing, Writing – original draft.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. JB acknowledges support from Ministerio de Ciencia e Innovación under grants PID2020-113737GB-I00 and PID2023-147827NB-I00.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnetp.2025.1667656/full#supplementary-material

References

Advani, M., Lahiri, S., and Ganguli, S. (2013). Statistical mechanics of complex neural systems and high dimensional data. J. Stat. Mech. Theory Exp. 2013, P03014. doi:10.1088/1742-5468/2013/03/p03014

Aguirre, J., Papo, D., and Buldú, J. M. (2013). Successful strategies for competing networks. Nat. Phys. 9, 230–234. doi:10.1038/nphys2556

Aguirre, J., Sevilla-Escoboza, R., Gutiérrez, R., Papo, D., and Buldú, J. M. (2014). Synchronization of interconnected networks: the role of connector nodes. Phys. Rev. Lett. 112, 248701. doi:10.1103/PhysRevLett.112.248701

Albert, R., and Barabási, A. L. (2002). Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97. doi:10.1103/revmodphys.74.47

Allegrini, P., Paradisi, P., Menicucci, D., and Gemignani, A. (2010). Fractal complexity in spontaneous EEG metastable-state transitions: new vistas on integrated neural dynamics. Front. Physiol. 1, 128. doi:10.3389/fphys.2010.00128

Alon, U. (2007). Network motifs: theory and experimental approaches. Nat. Rev. Genet. 8, 450–461. doi:10.1038/nrg2102

Appeltant, L., Soriano, M. C., Van der Sande, G., Danckaert, J., Massar, S., Dambre, J., et al. (2011). Information processing using a single dynamical node as complex system. Nat. Commun. 2, 468. doi:10.1038/ncomms1476

Araque, A., Carmignoto, G., Haydon, P. G., Oliet, S. H., Robitaille, R., and Volterra, A. (2014). Gliotransmitters travel in time and space. Neuron 81, 728–739. doi:10.1016/j.neuron.2014.02.007

Asllani, M., Challenger, J. D., Pavone, F. S., Sacconi, L., and Fanelli, D. (2014). The theory of pattern formation on directed networks. Nat. Commun. 5, 4517. doi:10.1038/ncomms5517

Atay, F. M., Jost, J., and Wende, A. (2004). Delays, connection topology, and synchronization of coupled chaotic maps. Phys. Rev. Lett. 92, 144101. doi:10.1103/PhysRevLett.92.144101

Avalos-Gaytán, V., Almendral, J. A., Papo, D., Schaeffer, S. E., and Boccaletti, S. (2012). Assortative and modular networks are shaped by adaptive synchronization processes. Phys. Rev. E 86, 015101. doi:10.1103/PhysRevE.86.015101

Avalos-Gaytán, V., Almendral, J. A., Leyva, I., Battiston, F., Nicosia, V., Latora, V., et al. (2018). Emergent explosive synchronization in adaptive complex networks. Phys. Rev. E 97, 042301. doi:10.1103/PhysRevE.97.042301

Balasubramanian, V. (2015). Heterogeneity and efficiency in the brain. Proc. IEEE 103, 1346–1358. doi:10.1109/jproc.2015.2447016

Balbinot, G., Milosevic, M., Morshead, C. M., Iwasa, S. N., Zariffa, J., Milosevic, L., et al. (2025). The mechanisms of electrical neuromodulation. J. Physiol. 603, 247–284. doi:10.1113/JP286205

Barak, O. (2017). Recurrent neural networks as versatile tools of neuroscience research. Curr. Opin. Neurobiol. 46, 1–6. doi:10.1016/j.conb.2017.06.003

Bargmann, C. I. (2012). Beyond the connectome: how neuromodulators shape neural circuits. Bioessays 34, 458–465. doi:10.1002/bies.201100185

Bashan, A., Bartsch, R. P., Kantelhardt, J., Havlin, S., and Ivanov, P. C. (2012). Network physiology reveals relations between network topology and physiological function. Nat. Commun. 3, 702. doi:10.1038/ncomms1705

Bastos, A. M., Vezoli, J., and Fries, P. (2015). Communication through coherence with interareal delays. Curr. Opin. Neurobiol. 31, 173–180. doi:10.1016/j.conb.2014.11.001

Battiston, F., Cencetti, G., Iacopini, I., Latora, V., Lucas, M., Patania, A., et al. (2020). Networks beyond pairwise interactions: structure and dynamics. Phys. Rep. 874, 1–92. doi:10.1016/j.physrep.2020.05.004

Battiston, F., Amico, E., Barrat, A., Bianconi, G., Ferraz de Arruda, G., Franceschiello, B., et al. (2021). The physics of higher-order interactions in complex systems. Nat. Phys. 17, 1093–1098. doi:10.1038/s41567-021-01371-4

Belan, P. V., and Kostyuk, P. G. (2002). Glutamate-receptor-induced modulation of GABAergic synaptic transmission in the hippocampus. Pflugers Arch. 444, 26–37. doi:10.1007/s00424-002-0783-3

Bernal, J. D., and Mason, J. (1960). Packing of spheres: co-ordination of randomly packed spheres. Nature 188, 910–911. doi:10.1038/188910a0

Betzel, R. F., and Bassett, D. S. (2017). Generative models for network neuroscience: prospects and promise. J. R. Soc. Interface 14, 20170623. doi:10.1098/rsif.2017.0623

Bianconi, G. (2015). Interdisciplinary and physics challenges of network theory. EPL Europhys. (Lett.) 111, 56001. doi:10.1209/0295-5075/111/56001

Bianconi, G. (2018). Multilayer networks: structure and function. Oxford: Oxford University Press.

Bianconi, G. (2021). Higher order networks: an introduction to simplicial complexes. Cambridge: Cambridge University Press.

Bianconi, G., and Rahmede, C. (2015). Complex quantum network manifolds in dimension d > 2 are scale-free. Sci. Rep. 5, 13979. doi:10.1038/srep13979

Bianconi, G., Rahmede, C., and Wu, Z. (2015). Complex quantum network geometries: evolution and phase transitions. Phys. Rev. E 92, 022815. doi:10.1103/PhysRevE.92.022815

Bick, C., and Sclosa, D. (2024). Dynamical systems on graph limits and their symmetries. J. Dyn. Partial Differ. Equ. 37, 1171–1206. doi:10.1007/s10884-023-10334-7

Bick, C., Gross, E., Harrington, H. A., and Schaub, M. T. (2023). What are higher-order networks? SIAM Rev. 65, 686–731. doi:10.1137/21m1414024

Biswas, T., and Fitzgerald, J. E. (2022). Geometric framework to predict structure from function in neural networks. Phys. Rev. Res. 4, 023255. doi:10.1103/physrevresearch.4.023255

Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., and Hwang, D. U. (2006). Complex networks: structure and dynamics. Phys. Rep. 424, 175–308. doi:10.1016/j.physrep.2005.10.009

Boccaletti, S., Bianconi, G., Criado, R., Del Genio, C. I., Gómez-Gardeñes, J., Romance, M., et al. (2014). The structure and dynamics of multilayer networks. Phys. Rep. 544, 1–122. doi:10.1016/j.physrep.2014.07.001

Boccaletti, S., De Lellis, P., del Genio, C. I., Alfaro-Bittner, K., Criado, R., Jalan, S., et al. (2023). The structure and dynamics of networks with higher order interactions. Phys. Rep. 1018, 1–64. doi:10.1016/j.physrep.2023.04.002

Boguñá, M., and Serrano, M. Á. (2025). Statistical mechanics of directed networks. Entropy 27, 86. doi:10.3390/e27010086

Bollobás, B. (1986). Combinatorics: set systems, hypergraphs, families of vectors, and combinatorial probability. Cambridge: Cambridge University Press.

Bonamassa, I., Gross, B., and Havlin, S. (2021). Interdependent couplings map to thermal, higher-order interactions. arXiv:2110.08907.

Bonhoeffer, T., and Yuste, R. (2002). Spine motility: phenomenology, mechanisms, and function. Neuron 35, 1019–1027. doi:10.1016/s0896-6273(02)00906-6

Bonifazi, P., Goldin, M., Picardo, M. A., Jorquera, I., Cattani, A., Bianconi, G., et al. (2009). GABAergic hub neurons orchestrate synchrony in developing hippocampal networks. Science 326, 1419–1424. doi:10.1126/science.1175509

Bowick, M. J., and Giomi, L. (2009). Two-dimensional matter: order, curvature and defects. Adv. Phys. 58, 449–563. doi:10.1080/00018730903043166

Bowick, M. J., Fakhri, N., Marchetti, M. C., and Ramaswamy, S. (2022). Symmetry, thermodynamics, and topology in active matter. Phys. Rev. X 12, 010501. doi:10.1103/physrevx.12.010501

Bradde, S., and Bialek, W. (2017). PCA meets RG. J. Stat. Phys. 167, 462–475. doi:10.1007/s10955-017-1770-6

Breakspear, M., and Terry, J. R. (2002). Nonlinear interdependence in neural systems: motivation, theory, and relevance. Int. J. Neurosci. 112, 1263–1284. doi:10.1080/00207450290026193

Brezina, V. (2010). Beyond the wiring diagram: signalling through complex neuromodulator networks. Philos. Trans. R. Soc. B 365, 2363–2374. doi:10.1098/rstb.2010.0105

Brezina, V., and Weiss, K. R. (1997). Analyzing the functional consequences of transmitter complexity. Trends Neurosci. 20, 538–543. doi:10.1016/s0166-2236(97)01120-x

Buendía, V., Villegas, P., Burioni, R., and Muñoz, M. A. (2021). Hybrid-type synchronization transitions: where incipient oscillations, scale-free avalanches, and bistability live together. Phys. Rev. Res. 3, 023224. doi:10.1103/physrevresearch.3.023224

Buldú, J. M., and Porter, M. A. (2018). Frequency-based brain networks: from a multiplex framework to a full multilayer description. Netw. Neurosci. 2, 418–441. doi:10.1162/netn_a_00033

Buldú, J. M., Sevilla-Escoboza, R., Aguirre, J., Papo, D., and Gutiérrez, R. (2016). “Interconnecting networks: the role of connector links,” in Interconnected networks. Editor A. Garas (Springer), 61–77.

Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E., and Havlin, S. (2010). Catastrophic cascade of failures in interdependent networks. Nature 464, 1025–1028. doi:10.1038/nature08932

Bullmore, E., and Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 10, 186–198. doi:10.1038/nrn2575

Cabral, J., Castaldo, F., Vohryzek, J., Litvak, V., Bick, C., Lambiotte, R., et al. (2022). Metastable oscillatory modes emerge from synchronization in the brain spacetime connectome. Commun. Phys. 5, 184. doi:10.1038/s42005-022-00950-y

Cardillo, A., Gomez-Gardeñes, J., Zanin, M., Romance, M., Papo, D., del Pozo, F., et al. (2013). Emergence of network features from multiplexity. Sci. Rep. 3, 1344. doi:10.1038/srep01344

Cavagna, A., Giardina, I., Jelic, A., Melillo, S., Parisi, L., Silvestri, E., et al. (2017). Nonsymmetric interactions trigger collective swings in globally ordered systems. Phys. Rev. Lett. 118, 138003. doi:10.1103/PhysRevLett.118.138003

Cavagna, A., Giardina, I., Mora, T., and Walczak, A. M. (2018). Physical constraints in biological collective behaviour. Curr. Opin. Syst. Biol. 9, 49–54. doi:10.1016/j.coisb.2018.03.002

Chandrasekar, V. K., Sheeba, J. H., Subash, B., Lakshmanan, M., and Kurths, J. (2014). Adaptive coupling induced multi-stable states in complex networks. Phys. D. 267, 36–48. doi:10.1016/j.physd.2013.08.013

Chatelin, S., Constantinesco, A., and Willinger, R. (2010). Fifty years of brain tissue mechanical testing: from in vitro to in vivo investigations. Biorheology 47, 255–276. doi:10.3233/BIR-2010-0576

Chaudhuri, R., Gerçek, B., Pandey, B., Peyrache, A., and Fiete, I. (2019). The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nat. Neurosci. 22, 1512–1520. doi:10.1038/s41593-019-0460-x

Chen, X., and Bialek, W. (2024). Searching for long timescales without fine tuning. Phys. Rev. E 110, 034407. doi:10.1103/PhysRevE.110.034407

Chen, B. L., Hall, D. H., and Chklovskii, D. B. (2006). Wiring optimization can relate neuronal structure and function. Proc. Natl. Acad. Sci. U.S.A. 103, 4723–4728. doi:10.1073/pnas.0506806103

Chistiakova, M., Bannon, N. M., Chen, J. Y., Bazhenov, M., and Volgushev, M. (2015). Homeostatic role of heterosynaptic plasticity: models and experiments. Front. Comput. Neurosci. 9, 89. doi:10.3389/fncom.2015.00089

Chow, C. C., and Karimipanah, Y. (2020). Before and beyond the Wilson–Cowan equations. J. Neurophysiol. 123, 1645–1656. doi:10.1152/jn.00404.2019

Chung, M. K., El-Yaagoubi, A. B., Qiu, A., and Ombao, H. (2025). From density to void: why brain networks fail to reveal complex higher-order structures. arXiv:2503.14700. doi:10.48550/arXiv.2503.14700

Cirelli, C. (2017). Sleep, synaptic homeostasis and neuronal firing rates. Curr. Opin. Neurobiol. 44, 72–79. doi:10.1016/j.conb.2017.03.016

Cohen, R., and Havlin, S. (2010). Complex networks: structure, robustness and function. Cambridge: Cambridge University Press.

Collins, G. P. (2006). Computing with quantum knots. Sci. Am. 294, 56–63. doi:10.1038/scientificamerican0406-56

Csermely, P. (2004). Strong links are important, but weak links stabilize them. Trends biochem. Sci. 29, 331–334. doi:10.1016/j.tibs.2004.05.004

Cugliandolo, L. F., Dean, D. S., and Kurchan, J. (1997). Fluctuation-dissipation theorems and entropy production in relaxational systems. Phys. Rev. Lett. 79, 2168–2171. doi:10.1103/physrevlett.79.2168

Curto, C. (2017). What can topology tell us about the neural code? Bull. New. Ser. Am. Math. Soc. 54, 63–78. doi:10.1090/bull/1554

Curto, C., and Itskov, V. (2008). Cell groups reveal structure of stimulus space. PLoS Comput. Biol. 4, e1000205. doi:10.1371/journal.pcbi.1000205

Curto, C., and Morrison, K. (2019). Relating network connectivity to dynamics: opportunities and challenges for theoretical neuroscience. Curr. Opin. Neurobiol. 58, 11–20. doi:10.1016/j.conb.2019.06.003

Danziger, M. M., Bonamassa, I., Boccaletti, S., and Havlin, S. (2019). Dynamic interdependence and competition in multilayer networks. Nat. Phys. 15, 178–185. doi:10.1038/s41567-018-0343-1

Davis, G. W. (2006). Homeostatic control of neural activity: from phenomenology to molecular design. Annu. Rev. Neurosci. 29, 307–323. doi:10.1146/annurev.neuro.28.061604.135751

Davis, Z. W., Benigno, G. B., Fletterman, C., Desbordes, T., Steward, C., Sejnowski, T. J., et al. (2021). Spontaneous traveling waves naturally emerge from horizontal fiber time delays and travel through locally asynchronous-irregular states. Nat. Commun. 12, 6057. doi:10.1038/s41467-021-26175-1

De Arcangelis, L., Perrone-Capano, C., and Herrmann, H. J. (2006). Self-organized criticality model for brain plasticity. Phys. Rev. Lett. 96, 028107. doi:10.1103/PhysRevLett.96.028107

Deco, G., Jirsa, V. K., McIntosh, A. R., Sporns, O., and Kötter, R. (2009). Key role of coupling, delay, and noise in resting brain fluctuations. Proc. Natl. Acad. Sci. U.S.A. 106, 10302–10307. doi:10.1073/pnas.0901831106

Dehmamy, N., Milanlouei, S., and Barabási, A. L. (2018). A structural transition in physical networks. Nature 563, 676–680. doi:10.1038/s41586-018-0726-6

del Genio, C. I., Gómez-Gardeñes, J., Bonamassa, I., and Boccaletti, S. (2016). Synchronization in networks with multiple interaction layers. Sci. Adv. 2, e1601679. doi:10.1126/sciadv.1601679

Denève, S., and Machens, C. K. (2016). Efficient codes and balanced networks. Nat. Neurosci. 19, 375–382. doi:10.1038/nn.4243

M. De Pittà, and H. Berry (2019). Computational glioscience. Springer series in computational neuroscience (Cham, Switzerland: Springer).

De Pittà, M., and Brunel, N. (2022). Multiple forms of working memory emerge from synapse–astrocyte interactions in a neuron–glia network model. Proc. Natl. Acad. Sci. U.S.A. 119, e2207912119. doi:10.1073/pnas.2207912119

De Pittà, M., Brunel, N., and Volterra, A. (2016). Astrocytes: orchestrating synaptic plasticity? Neuroscience 323, 43–61. doi:10.1016/j.neuroscience.2015.04.001

DeVille, L., and Lerman, E. (2015). Modular dynamical systems on networks. J. Eur. Math. Soc. 17, 2977–3013. doi:10.4171/jems/577

Devinsky, O., Vezzani, A., Najjar, S., De Lanerolle, N. C., and Rogawski, M. A. (2013). Glia and epilepsy: excitability and inflammation. Trends Neurosci. 36, 174–184. doi:10.1016/j.tins.2012.11.008

Dichio, V., and Fallani, F. D. V. (2022). Statistical models of complex brain networks: maximum entropy approach. Rep. Prog. Phys. 86, 102601.

Di Giovanna, A. P., Tibo, A., Silvestri, L., Müllenbroich, M. C., Costantini, I., Allegra Mascaro, A. L., et al. (2018). Whole-brain vasculature reconstruction at the single capillary level. Sci. Rep. 8, 12573. doi:10.1038/s41598-018-30533-3

Dobson, T., Malnič, A., and Marušič, D. (2022). Symmetry in graphs, 198. Cambridge: Cambridge University Press.

De Domenico, M. (2017). Multilayer modeling and analysis of human brain networks. GigaScience 6, 1–8. doi:10.1093/gigascience/gix004

De Domenico, M., Solé-Ribalta, A., Cozzo, E., Kivelä, M., Moreno, Y., Porter, M. A., et al. (2013). Mathematical formulation of multilayer networks. Phys. Rev. X 3, 041022. doi:10.1103/physrevx.3.041022

De Domenico, M., Solé-Ribalta, A., Gómez, S., and Arenas, A. (2014). Navigability of interconnected networks under random failures. Proc. Natl. Acad. Sci. U.S.A. 111, 8351–8356. doi:10.1073/pnas.1318469111

De Domenico, M., Sasai, S., and Arenas, A. (2016). Mapping multiplex hubs in human functional brain networks. Front. Neurosci. 10, 326. doi:10.3389/fnins.2016.00326

Douglas, R. J., and Martin, K. A. (2007). Recurrent neuronal circuits in the neocortex. Curr. Biol. 17, R496–R500. doi:10.1016/j.cub.2007.04.024

Doyle, J. C., and Csete, M. (2011). Architecture, constraints, and behavior. Proc. Natl. Acad. Sci. U.S.A. 108, 15624–15630. doi:10.1073/pnas.1103557108

Echegoyen, I., López-Sanz, D., Maestú, F., and Buldú, J. M. (2021). From single layer to multilayer networks in mild cognitive impairment and Alzheimer’s disease. J. Phys. Complex. 2, 045020. doi:10.1088/2632-072x/ac3ddd

Egolf, D. A. (2000). Equilibrium regained: from nonequilibrium chaos to statistical mechanics. Science 287, 101–104. doi:10.1126/science.287.5450.101

Ernst, U., Pawelzik, K., and Geisel, T. (1995). Synchronization induced by temporal delays in pulse-coupled oscillators. Phys. Rev. Lett. 74, 1570–1573. doi:10.1103/PhysRevLett.74.1570

Estrada, E., Hatano, N., and Benzi, M. (2012). The physics of communicability in complex networks. Phys. Rep. 514, 89–119. doi:10.1016/j.physrep.2012.01.006

Faci-Lázaro, S., Lor, T., Ródenas, G., Mazo, J. J., Soriano, J., and Gómez-Gardeñes, J. (2022). Dynamical robustness of collective neuronal activity upon targeted damage in interdependent networks. Eur. Phys. J. Spec. Top. 231, 195–201. doi:10.1140/epjs/s11734-021-00411-7

Fallenstein, G. T., Hulce, V. D., and Melvin, J. W. (1969). Dynamic mechanical properties of human brain tissue. J. Biomech. 2, 217–226. doi:10.1016/0021-9290(69)90079-7

Fan, T., Lü, L., Shi, D., and Zhou, T. (2021). Characterizing cycle structure in complex networks. Commun. Phys. 4, 272. doi:10.1038/s42005-021-00781-3

Fino, E., and Yuste, R. (2011). Dense inhibitory connectivity in neocortex. Neuron 69, 1188–1203. doi:10.1016/j.neuron.2011.02.025

Fischer, I., Vicente, R., Buldú, J. M., Peil, M., Mirasso, C. R., Torrent, M. C., et al. (2006). Zero-lag long-range synchronization via dynamical relaying. Phys. Rev. Lett. 97, 123902. doi:10.1103/PhysRevLett.97.123902

Fraiman, D., and Chialvo, D. R. (2012). What kind of noise is brain noise: anomalous scaling behavior of the resting brain activity fluctuations. Front. Physiol. 3, 307. doi:10.3389/fphys.2012.00307

Friedman, N., Ito, S., Brinkman, B. A., Shimono, M., DeVille, R. L., Dahmen, K. A., et al. (2012). Universal critical dynamics in high resolution neuronal avalanche data. Phys. Rev. Lett. 108, 208102. doi:10.1103/PhysRevLett.108.208102

Fruchart, M., Hanai, R., Littlewood, P. B., and Vitelli, V. (2021). Non-reciprocal phase transitions. Nature 592, 363–369. doi:10.1038/s41586-021-03375-9

Gabrielli, A., Mastrandrea, R., Caldarelli, G., and Cimini, G. (2019). Grand canonical ensemble of weighted networks. Phys. Rev. E 99, 030301. doi:10.1103/PhysRevE.99.030301

Gallo, L., Lacasa, L., Latora, V., and Battiston, F. (2024). Higher-order correlations reveal complex memory in temporal hypergraphs. Nat. Commun. 15, 4754. doi:10.1038/s41467-024-48578-6

Ganmor, E., Segev, R., and Schneidman, E. (2011). Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proc. Natl. Acad. Sci. U.S.A. 108, 9679–9684. doi:10.1073/pnas.1019641108

Gao, J., Buldyrev, S. V., Havlin, S., and Stanley, H. E. (2011). Robustness of a network of networks. Phys. Rev. Lett. 107, 195701. doi:10.1103/PhysRevLett.107.195701

Gast, R., Solla, S. A., and Kennedy, A. (2024). Neural heterogeneity controls computations in spiking neural networks. Proc. Natl. Acad. Sci. U.S.A. 121, e2311885121. doi:10.1073/pnas.2311885121

Georgi, H. (1993). Effective field theory. Annu. Rev. Nucl. Part. Sci. 43, 209–252. doi:10.1146/annurev.ns.43.120193.001233

Ghavasieh, A., and De Domenico, M. (2022). Statistical physics of network structure and information dynamics. J. Phys. Complex 3, 011001. doi:10.1088/2632-072x/ac457a

Ghavasieh, A., and De Domenico, M. (2024). Diversity of information pathways drives sparsity in real-world networks. Nat. Phys. 20, 512–519. doi:10.1038/s41567-023-02330-x

Ghosh, D., Frasca, M., Rizzo, A., Majhi, S., Rakshit, S., Alfaro-Bittner, K., et al. (2022). The synchronized dynamics of time-varying networks. Phys. Rep. 949, 1–63. doi:10.1016/j.physrep.2021.10.006

Ghoshal, G., Zlatić, V., Caldarelli, G., and Newman, M. E. (2009). Random hypergraphs and their applications. Phys. Rev. E 79, 066118. doi:10.1103/PhysRevE.79.066118

Ghrist, R. (2014). Elementary applied topology (Vol. 1). Seattle: Createspace.

Gidon, A., and Segev, I. (2012). Principles governing the operation of synaptic inhibition in dendrites. Neuron 75, 330–341. doi:10.1016/j.neuron.2012.05.015

Giusti, C., Pastalkova, E., Curto, C., and Itskov, V. (2015). Clique topology reveals intrinsic geometric structure in neural correlations. Proc. Natl. Acad. Sci. U.S.A. 112, 13455–13460. doi:10.1073/pnas.1506407112

Giusti, C., Ghrist, R., and Bassett, D. S. (2016). Two’s company, three (or more) is a simplex: algebraic-topological tools for understanding higher-order structure in neural data. J. Comput. Neurosci. 41, 1–14. doi:10.1007/s10827-016-0608-6

Goaillard, J. M., Moubarak, E., Tapia, M., and Tell, F. (2020). Diversity of axonal and dendritic contributions to neuronal output. Front. Cell. Neurosci. 13, 570. doi:10.3389/fncel.2019.00570

Goldenfeld, N., Martin, O., and Oono, Y. (1989). Intermediate asymptotics and renormalization group theory. J. Sci. Comput. 4, 355–372. doi:10.1007/bf01060993

Gollo, L. L., and Breakspear, M. (2014). The frustrated brain: from dynamics on motifs to communities and networks. Philos. Trans. R. Soc. B 369, 20130532. doi:10.1098/rstb.2013.0532

Golubitsky, M., and Stewart, I. (2002). “Patterns of oscillation in coupled cell systems,” in Geometry, mechanics, and dynamics (New York, NY: Springer), 243–286.

Goodfellow, M., Andrzejak, R. G., Masoller, C., and Lehnertz, K. (2022). What models and tools can contribute to a better understanding of brain activity. Front. Netw. Physiol. 2, 907995. doi:10.3389/fnetp.2022.907995

Goriely, A., Budday, S., and Kuhl, E. (2015). “Neuromechanics: from neurons to brain,” in Advances in applied mechanics. Editors S. P. A. Bordas, and D. S. Balint, 48, 79–139.

Grela, J. (2017). What drives transient behavior in complex systems? Phys. Rev. E 96, 022316. doi:10.1103/PhysRevE.96.022316

Griffith, J. S. (1963). On the stability of brain-like structures. Biophys. J. 3, 299–308. doi:10.1016/s0006-3495(63)86822-8

Gross, T., and Blasius, B. (2008). Adaptive coevolutionary networks: a review. J. R. Soc. Interface 5, 259–271. doi:10.1098/rsif.2007.1229

Gutenkunst, R. N., Waterfall, J. J., Casey, F. P., Brown, K. S., Myers, C. R., and Sethna, J. P. (2007). Universally sloppy parameter sensitivities in systems biology models. PLoS Comput. Biol. 3, 1871–1878. doi:10.1371/journal.pcbi.0030189

Gutiérrez, R., Sendiña-Nadal, I., Zanin, M., Papo, D., and Boccaletti, S. (2012). Targeting the dynamics of complex networks. Sci. Rep. 2, 396. doi:10.1038/srep00396

Gutiérrez, R., Materassi, M., Focardi, S., and Boccaletti, S. (2020). Steering complex networks toward desired dynamics. Sci. Rep. 10, 20744. doi:10.1038/s41598-020-77663-1

Haken, H. (2006a). Brain dynamics: synchronization and activity patterns in pulse-coupled neural nets with delays and noise. Berlin: Springer.

Haken, H. (2006b). Synergetics of brain function. Int. J. Psychophysiol. 60, 110–124. doi:10.1016/j.ijpsycho.2005.12.006

Häusser, M., and Mel, B. (2003). Dendrites: bug or feature? Curr. Opin. Neurobiol. 13, 372–383. doi:10.1016/s0959-4388(03)00075-8

Hebb, D. O. (1949). The organization of behavior. New York: Wiley and Sons.

Herculano-Houzel, S. (2014). The glia/neuron ratio: how it varies uniformly across brain structures and species and what that means for brain physiology and evolution. Glia 62, 1377–1391. doi:10.1002/glia.22683

Hilgetag, C. C., and Zikopoulos, B. (2022). The highways and byways of the brain. PLoS Biol. 20, e3001612. doi:10.1371/journal.pbio.3001612

Hoel, E. P., Albantakis, L., and Tononi, G. (2013). Quantifying causal emergence shows that macro can beat micro. Proc. Natl. Acad. Sci. U.S.A. 110, 19790–19795. doi:10.1073/pnas.1314922110

Hoffman, W. C. (1989). The visual cortex is a contact bundle. Appl. Math. Comput. 32, 137–167. doi:10.1016/0096-3003(89)90091-x

Holme, P. (2015). Modern temporal network theory: a colloquium. Eur. Phys. J. B 88, 234. doi:10.1140/epjb/e2015-60657-4

Holme, P., and Saramäki, J. (2012). Temporal networks. Phys. Rep. 519, 97–125. doi:10.1016/j.physrep.2012.03.001

Huang, X., Xu, K., Chu, C., Jiang, T., and Yu, S. (2017). Weak higher-order interactions in macroscopic functional networks of the resting brain. J. Neurosci. 37, 10481–10497. doi:10.1523/JNEUROSCI.0451-17.2017

Huber, L., Finn, E. S., Chai, Y., Goebel, R., Stirnberg, R., Stöcker, T., et al. (2021). Layer-dependent functional connectivity methods. Prog. Neurobiol. 207, 101835. doi:10.1016/j.pneurobio.2020.101835

Huneman, P. (2010). Topological explanations and robustness in biological sciences. Synthese 177, 213–245. doi:10.1007/s11229-010-9842-z

Hutt, A., Rich, S., Valiante, T. A., and Lefebvre, J. (2023). Intrinsic neural diversity quenches the dynamic volatility of neural networks. Proc. Natl. Acad. Sci. U.S.A. 120, e2218841120. doi:10.1073/pnas.2218841120

Ikeda, K., and Matsumoto, K. (1987). High-dimensional chaotic behavior in systems with time-delayed feedback. Phys. D. 29, 223–235. doi:10.1016/0167-2789(87)90058-3

Illari, P. M., and Williamson, J. (2012). What is a mechanism? Thinking about mechanisms across the sciences. Eur. J. Philos. Sci. 2, 119–135. doi:10.1007/s13194-011-0038-2

Isaacson, J. S., and Scanziani, M. (2011). How inhibition shapes cortical activity. Neuron 72, 231–243. doi:10.1016/j.neuron.2011.09.027

Ito, H. T., and Schuman, E. M. (2008). Frequency-dependent signal transmission and modulation by neuromodulators. Front. Neurosci. 2, 138–144. doi:10.3389/neuro.01.027.2008

Ito, T., Hearne, L., Mill, R., Cocuzza, C., and Cole, M. W. (2020). Discovering the computational relevance of brain network organization. Trends Cogn. Sci. 24, 25–38. doi:10.1016/j.tics.2019.10.005

Jadi, M., Polsky, J., Schiller, J., and Mel, B. W. (2012). Location-dependent effects of inhibition on local spiking in pyramidal neuron dendrites. PLoS Comput. Biol. 8, e1002550. doi:10.1371/journal.pcbi.1002550

Ji, P., Wang, Y., Peron, T., Li, C., Nagler, J., and Du, J. (2023). Structure and function in artificial, zebrafish and human neural networks. Phys. Life Rev. 45, 74–111. doi:10.1016/j.plrev.2023.04.004

Jirsa, V. K., and Kelso, J. A. S. (2000). Spatiotemporal pattern formation in neural systems with heterogeneous connection topologies. Phys. Rev. E 62, 8462–8465. doi:10.1103/physreve.62.8462

Kahle, M. (2014). Topology of random simplicial complexes: a survey. AMS Contemp. Math. 620, 201–222. doi:10.1090/conm/620/12367

Kajiwara, M., Nomura, R., Goetze, F., Kawabata, M., Isomura, Y., Akutsu, T., et al. (2021). Inhibitory neurons exhibit high controlling ability in the cortical microconnectome. PLoS Comput. Biol. 17, e1008846. doi:10.1371/journal.pcbi.1008846

Kanner, S., Goldin, M., Galron, R., Ben Jacob, E., Bonifazi, P., and Barzilai, A. (2018). Astrocytes restore connectivity and synchronization in dysfunctional cerebellar networks. Proc. Natl. Acad. Sci. U.S.A. 115, 8025–8030. doi:10.1073/pnas.1718582115

Katz, P. S., and Edwards, D. H. (1999). “Metamodulation: the control and modulation of neuromodulation,” in Beyond neurotransmission: neuromodulation and its importance for information processing. Editor P. S. Katz (Oxford: Oxford University Press), 349–381.

Kirst, C., Skriabine, S., Vieites-Prado, A., Topilko, T., Bertin, P., Gerschenfeld, G., et al. (2020). Mapping the fine-scale organization and plasticity of the brain vasculature. Cell 180, 780–795. doi:10.1016/j.cell.2020.01.028

Kitano, H. (2002). Computational systems biology. Nature 420, 206–210. doi:10.1038/nature01254

Kitano, H. (2007). Towards a theory of biological robustness. Mol. Syst. Biol. 3, 137. doi:10.1038/msb4100179

Kivelä, M., Arenas, A., Barthelemy, M., Gleeson, J. P., Moreno, Y., and Porter, M. A. (2014). Multilayer networks. J. Complex Netw. 2, 203–271. doi:10.1093/comnet/cnu016

Knoblauch, K., Ercsey-Ravasz, M., Kennedy, H., and Toroczkai, Z. (2016). “The brain in space,” in Micro-, meso-and macro-connectomics of the brain (Cham: Springer), 45–74.

Koch, C., Poggio, T., and Torre, V. (1982). Retinal ganglion cells: a functional interpretation of dendritic morphology. Philos. Trans. R. Soc. B 298, 227–263. doi:10.1098/rstb.1982.0084

Korhonen, O., Zanin, M., and Papo, D. (2021). Principles and open questions in functional brain network reconstruction. Hum. Brain Mapp. 42, 3680–3711. doi:10.1002/hbm.25462

Kostić, D. (2018). Mechanistic and topological explanations: an introduction. Synthese 195, 1–10. doi:10.1007/s11229-016-1257-z

Lambiotte, R., Krapivsky, P. L., Bhat, U., and Redner, S. (2016). Structural transitions in densifying networks. Phys. Rev. Lett. 117, 218301. doi:10.1103/PhysRevLett.117.218301

Lambiotte, R., Rosvall, M., and Scholtes, I. (2019). From networks to optimal higher-order models of complex systems. Nat. Phys. 15, 313–320. doi:10.1038/s41567-019-0459-y

Larkum, M. E., Zhu, J. J., and Sakmann, B. (1999). A new cellular mechanism for coupling inputs arriving at different cortical layers. Nature 398, 338–341. doi:10.1038/18686

Larkum, M. E., Wu, J., Duverdin, S. A., and Gidon, A. (2022). The guide to dendritic spikes of the mammalian cortex in vitro and in vivo. Neuroscience 489, 15–33. doi:10.1016/j.neuroscience.2022.02.009

Lau, K. Y., Ganguli, S., and Tang, C. (2007). Function constrains network architecture and dynamics: a case study on the yeast cell cycle Boolean network. Phys. Rev. E 75, 051907. doi:10.1103/PhysRevE.75.051907

Lenglet, C., Rousson, M., Deriche, R., and Faugeras, O. (2006). Statistics on the manifold of multivariate normal distributions: theory and application to diffusion tensor MRI processing. J. Math. Imaging Vis. 25, 423–444. doi:10.1007/s10851-006-6897-z

Lesne, A. (2008). Robustness: confronting lessons from physics and biology. Biol. Rev. 83, 509–532. doi:10.1111/j.1469-185X.2008.00052.x

Levina, A., Herrmann, J. M., and Geisel, T. (2007). Dynamical synapses causing self-organized criticality in neural networks. Nat. Phys. 3, 857–860. doi:10.1038/nphys758

Levina, A., Herrmann, J. M., and Geisel, T. (2009). Phase transitions towards criticality in a neural system with adaptive interactions. Phys. Rev. Lett. 102, 118110. doi:10.1103/PhysRevLett.102.118110

Levit-Binnun, N., and Golland, Y. (2012). Finding behavioral and network indicators of brain vulnerability. Front. Hum. Neurosci. 6, 10. doi:10.3389/fnhum.2012.00010

Li, A., Cornelius, S. P., Liu, Y. Y., Wang, L., and Barabási, A. L. (2017). The fundamental advantages of temporal networks. Science 358, 1042–1046. doi:10.1126/science.aai7488

Li, W., Li, J., Li, J., Wei, C., Laviv, T., Dong, M., et al. (2024). Boosting neuronal activity-driven mitochondrial DNA transcription improves cognition in aged mice. Science 386, eadp6547. doi:10.1126/science.adp6547

Liao, Z., Terada, S., Raikov, I. G., Hadjiabadi, D., Szoboszlay, M., Soltesz, I., et al. (2024). Inhibitory plasticity supports replay generalization in the hippocampus. Nat. Neurosci. 27, 1987–1998. doi:10.1038/s41593-024-01745-w

Linden, D. J. (1999). The return of the spike: postsynaptic action potentials and the induction of LTP and LTD. Neuron 22, 661–666. doi:10.1016/s0896-6273(00)80726-6

Litwin-Kumar, A., and Turaga, S. C. (2019). Constraining computational models using electron microscopy wiring diagrams. Curr. Opin. Neurobiol. 58, 94–100. doi:10.1016/j.conb.2019.07.007

Liu, Y. Y., and Barabási, A. L. (2016). Control principles of complex systems. Rev. Mod. Phys. 88, 035006. doi:10.1103/revmodphys.88.035006

Liu, Y., Dehmamy, N., and Barabási, A. L. (2021). Isotopy and energy of physical networks. Nat. Phys. 17, 216–222. doi:10.1038/s41567-020-1029-z

Liu, X., Li, D., Ma, M., Szymanski, B. K., Stanley, H. E., and Gao, J. (2022). Network resilience. Phys. Rep. 971, 1–108. doi:10.1016/j.physrep.2022.04.002

Livneh, Y. (2023). FENS-Kavli Network of Excellence: bridging levels and model systems in neuroscience: challenges and solutions. Eur. J. Neurosci. 58, 4460–4465. doi:10.1111/ejn.15990

Loos, S. A., and Klapp, S. H. (2020). Irreversibility, heat and information flows induced by non-reciprocal interactions. New J. Phys. 22, 123051. doi:10.1088/1367-2630/abcc1e

Lovász, L., and Szegedy, B. (2006). Limits of dense graph sequences. J. Comb. Theory Ser. B 96, 933–957. doi:10.1016/j.jctb.2006.05.002

Lubenov, E. V., and Siapas, A. G. (2009). Hippocampal theta oscillations are travelling waves. Nature 459, 534–539. doi:10.1038/nature08010

Ma, W., Trusina, A., El-Samad, H., Lim, W. A., and Tang, C. (2009). Defining network topologies that can achieve biochemical adaptation. Cell 138, 760–773. doi:10.1016/j.cell.2009.06.013

Maass, W., Natschläger, T., and Markram, H. (2002). Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560. doi:10.1162/089976602760407955

Machta, B. B., Chachra, R., Transtrum, M. K., and Sethna, J. P. (2013). Parameter space compression underlies emergent theories and predictive models. Science 342, 604–607. doi:10.1126/science.1238723

Magnasco, M. O., Piro, O., and Cecchi, G. A. (2009). Self-tuned critical anti-Hebbian networks. Phys. Rev. Lett. 102, 258102. doi:10.1103/PhysRevLett.102.258102

Markov, N. T., Misery, P., Falchier, A., Lamy, C., Vezoli, J., Quilodran, R., et al. (2011). Weight consistency specifies regularities of macaque cortical networks. Cereb. Cortex 21, 1254–1272. doi:10.1093/cercor/bhq201

Markov, N. T., Ercsey-Ravasz, M., Van Essen, D. C., Knoblauch, K., Toroczkai, Z., and Kennedy, H. (2013). Cortical high-density counterstream architectures. Science 342, 1238406. doi:10.1126/science.1238406

Marković, D., Mizrahi, A., Querlioz, D., and Grollier, J. (2020). Physics for neuromorphic computing. Nat. Rev. Phys. 2, 499–510. doi:10.1038/s42254-020-0208-2

Markram, H., Lübke, J., Frotscher, M., and Sakmann, B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275, 213–215. doi:10.1126/science.275.5297.213

Markram, H., Toledo-Rodriguez, M., Wang, Y., Gupta, A., Silberberg, G., and Wu, C. (2004). Interneurons of the neocortical inhibitory system. Nat. Rev. Neurosci. 5, 793–807. doi:10.1038/nrn1519

Marr, D. (1971). Simple memory: a theory for archicortex. Philos. Trans. Roy. Soc. Lond. B 262, 23–81. doi:10.1098/rstb.1971.0078

Martens, M., and Winckler, B. (2016). Instability of renormalization. arXiv:1609.04473. doi:10.48550/arXiv.1609.04473

Martinez-Banaclocha, M. (2018). Ephaptic coupling of cortical neurons: possible contribution of astroglial magnetic fields? Neuroscience 370, 37–45. doi:10.1016/j.neuroscience.2017.07.072

Masel, J., and Trotter, M. V. (2010). Robustness and evolvability. Trends Genet. 26, 406–414. doi:10.1016/j.tig.2010.06.002

Masuda, N., Porter, M. A., and Lambiotte, R. (2017). Random walks and diffusion on networks. Phys. Rep. 716, 1–58. doi:10.1016/j.physrep.2017.07.007

Meisel, C., and Gross, T. (2009). Adaptive self-organization in a realistic neural network model. Phys. Rev. E 80, 061917. doi:10.1103/PhysRevE.80.061917

Mendoza-Halliday, D., Major, A. J., Lee, N., Lichtenfeld, M. J., Carlson, B., Mitchell, B., et al. (2024). A ubiquitous spectrolaminar motif of local field potential power across the primate cortex. Nat. Neurosci. 27, 547–560. doi:10.1038/s41593-023-01554-7

Merchan, L., and Nemenman, I. (2016). On the sufficiency of pairwise interactions in maximum entropy models of networks. J. Stat. Phys. 162, 1294–1308. doi:10.1007/s10955-016-1456-5

Meshulam, L., and Bialek, W. (2025). Statistical mechanics for networks of real neurons. Rev. Mod. Phys. In press. doi:10.1103/jcrn-3nrc

Mikaberidze, G., Artime, O., Díaz-Guilera, A., and D’Souza, R. M. (2025). Multiscale field theory for network flows. Phys. Rev. X 15, 021044. doi:10.1103/physrevx.15.021044

Millán, A. P., Torres, J. J., and Bianconi, G. (2020). Explosive higher-order Kuramoto dynamics on simplicial complexes. Phys. Rev. Lett. 124, 218301. doi:10.1103/PhysRevLett.124.218301

Millán, A. P., Gori, G., Battiston, F., Enss, T., and Defenu, N. (2021). Complex networks with tuneable spectral dimension as a universality playground. Phys. Rev. Res. 3, 023015. doi:10.1103/physrevresearch.3.023015

Mongillo, G., Rumpel, S., and Loewenstein, Y. (2018). Inhibitory connectivity defines the realm of excitatory plasticity. Nat. Neurosci. 21, 1463–1470. doi:10.1038/s41593-018-0226-x

Moore, J. J., Genkin, A., Tournoy, M., Pughe-Sanford, J. L., de Ruyter van Steveninck, R. R., and Chklovskii, D. B. (2024). The neuron as a direct data-driven controller. Proc. Natl. Acad. Sci. U.S.A. 121, e2311893121. doi:10.1073/pnas.2311893121

Morone, F., and Makse, H. A. (2019). Symmetry group factorization reveals the structure-function relation in the neural connectome of Caenorhabditis elegans. Nat. Commun. 10, 4961. doi:10.1038/s41467-019-12675-8

Morone, F., Leifer, I., and Makse, H. A. (2020). Fibration symmetries uncover the building blocks of biological networks. Proc. Natl. Acad. Sci. U.S.A. 117, 8306–8314. doi:10.1073/pnas.1914628117

Morrell, M. C., Sederberg, A. J., and Nemenman, I. (2021). Latent dynamical variables produce signatures of spatiotemporal criticality in large biological systems. Phys. Rev. Lett. 126, 118302. doi:10.1103/PhysRevLett.126.118302

Morrison, K., and Curto, C. (2019). “Predicting neural network dynamics via graphical analysis,” in Algebraic and combinatorial computational biology (London, United Kingdom: Academic Press), 241–277.

Morrison, K., Degeratu, A., Itskov, V., and Curto, C. (2024). Diversity of emergent dynamics in competitive threshold-linear networks. SIAM J. Appl. Dyn. Syst. 23, 855–884. doi:10.1137/22m1541666

Muller, L., Piantoni, G., Koller, D., Cash, S. S., Halgren, E., and Sejnowski, T. J. (2016). Rotating waves during human sleep spindles organize global patterns of activity that repeat precisely through the night. eLife 5, e17267. doi:10.7554/eLife.17267

Muolo, R., Gallo, L., Latora, V., Frasca, M., and Carletti, T. (2022). Turing patterns in systems with high-order interactions. Chaos Solit. Fractals 166, 112912. doi:10.1016/j.chaos.2022.112912

Nartallo-Kaluarachchi, R., Asllani, M., Goriely, A., and Lambiotte, R. (2024). Broken detailed balance and entropy production in directed networks. Phys. Rev. E 110, 034313. doi:10.1103/PhysRevE.110.034313

Niedostatek, M., Baptista, A., Yamamoto, J., MacArthur, B., Kurths, J., Garcia, R. S., et al. (2024). Mining higher-order triadic interactions. arXiv:2404.14997. doi:10.48550/arXiv.2404.14997

Nishikawa, T., and Motter, A. E. (2016). Network-complement transitions, symmetries, and cluster synchronization. Chaos 26, 094818. doi:10.1063/1.4960617

Novikov, E., Novikov, A., Shannahoff-Khalsa, D., Schwartz, B., and Wright, J. (1997). Scale-similar activity in the brain. Phys. Rev. E 56, R2387–R2389. doi:10.1103/physreve.56.r2387

Pajevic, S., Plenz, D., Basser, P. J., and Fields, R. D. (2023). Oligodendrocyte-mediated myelin plasticity and its role in neural synchronization. eLife 12, e81982. doi:10.7554/eLife.81982

Pang, J. C., Aquino, K. M., Oldehinkel, M., Robinson, P. A., Fulcher, B. D., Breakspear, M., et al. (2023). Geometric constraints on human brain function. Nature 618, 566–574. doi:10.1038/s41586-023-06098-1

Papo, D. (2014). Functional significance of complex fluctuations in brain activity: from resting state to cognitive neuroscience. Front. Syst. Neurosci. 8, 112. doi:10.3389/fnsys.2014.00112

Papo, D. (2019a). Gauging functional brain activity: from distinguishability to accessibility. Front. Physiol. 10, 509. doi:10.3389/fphys.2019.00509

Papo, D. (2019b). Neurofeedback: principles, appraisal, and outstanding issues. Eur. J. Neurosci. 49, 1454–1469. doi:10.1111/ejn.14312

Papo, D., and Buldú, J. M. (2024). Does the brain behave like a (complex) network? I. Dynamics. Phys. Life Rev. 48, 47–98. doi:10.1016/j.plrev.2023.12.006

Papo, D., and Buldú, J. M. (2025a). A complex network perspective on brain disease. Biol. Rev. (in press).

Papo, D., and Buldú, J. M. (2025b). Brain networkness: from dynamics to function and dysfunction. Reply to comments on: “Does the brain behave like a (complex) network? I. Dynamics”. Phys. Life Rev. 55, 41–45. doi:10.1016/j.plrev.2025.08.001

Papo, D., and Buldú, J. M. (2025c). Does the brain behave like a (complex) network? II. Function. (In preparation).

Papo, D., Zanin, M., Pineda, J. A., Boccaletti, S., and Buldú, J. M. (2014). Functional brain networks: great expectations, hard times and the big leap forward. Philos. Trans. R. Soc. B 369, 20130525. doi:10.1098/rstb.2013.0525

Peixoto, T. P., and Rosvall, M. (2017). Modelling sequences and temporal networks with dynamic community structures. Nat. Commun. 8, 582. doi:10.1038/s41467-017-00148-9

Pennec, X. (2006). Intrinsic statistics on Riemannian manifolds: basic tools for geometric measurements. J. Math. Imaging Vis. 25, 127–154. doi:10.1007/s10851-006-6228-4

Perea, G., Navarrete, M., and Araque, A. (2009). Tripartite synapses: astrocytes process and control synaptic information. Trends Neurosci. 32, 421–431. doi:10.1016/j.tins.2009.05.001

Pernice, V., Staude, B., Cardanobile, S., and Rotter, S. (2011). How structure determines correlations in neuronal networks. PLoS Comput. Biol. 7, e1002059. doi:10.1371/journal.pcbi.1002059

Pernice, V., Deger, M., Cardanobile, S., and Rotter, S. (2013). The relevance of network micro-structure for neural dynamics. Front. Comput. Neurosci. 7, 72. doi:10.3389/fncom.2013.00072

Petkoski, S., and Jirsa, V. K. (2019). Transmission time delays organize the brain network synchronization. Philos. Trans. R. Soc. A 377, 20180132. doi:10.1098/rsta.2018.0132

Petkoski, S., and Jirsa, V. K. (2022). Normalizing the brain connectome for communication through synchronization. Netw. Neurosci. 6, 722–744. doi:10.1162/netn_a_00231

Petkoski, S., Spiegler, A., Proix, T., Aram, P., Temprado, J.-J., and Jirsa, V. K. (2016). Heterogeneity of time delays determines synchronization of coupled oscillators. Phys. Rev. E 94, 012209. doi:10.1103/PhysRevE.94.012209

Pinder, I., Nelson, M. R., and Crofts, J. J. (2024). Network structure and time delays shape synchronization patterns in brain network models. Chaos 34, 123123. doi:10.1063/5.0228813

Pollner, P., Palla, G., and Vicsek, T. (2006). Preferential attachment of communities: the same principle, but a higher level. Europhys. Lett. 73, 478–484. doi:10.1209/epl/i2005-10414-6

Pozo, K., and Goda, Y. (2010). Unraveling mechanisms of homeostatic synaptic plasticity. Neuron 66, 337–351. doi:10.1016/j.neuron.2010.04.028

Presigny, C., and De Vico Fallani, F. (2022). Colloquium: multiscale modeling of brain network organization. Rev. Mod. Phys. 94, 031002. doi:10.1103/revmodphys.94.031002

Pretel, J., Buendía, V., Torres, J. J., and Muñoz, M. A. (2024). From asynchronous states to Griffiths phases and back: structural heterogeneity and homeostasis in excitatory-inhibitory networks. Phys. Rev. Res. 6, 023018. doi:10.1103/PhysRevResearch.6.023018

Pribram, K. H. (1999). Quantum holography: is it relevant to brain function? Inf. Sci. 115, 97–102. doi:10.1016/s0020-0255(98)10082-8

Pulvermüller, F., Tomasello, R., Henningsen-Schomers, M. R., and Wennekers, T. (2021). Biological constraints on neural network models of cognitive function. Nat. Rev. Neurosci. 22, 488–502. doi:10.1038/s41583-021-00473-5

Radicchi, F., Krioukov, D., Hartle, H., and Bianconi, G. (2020). Classical information theory of networks. J. Phys. Complex. 1, 025001. doi:10.1088/2632-072x/ab9447

Radulescu, E., Vértes, P. E., Han, S., Hyde, T. M., Kleinman, J. E., Bullmore, E. T., et al. (2025). Genes with disrupted connectivity in the architecture of schizophrenia gene co-expression networks highlight atypical neuronal-glial interactions. bioRxiv 2025-02. doi:10.1101/2025.02.03.635993

Ramón y Cajal, S. (1909). Histologie du système nerveux de l’homme et des vertébrés. Paris: Maloine.

Rasetti, M., and Merelli, E. (2015). The topological field theory of data: a program towards a novel strategy for data mining through data language. J. Phys. Conf. Ser. 626, 012005. doi:10.1088/1742-6596/626/1/012005

Recanatesi, S., Ocker, G. K., Buice, M. A., and Shea-Brown, E. (2019). Dimensionality in recurrent spiking networks: global trends in activity and local origins in connectivity. PLoS Comput. Biol. 15, e1006446. doi:10.1371/journal.pcbi.1006446

Reimann, M. W., Nolte, M., Scolamiero, M., Turner, K., Perin, R., Chindemi, G., et al. (2017). Cliques of neurons bound into cavities provide a missing link between structure and function. Front. Comput. Neurosci. 11, 48. doi:10.3389/fncom.2017.00048

Renart, A., De La Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., et al. (2010). The asynchronous state in cortical circuits. Science 327, 587–590. doi:10.1126/science.1179850

Roberts, J. A., Gollo, L. L., Abeysuriya, R. G., Roberts, G., Mitchell, P. B., Woolrich, M. W., et al. (2019). Metastable brain waves. Nat. Commun. 10, 1056. doi:10.1038/s41467-019-08999-0

Robiglio, T., Neri, M., Coppes, D., Agostinelli, C., Battiston, F., Lucas, M., et al. (2025). Synergistic signatures of group mechanisms in higher-order systems. Phys. Rev. Lett. 134, 137401. doi:10.1103/PhysRevLett.134.137401

Rosas, F. E., Mediano, P. A. M., Luppi, A. I., Varley, T. F., Lizier, J. T., Stramaglia, S., et al. (2022). Disentangling high-order mechanisms and high-order behaviours in complex systems. Nat. Phys. 18, 476–477. doi:10.1038/s41567-022-01548-5

Rosen, B. Q., and Halgren, E. (2022). An estimation of the absolute number of axons indicates that human cortical areas are sparsely connected. PLoS Biol. 20, e3001575. doi:10.1371/journal.pbio.3001575

Rosvall, M., Esquivel, A. V., Lancichinetti, A., West, J. D., and Lambiotte, R. (2014). Memory in network flows and its effects on spreading dynamics and community detection. Nat. Commun. 5, 4630. doi:10.1038/ncomms5630

Rubinov, M., Sporns, O., Thivierge, J. P., and Breakspear, M. (2011). Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons. PLoS Comput. Biol. 7, e1002038. doi:10.1371/journal.pcbi.1002038

Saberi, M., Khosrowabadi, R., Khatibi, A., Misic, B., and Jafari, G. (2022). Pattern of frustration formation in the functional brain network. Netw. Neurosci. 6, 1334–1356. doi:10.1162/netn_a_00268

Sakurai, A., and Katz, P. S. (2009). State-, timing- and pattern-dependent neuromodulation of synaptic strength by a serotonergic interneuron. J. Neurosci. 29, 268–279. doi:10.1523/JNEUROSCI.4456-08.2009

Santos, F. A. N., and Coutinho-Filho, M. D. (2009). Topology, symmetry, phase transitions, and noncollinear spin structures. Phys. Rev. E 80, 031123. doi:10.1103/PhysRevE.80.031123

Santos, F. A. N., Raposo, E. P., Coutinho-Filho, M. D., Copelli, M., Stam, C. J., and Douw, L. (2019). Topological phase transitions in functional brain networks. Phys. Rev. E 100, 032414. doi:10.1103/PhysRevE.100.032414

Savtchouk, I., and Volterra, A. (2018). Gliotransmission: beyond black-and-white. J. Neurosci. 38, 14–25. doi:10.1523/JNEUROSCI.0017-17.2017

Schneidman, E., Berry, M. J., Segev, R., and Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440, 1007–1012. doi:10.1038/nature04701

Schöll, E., and Schuster, H. G. (2008). Handbook of chaos control. 2nd edition. Weinheim, Germany: Wiley.

Scholtes, I., Wider, N., Pfitzner, R., Garas, A., Tessone, C. J., and Schweitzer, F. (2014). Causality-driven slow-down and speed-up of diffusion in non-Markovian temporal networks. Nat. Commun. 5, 5024. doi:10.1038/ncomms6024

Scholtes, I., Wider, N., and Garas, A. (2016). Higher-order aggregate networks in the analysis of temporal networks: path structures and centralities. Eur. Phys. J. B 89, 61. doi:10.1140/epjb/e2016-60663-0

Schwarze, A. C., Jiang, J., Wray, J., and Porter, M. A. (2024). Structural robustness and vulnerability of networks. arXiv:2409.07498. doi:10.48550/arXiv.2409.07498

Shine, J. M., Breakspear, M., Bell, P. T., Ehgoetz Martens, K. A., Shine, R., Koyejo, O., et al. (2019). Human cognition involves the dynamic integration of neural activity and neuromodulatory systems. Nat. Neurosci. 22, 289–296. doi:10.1038/s41593-018-0312-0

Shine, J. M., Müller, E. J., Munn, B., Cabral, J., Moran, R. J., and Breakspear, M. (2021). Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics. Nat. Neurosci. 24, 765–776. doi:10.1038/s41593-021-00824-6

Singer, W. (2021). Recurrent dynamics in the cerebral cortex: integration of sensory evidence with stored knowledge. Proc. Natl. Acad. Sci. U.S.A. 118, e2101043118. doi:10.1073/pnas.2101043118

Sjöström, P. J., Rancz, E. A., Roth, A., and Häusser, M. (2008). Dendritic excitability and synaptic plasticity. Physiol. Rev. 88, 769–840. doi:10.1152/physrev.00016.2007

Skardal, P. S., and Arenas, A. (2020). Memory selection and information switching in oscillator networks with higher-order interactions. J. Phys. Complex 2, 015003. doi:10.1088/2632-072x/abbd4c

Skardal, P. S., Taylor, D., and Restrepo, J. G. (2014). Complex macroscopic behavior in systems of phase oscillators with adaptive coupling. Phys. D. 267, 27–35. doi:10.1016/j.physd.2013.01.012

Solé, R. V., and Valverde, S. (2020). Evolving complexity: how tinkering shapes cells, software and ecological networks. Philos. Trans. R. Soc. B 375, 20190325. doi:10.1098/rstb.2019.0325

Solé, R., Moses, M., and Forrest, S. (2019). Liquid brains, solid brains. Philos. Trans. R. Soc. B 374, 20190040. doi:10.1098/rstb.2019.0040

Spead, O., and Poulain, F. E. (2020). Trans-axonal signaling in neural circuit wiring. Int. J. Mol. Sci. 21, 5170. doi:10.3390/ijms21145170

Stadler, P. F., and Stadler, B. M. (2006). Genotype-phenotype maps. Biol. Theory 1, 268–279. doi:10.1162/biot.2006.1.3.268

Sterling, P., and Laughlin, S. (2015). Principles of neural design. Cambridge, Massachusetts: MIT Press.

Stuart, G., Spruston, N., Sakmann, B., and Häusser, M. (1997). Action potential initiation and backpropagation in neurons of the mammalian CNS. Trends Neurosci. 20, 125–131. doi:10.1016/s0166-2236(96)10075-8

Sukenik, N., Vinogradov, O., Weinreb, E., Segal, M., Levina, A., and Moses, E. (2021). Neuronal circuits overcome imbalance in excitation and inhibition by adjusting connection numbers. Proc. Natl. Acad. Sci. U.S.A. 118, e2018459118. doi:10.1073/pnas.2018459118

Sun, H., Radicchi, F., Kurths, J., and Bianconi, G. (2023). The dynamic nature of percolation on networks with triadic interactions. Nat. Commun. 14, 1308. doi:10.1038/s41467-023-37019-5

Talidou, A., Frankland, P. W., Mabbott, D., and Lefebvre, J. (2022). Homeostatic coordination and up-regulation of neural activity by activity-dependent myelination. Nat. Comput. Sci. 2, 665–676. doi:10.1038/s43588-022-00315-z

Tanaka, G., Yamane, T., Héroux, J. B., Nakane, R., Kanazawa, N., Takeda, S., et al. (2019). Recent advances in physical reservoir computing: a review. Neural Netw. 115, 100–123. doi:10.1016/j.neunet.2019.03.005

Tetzlaff, T., Helias, M., Einevoll, G. T., and Diesmann, M. (2012). Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput. Biol. 8, e1002596. doi:10.1371/journal.pcbi.1002596

Thomas, R. (1981). On the relation between the logical structure of systems and their ability to generate multiple steady states and sustained oscillations. Ser. Synerg. 9, 180–193. doi:10.1007/978-3-642-81703-8_24

Thouless, D. J., Kohmoto, M., Nightingale, M. P., and den Nijs, M. (1982). Quantized Hall conductance in a two-dimensional periodic potential. Phys. Rev. Lett. 49, 405–408. doi:10.1103/physrevlett.49.405

Tozzi, A., and Papo, D. (2020). Projective mechanisms subtending real world phenomena wipe away cause effect relationships. Prog. Biophys. Mol. Biol. 151, 1–13. doi:10.1016/j.pbiomolbio.2019.12.002

Treves, A., and Rolls, E. T. (1992). Computational constraints suggest the need for two distinct input systems to the hippocampal CA3 network. Hippocampus 2, 189–199. doi:10.1002/hipo.450020209

Tsodyks, M., Pawelzik, K., and Markram, H. (1998). Neural networks with dynamic synapses. Neural Comput. 10, 821–835. doi:10.1162/089976698300017502

Turkheimer, F., Hancock, F., Rosas, F., Vernon, A., Srivastava, D., and Luppi, A. (2025). The neurobiological complexity of brain dynamics: the neurogliovascular unit and its relevance for psychiatry. doi:10.20944/preprints202502.0149.v1

Turrigiano, G. G., Leslie, K. R., Desai, N. S., Rutherford, L. C., and Nelson, S. B. (1998). Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature 391, 892–896. doi:10.1038/36103

Vancura, B., Geiller, T., Grosmark, A., Zhao, V., and Losonczy, A. (2023). Inhibitory control of sharp-wave ripple duration during learning in hippocampal recurrent networks. Nat. Neurosci. 26, 788–797. doi:10.1038/s41593-023-01306-7

Van Fraassen, B. C. (1977). The pragmatics of explanation. Am. Philos. Q. 14, 143–150.

Vannimenus, J., and Toulouse, G. (1977). Theory of the frustration effect. II. Ising spins on a square lattice. J. Phys. C 10, L537–L542. doi:10.1088/0022-3719/10/18/008

Varley, T. F. (2022). Flickering emergences: the question of locality in information-theoretic approaches to emergence. Entropy 25, 54. doi:10.3390/e25010054

Verkhratsky, A., and Nedergaard, M. (2018). Physiology of astroglia. Physiol. Rev. 98, 239–389. doi:10.1152/physrev.00042.2016

Vetter, P., Roth, A., and Häusser, M. (2001). Propagation of action potentials in dendrites depends on dendritic morphology. J. Neurophysiol. 85, 926–937. doi:10.1152/jn.2001.85.2.926

Vierling-Claassen, D., Cardin, J. A., Moore, C. I., and Jones, S. R. (2010). Computational modeling of distinct neocortical oscillations driven by cell-type selective optogenetic drive: separable resonant circuits controlled by low-threshold spiking and fast-spiking interneurons. Front. Hum. Neurosci. 4, 198. doi:10.3389/fnhum.2010.00198

Vogels, T. P., Sprekeler, H., Zenke, F., Clopath, C., and Gerstner, W. (2011). Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334, 1569–1573. doi:10.1126/science.1211095

Voges, N., and Perrinet, L. (2010). Phase space analysis of networks based on biologically realistic parameters. J. Physiol. Paris 104, 51–60. doi:10.1016/j.jphysparis.2009.11.004

van Vreeswijk, C., and Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724–1726. doi:10.1126/science.274.5293.1724

van Vreeswijk, C., and Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Comput. 10, 1321–1371. doi:10.1162/089976698300017214

van Vreeswijk, C., Abbott, L. F., and Ermentrout, G. B. (1994). When inhibition not excitation synchronizes neural firing. J. Comput. Neurosci. 1, 313–321. doi:10.1007/BF00961879

Wagner, A. (2008). Robustness and evolvability: a paradox resolved. Proc. R. Soc. B Biol. 275, 91–100. doi:10.1098/rspb.2007.1137

Wang, X. J., and Kennedy, H. (2016). Brain structure and dynamics across scales: in search of rules. Curr. Opin. Neurobiol. 37, 92–98. doi:10.1016/j.conb.2015.12.010

Wang, X. J., and Yang, G. R. (2018). A disinhibitory circuit motif and flexible information routing in the brain. Curr. Opin. Neurobiol. 49, 75–83. doi:10.1016/j.conb.2018.01.002

Wen, W., and Turrigiano, G. G. (2024). Keeping your brain in balance: homeostatic regulation of network function. Annu. Rev. Neurosci. 47, 41–61. doi:10.1146/annurev-neuro-092523-110001

Whitacre, J. M. (2012). Biological robustness: paradigms, mechanisms, and systems principles. Front. Genet. 3, 67. doi:10.3389/fgene.2012.00067

Whitacre, J. M., and Bender, A. (2010). Degeneracy: a design principle for achieving robustness and evolvability. J. Theor. Biol. 263, 143–153. doi:10.1016/j.jtbi.2009.11.008

Wigström, H., and Gustafsson, B. (1985). Facilitation of hippocampal long-lasting potentiation by GABA antagonists. Acta Physiol. Scand. 125, 159–172. doi:10.1111/j.1748-1716.1985.tb07703.x

Williams, N., Ojanpera, A., Siebenhuhner, F., Toselli, B., Palva, S., Arnulfo, G., et al. (2023). The influence of inter-regional delays in generating large-scale brain networks of phase synchronization. Neuroimage 279, 120318. doi:10.1016/j.neuroimage.2023.120318

Williamson, R. C., Cowley, B. R., Litwin-Kumar, A., Doiron, B., Kohn, A., Smith, M. A., et al. (2016). Scaling properties of dimensionality reduction for neural populations and network models. PLoS Comput. Biol. 12, e1005141. doi:10.1371/journal.pcbi.1005141

Wilting, J., and Priesemann, V. (2018). Inferring collective dynamical states from widely unobserved systems. Nat. Commun. 9, 2325. doi:10.1038/s41467-018-04725-4

Witthaut, D., Rohden, M., Zhang, X., Hallerberg, S., and Timme, M. (2016). Critical links and nonlocal rerouting in complex supply networks. Phys. Rev. Lett. 116, 138701. doi:10.1103/PhysRevLett.116.138701

Wolpert, D. H., and Macready, W. (2000). “Self-dissimilarity: an empirically observable complexity measure,” in Unifying themes in complex systems. Editor Y. Bar-Yam (Boca Raton: New England Complex Systems Institute), 626–643.

Yeh, F. C., Tang, A., Hobbs, J. P., Hottowy, P., Dabrowski, W., Sher, A., et al. (2010). Maximum entropy approaches to living neural networks. Entropy 12, 89–106. doi:10.3390/e12010089

Yoshimura, Y., and Callaway, E. (2005). Fine-scale specificity of cortical networks depends on inhibitory cell type and connectivity. Nat. Neurosci. 8, 1552–1559. doi:10.1038/nn1565

Zanin, M. (2015). Can we neglect the multi-layer structure of functional networks? Phys. A 430, 184–192. doi:10.1016/j.physa.2015.02.099

Zanin, M., Ivanoska, I., Güntekin, B., Yener, G., Loncar-Turukalo, T., Jakovljevic, N., et al. (2021). A fast transform for brain connectivity difference evaluation. Neuroinformatics 20, 285–299. doi:10.1007/s12021-021-09518-7

Zanin, M., Güntekin, B., Aktürk, T., Yıldırım, E., Yener, G., Kiyi, I., et al. (2022). Telling functional networks apart using ranked network features stability. Sci. Rep. 12, 2562. doi:10.1038/s41598-022-06497-w

Zañudo, J. G., and Albert, R. (2013). An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks. Chaos 23, 025111. doi:10.1063/1.4809777

Zenke, F., Gerstner, W., and Ganguli, S. (2017). The temporal paradox of Hebbian learning and homeostatic plasticity. Curr. Opin. Neurobiol. 43, 166–176. doi:10.1016/j.conb.2017.03.015

Zhang, Y., Garas, A., and Scholtes, I. (2021). Higher-order models capture changes in controllability of temporal networks. J. Phys. Complex 2, 015007. doi:10.1088/2632-072x/abcc05

Zierenberg, J., Wilting, J., and Priesemann, V. (2018). Homeostatic plasticity and external input shape neural network dynamics. Phys. Rev. X 8, 031018. doi:10.1103/PhysRevX.8.031018

Keywords: brain dynamics, brain topology, functional networks, hypergraphs, intrinsic properties, multilayer networks, multiplex networks, renormalisation group

Citation: Papo D and Buldú JM (2025) Biological detail and graph structure in network neuroscience. Front. Netw. Physiol. 5:1667656. doi: 10.3389/fnetp.2025.1667656

Received: 16 July 2025; Accepted: 15 September 2025;
Published: 03 October 2025.

Edited by:

Nishant Sinha, University of Pennsylvania, United States

Reviewed by:

Rachel Nicks, University of Nottingham, United Kingdom
Chris Kang, University of Calgary, Canada

Copyright © 2025 Papo and Buldú. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: David Papo, david.papo@iit.it

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.