
ORIGINAL RESEARCH article

Front. Netw. Physiol., 08 August 2025

Sec. Networks of Dynamical Systems

Volume 5 - 2025 | https://doi.org/10.3389/fnetp.2025.1589566

This article is part of the Research Topic: Self-Organization of Complex Physiological Networks: Synergetic Principles and Applications — In Memory of Hermann Haken

On the robustness of the emergent spatiotemporal dynamics in biophysically realistic and phenomenological whole-brain models at multiple network resolutions

Cristiana Dimulescu1,2, Ronja Strömsdörfer1,3*, Agnes Flöel4,5, Klaus Obermayer1,2,3
  • 1Neural Information Processing Group, Fakultät IV, Technische Universität Berlin, Berlin, Germany
  • 2Bernstein Center for Computational Neuroscience, Berlin, Germany
  • 3Einstein Center for Neuroscience, Berlin, Germany
  • 4Department of Neurology, University Medicine, Greifswald, Germany
  • 5German Center for Neurodegenerative Diseases, Greifswald, Germany

The human brain is a complex dynamical system which displays a wide range of macroscopic and mesoscopic patterns of neural activity, whose mechanistic origin remains poorly understood. Whole-brain modelling allows us to explore candidate mechanisms causing the observed patterns. However, it is not fully established how the choice of model type and the networks’ spatial resolution influence the simulation results; hence, it remains unclear to what extent conclusions drawn from these results are limited by modelling artefacts. Here, we compare the dynamics of a biophysically realistic, linear-nonlinear cascade model of whole-brain activity with a phenomenological Wilson-Cowan model using three structural connectomes based on the Schaefer parcellation scheme with 100, 200, and 500 nodes. Both neural mass models implement the same mechanistic hypotheses, which specifically address the interaction between excitation, inhibition, and a slow adaptation current which affects the excitatory populations. We quantify the emerging dynamical states in detail and investigate how consistent results are across the different model variants. Then we apply both model types to the specific phenomenon of slow oscillations, which are a prevalent brain rhythm during deep sleep. We investigate the consistency of model predictions when exploring specific mechanistic hypotheses about the effects of both short- and long-range connections and of the antero-posterior structural connectivity gradient on key properties of these oscillations. Overall, our results demonstrate that the coarse-grained dynamics is robust to changes in both model type and network resolution. In some cases, however, model predictions do not generalize. Thus, some care must be taken when interpreting model results.

1 Introduction

The human brain is a complex dynamical system. It exhibits a rich variety of spatiotemporally organized activity, where different patterns correspond to different functionalities and mechanisms in human cognitive processes. Rasch and Born (2013) state that slow oscillations (SOs), which travel as plane waves in an anterior-posterior direction (Massimini et al., 2004), play a crucial role in memory consolidation during non-rapid eye movement (non-REM) sleep, and Muller et al. (2016) identified a dominant rotational temporal-parietal-frontal directionality of spindle oscillations that accompany SOs. Beyond spatiotemporal patterns during sleep, Das et al. (2024) showed that spatial modes that regulate plane waves are absent in navigational memory tasks in humans, while in verbal memory tasks they observed different clusters of traveling waves depending on the letters that appear in words. Hence, an indicator of the functionality of a rhythm is its spatiotemporal organization (see further Breakspear et al. (2003); Mohan et al. (2024)). While reductionist approaches to the temporal dynamics of activity patterns have been widely researched to understand the functionality of the more local dynamics in the brain, most recently, neuroscientific research has shown an increasing interest in also identifying the spatial dynamics, especially on a larger scale (see Pessoa, 2022; Sporns, 2022).

In-silico methods can support these investigations by computational modeling of specific brain activity for the evaluation of candidate mechanisms, as well as supporting clinical studies using personalized whole-brain models such as the Virtual Brain Twins (see Hashemi et al., 2025; Jirsa et al., 2023; Wang et al., 2024). Fousek et al. (2024) introduced a principled framework which provides a mechanistic description for resting state activity, suggesting a fundamental pathway for the generalisation of large-scale models. Additionally, in silico methods have been applied to surface EEG measurements (Sanchez-Vives et al., 2017; Cakan et al., 2022), and to intracranially recorded activity in humans (Deco et al., 2017; Das et al., 2024; Mohan et al., 2024; Muller et al., 2016), rodents (see Bhattacharya et al., 2022; Liang et al., 2023; Dasilva et al., 2021), and other species (Muller et al., 2014). Intracranial recording methods measure activity at higher spatial and temporal resolutions; hence, in silico methods require an adjustment towards spatially denser models. On a smaller scale (i.e., not the whole brain), Capone et al. (2023) showed that different granularity of the recorded space changed the measured density of SO wave velocity in mice, where faster waves were missed at lower spatial resolutions. On a larger scale, Popovych et al. (2021) found that the fit of simulated activity to empirical functional connectivity depends both on parcellation schemes and spatial resolution, and Proix et al. (2016) showed that the parcellation size affects the dynamics of a whole-brain model, although it was challenging to identify a consistent type of change.

Key to the emergence of different types of spatiotemporal patterns is the dynamical landscape of a computational model, which can be decomposed into different regions of interest according to the different types of stability a dynamical system experiences. Sanchez-Vives et al. (2017) showed that bistability is required for the organization of neocortical SOs both in silico and empirically. Cakan et al. (2022) identified a temporal destabilization of a stable high-activity state (up state) by a fatigue mechanism (spike-frequency adaptation) for transitioning into a low-activity state (down state), which is interrupted by noise, such that activity ultimately alternates at a low frequency (<2 Hz). These SO wavefronts propagate as global plane waves. For the formation of more complex patterns, the presence of multi- or metastability is required (see Kelso, 2012). These types of stability have been shown to play a crucial role in enabling elaborate spatiotemporal organizations in computational models (see Roberts et al. (2019); Kelso (2012)) with hallmarks of them being present in the human brain (see Freyer et al. (2009); Freyer et al. (2011)).

Different types of instability can also enable the formation of complex local patterns. Townsend and Gong (2018) applied methods from the analysis of turbulent flows to determine velocity vector fields over empirically recorded brain activity of mice. In those velocity vector fields, outward (sources) or inward (sinks) rotational waves emerge from unstable or stable foci, respectively. Analogously, for empirical data from humans, Das et al. (2024) investigated the organization of sinks and sources and their role for different memory tasks, showing that more sources were detected in spatial tasks and more sinks in verbal memory tasks. Deco et al. (2021a) found that large-scale models with regional heterogeneity of excitatory-inhibitory receptors are capable of reproducing the spatiotemporal structure of empirical functional connectivity reliably. Along the line of diversifying connectivity profiles (but without the inclusion of fine grained receptor-dynamics) Breakspear et al. (2003) emphasized the importance of balance between local short-range versus long-range connections1 for the transition from independent, locally appearing oscillations to chaotic synchronization to global patterns. Liang et al. (2023) supported this observation when investigating the spatiotemporal patterns in awake and anesthetized rodents. They not only emphasized the presence of complex local patterns during wakefulness but also showed, with computational modeling, that the coherence in low frequency bands is enhanced by stronger long-range connections between cortical areas further apart. Information processing has also been shown to be crucially affected by long-range connections by Deco et al. (2021b), where the authors compared two whole-brain models, one with connections which exponentially decayed with distance and one with additional sparse long-range connections that deviated from that rule. They investigated complex brain activity that is functionally beneficial for the transmission of information between cortical regions and found the information cascade, i.e., the flow of brain activity across different spatial scales, to be significantly improved by the presence of these long-range connections. Studies such as the above, where brain activity is simulated with networks equipped with empirically informed structure, have been shown to reliably predict empirically observed patterns. Cakan et al. (2022), for example, showed that the observed direction of SOs can be explained by the antero-posterior structural connectivity gradient that decreases in connectivity strength from the anterior to the posterior direction.

Given the large number of computational modeling studies which investigate the spatiotemporal organization of neural activity on larger scales, we are left with the question of how far results generalize across the different whole-brain modeling approaches. Here, we specifically investigate whether, and how strongly, the specific choice of the dynamical system and of the spatial resolution changes the observed patterns, and how the connectivity profiles affect the emergent dynamics beyond empirically observed variability. We compare the emergent dynamics of whole-brain models based on the biophysically realistic adaptive linear-nonlinear cascade (aLN) model (Augustin et al., 2017; Cakan and Obermayer, 2020; Cakan et al., 2022) and the phenomenological Wilson-Cowan model (Wilson and Cowan, 1972), both equipped with spike-frequency adaptation as a fatigue mechanism. To identify the role of spatial density in the models, we show the results for three network parcellations based on the Schaefer local-global parcellation schemes (Schaefer et al., 2018) with 100, 200, and 500 nodes. We find that the coarse-grained dynamical landscape remains robust across models and network resolutions. However, results may not generalize when exploring specific dynamical states.

2 Materials and methods

2.1 Data

2.1.1 Participants

We used diffusion tensor imaging (DTI) data and anatomical T1 scans which were acquired at the Universitätsmedizin Greifswald from 27 participants (15 females; age range = 50–78 years, mean age = 63.55 years). Prior to participating in the study, all participants gave a written informed consent and were subsequently reimbursed for participation. The study was approved by the local ethics committee at the Universitätsmedizin Greifswald and was conducted in accordance with the Declaration of Helsinki.

2.1.2 Data acquisition and preprocessing

The acquisition parameters and preprocessing of the DTI and anatomical T1 scans were identical to those described in Cakan et al. (2022).

We defined the anatomical regions according to the Schaefer cortical parcellation scheme (Schaefer et al., 2018) with 100, 200, and 500 nodes, respectively. We employed the same probabilistic tractography algorithm with 5,000 randomly sampled streamlines per voxel, which yielded one structural connectivity matrix and one fiber length matrix per participant. One participant was excluded because the tractography procedure at the highest network resolution failed. Following probabilistic tractography, we normalized the resulting structural connectivity matrix for each participant by dividing the connection probability Cij from seed region i to target region j by 5,000 (number of streamlines per voxel) x n (number of voxels in the seed region i). As probabilistic tractography contains no directional information, we estimated Cij by averaging the connection probabilities from i to j and j to i (Cabral et al., 2012).
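For illustration, the normalization and symmetrization step can be written compactly as follows. This is a minimal numpy sketch; the function and variable names are ours and not part of the study's pipeline, and zeroing the diagonal (no self-connections) is an assumption.

```python
import numpy as np

def normalize_connectome(counts, n_voxels, n_streamlines=5000):
    """Normalize and symmetrize a probabilistic-tractography connectome.

    counts[i, j] : streamline count from seed region i to target region j (numpy array)
    n_voxels[i]  : number of voxels in seed region i (numpy array)
    """
    # divide row i by (streamlines per voxel) x (number of voxels in seed region i)
    C = counts / (n_streamlines * n_voxels[:, None])
    # tractography carries no directional information: average both directions
    C = 0.5 * (C + C.T)
    np.fill_diagonal(C, 0.0)  # assumption: remove self-connections
    return C
```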

In addition to the individual connectomes, we constructed average structural connectivity matrices C and average fiber length matrices D for each parcellation.

2.2 Whole-brain network models

We used whole-brain networks that consist of N ∈ {100, 200, 500} nodes following the parcellation schemes described in Section 2.1.2. Each node represents a brain region and consists of an excitatory (E) and an inhibitory (I) population of model neurons. The nodes are connected by edges with the connection strengths given by the connectivity matrices C. Each excitatory population is equipped with an activity-dependent adaptation mechanism (A) that acts as a hyperpolarising feedback current.

2.2.1 The aLN model

The adaptive linear-nonlinear (aLN) model is a mean-field neural mass model of a network of coupled adaptive exponential integrate-and-fire (AdEx) neurons. It was developed in Augustin et al. (2017) and validated against simulations of spiking neural networks in Cakan and Obermayer (2020). We used the neurolib framework introduced in Cakan et al. (2021) for the numerical simulations. The dynamics of each node (Cakan et al., 2022) is summarized by the following equations:

\begin{aligned}
\tau_\alpha \frac{d\mu_\alpha}{dt} &= \mu_\alpha^{\mathrm{syn}}(t) + \mu_\alpha^{\mathrm{ext}}(t) + \mu_\alpha^{\mathrm{ou}}(t) - \mu_\alpha(t),\\
\mu_\alpha^{\mathrm{syn}}(t) &= J_{\alpha E}\,\bar{s}_{\alpha E}(t) + J_{\alpha I}\,\bar{s}_{\alpha I}(t),\\
\sigma_\alpha^2(t) &= \sum_{\beta \in \{E,I\}} \frac{2\,J_{\alpha\beta}^2\,\sigma_{s,\alpha\beta}^2(t)\,\tau_{s,\beta}\,\tau_m}{\left(1 + r_{\alpha\beta}(t)\right)\tau_m + \tau_{s,\beta}} + \sigma_{\mathrm{ext},\alpha}^2,\\
\frac{d\bar{s}_{\alpha\beta}}{dt} &= \tau_{s,\beta}^{-1}\left[\left(1 - \bar{s}_{\alpha\beta}(t)\right) r_{\alpha\beta}(t) - \bar{s}_{\alpha\beta}(t)\right],\\
\frac{d\sigma_{s,\alpha\beta}^2}{dt} &= \tau_{s,\beta}^{-1}\left[\left(1 - \bar{s}_{\alpha\beta}(t)\right)^2 \rho_{\alpha\beta}(t) + \left(\rho_{\alpha\beta}(t) - 2\,\tau_{s,\beta}\left(r_{\alpha\beta}(t) + 1\right)\right)\sigma_{s,\alpha\beta}^2(t)\right],
\end{aligned}
\quad \text{for } \alpha,\beta \in \{E,I\}, \qquad (1)

where s̄αβ represents the mean and σs,αβ2 the variance of the fraction of active synapses. Means and variances are computed across all neurons within each population. Given μα, the mean membrane current, its standard deviation σα, and a set of nonlinear transfer functions Φγ(μα, σα), γ ∈ {τ, V, r}, the mean membrane potentials V̄α = ΦV(μα, σα) and the population firing rate rα = Φr(μα, σα) can be calculated from the Fokker-Planck equations as in Richardson (2007). The time constant τα is input-dependent with τα = Φτ(μα, σα). The values for V̄E, τα, and rα are evaluated at every time step with precomputed functions such that the effective input rate from population β to α is determined by the mean rαβ and the variance ραβ with

\begin{aligned}
r_{\alpha\beta}(t) &= c_{\alpha\beta}\,J_{\alpha\beta}\,\tau_{s,\beta}\left(K_\beta\, r_\beta(t - d_\alpha) + \delta_{\alpha\beta E}\,K_{\mathrm{gl}}\sum_{j=0}^{N} C_{ij}\, r_\beta\!\left(t - D_{ij}\right)\right),\\
\rho_{\alpha\beta}(t) &= c_{\alpha\beta}^2\,J_{\alpha\beta}\,\tau_{s,\beta}^2\left(K_\beta\, r_\beta(t - d_\alpha) + \delta_{\alpha\beta E}\,K_{\mathrm{gl}}\sum_{j=0}^{N} C_{ij}^2\, r_\beta\!\left(t - D_{ij}\right)\right),
\end{aligned}
\qquad (2)

where Dij represents the fiber length between nodes i and j divided by the global signal speed vgl.

The mean adaptation current ĪA is given by

\frac{d\bar{I}_A}{dt} = \tau_A^{-1}\left[a\left(\bar{V}_E(t) - E_A\right) - \bar{I}_A\right] + b\, r_E(t). \qquad (3)

All parameters not explained above are given and explained in Table 1. Values were chosen as in Cakan et al. (2022) with the global coupling strength Kgl fixed to one value for all parcellations, see Table 1. For the determination of units for the parameters, see Cakan et al. (2022).
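Simulations were run with the neurolib framework (Cakan et al., 2021). As a rough orientation, setting up a whole-brain aLN simulation can look as sketched below. This is only a sketch under assumptions: the connectome is a random placeholder, all numeric values are illustrative rather than the values of Table 1, and the parameter and output names (e.g., mue_ext_mean, mui_ext_mean, b, K_gl, sigma_ou, rates_exc) reflect neurolib's aLN interface as we understand it and should be checked against the installed version.

```python
import numpy as np
from neurolib.models.aln import ALNModel

# Placeholder connectome; in the study, Cmat and Dmat are the subject-averaged
# Schaefer matrices described in Section 2.1.2 (N = 100, 200, or 500).
N = 100
rng = np.random.default_rng(0)
Cmat = rng.random((N, N))
np.fill_diagonal(Cmat, 0.0)
Dmat = rng.uniform(10.0, 150.0, size=(N, N))   # fiber lengths in mm

model = ALNModel(Cmat=Cmat, Dmat=Dmat)
model.params["duration"] = 10 * 1000           # ms
model.params["dt"] = 0.1                       # ms
model.params["b"] = 20.0                       # adaptation strength (pA), cf. Figure 3
model.params["mue_ext_mean"] = 0.4             # external input to E (mV/ms, illustrative)
model.params["mui_ext_mean"] = 0.1             # external input to I (mV/ms, illustrative)
model.params["sigma_ou"] = 0.0                 # noise-free, as in the state-space analysis

model.run()
rates_exc = model.rates_exc                    # (nodes x time) excitatory firing rates
```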


Table 1. Parameter values used for the aLN model. Values are taken from Cakan et al. (2022).

2.2.2 The Wilson-Cowan model

The Wilson-Cowan model (Wilson and Cowan, 1972) describes the dynamics of the proportions of excitatory (rE(t)) and inhibitory (rI(t)) neurons firing per unit time (Kilpatrick, 2013). Even though the aLN and Wilson-Cowan models represent neuronal firing somewhat differently, we denote both dynamical variables with rk, k ∈ {E, I}, for brevity. The framework in Cakan et al. (2021) provides an implementation of the original model equations including a refractory term. Since the refractory time only rescales the solutions rE(t) and rI(t) but has no qualitative effect on the dynamics (Pinto et al., 1996), we omitted it for this study. Furthermore, a spike-frequency adaptation current is considered. The dynamics in each node is thus determined by the following equations:

\begin{aligned}
\tau_E \frac{dr_{E,j}}{dt} &= -r_{E,j}(t) + F_E\!\left(w_{EE}\, r_{E,j}(t) - w_{EI}\, r_{I,j}(t) + \mu_E^{\mathrm{ext}} + I_j^{\mathrm{ext}}(t) - a_j(t)\right) + \mu_E^{\mathrm{ou}},\\
\tau_I \frac{dr_{I,j}}{dt} &= -r_{I,j}(t) + F_I\!\left(w_{IE}\, r_{E,j}(t) - w_{II}\, r_{I,j}(t) + \mu_I^{\mathrm{ext}}\right) + \mu_I^{\mathrm{ou}},\\
\tau_a \frac{da_j}{dt} &= -a_j(t) + b\, F_A\!\left(r_{E,j}(t)\right).
\end{aligned}
\qquad (4)

Ijext(t), the input from other nodes to the excitatory population of node j, is determined by the connectivity matrix C={Cjk} and the delay matrix D={Djk}, and scaled by a non-negative global coupling strength Kgl:

I_j^{\mathrm{ext}}(t) = K_{\mathrm{gl}} \sum_{k=1}^{N} C_{jk}\, r_{E,k}\!\left(t - D_{jk}\right). \qquad (5)

To simplify Equation 4, we consider a mean external input μαext, α ∈ {E, I}, to each node, which is constant across nodes. The transfer functions Fα(·), α ∈ {E, I, A}, are chosen to be sigmoidal:

F_\alpha(x) = \frac{1}{1 + \exp\left(-a_\alpha\left(x - \nu_\alpha\right)\right)}.

A description for each parameter can be found in Table 2. These parameter values were chosen because they give rise to a dynamical landscape which is similar to other systems that also reliably produce SOs (Cakan and Obermayer, 2020; Cakan et al., 2022). The parameter setting required minor adjustments compared to previous studies that used the Wilson-Cowan model to simulate various types of spatiotemporal patterns (Levenstein et al., 2019; Papadopoulos et al., 2020; Torao-Angosto et al., 2021).
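To make the structure of Equations 4 and 5 concrete, a minimal noise-free Euler integration of the delayed Wilson-Cowan network with adaptation could look as follows. This is a sketch: all default parameter values are placeholders rather than the values of Table 2, and delays are rounded to integer time steps.

```python
import numpy as np

def sigmoid(x, a, nu):
    # F_alpha(x) = 1 / (1 + exp(-a_alpha * (x - nu_alpha)))
    return 1.0 / (1.0 + np.exp(-a * (x - nu)))

def simulate_wc(C, D_ms, duration_ms=10_000, dt=0.1, K_gl=1.0, b=60.0,
                tau_E=2.5, tau_I=3.75, tau_a=600.0,
                w_EE=16.0, w_EI=12.0, w_IE=15.0, w_II=3.0,
                mu_E_ext=3.0, mu_I_ext=2.5,
                a_E=1.5, nu_E=3.0, a_I=1.5, nu_I=3.0, a_A=1.5, nu_A=3.0):
    """Noise-free Euler integration of Eqs. 4 and 5 (delayed Wilson-Cowan network
    with spike-frequency adaptation). Parameter defaults are placeholders.
    D_ms is the delay matrix in ms (fiber length divided by signal speed)."""
    N = C.shape[0]
    steps = int(duration_ms / dt)
    delay = np.round(D_ms / dt).astype(int)          # delays in time steps
    hist = delay.max() + 1                           # history length needed for delayed terms
    rE = np.zeros((hist + steps, N))
    rI = np.zeros((hist + steps, N))
    a = np.zeros(N)                                  # adaptation variable a_j
    cols = np.arange(N)
    for t in range(hist, hist + steps):
        # delayed network input to the excitatory populations, Eq. (5)
        rE_delayed = rE[t - 1 - delay, cols]         # entry (j, k) = r_{E,k}(t - D_jk)
        I_ext = K_gl * np.sum(C * rE_delayed, axis=1)
        drE = (-rE[t - 1] + sigmoid(w_EE * rE[t - 1] - w_EI * rI[t - 1]
                                    + mu_E_ext + I_ext - a, a_E, nu_E)) / tau_E
        drI = (-rI[t - 1] + sigmoid(w_IE * rE[t - 1] - w_II * rI[t - 1]
                                    + mu_I_ext, a_I, nu_I)) / tau_I
        da = (-a + b * sigmoid(rE[t - 1], a_A, nu_A)) / tau_a
        rE[t] = rE[t - 1] + dt * drE
        rI[t] = rI[t - 1] + dt * drI
        a = a + dt * da
    return rE[hist:], rI[hist:]
```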


Table 2. Parameter values used for the Wilson-Cowan model.

2.2.3 Noise

For the investigation of simulated sleep SOs, noise input to each population α ∈ {E, I} in both models was considered. Noise is described by an Ornstein-Uhlenbeck process

\frac{d\mu_\alpha^{\mathrm{ou}}}{dt} = -\frac{\mu_\alpha^{\mathrm{ou}}}{\tau_{\mathrm{ou}}} + \sigma_{\mathrm{ou}}\,\xi(t),

where ξ(t) is sampled from a normal distribution with zero mean and unit variance and τou is the time constant set to 5 ms for both models. The variance σou, also referred to as noise strength, is different for each model and given in Tables 1, 2.
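A single Euler-Maruyama update of this noise process can be sketched as follows; σou here is a placeholder, the study's values are model-specific (Tables 1, 2).

```python
import numpy as np

def ou_step(mu_ou, dt, tau_ou=5.0, sigma_ou=0.05, rng=np.random.default_rng()):
    """One Euler-Maruyama step of d(mu_ou)/dt = -mu_ou / tau_ou + sigma_ou * xi(t)."""
    xi = rng.standard_normal(np.shape(mu_ou))        # zero mean, unit variance
    return mu_ou + dt * (-mu_ou / tau_ou) + sigma_ou * np.sqrt(dt) * xi
```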

2.2.4 Numerical simulations

We used Euler integration to conduct the numerical simulations. To compare computation times, we tracked the duration necessary to simulate both models at all resolutions on a single core of an AMD EPYC 7662 64-core processor. The simulation of 10 s (100,000 time steps, dt = 0.1 ms) of activity with the aLN model (Wilson-Cowan model) takes 8.45 s (3.50 s) for 100, 16.65 s (8.32 s) for 200, and 95.46 s (44.91 s) for 500 nodes.

2.3 Analysis

2.3.1 State space analysis

The analysis of the state space was conducted numerically in the absence of noise. We randomly initialized and simulated the model for 101 x 101 parameter values (10,201 simulations in total) for the mean external inputs to the E and I populations for a duration of 30 s. The duration was extended to 1 min for the Wilson-Cowan model with adaptation, as in some cases rE needed a longer time to return to baseline after the application of the positive stimulus, see paragraph below.

For every point in state space, we applied a negative, but increasing, followed by a positive, but decaying stimulus. Subsequently, we computed the difference between the average rE over the last 2 s of simulation and the 2 s prior to the application of the positive stimulus. As in Cakan et al. (2022), the point was classified as bistable, if this difference was larger than 10 Hz for the aLN and larger than 0.1 for the Wilson-Cowan model for at least one node in the network. These thresholds were chosen because the bistable states displayed differences larger than these values across the entire state space in both models for the chosen parameterizations, detailed in Section 2.2.

Furthermore, we computed the difference between the maximum and minimum value of rE over the last 2 s of simulation. We classified each point as oscillating if this value was larger than 10 Hz for the aLN and 0.1 for the Wilson-Cowan model for at least one node in the network.
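In code, the two classification rules amount to simple threshold checks on the simulated excitatory rates. The sketch below is illustrative; argument names are ours and not part of the study's code.

```python
import numpy as np

def classify_point(rE, dt, stim_idx, thr):
    """Classify one state-space point as bistable and/or oscillating.

    rE       : (time, nodes) excitatory rates from a noise-free run
    stim_idx : time index at which the positive, decaying stimulus was applied
    thr      : 10 (Hz) for the aLN model, 0.1 for the Wilson-Cowan model
    """
    w = int(2000.0 / dt)                                   # 2-s window in time steps
    pre = rE[stim_idx - w:stim_idx].mean(axis=0)           # mean rate before the positive stimulus
    post = rE[-w:].mean(axis=0)                            # mean rate over the last 2 s
    bistable = np.any(post - pre > thr)                    # >=1 node stayed on another branch
    oscillating = np.any(rE[-w:].max(axis=0) - rE[-w:].min(axis=0) > thr)
    return bistable, oscillating
```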

Supplementary Figure SA1 shows the single-node bifurcation diagrams for both models with and without adaptation obtained using the procedures described above.

2.3.2 State classification

To characterize the temporal dynamics of each point in the oscillatory regions (identified as described in Section 2.3.1), we used the procedure summarized in Figure 1. For each point in the slice of state space spanned by the external input currents to the excitatory and inhibitory populations, μEext and μIext, we simulated network activity in the absence of noise over a period of 2 min and for 100 random initializations. The first minute of activity was discarded to account for transient effects. Next, for each initialization, we computed recurrence plots with entries:

R(t, t') = \begin{cases} 1, & \text{if } \left\|x(t) - x(t')\right\| \le \epsilon\\ 0, & \text{otherwise,} \end{cases} \qquad (6)

where x(t), x(t′) contain the values of rE at time points t and t′ across all nodes. ϵ is the recurrence threshold, and ‖·‖ denotes the Euclidean norm. To account for different amplitudes of rE, which could lead to different results if a fixed threshold ϵ were used across initializations and parametrizations, we adjusted the recurrence threshold ϵ until a recurrence rate (defined as the proportion of non-zero entries in the resulting recurrence plot) of 0.1 (Zbilut et al., 2002) was reached.
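A compact way to obtain such a recurrence plot at the target recurrence rate is to set ϵ to the corresponding quantile of the pairwise distances, which is equivalent to iteratively adjusting ϵ. The following is a sketch with illustrative names.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def recurrence_plot(rE, target_rate=0.1):
    """Binary recurrence plot (Eq. 6) with epsilon adapted to a fixed recurrence rate.

    rE : (time, nodes) excitatory rates of one initialization (transient removed).
    """
    dist = squareform(pdist(rE, metric="euclidean"))       # ||x(t) - x(t')|| for all t, t'
    # epsilon = target quantile of the off-diagonal distances -> recurrence rate ~ target_rate
    eps = np.quantile(dist[np.triu_indices_from(dist, k=1)], target_rate)
    return (dist <= eps).astype(np.uint8)
```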


Figure 1. Summary of the procedure used to classify network states into unistable, multistable, fast metastable, and slow metastable. (A) For each model (aLN or Wilson-Cowan) and each parcellation (100, 200, or 500 nodes) we conducted 100 randomly initialized simulations of 2 min duration for each point in the slice of parameter space spanned by varying the external excitatory (μEext) and inhibitory (μIext) input currents. (B) We discarded the first minute of activity to eliminate transient effects and used the last minute of network activity to compute the recurrence plots (C). (D) Based on these, we computed the maximum determinism value across all 100 seeds, and we clustered the recurrence plots using the DBSCAN algorithm. (E) Combining the information from these two sources, we classified each point into one of the four states mentioned above.

For each parametrization, we clustered the resulting recurrence matrices using the DBSCAN algorithm (Ester et al., 1996). Additionally, we computed the determinism value DET,

\mathrm{DET} = \frac{\sum_{l=l_{\min}}^{N} l\, P(l)}{\sum_{l=1}^{N} l\, P(l)}, \qquad (7)

for each initialization, where P(l) is the fraction of the diagonal lines with length l in the recurrence plot, and lmin specifies a minimum diagonal length. DET is a measure which quantifies the predictability or the regularity of the dynamics of a system. As described above, recurrence plots indicate when a system revisits a previous state. Diagonal lines in recurrence plots indicate segments where the system evolves in a similar way over time, i.e., longer diagonal lines contribute to higher values of DET (closer to 1) and show more deterministic (i.e., predictable) behavior. DET is the ratio between the recurrence points that belong to diagonal lines of length at least lmin and all recurrence points forming any diagonal line (and excluding the main diagonal, as this identity recurrence is always present and does not reflect dynamic similarity between distinct time points). A DET value close to 1 indicates a highly predictable system, whereas a value close to 0 suggests random or chaotic behavior.
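A direct implementation of Equation 7 counts the diagonal line lengths of the recurrence plot, excluding the main diagonal; for symmetric recurrence plots it suffices to scan the upper triangle. This is a sketch with illustrative names.

```python
import numpy as np

def determinism(R, l_min=2):
    """DET (Eq. 7): fraction of recurrence points lying on diagonal lines of
    length >= l_min, main diagonal excluded."""
    n = R.shape[0]
    line_counts = {}                                   # line length -> number of lines
    for k in range(1, n):                              # upper off-diagonals only (R is symmetric)
        run = 0
        for v in np.append(R.diagonal(k), 0):          # trailing zero flushes the last run
            if v:
                run += 1
            elif run:
                line_counts[run] = line_counts.get(run, 0) + 1
                run = 0
    lengths = np.array(sorted(line_counts), dtype=float)
    if lengths.size == 0:
        return 0.0
    counts = np.array([line_counts[int(l)] for l in lengths], dtype=float)
    total = np.sum(lengths * counts)
    kept = np.sum(lengths[lengths >= l_min] * counts[lengths >= l_min])
    return kept / total if total > 0 else 0.0
```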

We used the number of clusters to classify each state in the limit cycle as either unistable (if the number of clusters was equal to 1), multistable (if the number of clusters was between 2 and 30), or metastable (if the number of clusters was >30). The thresholds were determined based on the visual inspection of the number of clusters per point in the oscillatory regimes, as exemplified in Figure 1 (panel in the fourth column, bottom plot, depicting the number of clusters in the oscillatory region of state space). This led to a clear boundary between metastable versus multi- and unistable regions (panel in the fourth column, bottom plot of Figure 1, dark red versus multicolored regions). We further distinguished between fast and slow metastable states by the maximum determinism value across the 100 initializations. Fast metastable states are characterized by values ≤0.35, with short state durations, while slow metastable states are characterized by values >0.35, with longer state durations (state durations were additionally determined based on the inter-hemispheric cross-correlation described in Section 2.3.4, as in Roberts et al. (2019)). We opted for the maximum determinism value as this allowed us to identify the presence of at least one slow metastable state across the 100 initializations. The value of 0.35 was chosen as the threshold based on the visual inspection of the determinism values computed for all state space locations in the oscillatory region. This showed clusters of regimes with determinism values >0.35 across the state space (see example in Figure 1 in the panel in the fourth column, top plot, showing the maximum determinism value in the oscillatory region of the state space). Additionally, the choice was confirmed through the visual inspection of the interhemispheric cross-correlation (see Section 2.3.4) for several points in the state space. This allowed us to visually confirm the difference in state duration between slow and fast metastable states.
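Combining the DBSCAN cluster count with the maximum determinism value, the classification rule can be sketched as follows. How the recurrence plots are vectorized for DBSCAN and the DBSCAN hyperparameters are assumptions of this sketch, as they are not specified above.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def classify_state(recurrence_plots, det_values, eps=0.5, min_samples=2):
    """Classify a state-space point from its 100 randomly initialized simulations.

    recurrence_plots : list of equally sized binary recurrence matrices (Eq. 6)
    det_values       : DET value (Eq. 7) of each initialization
    eps, min_samples : DBSCAN parameters (placeholders)
    """
    # flatten each recurrence plot into a feature vector and cluster them (assumption)
    X = np.stack([R.ravel().astype(float) for R in recurrence_plots])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    n_clusters = len(set(labels) - {-1})               # -1 is DBSCAN's noise label
    if n_clusters <= 1:
        return "unistable"
    if n_clusters <= 30:
        return "multistable"
    return "slow metastable" if np.max(det_values) > 0.35 else "fast metastable"
```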

2.3.3 Kuramoto order parameter

Using the simulation data described in Section 2.3.1, we computed the Kuramoto order parameter R(t),

R(t) = \left|\frac{1}{N}\sum_{n=1}^{N} e^{i\theta_n(t)}\right|, \qquad (8)

where θn(t) denotes the instantaneous phase obtained from the Hilbert transform of the time series rE for each node n, and N ∈ {100, 200, 500} denotes the total number of nodes in the network.

Subsequently, we summarized the results for each model and each network resolution using the mean and the standard deviation of R(t). High values of the mean indicate a synchronous solution, whereas low values indicate an asynchronous solution. With respect to the standard deviation, high values are indicative of metastability and low values correspond to solutions which remain stable over time.
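The mean and standard deviation of R(t) can be obtained directly from the Hilbert phases of the simulated rates, as in the following sketch; demeaning each node's time series before the Hilbert transform is a common convention and an assumption here.

```python
import numpy as np
from scipy.signal import hilbert

def kuramoto_summary(rE):
    """Mean and standard deviation of the Kuramoto order parameter R(t) (Eq. 8).

    rE : (time, nodes) excitatory firing rates.
    """
    phases = np.angle(hilbert(rE - rE.mean(axis=0), axis=0))   # theta_n(t)
    R = np.abs(np.mean(np.exp(1j * phases), axis=1))           # R(t)
    return R.mean(), R.std()
```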

2.3.4 Interhemispheric cross-correlation

To investigate spatial properties of oscillatory states, we computed the sliding-window time-lagged cross-correlation as in Roberts et al. (2019). We calculated the intrahemispheric Kuramoto order parameter for each hemisphere. Subsequently, the windowed time-lagged cross-correlation between the two parameters was determined with a window of length W of 100 ms and 90% overlap between consecutive windows, and with a lag l of 50 ms, as follows:

C_{R_L R_R}(t, l) = \frac{1}{W}\sum_{i=0}^{W-1}\left(R_L(t+i) - \bar{R}_L(t)\right)\left(R_R(t+i+l) - \bar{R}_R(t+l)\right), \qquad (9)

where R̄L(t) and R̄R(t+l) denote the mean values of the left and right Kuramoto order parameters over their respective windows.
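A sliding-window implementation of Equation 9 can be sketched as follows; here the covariance of Equation 9 is additionally normalized by the window standard deviations so that values fall in [−1, 1], as displayed in Figure 6, and a single fixed lag is used for brevity.

```python
import numpy as np

def windowed_lagged_cc(RL, RR, dt, window_ms=100.0, lag_ms=50.0, overlap=0.9):
    """Sliding-window, time-lagged cross-correlation between the left (RL) and
    right (RR) intrahemispheric Kuramoto order parameters (Eq. 9)."""
    W = int(window_ms / dt)
    lag = int(lag_ms / dt)
    step = max(1, int(round(W * (1.0 - overlap))))
    cc = []
    for t in range(0, len(RL) - W - lag, step):
        a = RL[t:t + W] - RL[t:t + W].mean()
        b = RR[t + lag:t + lag + W] - RR[t + lag:t + lag + W].mean()
        cov = np.mean(a * b)                             # Eq. (9)
        cc.append(cov / (a.std() * b.std() + 1e-12))     # normalized to [-1, 1]
    return np.array(cc)
```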

2.3.5 Singular value decomposition

To conduct a singular value decomposition (SVD), we firstly computed the velocity vector fields (cf. Roberts et al., 2019). For each node n, we used the instantaneous Hilbert transform of rE to determine the phase θn, after which we calculated the velocity vn using its spatial and temporal derivatives:

v_n = -\left|\frac{\partial \theta_n}{\partial t}\right| \frac{\nabla \theta_n}{\left\|\nabla \theta_n\right\|^2}. \qquad (10)

The spatial derivative was calculated using the constrained natural element method (Illoul and Lorong, 2011), as described in Roberts et al. (2019). This method allows for the calculation of the components of the gradient vector without the need for interpolation to and from a 3D grid.

The SVD was then performed for the velocity vector fields v={vn}n=1N, according to the method used by Liang et al. (2023) and introduced in Townsend and Gong (2018). Briefly, for each of the two models and for every network resolution, we concatenated the time series v(t) of the velocity vector fields across all four state types identified in Section 2.3.2 to obtain a matrix W (time steps and state types in rows and nodes in columns). This matrix was decomposed using SVD as:

W = U\,\Sigma\,V^T, \qquad (11)

where the columns of U represent the left and the rows of VT represent the right singular vectors of W. Hence, the rows of VT represent the spatial modes of W, the columns of U their time course and the diagonal elements of Σ the singular values σk in descending order of magnitude. The variance explained by mode k is given by σk²/∑i σi².

We then projected the spatial modes identified on the concatenated data onto the individual velocity vector fields of each parametrization and quantified the proportion of variance explained by each projected spatial mode m in the n-th velocity vector field as Mm,n²/∑i Mi,n², where M denotes the projection matrix.
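In terms of linear algebra, the decomposition and the projection step amount to the following sketch, where W stacks the velocity components over time and state types in rows and nodes in columns, as described above; names are illustrative.

```python
import numpy as np

def svd_modes(W):
    """SVD of the concatenated velocity vector fields (Eq. 11)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    explained = s**2 / np.sum(s**2)          # variance explained by each spatial mode
    return U, s, Vt, explained

def projected_variance(Vt, w):
    """Proportion of variance of one individual velocity vector field w explained
    by each projected spatial mode (rows of Vt)."""
    M = Vt @ w                               # projection coefficients M_{m, n}
    return M**2 / np.sum(M**2)
```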

2.3.6 Structural gradient manipulation

To investigate the effect of the structural gradient on the propagation of SOs, we used the sleep model parametrization introduced in Cakan et al. (2022) for the aLN model, with minor adjustments of the adaptation parameters (see Supplementary Table SA4). The adjustment was necessary because the parcellations of higher resolution had stronger pairwise connectivity strengths compared to the 100 node case, which caused the model to be in the up state for prolonged intervals of time due to a shift in state boundaries. The manual increase of the adaptation parameters ensured that the model visually displayed SOs (Cakan et al., 2022). For the Wilson-Cowan model with 100 nodes, we conducted an evolutionary optimization in neurolib, with resting-state functional connectivity, functional connectivity dynamics, and the power spectrum of EEG in sleep stage N3 as optimization objectives (full procedure described in Cakan et al. (2022)). As the evolutionary optimization was computationally not feasible for the networks with 200 and 500 nodes, we manually adjusted the adaptation parameters obtained for the network with 100 nodes (see Supplementary Table SA4) in the same manner as described above for the aLN model. To compare our results with previous work and to compare dynamical landscapes across models and resolutions, we used the parameters given in Tables 1, 2. For the sleep models, we modified a small number of parameters to place the model in a regime where realistic SOs are produced.

The antero-posterior structural connectivity gradient, defined as the slope of the linear regression between the node degree and its coordinate along the antero-posterior axis of the brain (Cakan et al., 2022), is shown in Supplementary Figure SA18 for the three parcellations. A more negative slope indicates that more anterior nodes have lower node degrees compared to more posterior nodes.

To manipulate the antero-posterior gradient, we introduced a parameter p, which represents the maximum percentage change in connection strengths applied to the most anterior node. We then computed N equidistant and increasing values pi, i = 1, …, N (where N denotes the number of nodes in the network), ranging from −p% to +p%. Each value represents the percentage by which the connections of a node are adjusted. In the next step, we rank-ordered all nodes according to their coordinate along the antero-posterior axis, sorting them from most anterior to most posterior. For each node i = 1, …, N, we scaled each of its connection strengths (i.e., its row in the connectivity matrix) by the corresponding percentage value pi. This scaling is multiplicative (e.g., a value of +10% increases each of the weights of a node by 10%). We modified the connection strengths based on percentages rather than absolute values to ensure that no negative values were introduced in the structural connectivity matrix.
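The scaling step can be sketched as follows. The assignment of −p% to the most anterior and +p% to the most posterior node, and the orientation of the antero-posterior coordinate, are assumptions of this sketch; in the study, p is varied to strengthen or weaken the gradient.

```python
import numpy as np

def scale_gradient(C, ap_coord, p):
    """Multiplicatively scale each node's connection strengths (its row of C) by a
    percentage varying linearly from -p% to +p% along the antero-posterior axis.

    C        : structural connectivity matrix (N x N)
    ap_coord : antero-posterior coordinate of each node (assumed to increase
               from anterior to posterior)
    p        : maximum percentage change (applied at the ends of the axis)
    """
    N = C.shape[0]
    percentages = np.linspace(-p, p, N)                     # p_i, i = 1, ..., N
    order = np.argsort(ap_coord)                            # most anterior -> most posterior
    C_mod = C.copy()
    for rank, node in enumerate(order):
        C_mod[node, :] *= 1.0 + percentages[rank] / 100.0   # e.g. +10% -> factor 1.1
    return C_mod
```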

To isolate the effect of the gradient from other network properties, we constructed control models with gradient values similar to the networks described above (within a small tolerance limit). In these models, we preserved the total sum of connection strengths (i.e., the sum of all entries in the structural connectivity matrix), but destroyed the relationship between the connection strength and the corresponding fiber length. This was achieved by randomly permuting the entries of the structural connectivity matrix and calculating the antero-posterior gradient value as described above; this was repeated until the gradient was similar (within a tolerance) to the target value. Since the corresponding fiber length matrix was kept fixed, any relation between connection strength and distance was destroyed. Hence stronger connection strengths no longer corresponded to longer fibers on average.

To determine the direction of propagation of SO up/down state transitions along the antero-posterior axis, we first computed the proportion of regions in the down state as a function of time. The down states were identified by thresholding the excitatory firing rate, rE(t) ≤ θ·max(rE(t)), with θ = 0.01 for the aLN and θ = 0.2 for the Wilson-Cowan model, at every time step. Subsequently, we applied a 0.5–2 Hz bandpass filter to the resulting time series, computed the Hilbert transform, and identified the transition phase of a node as the phase of the Hilbert transform at the time point at which the node transitioned from the up (down) to the down (up) state. Phases were averaged across all transitions of each node. We then computed the Pearson correlation coefficient between the average transition phase and the node coordinate along the antero-posterior axis. Positive (negative) values of the correlation between the up-to-down transition phases and the node coordinates indicate a preferential antero-posterior (postero-anterior) direction of propagation, and vice versa for the down-to-up transitions. Note that Equations 1–3 and 5–11 are provided here for completeness of the methods.
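The propagation analysis chains a thresholding, filtering, and phase-extraction step; one possible implementation for the up-to-down transitions is sketched below. The filter order and the use of a circular mean for averaging transition phases are assumptions, and names are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import pearsonr

def propagation_direction(rE, ap_coord, dt_ms, theta=0.01):
    """Correlate per-node up-to-down transition phases with the antero-posterior coordinate.

    rE    : (time, nodes) excitatory rates from an SO simulation
    theta : down-state threshold (0.01 for the aLN, 0.2 for the Wilson-Cowan model)
    """
    fs = 1000.0 / dt_ms                                   # sampling rate in Hz
    down = rE <= theta * rE.max()                         # down-state mask per node and time step
    frac_down = down.mean(axis=1)                         # proportion of regions in the down state
    b, a = butter(2, [0.5, 2.0], btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, frac_down)))  # global SO phase
    mean_phase = np.full(rE.shape[1], np.nan)
    for n in range(rE.shape[1]):
        trans = np.where(np.diff(down[:, n].astype(int)) == 1)[0]         # up -> down transitions
        if trans.size:
            mean_phase[n] = np.angle(np.mean(np.exp(1j * phase[trans])))  # circular mean (assumption)
    ok = ~np.isnan(mean_phase)
    r, _ = pearsonr(mean_phase[ok], ap_coord[ok])
    return r   # > 0: preferential antero-posterior propagation of up-to-down fronts
```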

2.4 Manipulation of short- vs. long-range connection strengths

We collected all pairs (n,ñ), n,ñ = 1, …, N, N ∈ {100, 200, 500}, of indices of nodes connected by short-range connections in set SN, and of nodes connected by long-range connections in set LN (see Figure 2, panel on the top left). A connection was marked as short range if the corresponding element Dnñ of the fiber length matrix D was smaller than 50 mm.


Figure 2. Summary of the procedure used to manipulate and investigate the effect of weaker versus stronger long-range connection strengths on the network dynamics. For an explanation, see text.

We identified the subjects with the weakest short- and the strongest long-range connections

\min_{\mathrm{subject}} \sum_{(n,\tilde{n}) \in S_N} C^{\mathrm{subject}}_{n\tilde{n}}, \qquad \max_{\mathrm{subject}} \sum_{(n,\tilde{n}) \in L_N} C^{\mathrm{subject}}_{n\tilde{n}},

and retained the corresponding connectivity matrices Cweak−longemp, Cstrong−longemp (see Figure 2, panel in the middle of the top row).

To artificially manipulate two matrices beyond the empirically observed variability (see Figure 2, panel in the top right), we used the factors α = 0.1 and γ = α|SN|/|LN|, where |·| denotes the cardinality, to manipulate the connectivity strengths into a biophysically exaggerated disproportion by

\begin{aligned}
C^{\mathrm{art}}_{\mathrm{strong\text{-}long}} &= C - \alpha\, C_{\mathrm{short}} + \gamma\, C_{\mathrm{long}},\\
C^{\mathrm{art}}_{\mathrm{weak\text{-}long}} &= C + \alpha\, C_{\mathrm{short}} - \gamma\, C_{\mathrm{long}}.
\end{aligned}

Cshort (Clong) denotes the connectivity matrix between nodes connected by short-range (long-range) connections and with the strength for nodes connected by long-range (short-range) connections set to zero. For the non-zero entries, we used the corresponding elements of the averaged connectivity matrix Cnñ, i.e.,

C_{\mathrm{short}} = \begin{cases} C_{n\tilde{n}}, & \text{for } (n,\tilde{n}) \in S_N\\ 0, & \text{otherwise,} \end{cases}
\qquad
C_{\mathrm{long}} = \begin{cases} C_{n\tilde{n}}, & \text{for } (n,\tilde{n}) \in L_N\\ 0, & \text{otherwise.} \end{cases}

Thus we ensured that the total sum of connection strengths remained constant, i.e., ∑_{n,ñ=1}^{N} Cnñ = ∑_{n,ñ=1}^{N} Cnñart. Figure 2 (plots on the bottom right) shows the correlations between fiber length and connection strength for the empirical and for the manipulated connectivity matrices. There was no qualitative change. We also ensured that there was no qualitative change in the distribution of node degrees (not shown).

Furthermore, we individually inspected the total sum over the short- and long-range connections of Cstrong−longart to confirm that long-range connections were strengthened, that short-range connections were weakened, and that the difference between the two sums was enhanced, i.e., |∑_{(n,ñ)∈SN} Cnñ,strong−longart − ∑_{(n,ñ)∈LN} Cnñ,strong−longart| > |∑_{(n,ñ)∈SN} Cnñ,strong−longemp − ∑_{(n,ñ)∈LN} Cnñ,strong−longemp|, where |·| denotes the absolute value. A similar construction was conducted for Cweak−longart.
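Putting the pieces together, the construction of the artificially imbalanced matrices can be sketched as follows; function and argument names are ours, and γ follows the definition given above.

```python
import numpy as np

def manipulate_long_range(C, D, alpha=0.1, cutoff_mm=50.0, strengthen_long=True):
    """Build an artificially imbalanced connectivity matrix (Section 2.4).

    C : averaged structural connectivity matrix, D : fiber length matrix (mm).
    """
    short = D < cutoff_mm                           # pairs in S_N (short-range)
    long_ = ~short
    np.fill_diagonal(short, False)                  # exclude self-connections from both sets
    np.fill_diagonal(long_, False)
    C_short = np.where(short, C, 0.0)
    C_long = np.where(long_, C, 0.0)
    gamma = alpha * short.sum() / long_.sum()       # gamma = alpha * |S_N| / |L_N| (see text)
    if strengthen_long:
        return C - alpha * C_short + gamma * C_long     # C_art_strong-long
    return C + alpha * C_short - gamma * C_long         # C_art_weak-long
```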

2.4.1 Correlation coefficient between spatial modes

We conducted numerical simulations for four locations in the state space covering unistability, multistability, fast and slow metastable patterns (see Supplementary Figure SA13). Simulations were performed for both models with and without adaptation, for the averaged connectivity matrix C, for the four connectivity matrices Cstrong−longemp, Cweak−longemp, Cstrong−longart, and Cweak−longart, and for all resolutions. Then, we computed the velocity vector fields for the resulting activity of each setting, concatenated them per setting, and applied SVD as described in Section 2.3.5. This resulted in five matrices V,Vstrong−longemp,Vweak−longemp,Vstrong−longart, and Vweak−longart of the spatial modes per setting. To identify the similarity between spatial modes, we computed the Pearson correlation coefficient between each row of VT and each row of the matrices Vstrong−longemp,Vweak−longemp,Vstrong−longart, and Vweak−longart:

\mathrm{Corr}\left(V, V^{\mathrm{type}}_{\mathrm{strength}}\right) \quad \text{for } \mathrm{type} \in \{\mathrm{emp}, \mathrm{art}\},\ \mathrm{strength} \in \{\mathrm{strong\text{-}long}, \mathrm{weak\text{-}long}\}.

This was done for each selected state, with and without adaptation, for the aLN and the Wilson-Cowan models, and for all three parcellations. The resulting correlation coefficient matrices have values ranging between −1 and 1. Values close to zero indicate little to no similarity, while values closer to −1 or 1 indicate high similarity.

2.4.2 Coherence values

As in Section 2.3.6, we conducted numerical simulations of SOs for the aLN (parameters, see Supplementary Table SA4) and the Wilson-Cowan model (parameters, see Supplementary Table SA5) using the average connectivity matrix C, as well as the modified matrices Cstrengthtype for type ∈ {emp, art}, strength ∈ {strong−long, weak−long}.

For each numerical simulation, we computed, analogously to Liang et al. (2023), the magnitude-squared coherence

\mathrm{coh}_{n\tilde{n}}(f) = \frac{\left|P_{n\tilde{n}}(f)\right|^2}{P_n(f)\, P_{\tilde{n}}(f)}, \quad \text{with } \mathrm{coh}_{n\tilde{n}}(f) \in [0, 1],

where Pn(f) and Pñ(f) are the power spectra over temporal frequencies of the firing rates of the excitatory population for the nodes n and ñ and Pnñ(f) is the corresponding cross-power spectrum. A value close to one indicates high correspondence between nodes (i.e., the nodes are highly correlated) for frequency f and vice versa for values close to zero.

We separately consider the coherence between nodes connected by a short-range connection (i.e., all pairs (n,ñ) of nodes from SN) versus nodes connected by a long-range connection (i.e., all pairs (n,ñ) of nodes from LN).
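The magnitude-squared coherence for a set of node pairs can be computed with standard spectral estimators, as in the sketch below; the segment length nperseg is a placeholder, since the spectral estimation settings used in the study are not specified here.

```python
import numpy as np
from scipy.signal import coherence

def pairwise_coherence(rE, dt_ms, pairs, nperseg=4096):
    """Magnitude-squared coherence between excitatory rates of node pairs.

    rE    : (time, nodes) firing rates from an SO simulation
    pairs : iterable of (n, m) node index pairs, e.g., the short-range set S_N
            or the long-range set L_N
    """
    fs = 1000.0 / dt_ms
    out = {}
    for n, m in pairs:
        f, coh = coherence(rE[:, n], rE[:, m], fs=fs, nperseg=nperseg)
        out[(n, m)] = (f, coh)
    return out
```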

3 Results

3.1 State space

Figures 3, 4 show the results of the state space analysis for the whole-brain aLN and Wilson-Cowan models. In line with previous results for the aLN model (Cakan et al., 2022), we identify several dynamical regimes: a down-state, where all network nodes display no or low activity; an up-state, characterized by constant high firing rate; an oscillatory region LCEI, where the activity oscillates between a minimum and a maximum value with frequencies >10 Hz (see dominant frequencies in Supplementary Figure SA3 for the aLN; Supplementary Figure SA4 for the Wilson-Cowan model); a bistable regime between up- and down-states; and a slow oscillatory region LCEA with frequencies <2 Hz (see bottom panels in Supplementary Figure SA3 for the aLN; Supplementary Figure SA4 for the Wilson-Cowan model) in the case with adaptation. Similar to Cakan et al. (2022), we observe a very small bistable region for the aLN model where an up-state and the fast LCEI coexist. We also find a small bistable region where an up-state and the slow LCEA coexist. For both whole-brain models, these states are “inherited” from the single-node models (shown in Supplementary Figure SA1), although only very few points displaying bistability between oscillatory and up states can be identified here (purple arrow in Supplementary Figure SA1).


Figure 3. Slice of state space of the whole-brain aLN model without (b = 0 pA; top row) and with (b = 20 pA; bottom row) adaptation for a brain network with 100 (left column), 200 (middle column), and 500 (right column) nodes spanned by the external input currents to the E and I populations. In every panel, the horizontal axis shows the external input current to the excitatory population (μEext) and the vertical axis shows the external input current to the inhibitory population (μIext). The heatmap shows the maximum excitatory firing rate rE (Hz) across all nodes in the network. State transition boundaries are indicated by solid white lines for the fast (LCEI) and slow (LCEA) oscillatory regions and by solid grey lines for the bistable regimes (bi - bistability between up and down states; biosc - bistability between LCEI and the down state). The white dashed lines indicate the border between the two oscillatory regions. Up state (up) and down state (down) regions are also marked. het indicates the areas where we identified heterogeneous slow-fast oscillations (for b = 20 pA). Model parameters are given in Table 1.


Figure 4. Slice of state space of the whole-brain Wilson-Cowan model without (b = 0; top row) and with (b = 60; bottom row) spike-triggered adaptation for a brain network with 100 (left column), 200 (middle column), and 500 (right column) nodes spanned by the external input currents to the E and I populations. In every panel, the horizontal axis shows the external input current to the excitatory population (μEext), and the vertical axis shows the external input current to the inhibitory population (μIext). The heatmap shows the maximum value of rE across all nodes in the network. State boundaries are indicated by solid white lines for the fast (LCEI) and by dotted white lines for the regimes of slow (LCEA) oscillations. Solid grey lines denote the boundary of the regime of bistability between up and down states (bi). het indicate the areas where we identified heterogeneous slow-fast oscillations. Up state (up) and down state (down) regions are also marked. All model parameters are given in Table 2.

Our results show that, for both models, the state space remains generally robust to changes in network resolution, but there are some differences between the aLN and the Wilson-Cowan implementations. For the aLN model with 100 nodes, we observe a region of bistability between the down-state and the LCEI in the case without adaptation, and a region of heterogeneous oscillations (different oscillation frequencies either within the same node or across nodes) in the case with adaptation (see Figure 5 for an example time series of a nested slow-fast oscillation). Examining the top row in Figure 3 reveals that the region of bistability between the down state and the LCEI in the case of no adaptation expands as the number of nodes in the network increases. Inspecting the average dominant frequency, as well as the standard deviation of the dominant frequency of each node (bottom panels in Supplementary Figures SA3, 5) confirms that, for the case with adaptation, this region corresponds to an expanding regime of heterogeneous slow-fast oscillations across nodes.


Figure 5. Example time series of the firing rate rE of one randomly chosen node (black line) of the whole-brain aLN network at several points in the state space: (A), (C), and (E) illustrate bistability between the down state and the fast oscillatory region LCEI using a decaying stimulus (red) delivered to all nodes in the network (μEext = μIext = 0.0 mV/ms, b = 0 pA for all three parcellations); (B), (D), and (F) illustrate coexisting slow and fast oscillations for the case of adaptation (b = 20 pA for all three parcellations, μEext = 0.08 mV/ms for the 100 node resolution, μEext = 0.04 mV/ms for 200 and 500 nodes, μIext = 0.0 mV/ms for all three parcellations). All other model parameters are given in Table 1. The light (top) and dark green (bottom) insets display enlarged intervals of the time series of the firing rate rE (black) and, in case of finite adaptation, the current IA (blue) for the chosen node, and also show the power spectrum for the brain network with 500 nodes averaged across all nodes.

For the Wilson-Cowan model, we also find a region of heterogeneous oscillations in the case with adaptation (example time series in Supplementary Figure SA2), which expands with increasing network resolution (bottom row in Supplementary Figures SA4, 6). However, in contrast to the aLN model, this region emerges at the border between the LCEI and LCEA and no regime of bistability between the down state and the LCEI appears.

3.2 State classification

The analysis methods presented in Section 2.3.2 allowed us to identify four types of states (unistable, multistable, fast metastable, and slow metastable) across the LCEI and LCEA regions. Figure 6 shows examples of recurrence plots and interhemispheric cross-correlograms for a multistable, a fast metastable, and a slow metastable state. The recurrence plots allow us to identify the temporal structure of these states, with the multistable state displaying a clear repetitive pattern over the 20 s of activity shown here (Figure 6A), the fast metastable state displaying rapid state switches, as evidenced by the noisy recurrence plot in Figure 6B, and the slow metastable state showing states which persist for a longer duration, as demonstrated by the appearance of more defined clusters (Figure 6C). The cross-correlograms additionally allow us to highlight the spatiotemporal properties of these states. As mentioned in Roberts et al. (2019), if short incoherent waves dominate, we would expect the interhemispheric coherence to be close to zero across all explored time lags and time points, whereas waves with longer wavelengths would display specific signatures composed of alternating high and low correlation values as a function of the time lag that would persist for a longer time. In the example highlighted here, the multistable state shows repeating spatiotemporal patterns for both initializations. In the fast and slow metastable cases (r.h.s. in Figures 6B, C), we observe signatures of wave patterns which remain stable for a few hundred milliseconds (in the fast metastable case) up to a few seconds (in the slow metastable case), before rapidly desynchronizing for brief periods of time and transitioning into other wave patterns. To further highlight the difference in state durations between the fast and the slow metastable case, we computed the distribution of state durations identified for the fast and slow metastable points shown in Figure 6 (Supplementary Figure SA9), where we observe longer state durations (up to a few seconds) in the slow compared to the fast metastable case.

Figure 6
Graph depicting three different data analyses labeled A, B, and C. The left side features a line plot comparing

Figure 6. Examples of multistable (A), fast metastable (B), and slow metastable (C) states of the aLN model with 100 nodes and without adaptation (b=0 pA). In each subplot, the left panel shows the recurrence plots, and the right panel the corresponding cross-correlograms. The interhemispheric cross-correlations (cc, see Section 2.3.4) range from −1 (blue) to 1 (red). For the multistable example (A), results are shown for two different random initializations of the network (top and bottom rows). Parameters (positions in state space are shown in the inset on the top left): (A) - (μEext = 1.3 mV/ms, μIext = 0.8 mV/ms), (B) - (μEext = 0.4 mV/ms, μIext = 0.1 mV/ms), (C) - (μEext = 0.9 mV/ms, μIext = 0.0 mV/ms). The simulation time was 20 s. All other parameters are given in Table 1.

Results of the state classification for the entire slice of state space are summarized in Figure 7 (aLN model) and Figure 8 (Wilson-Cowan model). Qualitatively, the results are similar across models and resolutions, with all four regimes being present in all cases, and with the fast metastable regime occupying the largest portion of the LCEI, while being absent from the LCEA region (which is dominated by unistable patterns). However, some quantitative differences are apparent. For the aLN model without adaptation, both the multistable and slow metastable regimes emerge on the right side of the LCEI region close to the up state. For the Wilson-Cowan model, however, they appear on the left side of this region close to the down state.


Figure 7. Classification of states inside the oscillatory regions for the aLN whole-brain model in the case without (b = 0 pA; top row) and with (b = 20 pA; bottom row) adaptation for a parcellation with 100 (left column), 200 (middle column), and 500 (right column) nodes. The slice of state space is spanned by the external input current to the E and I populations. The white solid contour marks the two oscillatory regions, and the white dashed lines indicate the approximate border between them.


Figure 8. Classification of states inside the oscillatory regions for the Wilson-Cowan whole-brain model in the case without (b = 0; top row) and with (b = 60; bottom row) adaptation for a parcellation with 100 (left column), 200 (middle column), and 500 (right column) nodes. The slice of state space is spanned by the external input current to the E and I populations. The white solid contour marks the two oscillatory regions, and the white dashed lines indicate the approximate border between them.

As metastability is usually identified based on the mean and the standard deviation (SD) of the Kuramoto order parameter (metastability corresponds to a high standard deviation of the Kuramoto order parameter), we report these results for completeness in Supplementary Figures SA7, 8. The results show high synchrony (mean Kuramoto ≈ 1) and low metastability (SD of Kuramoto < 0.1) in the areas identified above as uni/multistable, lower synchrony (mean 0.4–0.7) and higher metastability (SD 0.1–0.2) for the corresponding slow metastable points, and lowest values for the corresponding fast metastable points (mean and SD <0.1). Given that the Kuramoto order parameter is only sensitive to global states and misses local synchrony and that the fast metastable dynamics are also more local, these results are not surprising.

3.3 Spatial modes of activity

Our analysis of the spatial modes of activity reveals that, in general, the modes which explain a larger proportion of variance of the activity (percentages given in Supplementary Tables SA1, 2) in the concatenated data (obtained by concatenating the velocity vector fields computed for each point in the oscillatory regions, with time steps in rows and nodes in columns) consist of large-scale waves traveling mainly along the horizontal and dorso-ventral axes. The results are summarized in Figures 9A,B for the aLN model and in Supplementary Figures SA12a, b in the appendix for the Wilson-Cowan model. For example, modes 1 and 4 in the aLN model (Figure 9A) and modes 2 and 4 in the Wilson-Cowan model (Supplementary Figure SA12a) exemplify large-scale waves with coherent horizontal and dorso-ventral directions of propagation encompassing approximately three-quarters of the brain. Another example of a large-scale wave pattern is represented by the hemispheric-segregated pattern present in the Wilson-Cowan model (mode 3 in Supplementary Figure SA12a) and in the aLN model (mode 9 in Figure 9). Interestingly, these modes explain similar proportions of variance (1.78% in the aLN vs. 1.44% in the Wilson-Cowan model). In contrast, modes explaining less variance within each model and each resolution usually capture more complex patterns of propagation. For example, in both models, mode 13 (Figure 9A, Supplementary Figure SA12b) displays smaller clusters of arrows with the same color and direction (i.e., same horizontal and dorso-ventral directions), as well as more neighboring arrows with different colors and directions compared to the large-scale modes indicated above. While we identify similar modes in both models (see above), the overall proportion of variance explained by the 15 first modes differs (30.28% for the aLN vs. 9.19% for the Wilson-Cowan model with adaptation). There is also a tendency towards decreased explained variance per mode with increasing model resolution, as well as differences in the percentages of variance explained by the dominant modes between the models with and without adaptation (Supplementary Tables SA1, 2).


Figure 9. (A) First 15 modes obtained from the singular value decomposition of the velocity vector fields in the whole-brain aLN model with 100 nodes and spike-triggered adaptation (b = 20 pA). Modes are ordered in decreasing order of explained variance. (B) Left panels: Modes explaining the largest proportion of variance for the whole-brain aLN model without spike-triggered adaptation (b = 0 pA) with a parcellation of 100, 200 and 500 nodes. Right panels: Same as before, but with spike-triggered adaptation (b = 20 pA) and with a parcellation of 200 and 500 nodes. The arrows represent the orientation in the xy plane (left-right and antero-posterior directions) and are color-coded according to the direction along the z-axis (dorso-ventral direction). (C) Percentage of explained variance (mean ± standard deviation across points in the parameter space) of the first 15 modes identified in (A) for the aLN model with 100 nodes and spike-triggered adaptation (b = 20 pA). The percentage is shown for the different pattern types identified in Section 3.2: uni/multistable (orange), fast metastable (blue), and slow metastable (green).

To verify whether the modes obtained from the decomposition of the concatenated data can be reliably identified in the individual velocity vector fields computed for each parametrization in the oscillatory regions LCEI and LCEA, we projected these modes and investigated the explained proportion of variance for the state types identified in Section 3.2 (i.e., fast metastable, slow metastable, uni/multistable). Figure 9C, Supplementary Figure SA12c show that, in general, the most dominant five modes, representing global propagation patterns, explain the largest proportion of variance in individual states regardless of state type. Nevertheless, the largest proportion of variance is explained in the stable states (>25% explained by the first five modes), followed by the slow (>10%), and the fast metastable states (<10%). We also observe that the first dominant mode identified in the concatenated data does not necessarily capture the largest proportion of variance in individual states (Figure 9C in contrast with Supplementary Figure SA12c), suggesting that while this pattern of activity is consistently present across states, it may not be dominant in all of them.

As a further example, we examined the spatial modes of activity in the LCEA region, obtained from the data concatenated over all points identified as unistable and with an average dominant frequency of at most 2 Hz, for the aLN and Wilson-Cowan models with 100 nodes and adaptation. Supplementary Figures SA10, 11 confirm the presence of large-scale activity patterns traveling along the horizontal and dorso-ventral directions, similar to the ones described above. For example, mode 1 in the aLN model and mode 7 in the Wilson-Cowan model are similar to modes 9 and 3, respectively, described above, whereas modes 2, 3, and 4 in both models are similar to modes 1 and 4 (aLN) and modes 2 and 4 (Wilson-Cowan) described above. Furthermore, we also observe that most spatial modes contain a small component propagating along the antero-posterior direction (for example, the arrows pointing anteriorly/posteriorly in the first two modes of both models), which is in agreement with previous reports on the antero-posterior direction of SO propagation (Cakan et al., 2022; Massimini et al., 2004). In both cases, the modes obtained from the decomposition of the unistable patterns in the LCEA region of slow oscillations explain a substantially higher proportion of variance than those obtained from the decomposition of the data concatenated over all state types in both oscillatory regions: 73.52% vs. 30.28% for the aLN and 58.99% vs. 9.19% for the Wilson-Cowan model, with the first mode explaining 26.71% of the variance (aLN) and 24.33% (Wilson-Cowan) vs. 9.31% and 3.72%.

3.4 Similarity of spatial modes under imbalanced short- versus long-range connection strengths

To identify the impact of the balance between short- and long-range connection strengths, we compared the 10% most dominant spatial modes (i.e., the spatial modes that explain the largest amount of variance in the spatial organization of activity patterns) of the activity induced by empirically informed and artificially manipulated connectivity matrices. We simulated both models for parameters corresponding to all four types of stability per resolution (see Supplementary Figure SA13 for the corresponding locations in state space). We used the average connectivity matrix $C$, whose resulting spatial modes are collected in the columns of $V$, and compared the results to those obtained for the empirical and the artificially manipulated matrices with weaker vs. stronger long-range connections, $C_{\text{weak-long}}^{\text{emp}}$, $C_{\text{weak-long}}^{\text{art}}$, $C_{\text{strong-long}}^{\text{emp}}$, and $C_{\text{strong-long}}^{\text{art}}$, whose resulting spatial modes are collected in $V_{\text{weak-long}}^{\text{emp}}$, $V_{\text{weak-long}}^{\text{art}}$, $V_{\text{strong-long}}^{\text{emp}}$, and $V_{\text{strong-long}}^{\text{art}}$, respectively. We then estimated the distribution of the values of the correlation coefficients $\mathrm{Corr}(V, V_{\text{strength}}^{\text{type}})$ for $\text{type} \in \{\text{emp}, \text{art}\}$ and $\text{strength} \in \{\text{weak-long}, \text{strong-long}\}$, where we normalized each distribution by its maximum value to allow visual comparison. Results are shown in Supplementary Figure SA16 for the aLN and in Supplementary Figure SA17 for the Wilson-Cowan model. Means and standard deviations of the distributions are given in Table 3 for the aLN and in Supplementary Table SA3 for the Wilson-Cowan model. Additionally, we show the resulting correlation coefficient matrices for all settings without adaptation in Supplementary Figures SA14 (aLN model) and SA15 (Wilson-Cowan model).
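As a rough illustration of this comparison, the sketch below computes the pairwise Pearson correlation coefficients between the columns of two spatial-mode matrices and a kernel density estimate of their values, normalized by its maximum. The function and variable names are hypothetical, and the Gaussian kernel density estimate is only one possible choice for the density estimation step.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch: correlations between the dominant spatial modes stored in the columns
# of V (average connectome) and V_alt (manipulated connectome), plus a density
# estimate of the correlation values normalized by its maximum.
def mode_correlations(V, V_alt):
    n = V.shape[1]
    corr = np.corrcoef(V.T, V_alt.T)[:n, n:]   # cross-block of the full matrix
    return corr

def normalized_density(corr, grid=np.linspace(-1.0, 1.0, 200)):
    kde = gaussian_kde(corr.ravel())
    density = kde(grid)
    return grid, density / density.max()
```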

Table 3

Table 3. Average standard deviation (σ) and mean (μ) of the density estimates shown in Supplementary Figure SA16 for the aLN model, per type of stability, with and without adaptation, and per resolution. For each resolution, the density estimates with the broadest width (with and without adaptation) are highlighted in bold.

All distributions are centered around zero. However, the standard deviations (Table 3) are largest for the fast metastable states (indicating a higher similarity between spatial modes). Because the model systems experience a larger number of metastable attractors, which induce fast switching between activity states, these states are inherently less stable than unistable or slow metastable states. Only for the cases of unistability with adaptation at resolutions $N \in \{100, 200\}$ are the standard deviations not largest. This is because the activity for those settings converges to a spatially homogeneous unistable state for all matrices, yielding $\mathrm{Corr}(V, V_{\text{strength}}^{\text{type}})_{nn} \approx 1$ on the diagonal and $\mathrm{Corr}(V, V_{\text{strength}}^{\text{type}})_{n\tilde{n}} \approx 0$ for $n \neq \tilde{n}$. Overall, the similarity is comparably low for all settings, and it is on average lower for the Wilson-Cowan model than for the aLN model (see the averages over standard deviations in Table 3; Supplementary Table SA3).

Nonetheless, we see a higher similarity of spatial organization in states of stability that promote more local, complex activity patterns rather than the global, synchronized patterns that appear in unistable or multistable states. While the states showing the broadest widths differ between the two models (multistable states for the Wilson-Cowan model vs. unistable or fast metastable states for the aLN model), the overall low similarity of the spatial organization between activity patterns induced by the average connectivity matrix and those induced by the connectivity matrices with weaker and stronger long-range connections generalizes across all resolutions, both model types, and all settings. Finally, the comparison of the spatial organization of activity patterns induced by the different connectivity matrices yields predominantly the same results for the artificial and the empirical matrices for both models and all resolutions.

3.5 Effect of the antero-posterior gradient of structural connectivity strengths on sleep SO propagation

The results presented above show that, for both the aLN and the Wilson-Cowan model, dynamical features remain generally robust to changes in the parcellation. Also, the phenomenological Wilson-Cowan model is capable of producing qualitatively similarly complex spatiotemporal dynamics as the biophysically realistic aLN model. In the current section, we explore whether this remains the case when both models are applied to the phenomenon of sleep SO propagation (Cakan et al., 2022). In particular, we examine whether the relation between the antero-posterior structural connectivity gradient and the propagation of sleep SOs as waves of silence from anterior to posterior brain areas is present in both models and for all parcellations. Furthermore, we test whether changes in the strength of this connectivity gradient have a causal effect on the direction of propagation of SOs.

Figure 10 shows that the relation reported in Cakan et al. (2022) is present in both the aLN and the Wilson-Cowan model for all three network resolutions. Furthermore, decreasing the gradient strength along the antero-posterior axis causes a reversal of the direction of SO propagation, with down states being initiated preferentially in posterior areas and traveling towards the front of the brain. Increasing the gradient strength increases the preference to propagate from anterior to posterior areas. In the Wilson-Cowan model, however, the relation between node degree and transition phase weakens with increasing resolution, as reflected by the decreasing magnitude of the correlation coefficients. This could be caused by the fact that, in the Wilson-Cowan model, the adaptation strength $b$ and the adaptation time constant $\tau_A$ had to be drastically increased at higher resolutions in order to observe SOs.
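The quantity plotted in Figure 10 can be thought of along the lines of the following sketch, in which the mean transition phases of all nodes are correlated with their coordinates along the antero-posterior axis; the array names are placeholders, and the actual gradient manipulation is described in the Methods.

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch: relating SO propagation to the antero-posterior axis.
# `phase_up_down` holds each node's mean transition phase from the up to the
# down state, `y_coord` the node coordinate along the antero-posterior axis.
def gradient_correlation(phase_up_down, y_coord):
    r, p = pearsonr(phase_up_down, y_coord)  # sign of r indicates direction
    return r, p
```

Repeating this computation for connectomes in which the antero-posterior gradient of connection strengths is enhanced, unchanged, or reversed yields curves such as those shown in Figure 10.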

Figure 10

Figure 10. Correlation coefficient between the mean transition phases of the nodes from the up to the down state (blue) and vice versa (orange) and the node coordinates along the antero-posterior axis as a function of the percentage by which the connection strengths of the most anterior node were changed. The left (right) column shows results for the aLN (Wilson-Cowan) models with 100 (top row), 200 (middle row), and 500 nodes (bottom row). 0% corresponds to the unchanged structural antero-posterior gradient where the value of the y-slope was not changed, −100% indicates that the gradient was enhanced, whereas +100% indicates that the gradient was reversed. Model parameters are given in Supplementary Tables SA4, 5.

To ensure that the results presented in Figure 10 are not due to changes in the underlying network topology induced by the specific gradient manipulation method, we employ a control model in which we preserve the total sum of connection strengths in the network while destroying the relation between fiber length and connection strength (cf. Section 2.3.6). Supplementary Figure SA19 shows that the relationship described above remains present in the aLN model at all three network resolutions. In the Wilson-Cowan model, destroying the relation between distance and connection strength disrupts and even reverses the propagation direction of SOs, suggesting that this model is more sensitive to changes in the particular structure of the connectome.
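One simple way to construct such a control connectome, shown here as an illustrative sketch rather than the exact procedure of Section 2.3.6, is to randomly permute the non-zero connection strengths: the total strength is preserved while the relation between fiber length and connection strength is destroyed.

```python
import numpy as np

# Sketch of a control connectome: permute the non-zero connection strengths of
# a symmetric connectivity matrix C (zero diagonal) so that the total strength
# is preserved but the relation between fiber length and strength is destroyed.
def shuffle_weights(C, seed=0):
    rng = np.random.default_rng(seed)
    C_ctrl = np.zeros_like(C)
    iu = np.triu_indices_from(C, k=1)          # upper triangle (undirected)
    weights = C[iu].copy()
    nonzero = weights > 0
    weights[nonzero] = rng.permutation(weights[nonzero])
    C_ctrl[iu] = weights
    return C_ctrl + C_ctrl.T                   # restore symmetry
```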

3.6 Stronger long-range connections lead to an increase in coherence as observed empirically

Motivated by findings that rare long-range connections play an important role in the cascade of information processing (see Deco et al., 2021b) and that stronger long-range connections correlate with enhanced coherence between cortical regions at lower frequencies (Liang et al., 2023), we investigated how changes in the strength of long- versus short-range connections influence waves of SOs.

Since long-range connections are assumed to play a crucial role in the propagation of global patterns, we expect that stronger long-range connections lead to higher coherence at the low frequencies induced by slow oscillations. We therefore compared results obtained using the matrices $C_{\text{strong-long}}$, $C_{\text{weak-long}}$, and $C$.

Figure 11 and Supplementary Figure SA19 show the average power spectra and coherence values for the aLN and the Wilson-Cowan models for the three different parcellations. In Figure 11A we see that, for all parcellations, the dominant temporal frequencies are below 1 Hz and the power spectra of the simulated SOs align with the power spectra of empirically recorded SOs (see the supplementary material of Cakan et al. (2022), Supplementary Figure S5b) across all models and resolutions. Nonetheless, the small differences between the power spectra for the different parcellations caused by the three connectivity matrices $C_{\text{strong-long}}$, $C_{\text{weak-long}}$, and $C$ are more pronounced for the empirical matrices, which is confirmed by the values of the dominant temporal frequency given in Supplementary Table SA6. Furthermore, the power at lower frequencies decreases with increasing resolution, in particular for the stronger long-range connections (blue line), see Supplementary Table SA6. The decrease in power is less pronounced in the artificial than in the empirical case. We argue that this is caused by the matrix $C$ being an average, so that its connection strengths are more evenly distributed rather than forming sparse connectivity profiles, unlike the empirical matrices $C^{\text{emp}}$. According to our method of manipulation, the connection strengths in the artificially manipulated matrices are also more evenly distributed than in the empirical matrices.
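The power spectra in Figure 11A can be estimated, for example, with Welch's method; the sketch below assumes a hypothetical array of simulated excitatory rates and illustrates how the dominant temporal frequency is read off the node-averaged spectrum.

```python
import numpy as np
from scipy.signal import welch

# Sketch: node-averaged power spectrum of simulated activity and its dominant
# temporal frequency. `rates` is a hypothetical (n_nodes, n_samples) array of
# excitatory firing rates sampled at `fs` Hz; `nperseg` is an assumed choice.
def average_spectrum(rates, fs=1000.0, nperseg=10000):
    f, pxx = welch(rates, fs=fs, nperseg=nperseg, axis=-1)
    mean_pxx = pxx.mean(axis=0)                 # average over nodes
    dominant = f[np.argmax(mean_pxx[1:]) + 1]   # skip the zero-frequency bin
    return f, mean_pxx, dominant
```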

Figure 11

Figure 11. Power and coherence as a function of frequency for SO activity generated by the aLN model. Results are shown for the average connectivity matrix $C$ (coral) and for the connectivity matrices with weaker, $C_{\text{weak-long}}$ (green), and stronger, $C_{\text{strong-long}}$ (blue), long-range connections. Every column corresponds to one parcellation. (A) Averaged power spectra on a logarithmic scale, with standard deviation, for the activity induced by each of the three connectivity matrices. The top (bottom) row shows the results for the artificially changed (empirically selected) connections. (B) Corresponding coherence values, plotted separately for nodes that are connected through short-range (solid lines) or long-range (dashed lines) connections. Model parameters are given in Supplementary Table SA4. Averages over standard deviations are given in the bottom row.

In the artificial case, the change of coherence with frequency for the aLN model (see Figure 11B) and for the Wilson-Cowan model (see Supplementary Figure SA20b) agrees with our expectation. The coherence at low frequencies is higher for SOs induced by $C_{\text{strong-long}}$ (blue lines) than by $C_{\text{weak-long}}$ (green lines), both between nodes connected with short-range (solid lines) and long-range (dashed lines) connections. This is also observable in the corresponding coherence values given in Table 4, where, in the artificial case, the coherence values at $f = 0.5$ Hz are higher for the SOs induced by $C_{\text{strong-long}}$ than by $C_{\text{weak-long}}$. With increasing resolution, the coherence values between nodes connected with short- and long-range connections align over the entire frequency range, due to an overall decrease of the coherence values between nodes connected by short-range connections (see Figure 11B; Supplementary Figure SA20b).
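A sketch of how such coherence curves can be computed is given below: magnitude-squared coherence is evaluated for every structurally connected node pair and averaged separately for short- and long-range connections. The fiber-length threshold and all variable names are illustrative assumptions, not the values used in our analysis.

```python
import numpy as np
from scipy.signal import coherence

# Sketch: mean coherence spectra for node pairs connected by short- vs.
# long-range connections. `rates` is (n_nodes, n_samples), `C` a connectivity
# matrix, `D` a fiber-length matrix, and `threshold_mm` an assumed cutoff.
def coherence_by_range(rates, C, D, fs=1000.0, threshold_mm=80.0, nperseg=4096):
    short, long_ = [], []
    f = None
    n = rates.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if C[i, j] == 0:
                continue                # only structurally connected pairs
            f, cxy = coherence(rates[i], rates[j], fs=fs, nperseg=nperseg)
            (short if D[i, j] < threshold_mm else long_).append(cxy)
    return f, np.mean(short, axis=0), np.mean(long_, axis=0)
```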

Table 4

Table 4. Maximum coherence values at non-zero frequencies for the metastable states of the aLN model for all settings shown in Figure 11B. For both the artificial and the empirical case, values in bold indicate the highest coherence values per parcellation and per set of nodes connected by short-range (long-range) connections. The corresponding frequencies were 0.5 Hz for all settings. Parameters are as for Figure 11.

The results of the Wilson-Cowan model mostly agree with those of the aLN model; however, the dominant temporal frequency varies more strongly depending on the parcellation and the connectivity matrix used (see Supplementary Table SA7). The coherence values are consistently larger for waves of SOs induced by $C_{\text{strong-long}}$ in the artificial case (see Supplementary Table SA8).

For the empirical matrices, we do not observe the expected effect in either model. We argue that this is due to the fitting process being applied to the averaged matrix $C$, whose distribution of connection strengths is more similar to that of the artificially manipulated connectivity matrices than to that of the empirical matrices.

Overall, all models and resolutions show the expected increase in coherence values at low frequencies for the artificially manipulated matrices, but do not display the same effect for the empirically selected matrices.

4 Discussion

In this work, we investigated whether generalized whole-brain models can be employed for the study of complex brain dynamics or whether the latter are significantly influenced by the specific choice of dynamical system and parcellation. To that end, we compared a biophysically realistic model (aLN) and a phenomenological model (Wilson-Cowan) with similar state spaces and bifurcations at three network resolutions (the Schaefer parcellation scheme with 100, 200, and 500 nodes). Overall, we found that the results remain relatively robust to changes in both model and parcellation, but that the detailed dynamics appear sensitive to these changes, indicating the need for careful model adjustment depending on the application.

We started our analysis with the exploration of the coarse-grained structure of the dynamical landscape. We found that both the aLN and the Wilson-Cowan model display a down state of no or low activity, an up state of constant high activity, a fast limit cycle in which the activity oscillates between low and high values at frequencies >10 Hz, a bistable regime in which the activity remains either in a stable up or a stable down state depending on the initial condition (with and without adaptation), and a slow limit cycle in which the activity oscillates at low frequencies (<2 Hz) in the case of finite adaptation. The state boundaries remained relatively robust to changes in network resolution and are in agreement with those previously reported in the literature (Cakan et al., 2022). Nevertheless, we reported the emergence of a region of bistability between the down state and the LCEI in the case without adaptation in the aLN model, and of heterogeneous oscillations in the case with adaptation for both models. Neither feature is present for a single node, and the corresponding regions enlarged with increasing network resolution. We hypothesize that this is due to the fact that the parcellations with higher resolutions exhibit stronger local connection strengths (Roberts et al., 2019), which in turn favor the emergence of more complex dynamics, such as heterogeneous oscillations.

In a second step, we classified the oscillatory network states. We identified four types of states, namely unistable, multistable, slow, and fast metastable states, in both models and at all resolutions, and observed quantitative changes in the distribution of each type of state in the oscillatory regimes both across models and across resolutions (Figures 7, 8). Our detailed analysis of the types of oscillatory network states revealed that complex wave dynamics emerge even at low network resolutions and in relatively simple phenomenological models. Furthermore, the detailed mapping of the oscillatory regimes presented here can provide useful information for further studies aiming to explore induced state transitions, for example through the application of electrical stimulation (see Ladenbauer et al. (2017); Ladenbauer et al. (2023)).

We explored large-scale patterns through the spatial modes obtained from singular value decomposition. We found that results are qualitatively similar across models and resolutions, but that specific patterns emerge depending on either model or resolution. Given that recent work (Das et al., 2024; Mohan et al., 2024) investigating the relation between spatiotemporal wave patterns and cognitive function has shown an association between specific patterns and specific behavioral processes, future modeling work in this direction should take into account the variability introduced by model and parcellation when exploring such phenomena. In alignment with the variability introduced by the model components investigated in this study, Roberts et al. (2019) additionally emphasize the importance of delays for enabling the emergence of complex wave patterns in whole-brain models. It is important to note that the conduction velocities used in this study, while in agreement with the work of Cakan et al. (2022), are higher compared to those of Petkoski and Jirsa (2022). Given that axonal delays can play a significant role in the dynamics of complex oscillatory networks, future work should investigate the effects of decreasing the conduction velocity on the emergent wave patterns.

We showed that changes in the balance of connectivity strengths between short- and long-range connections alter the spatial organization in states exhibiting global patterns (multi- and unistable) as well as in states exhibiting complex patterns (fast and slow metastable), a result which stays predominantly consistent across models, resolutions, parametrizations, and states (see Supplementary Figures SA16, 17). Artificially manipulating the long- versus short-range connection strengths beyond the empirically observed variability did not lead to a significantly different loss of similarity in the spatial organization of activity patterns than that observed for the empirical connectivity matrices. Furthermore, we noticed that the strongest similarity in the spatial modes collected from the activity patterns caused by the different connectivity matrices was observed in the fast and slow metastable states, in which complex local activity patterns emerge (see Kelso, 2012).

Furthermore, in the specific case of sleep SOs, we have shown that the aLN model is robust to changes in network resolution and even in parcellation scheme (as we used the original parametrization introduced in Cakan et al. (2022) with only minimal parameter adjustments). In this case, we were also able to demonstrate that changes in the antero-posterior structural connectivity gradient have a causal effect on the propagation of SOs. In contrast, the Wilson-Cowan model required optimization for the Schaefer parcellation scheme with 100 nodes and an additional adjustment of its parameters for higher resolutions. Here, manipulating the antero-posterior gradient of node degrees showed a robust causal effect only when the model parameters were explicitly fitted to data rather than adjusted to support SO activity. The model also displayed high sensitivity to changes in the relationship between connection strength and distance.

To understand the impact of changes in the strength of short- versus long-range connections on SOs, we investigated power spectra and coherence values (see Figure 11; Supplementary Figure SA20). For the artificially manipulated connectivity matrices, we found the coherence in lower frequency bands (<2 Hz) to be higher for matrices with stronger long-range connections than for the averaged matrix $C$ that was used for the fitting process and for $C_{\text{weak-long}}^{\text{art}}$. This agrees with the results of Liang et al. (2023), who also observed an increase in coherence between cortical regions connected by stronger long-range connections in mice. Our results are consistent across models and resolutions, highlighting that the temporal dynamics are robust to model choice and resolution and align reliably with empirical results under the manipulation of the structural connectivity.

For the empirical connectivity matrices $C^{\text{emp}}$ we found the opposite effect (see Figure 11B; Supplementary Figure SA20b). This could be due to the fitting process being conducted with the averaged matrix $C$. Since the artificially manipulated connectivity profiles are based on the averaged matrix $C$, their distributions of connection strengths are more similar to it, unlike the empirical connectivity matrices, which are characterized by rather sparse connectivity profiles. Additionally, it is important to highlight that probabilistic tractography tends to underestimate long-range connections due to distance-dependent reductions in streamline likelihood (Chang et al., 2023), which might further influence the richness of the spatiotemporal dynamics produced by whole-brain models. Nevertheless, this is an inherent limitation of the method, and the effect of potential mitigation strategies remains beyond the scope of the current work.

We thus conclude that the deployment of whole-brain models for the investigation of coarse-grained dynamics provides results which are fairly independent of model type and resolution. All model variants produce the same dynamical landscape, with qualitatively similar changes in dynamical features with resolution and with the manipulation of the connectivity profiles. In the specific application to sleep SOs, both the phenomenological and the biophysically realistic model show similar changes in the temporal dynamics. While the antero-posterior directionality of the SOs simulated by the aLN model follows the changes induced by the manipulation of the underlying antero-posterior structural connectivity gradient, the phenomenological Wilson-Cowan model requires much more careful handling to demonstrate the empirically observed directionality. Overall, this indicates that both model types are fairly robust when simulating empirically realistic temporal features, but less so for propagation dynamics. While computational costs increase with resolution (see Section 3.5), the investigation of the fine-grained dynamics of wave propagation benefits from higher resolution; for example, the higher-resolved parcellations showed more robustness to the manipulation of connectivity profiles (see Section 3.4) and to the manipulation of structural gradients (see Section 3.5). In addition, if wave propagation across cortical space becomes the subject of investigation, a high resolution is mandatory. Nonetheless, for the investigation of quantitative features, detailed dynamics, or specific application cases, the phenomenological Wilson-Cowan model requires much more careful handling and finer tuning, whereas the biophysically realistic aLN model allows the investigation of specific features in a more reliable way.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving humans were approved by Local Ethics Committee at the Universitätsmedizin Greifswald. The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent for participation was obtained from the participants in accordance with the national legislation and institutional requirements.

Author contributions

CD: Conceptualization, Formal Analysis, Methodology, Visualization, Writing – original draft, Writing – review and editing. RS: Conceptualization, Formal Analysis, Methodology, Visualization, Writing – original draft, Writing – review and editing. AF: Investigation, Writing – review and editing. KO: Funding acquisition, Supervision, Writing – review and editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project number 327654276, SFB1315, project number B03 (KO and AF) and project grants to AF: Research Unit 5429/1 (467143400), FL 379/34-1, FL 379/35-1.

Acknowledgments

A preprint version of this paper was published as Dimulescu et al. (2025).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnetp.2025.1589566/full#supplementary-material

Footnotes

1Breakspear et al. (2003) refer to excitatory couplings between cortical columns as long-range connections.

References

Augustin, M., Ladenbauer, J., Baumann, F., and Obermayer, K. (2017). Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: comparison and implementation. PLoS Comput. Biol. 13, e1005545. doi:10.1371/journal.pcbi.1005545


Bhattacharya, S., Brincat, S. L., Lundqvist, M., and Miller, E. K. (2022). Traveling waves in the prefrontal cortex during working memory. PLoS Comput. Biol. 18, e1009827. doi:10.1371/journal.pcbi.1009827


Breakspear, M., Terry, J. R., and Friston, K. J. (2003). Modulation of excitatory synaptic coupling facilitates synchronization and complex dynamics in a biophysical model of neuronal dynamics. Netw. Comput. Neural Syst. 14, 703–732. doi:10.1088/0954-898X/14/4/305


Cabral, J., Kringelbach, M., and Deco, G. (2012). Functional graph alterations in schizophrenia: a result from a global anatomic decoupling? Pharmacopsychiatry 45, S57–S64. doi:10.1055/s-0032-1309001


Cakan, C., Dimulescu, C., Khakimova, L., Obst, D., Flöel, A., and Obermayer, K. (2022). Spatiotemporal patterns of adaptation-induced slow oscillations in a whole-brain model of slow-wave sleep. Front. Comput. Neurosci. 15, 800101. doi:10.3389/fncom.2021.800101


Cakan, C., Jajcay, N., and Obermayer, K. (2021). neurolib: a simulation framework for whole-brain neural mass modeling. Cogn. Comput. 15, 1132–1152. doi:10.1007/s12559-021-09931-9


Cakan, C., and Obermayer, K. (2020). Biophysically grounded mean-field models of neural populations under electrical stimulation. PLoS Comput. Biol. 16, e1007822. doi:10.1371/journal.pcbi.1007822


Capone, C., De Luca, C., De Bonis, G., Gutzen, R., Bernava, I., Pastorelli, E., et al. (2023). Simulations approaching data: cortical slow waves in inferred models of the whole hemisphere of mouse. Commun. Biol. 6, 266. doi:10.1038/s42003-023-04580-0


Chang, Y.-N., Halai, A. D., and Lambon Ralph, M. A. (2023). Distance-dependent distribution thresholding in probabilistic tractography. Hum. Brain Mapp. 44, 4064–4076. doi:10.1002/hbm.26330


Das, A., Zabeh, E., Ermentrout, B., and Jacobs, J. (2024). Planar, spiral, and concentric traveling waves distinguish cognitive states in human memory. bioRxiv. doi:10.1101/2024.01.26.577456


Dasilva, M., Camassa, A., Navarro-Guzman, A., Pazienti, A., Perez-Mendez, L., Zamora-López, G., et al. (2021). Modulation of cortical slow oscillations and complexity across anesthesia levels. NeuroImage 224, 117415. doi:10.1016/j.neuroimage.2020.117415


Deco, G., Kringelbach, M. L., Arnatkeviciute, A., Oldham, S., Sabaroedin, K., Rogasch, N. C., et al. (2021a). Dynamical consequences of regional heterogeneity in the brain's transcriptional landscape. Sci. Adv. 7, eabf4752. doi:10.1126/sciadv.abf4752


Deco, G., Kringelbach, M. L., Jirsa, V. K., and Ritter, P. (2017). The dynamics of resting fluctuations in the brain: metastability and its dynamical cortical core. Sci. Rep. 7, 3095. doi:10.1038/s41598-017-03073-5


Deco, G., Perl, Y. S., Vuust, P., Tagliazucchi, E., Kennedy, H., and Kringelbach, M. L. (2021b). Rare long-range cortical connections enhance human information processing. Curr. Biol. 31, 4436–4448.e5. doi:10.1016/j.cub.2021.07.064


Dimulescu, C., Strömsdörfer, R., Flöel, A., and Obermayer, K. (2025). On the robustness of the emergent spatiotemporal dynamics in biophysically realistic and phenomenological whole-brain models at multiple network resolutions


Ester, M., Kriegel, H.-P., Sander, J., and Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. Knowl. Discov. Data Min. 96, 226–231.


Fousek, J., Rabuffo, G., Gudibanda, K., Sheheitli, H., Petkoski, S., and Jirsa, V. (2024). Symmetry breaking organizes the brain’s resting state manifold. Sci. Rep. 14, 31970. doi:10.1038/s41598-024-83542-w


Freyer, F., Aquino, K., Robinson, P. A., Ritter, P., and Breakspear, M. (2009). Bistability and non-Gaussian fluctuations in spontaneous cortical activity. J. Neurosci. 29, 8512–8524. doi:10.1523/JNEUROSCI.0754-09.2009


Freyer, F., Roberts, J. A., Becker, R., Robinson, P. A., Ritter, P., and Breakspear, M. (2011). Biophysical mechanisms of multistability in resting-state cortical rhythms. J. Neurosci. 31, 6353–6361. doi:10.1523/JNEUROSCI.6693-10.2011


Hashemi, M., Depannemaecker, D., Saggio, M., Triebkorn, P., Rabuffo, G., Fousek, J., et al. (2025). Principles and operation of virtual brain twins. IEEE Rev. Biomed. Eng. PP, 1–29. doi:10.1109/RBME.2025.3562951


Illoul, L., and Lorong, P. (2011). On some aspects of the cnem implementation in 3d in order to simulate high speed machining or shearing. Comput. & Struct. 89, 940–958. doi:10.1016/j.compstruc.2011.01.018


Jirsa, V., Wang, H., Triebkorn, P., Hashemi, M., Jha, J., Gonzalez-Martinez, J., et al. (2023). Personalised virtual brain models in epilepsy. Lancet Neurology 22, 443–454. doi:10.1016/S1474-4422(23)00008-X


Kelso, J. S. (2012). Multistability and metastability: understanding dynamic coordination in the brain. Philosophical Trans. R. Soc. B Biol. Sci. 367, 906–918. doi:10.1098/rstb.2011.0351


Kilpatrick, Z. P. (2013). Wilson-Cowan model. New York, NY: Springer, 1–5. doi:10.1007/978-1-4614-7320-6_80-1


Ladenbauer, J., Khakimova, L., Malinowski, R., Obst, D., Tönnies, E., Antonenko, D., et al. (2023). Towards optimization of oscillatory stimulation during sleep. Neuromodulation Technol. A. T. Neural Interface 26, 1592–1601. doi:10.1016/j.neurom.2022.05.006


Ladenbauer, J., Ladenbauer, J., Külzow, N., de Boor, R., Avramova, E., Grittner, U., et al. (2017). Promoting sleep oscillations and their functional coupling by transcranial stimulation enhances memory consolidation in mild cognitive impairment. J. Neurosci. 37, 7111–7124. doi:10.1523/JNEUROSCI.0260-17.2017


Levenstein, D., Buzsáki, G., and Rinzel, J. (2019). NREM sleep in the rodent neocortex and hippocampus reflects excitable dynamics. Nat. Commun. 10, 2478. doi:10.1038/s41467-019-10327-5


Liang, Y., Liang, J., Song, C., Liu, M., Knöpfel, T., Gong, P., et al. (2023). Complexity of cortical wave patterns of the wake mouse cortex. Nat. Commun. 14, 1434. doi:10.1038/s41467-023-37088-6


Massimini, M., Huber, R., Ferrarelli, F., Hill, S., and Tononi, G. (2004). The sleep slow oscillation as a traveling wave. J. Neurosci. 24, 6862–6870. doi:10.1523/JNEUROSCI.1318-04.2004


Mohan, U. R., Zhang, H., Ermentrout, B., and Jacobs, J. (2024). The direction of theta and alpha travelling waves modulates human memory processing. Nat. Hum. Behav. 8, 1124–1135. doi:10.1038/s41562-024-01838-3


Muller, L., Piantoni, G., Koller, D., Cash, S. S., Halgren, E., and Sejnowski, T. J. (2016). Rotating waves during human sleep spindles organize global patterns of activity that repeat precisely through the night. eLife 5, e17267. doi:10.7554/eLife.17267


Muller, L., Reynaud, A., Chavane, F., and Destexhe, A. (2014). The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave. Nat. Commun. 5, 3675. doi:10.1038/ncomms4675


Papadopoulos, L., Lynn, C. W., Battaglia, D., and Bassett, D. S. (2020). Relations between large-scale brain connectivity and effects of regional stimulation depend on collective dynamical state. PLOS Comput. Biol. 16, e1008144–43. doi:10.1371/journal.pcbi.1008144


Pessoa, L. (2022). The entangled brain: how perception, cognition, and emotion are woven together. MIT Press.


Petkoski, S., and Jirsa, V. K. (2022). Normalizing the brain connectome for communication through synchronization. Netw. Neurosci. 6, 722–744. doi:10.1162/netn_a_00231


Pinto, D. J., Brumberg, J. C., Simons, D. J., Ermentrout, G. B., and Traub, R. (1996). A quantitative population model of whisker barrels: Re-examining the Wilson-Cowan equations. J. Comput. Neurosci. 3, 247–264. doi:10.1007/BF00161134


Popovych, O. V., Jung, K., Manos, T., Diaz-Pier, S., Hoffstaedter, F., Schreiber, J., et al. (2021). Inter-subject and inter-parcellation variability of resting-state whole-brain dynamical modeling. NeuroImage 236, 118201. doi:10.1016/j.neuroimage.2021.118201


Proix, T., Spiegler, A., Schirner, M., Rothmeier, S., Ritter, P., and Jirsa, V. K. (2016). How do parcellation size and short-range connectivity affect dynamics in large-scale brain network models? NeuroImage 142, 135–149. doi:10.1016/j.neuroimage.2016.06.016


Rasch, B., and Born, J. (2013). About sleep’s role in memory. Physiol. Rev. 93, 681–766. doi:10.1152/physrev.00032.2012


Richardson, M. J. E. (2007). Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Phys. Rev. E - Stat. nonlinear, soft matter Phys. 76 (2 Pt 1), 021919. doi:10.1103/PhysRevE.76.021919


Roberts, J. A., Gollo, L. L., Abeysuriya, R. G., Roberts, G., Mitchell, P. B., Woolrich, M. W., et al. (2019). Metastable brain waves. Nat. Commun. 10, 1056. doi:10.1038/s41467-019-08999-0


Sanchez-Vives, M. V., Massimini, M., and Mattia, M. (2017). Shaping the default activity pattern of the cortical network. Neuron 94, 993–1001. doi:10.1016/j.neuron.2017.05.015


Schaefer, A., Kong, R., Gordon, E. M., Laumann, T. O., Zuo, X.-N., Holmes, A. J., et al. (2018). Local-global parcellation of the human cerebral cortex from intrinsic functional connectivity MRI. Cereb. Cortex 28, 3095–3114. doi:10.1093/cercor/bhx179


Sporns, O. (2022). The complex brain: connectivity, dynamics, information. Trends Cognitive Sci. 26, 1066–1067. doi:10.1016/j.tics.2022.08.002


Torao-Angosto, M., Manasanch, A., Mattia, M., and Sanchez-Vives, M. V. (2021). Up and down states during slow oscillations in slow-wave sleep and different levels of anesthesia. Front. Syst. Neurosci. 15, 609645. doi:10.3389/fnsys.2021.609645


Townsend, R. G., and Gong, P. (2018). Detection and analysis of spatiotemporal patterns in brain activity. PLoS Comput. Biol. 14, e1006643. doi:10.1371/journal.pcbi.1006643


Wang, H. E., Triebkorn, P., Breyton, M., Dollomaja, B., Lemarechal, J.-D., Petkoski, S., et al. (2024). Virtual brain twins: from basic neuroscience to clinical use. Natl. Sci. Rev. 11, nwae079. doi:10.1093/nsr/nwae079


Wilson, H. R., and Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical J. 12, 1–24. doi:10.1016/S0006-3495(72)86068-5


Zbilut, J. P., Zaldivar-Comenges, J.-M., and Strozzi, F. (2002). Recurrence quantification based Liapunov exponents for monitoring divergence in experimental data. Phys. Lett. A 297, 173–181. doi:10.1016/S0375-9601(02)00436-X


Keywords: whole-brain modeling, network resolution, neural mass modeling, spatiotemporal dynamics, slow oscillations, network physiology

Citation: Dimulescu C, Strömsdörfer R, Flöel A and Obermayer K (2025) On the robustness of the emergent spatiotemporal dynamics in biophysically realistic and phenomenological whole-brain models at multiple network resolutions. Front. Netw. Physiol. 5:1589566. doi: 10.3389/fnetp.2025.1589566

Received: 07 March 2025; Accepted: 19 June 2025;
Published: 08 August 2025.

Edited by:

Viktor Jirsa, Aix-Marseille Université, France

Reviewed by:

Spase Petkoski, Aix Marseille Université, France
Yusuke Takeda, Advanced Telecommunications Research Institute International (ATR), Japan

Copyright © 2025 Dimulescu, Strömsdörfer, Flöel and Obermayer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ronja Strömsdörfer, stroemsdoerfer@tu-berlin.de

These authors share first authorship

ORCID: Agnes Flöel, orcid.org/0000-0002-1475-5872
