Heterogeneity extends criticality

Criticality has been proposed as a mechanism for the emergence of complexity, life, and computation, as it exhibits a balance between order and chaos. In classic models of complex systems where structure and dynamics are considered homogeneous, criticality is restricted to phase transitions, leading either to robust (ordered) or fragile (chaotic) phases for most of the parameter space. Many real-world complex systems, however, are not homogeneous. Some elements change in time faster than others, with slower elements (usually the most relevant) providing robustness, and faster ones being adaptive. Structural patterns of connectivity are also typically heterogeneous, characterized by few elements with many interactions and most elements with only a few. Here we take a few traditionally homogeneous dynamical models and explore their heterogeneous versions, finding evidence that heterogeneity extends criticality. Thus, parameter fine-tuning is not necessary to reach a phase transition and obtain the benefits of (homogeneous) criticality. Simply adding heterogeneity can extend criticality, making the search/evolution of complex systems faster and more reliable. Our results add theoretical support for the ubiquitous presence of heterogeneity in physical, biological, social, and technological systems, as natural selection can exploit heterogeneity to evolve complexity “for free”. In artificial systems and biological design, heterogeneity may also be used to extend the parameter range that allows for criticality.

It is often argued that critical dynamics are prevalent or desirable in a broad variety of systems because they offer a balance between robustness and adaptability (Langton, 1990; Kauffman, 1993; Hidalgo et al., 2016). If dynamics are too ordered, then information and functionality can be preserved, but it is difficult to adapt. The opposite occurs with chaotic dynamics: change allows for adaptability, but it also leads to fragility, as small changes percolate through the system. Thus, for phenomena such as life, computation, and complex systems in general, critical dynamics should be favored by evolutionary processes (Gershenson, 2012; Torres-Sosa et al., 2012; Roli et al., 2018).
There are different ways in which one can measure criticality, many of which are related to entropies. For example, Fisher information maximizes at phase transitions (Wang et al., 2011; Prokopenko et al., 2011). Still, it decreases rapidly, and it is difficult to evaluate how far a system is from criticality. In this work, we use a measure of complexity (Fernández et al., 2014; Santamaría-Bonfil et al., 2016) based on Shannon information that also maximizes at phase transitions, but reduces its value more gradually and is straightforward to calculate compared to Fisher information, as the latter requires measuring the effects of controlled perturbations. There are several definitions and measures of complexity (Lloyd, 2001), but, crucially, the one we use here is highly correlated with criticality.
If criticality is found only near a phase transition, then most of a parameter space would have "undesirable" solutions. Thus, how can a search procedure find the right parameters for criticality? Self-organized criticality (Bak et al., 1987; Adami, 1995; Hesse and Gross, 2014; Vidiella et al., 2020) has been proposed as an answer. Although interesting and useful for specific cases, it is not universal and has hidden variables. In general, one can think of different mechanisms that will find or adjust parameters so that criticality is achieved. But, could criticality be more prevalent than previously thought?
In previous work where we have studied rank dynamics in a variety of systems (Cocho et al., 2015; Morales et al., 2016, 2018; Iñiguez et al., 2022), we observe that the most relevant elements change more slowly than less relevant elements. We hypothesized that heterogeneous temporality equips systems with robustness and adaptability at the same time. Here we explore the role of heterogeneity in different dynamical systems. We show that different types of heterogeneity extend the parameter region where critical dynamics are observed. Thus, we can say that heterogeneity results in "criticality for free", reducing the problem of fine-tuning parameters.

Results
We first present results of a heterogeneous version of the Ising model, where elements have different temperatures. We then explore structural and temporal heterogeneity in random Boolean networks. Afterwards, we abstract the specific dynamics of a system and investigate under which conditions heterogeneity promotes criticality. Finally, we provide a general solution, independent of any measure, using Jensen's inequality.

Value heterogeneity: the Ising model
We can consider a system of interacting atoms arranged in a network-like structure (Fig. 1A). The state of an atom is defined by its dipole nuclear magnetic moment: a two-valued spin representing the orientation of the magnetic field produced by the atom. Intuitively, neighboring atoms with the same spin value contribute less to the total energy of the system than atoms with different spin values. Systems of this kind evolve preferentially to states with the lowest possible energy. When the temperature of the environment is increased, the system heats, and we can observe a sudden change in a global property of the system, namely loss of magnetization. A theoretical model of such a system of atoms is the Ising model (Ising, 1925; Glauber, 1963).
The Ising model is usually homogeneous: all cells have the same temperature, and one explores different properties as the temperature T varies. This is a good assumption when all atoms can be considered to behave in a similar way. However, if we are modeling an Ising-like biological system (Hopfield, 1982), then each element might have slightly different properties. In the proposed heterogeneous case, each cell has a temperature taken from a Poisson distribution with a mean equal to the temperature of the homogeneous case (see Sec. 4.1 for details).
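The heterogeneous rule can be sketched in a few lines of Python. This is a minimal sketch, not the authors' code: the function names (`poisson`, `heterogeneous_ising_sweep`) and the zero-temperature handling (only energy-lowering or neutral flips accepted) are our assumptions; per-cell temperatures are drawn from a Poisson distribution whose mean is the homogeneous temperature, as described in Sec. 4.1.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def heterogeneous_ising_sweep(spins, temps, L, rng):
    """One Metropolis sweep of an L x L lattice with periodic boundaries,
    where each cell (i, j) uses its own temperature temps[i][j]."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb  # energy change of flipping spin (i, j)
        T = temps[i][j]
        # At T = 0, only energy-lowering (or neutral) flips are accepted
        if dE <= 0 or (T > 0 and rng.random() < math.exp(-dE / T)):
            spins[i][j] *= -1

rng = random.Random(1)
L = 16
# Each cell draws its temperature from a Poisson distribution with mean 2.27
temps = [[poisson(2.27, rng) for _ in range(L)] for _ in range(L)]
spins = [[1] * L for _ in range(L)]
heterogeneous_ising_sweep(spins, temps, L, rng)
```

At every site the acceptance test uses the local temperature instead of a global one; everything else is the standard Metropolis rule.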
Following Lopez-Ruiz et al. (1995), we have proposed a measure of complexity (Fernández et al., 2014) based on Shannon's information (Shannon, 1948), I = −K Σ_i p_i log_2 p_i, where K is a positive constant and b is the length of the alphabet (for all the cases considered in this paper, b = 2). This measure is equivalent to the Boltzmann-Gibbs entropy. To normalize I to [0, 1], we use K = 1/log_2 b. I is maximal when the probabilities are homogeneous, i.e. there is the same probability of observing any symbol along a string. I is minimal when only one symbol is found in a string (so it has a probability of one, and all the rest have a probability of zero). Chaotic dynamics are characterized by a high I, while ordered (static) dynamics are characterized by a low I. Inspired by Lopez-Ruiz et al. (1995), we define complexity C as the balance between ordered and chaotic dynamics,

C = 4 · I · (1 − I),    (3)

where the constant 4 is added to normalize the measure to [0, 1] (Santamaría-Bonfil et al., 2017).
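The measure can be computed directly from these definitions. A minimal sketch, assuming binary strings (b = 2) and the normalization K = 1/log_2 b; the function names are ours.

```python
from collections import Counter
from math import log2

def shannon_info(string, b=2):
    """Normalized Shannon information I in [0, 1] of a string over b symbols."""
    counts = Counter(string)
    n = len(string)
    # Dividing by log2(b) plays the role of the constant K = 1/log2(b)
    return -sum((c / n) * log2(c / n) for c in counts.values()) / log2(b)

def complexity(string, b=2):
    """C = 4 * I * (1 - I): maximal when order and chaos are balanced."""
    i = shannon_info(string, b)
    return 4 * i * (1 - i)
```

Both the fully random string "0101" (I = 1) and the static string "0000" (I = 0) have zero complexity; intermediate mixtures maximize C.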
Figure 1B shows the correlation of the Ising model for varying temperature. This is maximal at the phase transition at T ≈ 2.27, i.e. criticality. Figure 1C shows that there is a correspondence between the correlation and the complexity measure in Eq. 3. Figure 1D shows results of average complexity C as T increases. Complexity is maximal near the phase transition for the homogeneous case. Heterogeneity shifts the expected maximum complexity (that reflects criticality), but it also expands it, in the sense that the area under the curve is broadened. In other words, critical-like dynamics (one can arbitrarily take complexity values greater than 0.8, just for comparison) are found for a broader range of T values.

Temporal and structural heterogeneity: random Boolean networks
A gene is a part of the genomic sequence that encodes how to produce (synthesise) either a protein or some RNA (gene products). Gene product synthesis is called gene expression. Because not all gene products are synthesised at the same time, the regulation of gene expression is constantly taking place within a cell. In fact, the expression of each gene is regulated (among many things) by the expression of other genes in the genome. This gives rise to an interaction structure known as a genetic regulatory network. Boolean networks are a theoretical model of genetic regulatory networks. In random Boolean networks (RBNs) (Kauffman, 1969, 1993), traditionally there is homogeneous topology and updating. In this case, critical dynamics are found close to a phase transition between ordered and chaotic phases (Derrida and Pomeau, 1986; Luque and Solé, 1997; Wang et al., 2010). Figure 2A shows an example of the topology of an RBN with seven nodes (N = 7) and two connections (inputs K) each. Each node has a lookup table where all possible combinations of its inputs are specified (e.g. Figure 2B). Using an ensemble approach, for each parameter combination, we randomly generate topologies (structure) and lookup tables (function), and then evaluate them in simulations. Depending on different parameters, the dynamics of RBNs can be classified as ordered, critical (near a phase transition), and chaotic. Figure 2C shows examples of these dynamics for different K values.
One can have heterogeneous topology in different ways (Oosawa and Savageau, 2002; Aldana, 2003), as genetic regulatory networks are not homogeneous: few genes affect many genes, and many genes affect few genes. Here, we use Poisson and exponential distributions. Strictly speaking, both are heterogeneous, but exponential is more heterogeneous than Poisson, which here we consider as "homogeneous". The technical reason for using a Poisson distribution is that it allows us to explore non-integer average connectivity in the network.
We can also have heterogeneous updating schemes (Gershenson, 2002), as it can be argued that not all genes in a network "march in step" (Harvey and Bossomaier, 1997). Classical RBNs (CRBNs) have synchronous, homogeneous temporality, while here we use Deterministic Generalized Asynchronous RBNs (DGARBNs) for heterogeneous temporality. In particular, each node is updated every number of time steps equal to its out-degree, so the more nodes one node affects, the slower it will be updated (see Sec. 4.2 for details).
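The updating schedule just described can be expressed compactly. A minimal sketch (the function name is ours); it only lists which nodes fire at a given time step, assuming nodes with out-degree zero are never updated.

```python
def nodes_to_update(t, out_degrees):
    """DGARBN-style schedule: node a fires when the time step t is a multiple
    of its out-degree (nodes with out-degree 0 never fire in this sketch)."""
    return [a for a, k in enumerate(out_degrees) if k > 0 and t % k == 0]

# Nodes with out-degrees 1, 2, 3, 4: at t = 4 only nodes 0, 1, 3 update
print(nodes_to_update(4, [1, 2, 3, 4]))  # → [0, 1, 3]
```

A node with a large out-degree (one that many nodes depend on) thus changes rarely, while peripheral nodes update often.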
Fig. 2D compares the average complexity C as the average connectivity K is increased. Structural and temporal homogeneity (CRBN-Poisson) has a classical complexity profile, maximizing near the phase transition (K = 2 in the thermodynamic limit, i.e., N → ∞). It can be seen that structural heterogeneity alone (CRBN-Exponential) extends criticality more than temporal heterogeneity alone (DGARBN-Poisson), which basically shifts the curve to the right. Still, having both structural and temporal heterogeneity (DGARBN-Exponential) extends criticality even more than having only structural heterogeneity.

Arbitrary complexity
Abstracting the results from the previous subsections, and trying not to depend on any model in particular, we can explore exhaustively the measure of complexity (Eq. 3) in homogeneous and heterogeneous settings, to observe when each case yields a higher average complexity. So we simply vary the probability p_1 of having ones in a binary string directly, as shown in Figure 3A.
In the homogeneous case, we calculate directly the complexity C as a function of p_1 using Eq. 3, assuming that we are averaging the complexities of several elements with the same p_1. For the heterogeneous case, we generate a collection of probabilities with mean p_1 and standard deviation of 0.2 (truncating negative values to zero and values greater than one to one), calculate their complexity, and then average it. Heterogeneity achieves higher complexities for roughly 0.25 < p_1 < 0.75. One might wonder why all heterogeneous complexities avoid extreme values, even when heterogeneous RBNs can have complexities close to zero and one. This is because of the standard deviation of the distributions from which the means are generated. Smaller standard deviations yield curves closer to the homogeneous case.
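The comparison can be sketched as follows. A minimal sketch, not the original experiment: the truncated Gaussian spread around p_1 is our assumption for how the collection of probabilities is generated, and the function names are ours.

```python
import random
from math import log2

def C(p1):
    """Complexity C = 4*I*(1 - I) of a Bernoulli(p1) binary source."""
    if p1 <= 0.0 or p1 >= 1.0:
        return 0.0
    I = -(p1 * log2(p1) + (1 - p1) * log2(1 - p1))
    return 4 * I * (1 - I)

def avg_C_heterogeneous(p1, sd=0.2, n=10000, seed=0):
    """Average C over a collection whose individual probabilities are spread
    around p1 (Gaussian with standard deviation sd, truncated to [0, 1])."""
    rng = random.Random(seed)
    ps = [min(1.0, max(0.0, rng.gauss(p1, sd))) for _ in range(n)]
    return sum(C(p) for p in ps) / n
```

At p_1 = 0.5 the homogeneous complexity is exactly zero (I = 1), while the heterogeneous average is clearly positive; near the homogeneous peaks the spread instead pulls the average down.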
By assuming that heterogeneity sometimes will be better than homogeneity and vice versa, we can further generalize our results to be independent of any measure or function. If we have homogeneity of a variable x, all elements will have the same value of x, and thus the mean ⟨x⟩ will be equal to any x_i. Thus, the average of any function ⟨f(x)⟩ will be equal to any f(x_i). If we have heterogeneity, then the mean ⟨x⟩ will be given by some distribution of different values of x, and similarly for ⟨f(x)⟩.
We can then say that heterogeneity is preferred when the average of the function is greater than the function of the average, ⟨f(x)⟩ > f(⟨x⟩). Jensen's inequality (McShane, 1937) tells us already that heterogeneity will be "better" than homogeneity for convex functions, as illustrated in Figure 3B. For more complex functions, their convex parts will benefit from heterogeneity and their concave parts will benefit from homogeneity (as can be seen for C in Figure 3A).
For linear functions, it can be shown that there is no difference between homogeneity and heterogeneity, as f(⟨x⟩) will always be equal to ⟨f(x)⟩ (see proof in Section 4.3). Thus, it can be concluded that the difference between homogeneity and heterogeneity is relevant only for nonlinear functions.
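Both facts can be checked numerically; the example functions and the sample distribution below are arbitrary choices for illustration.

```python
import random
import statistics

rng = random.Random(42)
xs = [rng.uniform(0, 4) for _ in range(100000)]
mean = statistics.fmean(xs)

linear = lambda x: 3 * x + 1
convex = lambda x: x * x

# Linear: the function of the average equals the average of the function
assert abs(linear(mean) - statistics.fmean(map(linear, xs))) < 1e-6

# Convex: the average of the function exceeds the function of the average
# (Jensen's inequality), so heterogeneity "wins" on convex regions
assert statistics.fmean(map(convex, xs)) > convex(mean)
```

The gap in the convex case is exactly the variance of x (E[x²] − ⟨x⟩² = Var(x)), so more heterogeneity means a larger advantage.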

Discussion
There are several recent examples in the literature of heterogeneity offering advantages when compared to homogeneous systems. For example, in public transportation systems, theory tells us that passengers are served optimally (wait at stations for a minimum time) if headways are equal, i.e., homogeneous. However, equal headways are unstable (Gershenson and Pineda, 2009; Chen et al., 2021). Still, adaptive heterogeneous headways can deliver supraoptimal performance through self-organization (Gershenson, 2011; Carreón et al., 2017), due to the slower-is-faster effect (Gershenson and Helbing, 2015): passengers do wait longer at stations, but once they board a vehicle, on average they reach their destination faster, as the idling required to maintain equal headways is avoided.
There are other examples where heterogeneity promotes synchronization (see Zhang et al. (2021) and references therein). In particular, Zhang et al. (2021) shows that random parameter heterogeneity among oscillators can consistently rescue the system from losing synchrony. In related work, Molnar et al. (2021) finds that heterogeneous generators improve stability in power grids. Recently, Ratnayake et al. (2021) explored complex networks with heterogeneous nodes, observing that these have greater robustness as compared to networks with homogeneous nodes. In social networks, Zhou et al. (2020) finds that heterogeneity of social status may drive the network evolution towards self-optimization. Also, structural heterogeneity has been shown to favor the evolution of cooperation (Santos et al., 2006, 2008).
These examples suggest that heterogeneous networks improve information processing. With heterogeneity, elements can in principle process information differently, potentially increasing the computing power of a heterogeneous system over a homogeneous one with similar characteristics. This is related to Ashby's law of requisite variety (Ashby, 1956; Gershenson, 2015), which states that an active controller should have at least the same variety (number of states) as the controlled. It is straightforward to see with random Boolean networks that temporal heterogeneity increases the variety of the system: the state space (of size 2^N for homogeneous temporality) can explode once we have to include the precise periods and phases of all nodes (in heterogeneous temporality), as different combinations of the temporal substates may lead from the same node substate to different node substates. Also in random Boolean networks, higher K implies more possible nets. Even if there are evolutionary pressures for efficiency (smaller networks), if heterogeneity shifts criticality to higher K, then it will be easier for an evolutionary search to find critical dynamics in larger spaces. Shannon's (1948) information, equivalent to Boltzmann-Gibbs entropy, is maximal when the probability of every symbol or state is the same, i.e. homogeneous. Thus, one can measure heterogeneity as an inverse of entropy (one minus the normalized Shannon's information) (Fernández et al., 2014). It is clear that maximum heterogeneity (as measured here, it would occur when only one symbol or state has a probability of one and all the rest a probability of zero) has its limitations. Thus, we can assume that there will be an "optimal" balance between minimum and maximum heterogeneities. The precise balance will probably depend on the system and its context, and may even change in time. If we want heterogeneity to take the dynamics towards criticality (or somewhere else), then the precise "optimal" heterogeneity will depend on how far we are from criticality (Gershenson, 2012; Pineda et al., 2019). In this sense, a potential relationship with no-free-lunch theorems (Wolpert and Macready, 1995, 1997) seems an interesting area of further research.
When homogeneous systems are analyzed in terms of their symmetries, heterogeneity is a type of symmetry breaking. Still, in converse symmetry breaking (Nishikawa and Motter, 2016), only heterogeneity leads to stability, i.e. the system symmetry is broken to preserve the state symmetry. This idea can be used to control the stability of complex systems using heterogeneity (Nicolaou et al., 2021). A further avenue of research is the relationship between heterogeneity and Lévy flights (Iñiguez et al., 2022). Lévy flights are heterogeneous, since they consist of many short jumps and few large ones. They offer a balance between exploration and exploitation, and seem advantageous for foraging (Ramos-Fernández et al., 2004), preventing extinctions (Dannemann et al., 2018), and search algorithms (Martínez-Arévalo et al., 2020). Another interesting relationship to study is the one between heterogeneity and non-reciprocal systems (Fruchart et al., 2021).
Network science (Albert and Barabási, 2002; Newman, 2003; Barabási, 2016) has demonstrated the relevance of structural heterogeneity. This should be complemented with a systematic exploration of temporal (Barabási, 2005) and other types of heterogeneity. For example, it would be interesting to study heterogeneous adaptive (Gross and Sayama, 2009) and temporal (Holme and Saramäki, 2012; Holme, 2015) networks, where each node has a different speed for its dynamics. Temporal heterogeneity enables a system to match the requisite variety of its environment at different timescales. If systems can adapt at the scales at which their environments change, then they will do so better if they have a variety of timescales, i.e., heterogeneous temporality. Recently, Sormunen et al. (2022) have shown that adaptive networks have critical manifolds that can be navigated as parameters change. In other words, criticality is not restricted to a single value, but can be associated with a manifold in a multidimensional system.
Further research is required to better understand the role of heterogeneity in the criticality of complex systems. The present work is limited and many open questions remain. We encourage the reader to experiment with a heterogeneous version of their favorite homogeneous complex system model, be it with structural, temporal, or another type of heterogeneity. We could learn more from heterogeneous models of collective motion, opinion formation, financial markets, urban growth, and more. This could contribute to a broader understanding of heterogeneity and its relationship with criticality.

Methods
A graph G consists of a set of vertices V and a set of edges E, where an edge is an unordered pair of distinct vertices of G. We write u ∼ v to denote that {u, v} is an edge, and in this case we say that u and v are adjacent. If H is a graph with vertex set W ⊂ V and edge set F ⊂ E, we say that H is a subgraph of G. A graph is said to be connected if for every pair of distinct vertices u and v, there is a finite sequence of distinct vertices a_0, a_1, ..., a_n such that a_0 = u, a_n = v, and a_{i−1} ∼ a_i for each i = 1, ..., n. A connected component of G is a maximal connected subgraph of G. A graph is said to be finite just in case its vertex set is finite. A graph is called d-regular if every vertex is adjacent to exactly d ≥ 1 distinct vertices.
A directed graph D consists of a set V of elements a, b, c, ... called the nodes of D, and a set A of ordered pairs of nodes (a, b), (b, c), ... called the arcs of D. We use the symbol ab to represent the arc (a, b). If ab is in the arc set A of D, then we say that a is an incoming neighbour (or in-neighbour) of b, and also that b is an outgoing neighbour (or out-neighbour) of a. We say that D is k-in regular (k ≥ 1) if every node has exactly k in-neighbours: for every node a there are distinct nodes a_1, ..., a_k such that a_j a ∈ A for j = 1, ..., k. In other words, D is k-in regular just in case the set of in-neighbours of any node has exactly k elements, all distinct, and possibly including the node itself. The out-degree of a node a is the number of nodes b such that the arc ab is in the arc set of D. Thus the out-degree of a is the number of out-neighbours of a. Similarly, the in-degree of a node a is the number of nodes c such that ca ∈ A. Thus the in-degree of a is the number of in-neighbours of a.

The Ising model with individual temperatures
It is quite common to study the Ising model on a finite, connected 4-regular graph where the number of edges is twice the number of vertices. This graph is usually introduced as a finite lattice of two-dimensional points on the surface of a three-dimensional torus. An example of such a graph with 25 vertices and 50 edges is shown in Figure 1A.

The Ising model
We start with a finite graph G = (V, E). We identify the vertex set of G with a system of interacting atoms. Each vertex u ∈ V is assigned a spin σ_u which can take the value +1 or −1. The energy of a configuration of spins is H(σ) = −Σ_{u∼v} σ_u σ_v, where the sum runs over all edges of G. The energy increases with the number of pairs of adjacent vertices having different spins. The Ising model is a way to assign probabilities to the system configurations. The probability of a configuration σ is proportional to exp(−βH(σ)), where β ≥ 0 is a variable inversely proportional to the temperature. More precisely, the Ising model with inverse temperature β is the probability measure µ on the set of configurations X = {+1, −1}^V defined by µ(σ) = exp(−βH(σ))/Z, where Z = Z(G, β) is a normalizing constant. This constant can be computed explicitly via the random-cluster (spanning subgraph) expansion Z = e^{β|E|} Σ_{F⊆E} p^{|F|} (1 − p)^{|E|−|F|} 2^{k_F}, where p = 1 − e^{−2β}, |A| denotes the cardinality of a finite set A, and k_F is the number of connected components of the spanning subgraph (V, F). As the temperature increases (and hence β → 0), µ converges to the uniform measure over the space of configurations. When the temperature decreases, β > 0 increases, and µ assigns greater probability to configurations that have a large number of pairs of adjacent vertices with the same spin.
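The random-cluster form of Z can be verified against brute-force enumeration on a small graph, under the conventions above (H(σ) = −Σ_{u∼v} σ_u σ_v, p = 1 − e^{−2β}). A minimal sketch with our function names, feasible only for tiny graphs since both sums are exponential:

```python
import math
from itertools import combinations, product

def components(V, F):
    """Number of connected components of the spanning subgraph (V, F)."""
    parent = list(range(len(V)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in F:
        parent[find(u)] = find(v)
    return len({find(x) for x in range(len(V))})

def Z_brute(V, E, beta):
    """Z as a sum over all 2^|V| spin configurations of exp(-beta * H)."""
    total = 0.0
    for sigma in product([1, -1], repeat=len(V)):
        H = -sum(sigma[u] * sigma[v] for u, v in E)
        total += math.exp(-beta * H)
    return total

def Z_cluster(V, E, beta):
    """Z = e^{beta|E|} * sum over F of p^|F| (1-p)^{|E|-|F|} 2^{k_F}."""
    p = 1 - math.exp(-2 * beta)
    total = 0.0
    for r in range(len(E) + 1):
        for F in combinations(E, r):
            total += p**r * (1 - p)**(len(E) - r) * 2**components(V, F)
    return math.exp(beta * len(E)) * total
```

On a 4-cycle, for instance, both expressions agree to floating-point precision.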

Simulation
Most simulations of the Ising model use either the Glauber dynamics or the Metropolis algorithm for constructing a Markov chain with stationary measure µ. Here we only describe the Metropolis chain for the Ising model. Given two configurations σ, σ′ ∈ X, let P(σ, σ′) denote the probability that the Metropolis chain for the Ising model moves from σ to σ′. For every a ∈ V, we write σ^a to denote the configuration obtained from σ by flipping the sign of the value that σ assigns to a and leaving all the other spins the same. In other words, σ^a ∈ X is the unique configuration which agrees everywhere with σ except for the spin assigned to vertex a: for every u ∈ V, σ^a_u = σ_u if u ≠ a and σ^a_u = −σ_u if u = a. We let the transition probabilities be positive, P(σ, σ′) > 0, just in case σ′ = σ or σ′ = σ^a for some a ∈ V. In the latter case, the Metropolis chain moves from σ to σ^a with probability P(σ, σ^a) = (1/|V|) (1 ∧ µ(σ^a)/µ(σ)), where x ∧ y denotes the minimum of the quantities x and y. The probability that the chain stays at the same configuration σ is then P(σ, σ) = 1 − Σ_{a∈V} P(σ, σ^a).
A key property of these transition probabilities is that they only depend on the ratios µ(σ^a)/µ(σ). Therefore, to simulate the Metropolis chain it is not necessary to compute the normalizing constant Z of the Ising measure µ.
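This ratio property can be checked directly: the local energy difference ΔH_a(σ) = 2 σ_a Σ_{v∼a} σ_v, computed from the edges incident to a only, reproduces the full ratio µ(σ^a)/µ(σ) = exp(−β ΔH_a(σ)) without any normalizing constant. A minimal sketch with a hypothetical 4-vertex graph:

```python
import math

def H(sigma, E):
    """Homogeneous Ising energy: H = -sum over edges of sigma_u * sigma_v."""
    return -sum(sigma[u] * sigma[v] for u, v in E)

def dH_local(sigma, E, a):
    """Energy change of flipping vertex a, from its incident edges only."""
    return 2 * sigma[a] * sum(sigma[v] if u == a else sigma[u]
                              for u, v in E if a in (u, v))

E = [(0, 1), (1, 2), (2, 0), (2, 3)]   # hypothetical small graph
sigma = [1, -1, 1, 1]
beta = 0.7
for a in range(4):
    flipped = sigma.copy()
    flipped[a] *= -1
    # The Metropolis ratio mu(sigma^a)/mu(sigma) needs no constant Z:
    ratio = math.exp(-beta * (H(flipped, E) - H(sigma, E)))
    assert abs(ratio - math.exp(-beta * dH_local(sigma, E, a))) < 1e-12
```

This is why a sweep costs O(|V|·degree) rather than requiring global energy recomputations.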
To summarize, we have constructed a transition matrix P that defines a reversible Markov chain with stationary measure µ.
Proposition 1. The Metropolis chain for the Ising model has stationary measure µ.

Proof. It is sufficient to prove that the probability measure µ and the transition matrix P satisfy the detailed balance equations µ(σ)P(σ, σ′) = µ(σ′)P(σ′, σ) (5) for all σ ≠ σ′. To show this, it suffices to verify that equation (5) holds when σ′ = σ^a for some a ∈ V. After cancellation of 1/|V| and distributing µ(σ) and µ(σ^a) accordingly, it suffices to check µ(σ) ∧ µ(σ^a) = µ(σ^a) ∧ µ(σ), which is obvious.

Individual temperatures
In the previous section, we described how to construct a transition matrix P that defines a reversible Markov chain with stationary measure µ. Starting at a configuration σ, the probability that the chain moves to a new configuration σ^a, for any a ∈ V, is given by P(σ, σ^a) = (1/|V|) (1 ∧ exp(−β∆H_a(σ))), where ∆H_a(σ) = H(σ^a) − H(σ). Thus, the transition probability from σ to σ^a of the Metropolis chain P for the Ising model with parameter β ≥ 0 is determined by the quantity exp(−β∆H_a(σ)).
We now turn to study a situation where each vertex a has its own parameter β_a. In other words, we shall describe a Markov chain P_ind that moves from σ to σ^a with probability depending on exp(−β_a ∆H_a(σ)), where β_a ≥ 0 is an individual (possibly distinct) parameter for each a ∈ V. More precisely, the probability that the new chain moves from σ to σ^a is defined as P_ind(σ, σ^a) = (1/|V|) (1 ∧ exp(−β_a ∆H_a(σ))). The probability that the chain stays at the same configuration is P_ind(σ, σ) = 1 − Σ_{a∈V} P_ind(σ, σ^a). Hence, all the configurations σ′ that differ from σ in at least two vertices are unreachable in one step. That is to say, P_ind(σ, σ′) = 0 whenever σ′ ≠ σ and σ′ ≠ σ^a for every a ∈ V.
Definition 1 (Ising measure with individual temperatures). Let G = (V, E) be a finite, connected graph and (β_u : u ∈ V) a collection of non-negative real numbers. The probability measure µ_ind on X = {+1, −1}^V assigns to each configuration a Boltzmann weight in which every vertex u contributes with its own inverse temperature β_u, where Z_ind is the normalizing constant that makes µ_ind sum to one over X.
Remark 1. We can think of µ_ind as a heterogeneous Ising model, as opposed to the homogeneous version µ(σ) = exp(−βH(σ))/Z defined in Section 4.1.1.

Remark 2. It is clear that the probability measure µ is a stationary measure of the Markov chain defined by the transition matrix P_ind just in case β_a = β for all a ∈ V. In other words, µ_ind = µ if and only if the individual parameters β_a in the definition of P_ind are all equal to the single parameter β of the homogeneous Ising model.
Proposition 2. The probability measure µ_ind is the stationary measure of the Markov chain defined by the transition matrix P_ind.
Proof. In order to satisfy the detailed balance equations µ_ind(σ)P_ind(σ, σ^a) = µ_ind(σ^a)P_ind(σ^a, σ) for all σ and σ^a, first note that ∆H_a(σ^a) = H(σ) − H(σ^a) = −∆H_a(σ). Now, if ∆H_a(σ) ≥ 0 then β_a∆H_a(σ) ≥ 0, and hence exp(β_a∆H_a(σ)) ≥ 1, so 1 ∧ exp(−β_a∆H_a(σ^a)) = 1. Otherwise, if ∆H_a(σ) < 0 then −β_a∆H_a(σ) ≥ 0, and so exp(−β_a∆H_a(σ)) ≥ 1, hence 1 ∧ exp(−β_a∆H_a(σ)) = 1. In both cases, we arrive at the conclusion that in order for µ_ind to be the stationary measure of the chain defined by P_ind, we must have µ_ind(σ^a) = µ_ind(σ) exp(−β_a∆H_a(σ)) (6) for every σ ∈ X and a ∈ V. Now we proceed to prove that equation (6) holds. After cancellation of 1/Z_ind and using properties of the exponential function, it suffices to check the contributions of the edges u ∼ v incident to the flipped vertex a, which agree on both sides. Therefore, the probability measure µ_ind and the transition matrix P_ind satisfy the detailed balance equations and the result follows.

Homogeneous random Boolean networks
Let D = (V, A) be a directed graph. We identify the nodes of D with the genes in a gene regulatory network. Suppose D is a k-in regular directed graph. Figure 2A is an example of a 2-in regular digraph with 7 nodes, i.e. N = 7, K = 2. A family (f_a)_{a∈V} of functions f_a : {0, 1}^k −→ {0, 1} is called a Boolean network on D. Figure 2B is an example of a Boolean network on a graph with 7 nodes, and with the "connectivity" parameter k equal to 2. A Boolean network is called random if the assignment a ↦ f_a is made at random by sampling independently and uniformly from the set of all the 2^(2^k) Boolean functions with k inputs. A function σ : V −→ {0, 1}, a ↦ σ_a, is called a state of the random Boolean network on D. The value σ_a is called the state of a. The updating function F of a state σ is the new state F(σ) : V −→ {0, 1}, a ↦ σ′_a, defined as σ′_a = f_a(σ_{a_1}, ..., σ_{a_k}), where a_1, ..., a_k are the in-neighbours of a.
For every σ, we have a sequence of states σ, σ′, σ′′, ... such that each state is the image under the updating function of the previous state in the sequence: σ′ = F(σ), σ′′ = F(σ′), and so on. The sequence of states σ_a, σ′_a, σ′′_a, ... is called the time series of a.
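The classical (synchronous, homogeneous) dynamics can be sketched as follows. The encoding of input states as a lookup-table index and the function names are our choices, not the authors' code:

```python
import random

def random_boolean_network(N, K, rng):
    """Random topology (K distinct in-neighbours per node) and random tables."""
    inputs = [rng.sample(range(N), K) for _ in range(N)]
    # One Boolean function per node: a table over all 2^K input combinations
    tables = [[rng.randrange(2) for _ in range(2 ** K)] for _ in range(N)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous (CRBN) update: every node applies its function at once."""
    new = []
    for a in range(len(state)):
        idx = 0
        for v in inputs[a]:  # encode the input states as a table index
            idx = (idx << 1) | state[v]
        new.append(tables[a][idx])
    return new

rng = random.Random(0)
inputs, tables = random_boolean_network(N=7, K=2, rng=rng)
state = [rng.randrange(2) for _ in range(7)]
trajectory = [state]
for _ in range(10):
    trajectory.append(step(trajectory[-1], inputs, tables))
```

Since the update is deterministic and the state space is finite (2^N states), every trajectory eventually falls onto an attractor.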

Heterogeneous random Boolean networks
The description given in 4.2.1 corresponds to the case where the structure and the updating scheme of the random Boolean network are homogeneous. Here we describe the two versions of heterogeneous random Boolean networks that were used in the simulations. The first of these heterogeneous descriptions is structural, while the second gives rise to some sort of asynchronous dynamics.
The definition of Boolean network above makes the assumption that every node in the directed graph has the same in-degree. Now we consider Boolean networks over arbitrary (not necessarily k-in regular) directed graphs. A generalized Boolean network on a directed graph D consists of a family (f_a)_{a∈V} of functions f_a : {0, 1}^(k^-_a) −→ {0, 1}, with k^-_a ≥ 1 the in-degree of a. Thus a heterogeneous random Boolean network is a generalized Boolean network chosen uniformly at random.
For talking about temporal heterogeneity we need to introduce asynchronous updating schemes (Gershenson, 2002). The heterogeneous updating function of a state σ of a heterogeneous random Boolean network on D is the function F(σ) : V × N −→ {0, 1} defined by

F(σ)(a, t) = σ′_a if t is a multiple of k^+_a, and σ_a otherwise,

where t is called the discrete time step, and k^+_a is the out-degree of a: there are distinct nodes a_1, ..., a_{k^+_a} such that aa_j ∈ A for j = 1, ..., k^+_a.
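A minimal sketch of this heterogeneous updating function; the function names and the three-node NOT-network example are hypothetical illustrations, not the authors' code:

```python
def heterogeneous_step(state, t, inputs, tables, out_degree):
    """DGARBN-style update: node a recomputes its state only when the discrete
    time step t is a multiple of its out-degree k+_a; otherwise it holds."""
    new = list(state)
    for a in range(len(state)):
        k = out_degree[a]
        if k > 0 and t % k == 0:
            idx = 0
            for v in inputs[a]:
                idx = (idx << 1) | state[v]
            new[a] = tables[a][idx]
    return new

def out_degrees(inputs, n):
    """k+_a = number of nodes whose input list includes a."""
    k = [0] * n
    for ins in inputs:
        for v in ins:
            k[v] += 1
    return k

# Hypothetical 3-node example: every node computes NOT of its single input
inputs = [[1], [0], [0]]
tables = [[1, 0], [1, 0], [1, 0]]
k_out = out_degrees(inputs, 3)  # node 0 feeds two nodes, node 2 feeds none
```

Node 0 (with out-degree 2) only updates on even time steps, so it reacts more slowly than node 1, matching the rule that influential nodes change more slowly.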

Linear functions
Here we observe that for linear functions, there is no difference between homogeneity and heterogeneity. Indeed, a function f : R^d −→ R, with d ≥ 1, is called linear if for all x, y ∈ R^d and all a, b ∈ R, we have f(ax + by) = af(x) + bf(y).
For x_1, ..., x_n ∈ R^d, n ≥ 1, it can be shown by induction on the number of points n that

f((1/n) Σ_{i=1}^n x_i) = (1/n) Σ_{i=1}^n f(x_i).

Thus, in the context of linear functions, the average of the function values (heterogeneity) is the same as the function of the average value (homogeneity).

Figure 1: (A) Two-dimensional Ising model displayed on a square lattice. The graph may be wrapped into a torus, highlighting periodic boundary conditions. (B) The correlation function decays faster at low and high temperatures than at the critical temperature, where the correlation function is maximum. (C) Correlation as a function of complexity in the two-dimensional Ising model illustrates that complexity is a good proxy for criticality. (D) Average complexity with error bars of the Ising model for different temperatures, considering homogeneous (blue), heterogeneous with exponentially distributed (green), and heterogeneous with Poisson distributed (orange) temperatures. The black dotted vertical line represents the theoretical phase transition at T ≈ 2.27 (in practice smaller due to finite size effects).

Figure 3: A. Average complexity C for collections of strings with average probability of ones p_1, in homogeneous (blue circles) and heterogeneous (red triangles) cases. The latter yields higher average complexity in the central region, where the homogeneous complexity is low. B. Illustration of Jensen's inequality. The function of the average f(⟨x⟩) of a variable with a distribution with average ⟨x⟩ is lower than the average of the function ⟨f(x)⟩ for convex functions. The opposite is the case for concave functions.