<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neural Circuits</journal-id>
<journal-title>Frontiers in Neural Circuits</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neural Circuits</abbrev-journal-title>
<issn pub-type="epub">1662-5110</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fncir.2016.00077</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Hodge Decomposition of Information Flow on Small-World Networks</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Haruna</surname> <given-names>Taichi</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/288575/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Fujiki</surname> <given-names>Yuuya</given-names></name>
</contrib>
</contrib-group>
<aff><institution>Department of Planetology, Graduate School of Science, Kobe University</institution> <country>Kobe, Japan</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Kazutaka Takahashi, University of Chicago, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Leonard Maler, University of Ottawa, Canada; Keiji Miura, Kwansei Gakuin University, Japan</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Taichi Haruna <email>tharuna&#x00040;penguin.kobe-u.ac.jp</email></p></fn></author-notes>
<pub-date pub-type="epub">
<day>28</day>
<month>09</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>10</volume>
<elocation-id>77</elocation-id>
<history>
<date date-type="received">
<day>31</day>
<month>03</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>09</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2016 Haruna and Fujiki.</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Haruna and Fujiki</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>We investigate the influence of the small-world topology on the composition of information flow on networks. By appealing to the combinatorial Hodge theory, we decompose information flow generated by random threshold networks on the Watts-Strogatz model into three components: gradient, harmonic and curl flows. The harmonic and curl flows represent globally circular and locally circular components, respectively. The Watts-Strogatz model bridges the two extreme network topologies, a lattice network and a random network, by a single parameter that is the probability of random rewiring. The small-world topology is realized within a certain range between them. By numerical simulation we found that as networks become more random the ratio of harmonic flow to the total magnitude of information flow increases whereas the ratio of curl flow decreases. Furthermore, both quantities are significantly enhanced from the level when only network structure is considered for the network close to a random network and a lattice network, respectively. Finally, the sum of these two ratios takes its maximum value within the small-world region. These findings suggest that the dynamical information counterpart of global integration and that of local segregation are the harmonic flow and the curl flow, respectively, and that a part of the small-world region is dominated by internal circulation of information flow.</p></abstract>
<kwd-group><kwd>small-world network</kwd>
<kwd>random threshold network</kwd>
<kwd>transfer entropy</kwd>
<kwd>Hodge decomposition</kwd>
<kwd>functional brain networks</kwd></kwd-group>
<contract-num rid="cn001">KAKENHI Grant Number 25280091</contract-num>
<contract-sponsor id="cn001">Japan Society for the Promotion of Science<named-content content-type="fundref-id">10.13039/501100001691</named-content></contract-sponsor>
<counts>
<fig-count count="5"/>
<table-count count="0"/>
<equation-count count="7"/>
<ref-count count="35"/>
<page-count count="8"/>
<word-count count="6008"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Recently, the small-world topology of brain networks has received much attention in neuroscience. It is found ubiquitously in both structural and functional neuronal networks, from those of local neuronal populations to large-scale brain areas (Bassett and Bullmore, <xref ref-type="bibr" rid="B4">2006</xref>; Bullmore and Sporns, <xref ref-type="bibr" rid="B5">2010</xref>; Poli et al., <xref ref-type="bibr" rid="B27">2015</xref>). It has been suggested that the small-world topology is significant for brain function because it balances integration and segregation of information processing on brain networks (Sporns and Zwi, <xref ref-type="bibr" rid="B31">2004</xref>; Downes et al., <xref ref-type="bibr" rid="B9">2012</xref>). Disruption of the small-world topology has also been linked to brain disease (Fornito and Bullmore, <xref ref-type="bibr" rid="B10">2015</xref>).</p>
<p>Small-world topology is characterized by two structural metrics of networks: the mean path length and the clustering coefficient. A network is called small-world when its mean path length is small and its clustering coefficient is large (Watts and Strogatz, <xref ref-type="bibr" rid="B35">1998</xref>). Note that this characterization is meaningful only for sparse networks, since densely connected networks trivially satisfy both features of the small-world topology (Markov et al., <xref ref-type="bibr" rid="B21">2013</xref>).</p>
<p>A small value of the mean path length could make communication between any pair of nodes rapid and thus contribute to global integration of information. On the other hand, high clustering in a sparse network indicates that it consists of local groups of nodes that are densely connected within each group. This could support segregated, specialized information processing. Thus, the small-world topology seems to reconcile two apparently opposite aspects of information processing on networks: integration and segregation. In other words, small-world networks can be characterized as both locally and globally efficient (Latora and Marchiori, <xref ref-type="bibr" rid="B19">2001</xref>). However, what is relevant to the functioning of a system is not the structure itself but the dynamical processes operating under the structural constraint (Barrat et al., <xref ref-type="bibr" rid="B2">2008</xref>). The influence of the small-world topology on dynamical behaviors has been studied in the literature. For example, the coexistence of fast response and coherent oscillations in the dynamics of networks of model neurons (Lago-Fern&#x000E1;ndez et al., <xref ref-type="bibr" rid="B18">2000</xref>) and improved synchronizability of general coupled identical oscillators (Barahona and Pecora, <xref ref-type="bibr" rid="B1">2002</xref>) are achieved in the small-world regime of the Watts-Strogatz small-world network model (Watts and Strogatz, <xref ref-type="bibr" rid="B35">1998</xref>). However, the quantitative effect of the small-world topology on information flow generated by dynamical processes on networks remains obscure.</p>
<p>In this paper, we are not concerned with whether the small-world topology is relevant to the functioning of real-world brain networks. Rather, we take it for granted and study its influence on information flow generated by a dynamical process on the Watts-Strogatz small-world network model (Watts and Strogatz, <xref ref-type="bibr" rid="B35">1998</xref>). We consider random threshold networks, which have been used as a model of neural network dynamics (K&#x000FC;rten, <xref ref-type="bibr" rid="B17">1988</xref>) for their simplicity and low computational cost (Rohlf, <xref ref-type="bibr" rid="B28">2008</xref>). Information flow is quantified by the transfer entropy (Schreiber, <xref ref-type="bibr" rid="B30">2000</xref>). For the analysis of information flow, we employ the combinatorial Hodge theory (Jiang et al., <xref ref-type="bibr" rid="B15">2011</xref>). Miura and Aoki (<xref ref-type="bibr" rid="B22">2015a</xref>) used this technique to reveal the global loop structure of an evolving neural network model, and Miura and Aoki (<xref ref-type="bibr" rid="B23">2015b</xref>) showed that it can distinguish different learning rules. Fujiki and Haruna (<xref ref-type="bibr" rid="B11">2014</xref>) applied the combinatorial Hodge theory to study the influence of different degree distributions on the composition of information flow generated by a dynamical process on networks. The combinatorial Hodge theory enables us to decompose any flow on a network into three mutually orthogonal components: gradient, harmonic and curl flows. In the succeeding sections, we study by numerical simulation how the balance between these components of information flow changes as the parameter of the Watts-Strogatz model is varied, and discuss the implications.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>2. Materials and methods</title>
<sec>
<title>2.1. Random threshold networks on the small-world model</title>
<p>We employed the conventional <italic>Watts-Strogatz model (WS model)</italic> (Watts and Strogatz, <xref ref-type="bibr" rid="B35">1998</xref>). It is constructed as follows. First, <italic>N</italic> nodes are arranged on a ring lattice and each node is connected to its 2<italic>k</italic> nearest neighbors (<italic>k</italic> &#x0003C; &#x0003C; <italic>N</italic>). For example, if <italic>k</italic> &#x0003D; 2, a node is connected to 4 other nodes: its two nearest neighbors and two second-nearest neighbors. Second, each edge is randomly rewired with probability <italic>p</italic> (0 &#x02264; <italic>p</italic> &#x02264; 1). <italic>p</italic> &#x0003D; 0 and <italic>p</italic> &#x0003D; 1 correspond to the lattice network and a completely random network (an <italic>Erd&#x000F6;s-R&#x000E9;nyi random network</italic>), respectively. For a certain range of <italic>p</italic> between these two extremes, we obtain so-called small-world networks with a small mean path length and a high clustering coefficient. In order to run random threshold networks on the WS model, we needed to assign a direction to each link. For each link, one of the two directions was chosen at random with equal probability. In this paper, we set <italic>N</italic> &#x0003D; 400 and consider the two cases <italic>k</italic> &#x0003D; 3 and <italic>k</italic> &#x0003D; 4. We also performed the same numerical simulation study with <italic>N</italic> &#x0003D; 200 and obtained qualitatively similar results to those described below.</p>
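As an illustration, the construction just described can be sketched in Python. This is a minimal sketch with our own naming (`ws_directed` is not from the original study): it builds the ring lattice, rewires each edge with probability <italic>p</italic> while avoiding self-loops and duplicate links, and then orients each link at random.

```python
import random

def ws_directed(N, k, p, seed=0):
    """Watts-Strogatz construction (ring lattice plus rewiring), followed
    by a random direction assignment for each link, as described above.
    Returns a set of directed edges (source, target)."""
    rng = random.Random(seed)
    # Ring lattice: connect each node to its 2k nearest neighbors.
    edges = [(i, (i + d) % N) for i in range(N) for d in range(1, k + 1)]
    existing = set(frozenset(e) for e in edges)
    rewired = []
    for (u, v) in edges:
        if rng.random() < p:                   # rewire with probability p
            existing.discard(frozenset((u, v)))
            w = rng.randrange(N)
            while w == u or frozenset((u, w)) in existing:
                w = rng.randrange(N)           # avoid self-loops and duplicates
            v = w
            existing.add(frozenset((u, v)))
        rewired.append((u, v))
    # Choose one of the two directions at random with equal probability.
    return set((u, v) if rng.random() < 0.5 else (v, u) for (u, v) in rewired)
```

The edge count <italic>Nk</italic> is preserved by the rewiring, so only the topology, not the density, changes with <italic>p</italic>.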
<p>We simulated <italic>random threshold networks (RTNs)</italic> (Rohlf and Bornholdt, <xref ref-type="bibr" rid="B29">2002</xref>) on the WS model. In RTNs, each node is assumed to take two states &#x0002B;1 and &#x02212;1 corresponding to firing and resting states of a neuronal population, respectively. The state <italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic>) of node <italic>i</italic> at time <italic>t</italic> is updated synchronously by the rule
<disp-formula id="E1"><label>(1)</label><mml:math id="M2"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle mathvariant="normal"><mml:mi>s</mml:mi><mml:mi>g</mml:mi><mml:mi>n</mml:mi></mml:mstyle><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>h</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <italic>sgn</italic>(<italic>x</italic>) &#x0003D; 1 if <italic>x</italic> &#x02265; 0 and <italic>sgn</italic>(<italic>x</italic>) &#x0003D; &#x02212;1 otherwise. If there is a directed link from node <italic>j</italic> to <italic>i</italic>, we set <italic>w</italic><sub><italic>ij</italic></sub> &#x0003D; &#x000B1;1 with equal probability. Otherwise, <italic>w</italic><sub><italic>ij</italic></sub> &#x0003D; 0. The threshold <italic>h</italic><sub><italic>i</italic></sub> for each node <italic>i</italic> is set to 0 in this paper.</p>
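One synchronous update of Equation (1) can be sketched as follows (illustrative code with our own names; `w` is the weight matrix with entries &#x000B1;1 or 0 and `h` the threshold vector, set to zero in this paper):

```python
def rtn_step(x, w, h):
    """One synchronous update of a random threshold network, Equation (1).
    x: list of node states (+1 or -1); w[i][j]: weight of the link j -> i
    (0 if absent); h: thresholds. sgn(s) = 1 if s >= 0, else -1."""
    N = len(x)
    return [1 if sum(w[i][j] * x[j] for j in range(N)) + h[i] >= 0 else -1
            for i in range(N)]
```

Note that the convention sgn(0) = 1 matters: with all thresholds at zero, a node receiving a zero net input fires.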
<p>The dynamics of RTNs can take three phases, ordered, critical and chaotic, depending on the values of the parameters (K&#x000FC;rten, <xref ref-type="bibr" rid="B17">1988</xref>; Rohlf and Bornholdt, <xref ref-type="bibr" rid="B29">2002</xref>; Rohlf, <xref ref-type="bibr" rid="B28">2008</xref>; Szejka et al., <xref ref-type="bibr" rid="B32">2008</xref>). For <italic>N</italic> &#x0003D; 400 and <italic>k</italic> &#x0003D; 3, 4, they exhibit weakly chaotic behavior for all 0 &#x02264; <italic>p</italic> &#x02264; 1, that is, they reside in the chaotic phase close to criticality, as we numerically verify below. These conditions were adopted in order to mimic the spontaneous background activity of real-world neuronal networks (Chialvo, <xref ref-type="bibr" rid="B6">2010</xref>).</p>
<p>In general, other things being equal, the dynamics tend to become unstable for larger values of <italic>k</italic>. To the best of our knowledge, no analytic condition for the boundary between the ordered and the chaotic phases of RTNs on the WS model has been derived so far. However, the phase of RTNs can be assessed numerically through the behavior of damage spreading. In the chaotic phase, a damage applied to a node, namely, a flip of the state of the node, propagates indefinitely as the state of the system evolves and eventually influences a finite fraction of all nodes. On the other hand, the damage dies away in the ordered phase. In the critical phase, a flip propagates to exactly one succeeding node on average. The size of the influence can be quantified as follows (Gershenson, <xref ref-type="bibr" rid="B12">2003</xref>). Let <bold>x</bold>(0) be a random initial state of an RTN on the WS model. A node is chosen at random and its state is flipped. Let <bold>y</bold>(0) be the resulting state of the whole system, which is 1 bit away from <bold>x</bold>(0).
<disp-formula id="E2"><label>(2)</label><mml:math id="M3"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>d</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>x</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold"><mml:mtext>y</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mfrac><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
is the Hamming distance between the two states after <italic>t</italic> time steps. Let
<disp-formula id="E3"><label>(3)</label><mml:math id="M4"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>d</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>x</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold"><mml:mtext>y</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mi>d</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>x</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold"><mml:mtext>y</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <italic>d</italic>(<bold>x</bold>(0), <bold>y</bold>(0)) &#x0003D; 1/<italic>N</italic> and <inline-formula><mml:math id="M5"><mml:msup><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002A;</mml:mo></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munder class="msub"><mml:mrow><mml:mo class="qopname">lim</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02192;</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow></mml:munder><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. &#x003B4;<sup>&#x0002A;</sup> &#x0003E; 0 indicates that the dynamics are sensitive to initial conditions and is thus evidence of the chaotic phase. On the other hand, &#x003B4;<sup>&#x0002A;</sup> &#x0003C; 0 or &#x003B4;<sup>&#x0002A;</sup> &#x0003D; 0 means that the dynamics are insensitive or neutral to perturbations, corresponding to the ordered or the critical phase, respectively. Note that the asymptotic size of the influence of a perturbation is a well-known order parameter of Boolean network dynamics (Derrida and Pomeau, <xref ref-type="bibr" rid="B8">1986</xref>).
The quantity <inline-formula><mml:math id="M6"><mml:munder class="msub"><mml:mrow><mml:mo class="qopname">lim</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02192;</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow></mml:munder><mml:mi>d</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>x</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold"><mml:mtext>y</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> approximates it in a finite size system. Thus, the sign of &#x003B4;<sup>&#x0002A;</sup> is a convenient way to numerically assess the phase of Boolean network dynamics. This method has been applied to random Boolean networks on the WS model (Lizier et al., <xref ref-type="bibr" rid="B20">2011</xref>) and here we have followed this approach.</p>
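The damage-spreading measurement of Equations (2) and (3) can be sketched as follows. This is illustrative code with our own names; `step` stands for any synchronous update rule (such as the RTN rule of Equation 1) and is passed in so the sketch stays self-contained:

```python
def hamming(x, y):
    """Normalized Hamming distance, Equation (2): the fraction of nodes
    whose states (+1/-1) differ between configurations x and y."""
    return sum(abs(xi - yi) // 2 for xi, yi in zip(x, y)) / len(x)

def delta_after(x0, T, step, flip_index=0):
    """delta_T of Equation (3): the change in Hamming distance after T
    steps, starting from x0 and a copy with one flipped node state."""
    y0 = list(x0)
    y0[flip_index] = -y0[flip_index]       # apply a one-bit damage
    x, y = list(x0), y0
    d0 = hamming(x, y)                      # equals 1/N by construction
    for _ in range(T):
        x, y = step(x), step(y)             # evolve both copies in parallel
    return hamming(x, y) - d0
```

A positive value at large <italic>T</italic> signals damage amplification (chaotic phase), a negative value damage decay (ordered phase).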
<p>Figure <xref ref-type="fig" rid="F1">1</xref> shows the time evolution of &#x003B4;<sub><italic>t</italic></sub> for <italic>k</italic> &#x0003D; 3 (a) and <italic>k</italic> &#x0003D; 4 (b). The rewiring probability <italic>p</italic> is varied within the range 10<sup>&#x02212;3</sup> &#x02264; <italic>p</italic> &#x02264; 1. For each <italic>p</italic>, &#x003B4;<sub><italic>t</italic></sub> &#x0003E; 0 in the depicted range of time and converges to a small positive value within a few tens to hundreds of time steps. The system with <italic>k</italic> &#x0003D; 4 is more susceptible to perturbations than that with <italic>k</italic> &#x0003D; 3, and there is an overall tendency for the size of the eventual damage influence to become larger as <italic>p</italic> increases, namely, for the system to become more remote from criticality.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Time evolution of &#x003B4;<sub><italic>t</italic></sub> for (A) <italic>k</italic> &#x0003D; 3 and (B) <italic>k</italic> &#x0003D; 4</bold>. Each curve is the average over 100 random initial conditions for each realization of an RTN, 100 realizations of RTNs on each network, and 400 networks generated by the WS model with the specified value of <italic>p</italic>. <italic>p</italic> &#x0003D; 0.001000 (red), <italic>p</italic> &#x0003D; 0.003375 (green), <italic>p</italic> &#x0003D; 0.011391 (blue), <italic>p</italic> &#x0003D; 0.038443 (magenta), <italic>p</italic> &#x0003D; 0.129746 (cyan), <italic>p</italic> &#x0003D; 0.437894 (orange), and <italic>p</italic> &#x0003D; 0.985261 (black). These values of <italic>p</italic> were chosen to be equally spaced on a logarithmic scale, since the small-world regime is discriminated well on a logarithmic scale of <italic>p</italic>, as can be seen from Figure <xref ref-type="fig" rid="F5">5</xref>. Concretely, <inline-formula><mml:math id="M1"><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x000D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>.</mml:mo><mml:msup><mml:mrow><mml:mn>5</mml:mn></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mi>n</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> for <italic>p</italic><sub>0</sub> &#x0003D; 0.001000 and 0 &#x02264; <italic>n</italic> &#x02264; 6.</p></caption>
<graphic xlink:href="fncir-10-00077-g0001.tif"/>
</fig>
</sec>
<sec>
<title>2.2. Quantification of information flow</title>
<p>We quantified information transfer along each causal link in RTNs by the transfer entropy (Schreiber, <xref ref-type="bibr" rid="B30">2000</xref>). Let us consider a directed link from node <italic>j</italic> to node <italic>i</italic>. The quantity
<disp-formula id="E4"><label>(4)</label><mml:math id="M7"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>T</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x02192;</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>H</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mi>H</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
is a measure of information transfer from node <italic>j</italic> to node <italic>i</italic> and is called the <italic>transfer entropy</italic>. Here,
<disp-formula id="E5"><label>(5)</label><mml:math id="M8"><mml:mtable><mml:mtr><mml:mtd><mml:mi>H</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#43;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mstyle class="text"><mml:mtext>&#x000A0;</mml:mtext></mml:mstyle><mml:mo>&#43;</mml:mo><mml:mstyle class="text"><mml:mtext>&#x000A0;</mml:mtext></mml:mstyle><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:munder></mml:mstyle><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#43;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mstyle><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext></mml:mstyle><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mrow><mml:mo class="qopname">log</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#43;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
is the conditional entropy (Cover and Thomas, <xref ref-type="bibr" rid="B7">1991</xref>) of the future state <italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic> &#x0002B; 1) of node <italic>i</italic> given its present state <italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic>), which represents the average uncertainty in predicting node <italic>i</italic>&#x00027;s future state from its present state. <italic>p</italic>(<italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic> &#x0002B; 1), <italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic>)) is the joint probability of observing the pair of states (<italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic> &#x0002B; 1), <italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic>)), and <italic>p</italic>(<italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic> &#x0002B; 1)|<italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic>)) is the conditional probability that the state of <italic>i</italic> at time <italic>t</italic> &#x0002B; 1 is <italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic> &#x0002B; 1) given that its state at time <italic>t</italic> is <italic>x</italic><sub><italic>i</italic></sub>(<italic>t</italic>). On the other hand,
<disp-formula id="E6"><label>(6)</label><mml:math id="M9"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>H</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#43;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mstyle class="text"><mml:mtext>&#x000A0;</mml:mtext></mml:mstyle><mml:mo>&#43;</mml:mo><mml:mstyle class="text"><mml:mtext>&#x000A0;</mml:mtext></mml:mstyle><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:munder></mml:mstyle></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#43;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:mstyle><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mrow><mml:mo class="qopname">log</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#43;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
is the conditional entropy of the future state of node <italic>i</italic> given its own present state and <italic>j</italic>&#x00027;s present state. The joint and conditional probabilities involved in Equation (6) are defined analogously to those in Equation (5). Thus, the transfer entropy <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub> (Equation 4) can be interpreted as the reduction in average uncertainty achieved by incorporating knowledge of <italic>j</italic>&#x00027;s present state into the prediction of <italic>i</italic>&#x00027;s future state from its own present state.</p>
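<p>The quantities in Equations (4)&#x02013;(6) can be estimated directly from binary time series by replacing the probabilities with empirical frequencies. The following plug-in estimator is a minimal sketch of that idea (the function name and interface are our own illustration, not code from the authors); it computes <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub> as the difference of the two conditional entropies.</p>

```python
import numpy as np

def transfer_entropy(x, y):
    """Plug-in estimate of T_{j->i} (Equation 4) for binary series.

    x is the series of node i, y that of node j.  Probabilities in
    Equations (5)-(6) are replaced by empirical frequencies, so the
    series should come from the stationary regime.
    """
    x = np.asarray(x, dtype=int)
    y = np.asarray(y, dtype=int)
    fut, pre, src = x[1:], x[:-1], y[:-1]
    n = len(fut)

    def cond_entropy(target, *conds):
        # H(target | conds) = H(target, conds) - H(conds)
        joint = np.zeros((2,) * (1 + len(conds)))
        for idx in zip(target, *conds):
            joint[idx] += 1.0
        joint /= n
        marginal = joint.sum(axis=0)  # sum out the target axis

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        return entropy(joint.ravel()) - entropy(marginal.ravel())

    # Equation (4): H(X_i(t+1)|X_i(t)) - H(X_i(t+1)|X_i(t), X_j(t))
    return cond_entropy(fut, pre) - cond_entropy(fut, pre, src)
```

For instance, if <italic>x</italic> simply copies <italic>y</italic> with a one-step delay, the estimate is close to 1 bit, while for independent random series it is close to 0.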
<p>In this paper, <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub> on a fixed network was numerically estimated as follows. First, a network was generated from the WS model with given parameter values and fixed. Second, for each realization of an RTN on the fixed network, 1000 time steps, taken after discarding the initial 100 transient steps from a random initial condition, were used to calculate <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub>. Finally, <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub> was averaged over 100 realizations of RTNs on the fixed network. By abuse of notation, this average is also denoted by <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub>. The length of the transient was determined by inspection of Figure <xref ref-type="fig" rid="F1">1</xref>: &#x003B4;<sub><italic>t</italic></sub> has almost converged after 100 time steps for all <italic>p</italic>. This indicates that the dynamics of RTNs can be regarded as having settled into the stationary regime after 100 time steps, so that the probability distributions involved in the formula for <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub> are well-defined.</p>
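<p>As an illustration of this protocol, the sketch below runs one realization of a threshold-network dynamics on a fixed graph, discards the transient, and returns the stationary trajectory from which the probability distributions can be estimated. The &#x000B1;1 random couplings and zero threshold used here are an assumed, common RTN variant chosen only for illustration; the paper&#x00027;s exact update rule is defined earlier in the article.</p>

```python
import numpy as np

def simulate_rtn(adj, steps=1000, transient=100, rng=None):
    """One realization of a threshold network on a fixed graph.

    Returns a (steps x N) array of 0/1 states recorded after discarding
    the first `transient` steps from a random initial condition.
    NOTE: the +/-1 couplings and zero threshold are an assumed,
    illustrative RTN variant, not necessarily the paper's exact rule.
    """
    rng = np.random.default_rng(rng)
    n = adj.shape[0]
    # random +/-1 couplings on existing links (adj[i, j] = 1 iff j inputs to i)
    w = adj * rng.choice([-1.0, 1.0], size=adj.shape)
    x = rng.integers(0, 2, n)                  # random initial condition
    traj = np.empty((steps, n), dtype=int)
    for t in range(transient + steps):
        x = (w @ x > 0).astype(int)            # synchronous threshold update
        if t >= transient:
            traj[t - transient] = x
    return traj
```

In the paper, 1000 post-transient steps of each of 100 such realizations on one fixed network feed the estimator of <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub>, and the resulting values are averaged.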
<p>Let us introduce a quantity <italic>e</italic><sub><italic>ij</italic></sub> as follows: <italic>e</italic><sub><italic>ij</italic></sub> &#x0003D; <italic>T</italic><sub><italic>i</italic>&#x02192;<italic>j</italic></sub> if there is a directed link from <italic>i</italic> to <italic>j</italic>, <italic>e</italic><sub><italic>ij</italic></sub> &#x0003D; &#x02212;<italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub> if there is a directed link from <italic>j</italic> to <italic>i</italic>, and <italic>e</italic><sub><italic>ij</italic></sub> &#x0003D; 0 if there is no link between <italic>i</italic> and <italic>j</italic>. <italic>e</italic><sub><italic>ij</italic></sub> defines a skew-symmetric matrix <italic>e</italic> &#x0003D; (<italic>e</italic><sub><italic>ij</italic></sub>); namely, <italic>e</italic> satisfies <italic>e</italic><sub><italic>ij</italic></sub> &#x0003D; &#x02212;<italic>e</italic><sub><italic>ji</italic></sub> for all 1 &#x02264; <italic>i, j</italic> &#x02264; <italic>N</italic>. We call <italic>e</italic> the <italic>information flow</italic>. We have defined information flow in this way because the combinatorial Hodge decomposition applies only to skew-symmetric matrices. (In general, given a matrix representing correlation, coupling strength, etc. between nodes, one can first decompose it into the sum of its symmetric and skew-symmetric parts and then apply the Hodge decomposition to the latter.)</p>
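<p>Given a matrix of transfer entropies and the directed adjacency matrix, the information flow <italic>e</italic> can be assembled in a few lines. This sketch (our own illustration) assumes, as in the definition above, that each connected pair has a link in only one direction.</p>

```python
import numpy as np

def information_flow(T, A):
    """Skew-symmetric information flow e from transfer entropies.

    T[i, j] holds T_{i->j}; A[i, j] = 1 iff there is a directed link
    from i to j.  Assumes at most one link direction per pair, as in
    the text: e_ij = T_{i->j} on links i->j, e_ij = -T_{j->i} on links
    j->i, and e_ij = 0 for unconnected pairs (even if T is positive).
    """
    e = T * A        # keep T_{i->j} only where a link i->j exists
    return e - e.T   # skew-symmetrize: e_ij = -e_ji
```

Note that transfer-entropy values for unconnected pairs are dropped by the multiplication with <italic>A</italic>, exactly as stipulated in the text.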
<p>In the literature, information theoretic quantities such as transfer entropy are sometimes used for the purpose of causality detection (Hlav&#x000E1;&#x0010D;kov&#x000E1;-Schindler et al., <xref ref-type="bibr" rid="B13">2007</xref>). In this paper, causality between two nodes is taken for granted: <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub> quantifies the magnitude of the impact of <italic>j</italic> on <italic>i</italic> along the causal relationship when there is a directed link from <italic>j</italic> to <italic>i</italic>. We emphasize that by definition <italic>e</italic><sub><italic>ij</italic></sub> &#x0003D; 0 for pairs (<italic>i, j</italic>) that are not connected. Although <italic>T</italic><sub><italic>j</italic>&#x02192;<italic>i</italic></sub> could be positive for such pairs, these values are ignored because we conceive of information flow as the influence of one node on another along a causal link between them.</p>
</sec>
<sec>
<title>2.3. Combinatorial Hodge decomposition</title>
<p>An <italic>edge flow</italic> on a network of size <italic>N</italic> is an <italic>N</italic> &#x000D7; <italic>N</italic> skew-symmetric matrix <italic>e</italic> &#x0003D; (<italic>e</italic><sub><italic>ij</italic></sub>) satisfying <italic>e</italic><sub><italic>ij</italic></sub> &#x0003D; 0 for all pairs of nodes (<italic>i, j</italic>) that are not connected. The information flow introduced in the last subsection is an instance of edge flow.</p>
<p>An edge flow <italic>e</italic> &#x0003D; (<italic>e</italic><sub><italic>ij</italic></sub>) can be uniquely decomposed into three mutually orthogonal components via the combinatorial Hodge decomposition theorem (Jiang et al., <xref ref-type="bibr" rid="B15">2011</xref>): gradient <italic>g</italic> &#x0003D; (<italic>g</italic><sub><italic>ij</italic></sub>), harmonic <italic>h</italic> &#x0003D; (<italic>h</italic><sub><italic>ij</italic></sub>) and curl <italic>c</italic> &#x0003D; (<italic>c</italic><sub><italic>ij</italic></sub>) flows. A <italic>gradient flow g</italic> is an edge flow that can be written as differences of a potential function: namely, there exists a real-valued function <italic>f</italic> on the set of nodes such that <italic>g</italic><sub><italic>ij</italic></sub> &#x0003D; <italic>f</italic><sub><italic>j</italic></sub> &#x02212; <italic>f</italic><sub><italic>i</italic></sub> for all pairs (<italic>i, j</italic>) that are connected. A <italic>harmonic flow h</italic> is a non-gradient edge flow that is also curl-free: namely, <italic>h</italic> vanishes on every triangle {<italic>i, j, k</italic>} (a triple of nodes in which any pair is linked) in the sense that <italic>h</italic><sub><italic>ij</italic></sub> &#x0002B; <italic>h</italic><sub><italic>jk</italic></sub> &#x0002B; <italic>h</italic><sub><italic>ki</italic></sub> &#x0003D; 0. A <italic>curl flow c</italic> is defined by <italic>c</italic> &#x0003D; <italic>e</italic> &#x02212; <italic>g</italic> &#x02212; <italic>h</italic> and thus is non-gradient and may have non-zero curls on some triangles. One can say that the harmonic flow represents the globally circulating component of a given edge flow, while the curl flow corresponds to the locally circulating one. We call the sum of the harmonic and curl flows the <italic>loop flow</italic> and denote it by <italic>l</italic> &#x0003D; <italic>h</italic> &#x0002B; <italic>c</italic>. Note that <italic>l</italic> represents the non-gradient component and, by elementary linear algebra, is precisely equal to the divergence-free component. Here, the divergence of an edge flow <italic>e</italic> at node <italic>i</italic> is given by the sum of <italic>e</italic><sub><italic>ij</italic></sub> over all <italic>j</italic> connected to <italic>i</italic>. If the divergence of <italic>e</italic> is zero at a node, the flow is conserved at that node, namely, the sum of incoming flows equals the sum of outgoing flows. It follows that if the divergence of a nonzero <italic>e</italic> vanishes at every node, then <italic>e</italic> contains a loop along which every element of <italic>e</italic> is positive.</p>
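<p>For concreteness, the decomposition can be computed on the vector of flow values over oriented edges (<italic>i</italic> &#x0003C; <italic>j</italic>): the gradient part is the least-squares projection onto the image of the gradient operator, the curl part is the projection onto the span of the triangle constraints, and the harmonic part is the remainder. The sketch below follows this construction with Moore-Penrose pseudoinverses in the spirit of Jiang et al. (2011); the interface is our own. Note that ||<italic>e</italic>||<sup>2</sup> in the text sums over ordered pairs and is therefore twice the squared norm of the edge vector used here, which leaves all ratios unchanged.</p>

```python
import numpy as np
from itertools import combinations

def hodge_decompose(e, A):
    """Combinatorial Hodge decomposition of a skew-symmetric flow e
    (N x N) on the undirected graph with adjacency matrix A.

    Returns the flow on oriented edges (i < j) and its gradient,
    harmonic and curl parts, computed via Moore-Penrose pseudoinverses.
    """
    n = A.shape[0]
    edges = [(i, j) for i, j in combinations(range(n), 2) if A[i, j]]
    tris = [(i, j, k) for i, j, k in combinations(range(n), 3)
            if A[i, j] and A[j, k] and A[i, k]]
    m = len(edges)
    ev = np.array([e[i, j] for i, j in edges], dtype=float)

    D = np.zeros((m, n))          # gradient operator: (D f)_(i,j) = f_j - f_i
    for r, (i, j) in enumerate(edges):
        D[r, i], D[r, j] = -1.0, 1.0

    idx = {edge: r for r, edge in enumerate(edges)}
    C = np.zeros((len(tris), m))  # curl operator: e_ij + e_jk + e_ki per triangle
    for r, (i, j, k) in enumerate(tris):
        C[r, idx[(i, j)]] = 1.0
        C[r, idx[(j, k)]] = 1.0
        C[r, idx[(i, k)]] = -1.0  # e_ki = -e_ik

    g = D @ np.linalg.pinv(D) @ ev                            # onto gradient flows
    c = np.linalg.pinv(C) @ (C @ ev) if tris else np.zeros(m)  # onto curl flows
    h = ev - g - c                                            # harmonic remainder
    return ev, g, h, c
```

The relative strengths then follow as, e.g., &#x003B3; &#x0003D; ||<italic>g</italic>||<sup>2</sup>/||<italic>e</italic>||<sup>2</sup>, and Equation (7) can be verified numerically on the returned vectors.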
<p>The magnitude of an edge flow <italic>e</italic> can be measured by its <italic>l</italic><sup>2</sup>-norm <inline-formula><mml:math id="M10"><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mi>e</mml:mi><mml:mo>|</mml:mo><mml:msup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:munder><mml:msubsup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula>. Adopting the <italic>l</italic><sup>2</sup>-norm has a certain advantage since we have the equality
<disp-formula id="E7"><label>(7)</label><mml:math id="M11"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mi>e</mml:mi><mml:mo>|</mml:mo><mml:msup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mi>g</mml:mi><mml:mo>|</mml:mo><mml:msup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mi>l</mml:mi><mml:mo>|</mml:mo><mml:msup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mi>g</mml:mi><mml:mo>|</mml:mo><mml:msup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mi>h</mml:mi><mml:mo>|</mml:mo><mml:msup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mi>c</mml:mi><mml:mo>|</mml:mo><mml:msup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
due to the orthogonality of the decomposition. We can define the relative strength of each component by &#x003B3; &#x0003D; ||<italic>g</italic>||<sup>2</sup>/||<italic>e</italic>||<sup>2</sup>, &#x003B7; &#x0003D; ||<italic>h</italic>||<sup>2</sup>/||<italic>e</italic>||<sup>2</sup> and &#x003C7; &#x0003D; ||<italic>c</italic>||<sup>2</sup>/||<italic>e</italic>||<sup>2</sup>, called the <italic>gradient ratio, harmonic ratio</italic> and <italic>curl ratio</italic>, respectively (Fujiki and Haruna, <xref ref-type="bibr" rid="B11">2014</xref>). By Equation (7), these three ratios sum to 1. We also introduce &#x003BB; &#x0003D; ||<italic>l</italic>||<sup>2</sup>/||<italic>e</italic>||<sup>2</sup> and call it the <italic>loop ratio</italic>.</p>
<p>Each component constitutes a linear subspace of the finite-dimensional vector space consisting of all edge flows. Thus, it makes sense to speak of the dimension of the subspace consisting of all gradient flows, and so on. We define the relative size of each subspace as the ratio of its dimension to the dimension of the space of all edge flows. Let &#x00393; be the relative size of the subspace of gradient flows, <italic>H</italic> that of harmonic flows, <italic>X</italic> that of curl flows and &#x0039B; that of loop flows. We call them the <italic>structural gradient ratio, structural harmonic ratio, structural curl ratio</italic> and <italic>structural loop ratio</italic>, respectively. Note that these structural ratios are determined by the underlying network alone, while the ratios denoted by lower-case Greek letters defined above depend on each edge flow; in particular, for information flows the latter quantities are functions of the dynamical process on the network. Note also that each structural ratio is equal to the average relative strength of the corresponding component over edge flows of a fixed <italic>l</italic><sup>2</sup>-norm chosen uniformly at random. Thus, by comparing the relative strength of a component of the information flow generated by a dynamical process to the corresponding structural ratio, we can quantitatively evaluate whether the dynamical process enhances or diminishes the intrinsic strength of that component determined by network topology alone.</p>
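<p>The structural ratios can be computed from the dimensions of the corresponding subspaces: the gradient dimension is <italic>N</italic> minus the number of connected components, the curl dimension is the rank of the triangle (curl) operator, and the harmonic dimension is the remainder. A minimal sketch (our own illustration):</p>

```python
import numpy as np
from itertools import combinations

def structural_ratios(A):
    """Structural gradient, harmonic and curl ratios (Gamma, H, X)
    of an undirected graph with adjacency matrix A.

    dim(gradient) = N - #components, dim(curl) = rank of the triangle
    operator, dim(harmonic) = #edges - dim(gradient) - dim(curl).
    """
    n = A.shape[0]
    edges = [(i, j) for i, j in combinations(range(n), 2) if A[i, j]]
    tris = [(i, j, k) for i, j, k in combinations(range(n), 3)
            if A[i, j] and A[j, k] and A[i, k]]
    m = len(edges)

    idx = {edge: r for r, edge in enumerate(edges)}
    C = np.zeros((len(tris), m))
    for r, (i, j, k) in enumerate(tris):
        C[r, idx[(i, j)]], C[r, idx[(j, k)]], C[r, idx[(i, k)]] = 1.0, 1.0, -1.0

    # count connected components by depth-first search
    seen, components = set(), 0
    for s in range(n):
        if s in seen:
            continue
        components += 1
        stack = [s]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(v for v in range(n) if A[u, v] and v not in seen)

    dim_grad = n - components
    dim_curl = np.linalg.matrix_rank(C) if tris else 0
    dim_harm = m - dim_grad - dim_curl
    return dim_grad / m, dim_harm / m, dim_curl / m
```

For a connected graph with 8 nodes and 16 edges, for instance, the gradient dimension is 8 &#x02212; 1 &#x0003D; 7, giving &#x00393; &#x0003D; 7/16.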
<p>The gradient and curl components of a given edge flow can be numerically computed by solving the corresponding least-squares optimization problems. Here, we obtained them by computing the Moore-Penrose inverses of appropriate matrices (Jiang et al., <xref ref-type="bibr" rid="B15">2011</xref>). The computation of the curl component involves manipulating matrices whose size is the number of triangles, which becomes computationally expensive when the underlying network is close to the lattice network. This is why we restricted our numerical simulations to networks of modest size (<italic>N</italic> &#x02264; 400).</p>
<p>Figure <xref ref-type="fig" rid="F2">2</xref> illustrates the combinatorial Hodge decomposition of an information flow <italic>e</italic> obtained by the procedure described in Section 2.2 on a network generated from the WS model with <italic>N</italic> &#x0003D; 8, <italic>k</italic> &#x0003D; 2 and <italic>p</italic> &#x0003D; 0.1. For this example, we have ||<italic>e</italic>||<sup>2</sup> &#x0003D; 0.411, ||<italic>g</italic>||<sup>2</sup> &#x0003D; 0.028, ||<italic>h</italic>||<sup>2</sup> &#x0003D; 0.122, ||<italic>c</italic>||<sup>2</sup> &#x0003D; 0.261 and ||<italic>l</italic>||<sup>2</sup> &#x0003D; 0.383. Thus, the relative strength of each component is: &#x003B3; &#x0003D; 0.069, &#x003B7; &#x0003D; 0.297, &#x003C7; &#x0003D; 0.634 and &#x003BB; &#x0003D; 0.931. On the other hand, the structural ratios are: &#x00393; &#x0003D; 7/16 &#x0003D; 0.4375, <italic>H</italic> &#x0003D; 1/16 &#x0003D; 0.0625, <italic>X</italic> &#x0003D; 8/16 &#x0003D; 0.5 and &#x0039B; &#x0003D; 9/16 &#x0003D; 0.5625.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>An example of edge flow on a network generated from the WS model with <italic>N</italic> &#x0003D; 8, <italic>k</italic> &#x0003D; 2 and <italic>p</italic> &#x0003D; 0.1 and its combinatorial Hodge decomposition into the three components</bold>. The value of each flow is rounded to four decimal places and multiplied by 10<sup>2</sup> for visibility.</p></caption>
<graphic xlink:href="fncir-10-00077-g0002.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>3. Results</title>
<p>The information flow generated by RTNs on the WS model was decomposed into the three components. In this section, all quantities are averaged over 400 networks for each parameter set of the WS model and error bars in figures represent the standard deviations.</p>
<p>The magnitude of information flow ||<italic>e</italic>||<sup>2</sup> divided by the number of nodes <italic>N</italic> is shown in Figure <xref ref-type="fig" rid="F3">3</xref> for <italic>k</italic> &#x0003D; 3 and <italic>k</italic> &#x0003D; 4. In both cases, this quantity is confined within an interval well separated from zero, indicating that non-trivial information flows were generated for all values of <italic>p</italic>. One might expect the magnitude of information flow to grow as the randomness of the underlying network is strengthened, since we observed in Figure <xref ref-type="fig" rid="F1">1</xref> that the dynamics become more unstable as <italic>p</italic> increases. However, ||<italic>e</italic>||<sup>2</sup>/<italic>N</italic> has a minimum for both <italic>k</italic> &#x0003D; 3 and <italic>k</italic> &#x0003D; 4 in Figure <xref ref-type="fig" rid="F3">3</xref>. This could result from the small-world topology, since the minimum points are contained in the small-world region as defined below (see <bold>Figure 5</bold>). However, the exact reason for this unexpected non-linear behavior is obscure at present. In the following, we concentrate on the relative strengths of the components of information flow and leave this question as future work.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>The magnitude of information flow divided by the number of nodes <italic>N</italic> &#x0003D; 400 for <italic>k</italic> &#x0003D; 3 and <italic>k</italic> &#x0003D; 4</bold>.</p></caption>
<graphic xlink:href="fncir-10-00077-g0003.tif"/>
</fig>
<p>In Figure <xref ref-type="fig" rid="F4">4</xref>, the relative strength of each component of information flow is shown together with the corresponding structural ratio. The gradient ratio &#x003B3; is significantly smaller than the structural gradient ratio &#x00393; for all <italic>p</italic> for both <italic>k</italic> &#x0003D; 3 (Figure <xref ref-type="fig" rid="F4">4A</xref>) and <italic>k</italic> &#x0003D; 4 (Figure <xref ref-type="fig" rid="F4">4D</xref>). This indicates that information flows generated by RTNs favor the loop component. The value of &#x00393; can be obtained theoretically. Indeed, the dimension of the space of edge flows is just the number of edges, which is equal to <italic>kN</italic>. The dimension of the subspace of gradient flows is the number of nodes minus the number of connected components of the underlying network, and the latter can be assumed to be negligible compared to <italic>N</italic> in the setting of our numerical simulations. Thus, &#x00393; &#x0003D; <italic>N</italic>/(<italic>kN</italic>) &#x0002B; <italic>O</italic>(1/<italic>N</italic>) &#x02248; 1/<italic>k</italic>, which does not depend on <italic>p</italic>. This agrees well with the numerical simulations, as shown in Figures <xref ref-type="fig" rid="F4">4A,D</xref>.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>The relative strength of gradient &#x003B3; (A,D), harmonic &#x003B7; (B,E) and curl &#x003C7; (C,F) flows together with corresponding structural ratios are shown for <italic>k</italic> &#x0003D; 3 (top row) and <italic>k</italic> &#x0003D; 4 (bottom row)</bold>.</p></caption>
<graphic xlink:href="fncir-10-00077-g0004.tif"/>
</fig>
<p>Figures <xref ref-type="fig" rid="F4">4B,E</xref> show the harmonic ratio &#x003B7; and the structural harmonic ratio <italic>H</italic>. Both increase as <italic>p</italic> increases, that is, as the underlying network becomes more random. When <italic>p</italic> is close to 1, &#x003B7; is significantly larger than <italic>H</italic>. On the other hand, the curl ratio &#x003C7; and the structural curl ratio <italic>X</italic> decrease as <italic>p</italic> increases. For small values of <italic>p</italic>, &#x003C7; is significantly larger than <italic>X</italic>, as shown in Figures <xref ref-type="fig" rid="F4">4C,F</xref>. Thus, the dominant part of the loop component is enhanced in the information flow generated by RTNs: when the network is close to the lattice network, the curl component is enhanced, while the harmonic component is enhanced for networks close to Erd&#x000F6;s-R&#x000E9;nyi random networks. We can give simple theoretical estimates of <italic>X</italic> and <italic>H</italic>. Let us first consider the lattice network (<italic>p</italic> &#x0003D; 0). In this case, <italic>H</italic> &#x0003D; 0 since any loop of length greater than 3 can be expressed as a &#x0201C;sum&#x0201D; of triangles. Hence, the dimension of the subspace of curl flows is the dimension of the space of edge flows (<italic>kN</italic>) minus the dimension of the subspace of gradient flows (<italic>N</italic> &#x02212; 1). Thus, <italic>X</italic> &#x0003D; (<italic>kN</italic> &#x02212; (<italic>N</italic> &#x02212; 1))/<italic>kN</italic> &#x0003D; (<italic>k</italic> &#x02212; 1)/<italic>k</italic> &#x0002B; <italic>O</italic>(1/<italic>N</italic>) for <italic>p</italic> &#x0003D; 0. Now, let us assume <italic>p</italic> &#x0003E; 0. The dimension of the subspace of curl flows is the number of &#x0201C;linearly independent&#x0201D; triangles, and the triangles present in the lattice network may be destroyed by the random rewiring process. In the WS model, the number of triangles decreases by a multiplicative factor (1 &#x02212; <italic>p</italic>)<sup>3</sup>, up to <italic>O</italic>(1/<italic>N</italic>) terms (Barrat and Weigt, <xref ref-type="bibr" rid="B3">2000</xref>). If we assume that the number of &#x0201C;linearly independent&#x0201D; triangles scales linearly with the number of triangles, then we predict <italic>X</italic> &#x0003D; (<italic>k</italic> &#x02212; 1)(1 &#x02212; <italic>p</italic>)<sup>3</sup>/<italic>k</italic> &#x0002B; <italic>O</italic>(1/<italic>N</italic>) for <italic>p</italic> &#x0003E; 0. For <italic>H</italic>, we have <italic>H</italic> &#x0003D; 1 &#x02212; &#x00393; &#x02212; <italic>X</italic> &#x0003D; (<italic>k</italic> &#x02212; 1)(1 &#x02212; (1 &#x02212; <italic>p</italic>)<sup>3</sup>)/<italic>k</italic> &#x0002B; <italic>O</italic>(1/<italic>N</italic>). These predictions agree well at least for small <italic>p</italic> &#x0003E; 0, as we can see from Figures <xref ref-type="fig" rid="F4">4B,C,E,F</xref>. The <italic>O</italic>(1/<italic>N</italic>) correction can be estimated for <italic>p</italic> &#x0003D; 1 and is visible on the scale of Figure <xref ref-type="fig" rid="F4">4</xref>. When <italic>p</italic> &#x0003D; 1, the expected number of triangles is 4<italic>k</italic><sup>3</sup>/3 (Newman, <xref ref-type="bibr" rid="B25">2010</xref>). Thus, our prediction is <italic>X</italic> &#x0003D; 4<italic>k</italic><sup>2</sup>/(3<italic>N</italic>) for <italic>p</italic> &#x0003D; 1. Since <italic>N</italic> &#x0003D; 400, we have <italic>X</italic> &#x0003D; 0.03 and <italic>X</italic> &#x0003D; 0.0533&#x022EF; for <italic>k</italic> &#x0003D; 3 and <italic>k</italic> &#x0003D; 4, respectively.</p>
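<p>These leading-order predictions are easy to tabulate for comparison against the numerical curves. A small helper (our own illustration, simply restating the formulas above):</p>

```python
def predicted_ratios(k, p):
    """Leading-order structural ratios in the WS model (Section 3),
    up to O(1/N) terms: Gamma ~ 1/k, X ~ (k-1)(1-p)**3/k,
    H = 1 - Gamma - X."""
    gamma = 1.0 / k
    curl = (k - 1) * (1.0 - p) ** 3 / k
    harmonic = 1.0 - gamma - curl
    return gamma, harmonic, curl

def curl_ratio_at_p1(k, n):
    """Finite-size estimate X = 4k^2/(3N) at p = 1, derived from the
    expected triangle count 4k^3/3."""
    return 4.0 * k ** 2 / (3.0 * n)
```

For <italic>N</italic> &#x0003D; 400 this gives <italic>X</italic> &#x0003D; 0.03 for <italic>k</italic> &#x0003D; 3 and <italic>X</italic> &#x02248; 0.0533 for <italic>k</italic> &#x0003D; 4, matching the values quoted in the text.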
<p>One may notice a small dip in Figures <xref ref-type="fig" rid="F4">4A,D</xref> at an intermediate value of <italic>p</italic>. Its counterpart &#x003BB; (&#x0003D; 1 &#x02212; &#x003B3; &#x0003D; &#x003B7; &#x0002B; &#x003C7;) is shown enlarged in Figure <xref ref-type="fig" rid="F5">5</xref>, together with a small-world index &#x003C9; (Telesford et al., <xref ref-type="bibr" rid="B33">2011</xref>). Another small-world index was suggested earlier by Humphries and Gurney (<xref ref-type="bibr" rid="B14">2008</xref>); here, we adopted the former because it better discriminates the small-world region. For a given network generated by the WS model, it is defined by &#x003C9; &#x0003D; <italic>L</italic><sub><italic>r</italic></sub>/<italic>L</italic> &#x02212; <italic>C</italic>/<italic>C</italic><sub><italic>c</italic></sub>, where <italic>L</italic> is the mean path length of the network, <italic>L</italic><sub><italic>r</italic></sub> is the average of the mean path lengths of Erd&#x000F6;s-R&#x000E9;nyi networks with the same numbers of nodes and links (<italic>p</italic> &#x0003D; 1), <italic>C</italic> is the clustering coefficient of the network and <italic>C</italic><sub><italic>c</italic></sub> is the clustering coefficient of the lattice network with the same <italic>k</italic> and <italic>N</italic> (<italic>p</italic> &#x0003D; 0). &#x003C9; lies in the range &#x02212;1 &#x0003C; &#x003C9; &#x0003C; 1, and the network is judged to be small-world if &#x003C9; is close to 0. As &#x003C9; tends toward &#x02212;1, the network is more like a lattice network; as &#x003C9; approaches 1, it becomes more like a random network. From Figure <xref ref-type="fig" rid="F5">5</xref>, we can see that the loop ratio &#x003BB; takes its maximum value within the small-world region (if one would like to fix a boundary, one could take &#x02212;0.5 &#x02264; &#x003C9; &#x02264; 0.5 as the small-world region, as suggested by Telesford et al., <xref ref-type="bibr" rid="B33">2011</xref>). When <italic>k</italic> &#x0003D; 3 (Figure <xref ref-type="fig" rid="F5">5A</xref>), the value of <italic>p</italic> at which &#x003BB; is maximal is slightly shifted toward <italic>p</italic> &#x0003D; 1 from the <italic>p</italic> satisfying &#x003C9; &#x0003D; 0. We observed a similar shift toward <italic>p</italic> &#x0003D; 1 for both <italic>k</italic> &#x0003D; 3, 4 when <italic>N</italic> &#x0003D; 200, but the maximum point of &#x003BB; is still contained in the small-world region (data not shown).</p>
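<p>The index &#x003C9; can be computed from two standard graph statistics. The sketch below (a pure-Python illustration using breadth-first search and the local clustering coefficient) assumes a connected network; the baselines <italic>L</italic><sub><italic>r</italic></sub> and <italic>C</italic><sub><italic>c</italic></sub> are passed in, since in the paper they come from an ensemble of random networks and from the lattice network, respectively.</p>

```python
import numpy as np
from collections import deque
from itertools import combinations

def mean_path_length(A):
    """Average shortest-path length over ordered node pairs (BFS);
    assumes the graph described by adjacency matrix A is connected."""
    n = A.shape[0]
    total = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if A[u, v] and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def clustering(A):
    """Mean local clustering coefficient."""
    n = A.shape[0]
    cs = []
    for i in range(n):
        nbrs = [j for j in range(n) if A[i, j]]
        d = len(nbrs)
        if d < 2:
            cs.append(0.0)
            continue
        links = sum(1 for u, v in combinations(nbrs, 2) if A[u, v])
        cs.append(2.0 * links / (d * (d - 1)))
    return float(np.mean(cs))

def small_world_index(A, L_rand, C_latt):
    """omega = L_r / L - C / C_c (Telesford et al., 2011), given the
    random-network baseline L_rand and the lattice baseline C_latt."""
    return L_rand / mean_path_length(A) - clustering(A) / C_latt
```

For a pure ring lattice, the first term is below 1 while the second equals 1, so &#x003C9; &#x0003C; 0, as expected for a lattice-like network.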
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>The loop ratio &#x003BB; is compared with a small-world index &#x003C9;</bold>. <bold>(A)</bold> <italic>k</italic> &#x0003D; 3 and <bold>(B)</bold> <italic>k</italic> &#x0003D; 4.</p></caption>
<graphic xlink:href="fncir-10-00077-g0005.tif"/>
</fig>
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>The aim of this paper is to reveal the influence of the small-world topology on information flow generated by dynamical processes on the network. We studied the composition of information flow generated by RTNs on the Watts-Strogatz small-world model. Information flows were decomposed into three mutually orthogonal components by combinatorial Hodge theory: gradient, harmonic and curl flows. The result for the structural ratios showed that networks close to the lattice network have a larger capacity to support locally circulating curl flows, while those close to Erd&#x000F6;s-R&#x000E9;nyi random networks favor globally circulating harmonic flows. The result for the relative strengths of harmonic and curl flows indicated that the dominant component of loop flows at fixed <italic>p</italic> is enhanced in information flows generated by RTNs (Figure <xref ref-type="fig" rid="F4">4</xref>). Furthermore, the relative strength of loop flows, which is the sum of those for harmonic and curl flows, takes its maximum value in the small-world region (Figure <xref ref-type="fig" rid="F5">5</xref>). This result suggests that the small-world topology promotes circulating information transfer generated by dynamical processes on it.</p>
<p>In the literature, the small-world topology has often been associated with a balance between integration and segregation of information processing (Sporns and Zwi, <xref ref-type="bibr" rid="B31">2004</xref>; Downes et al., <xref ref-type="bibr" rid="B9">2012</xref>). From this point of view, our result can be interpreted as follows. Harmonic flow represents the globally circulating component of information flow and is thus related to global integration of information processing. On the other hand, curl flow represents the locally circulating component and is thus related to local segregation of information processing. Their summed relative strength &#x003BB; takes its maximum value within the small-world region. This result can be seen as a manifestation of the balance between integration and segregation of information processing achieved in the small-world region. Note that the maximum point of &#x003BB; tends to shift toward the more random side within the small-world region. Although our result is based on synthetic data, it could shed new light on the interpretation of the finding that several real-world brain networks reside in the more random part of the small-world region, far from the maximally small-world point (Muller et al., <xref ref-type="bibr" rid="B24">2014</xref>). In any case, taking dynamical processes on networks into account is important for assessing the functions supported by network topology.</p>
<p>The influence of the small-world topology on the performance of artificial neural networks has been studied previously. Kim (<xref ref-type="bibr" rid="B16">2004</xref>) and Oshima and Odagaki (<xref ref-type="bibr" rid="B26">2007</xref>) showed that the memory capacity of the Hopfield neural network is enhanced as the network becomes more random in the WS model. However, the neural network of the nematode <italic>Caenorhabditis elegans</italic> (Varshney et al., <xref ref-type="bibr" rid="B34">2011</xref>) is organized as a small-world network and has lower memory capacity than fully random networks (Kim, <xref ref-type="bibr" rid="B16">2004</xref>; Oshima and Odagaki, <xref ref-type="bibr" rid="B26">2007</xref>). Why, then, does natural selection not select network topologies with optimal performance? These authors argued that one factor is the wiring cost of making long spatial connections, which was not considered in their numerical experiments. The reason why high clustering diminishes memory capacity is still obscure. However, memory capacity is just one of many functions of brain networks. In particular, it is a global function of a network, since patterns are stored as synaptic strengths distributed over the whole network. In contrast, our analysis took into account both global and local functions, although in a less concrete manner. We identified the positive influence on the composition of information flow not only of small mean path length but also of high clustering, as shown in Figure <xref ref-type="fig" rid="F4">4</xref>.</p>
<p>In conclusion, our approach using combinatorial Hodge theory provides a new tool to analyze information flow generated by dynamical processes on networks. In addition to applying this method to study the effects of various network structures such as degree correlations, network motifs and community structure in mathematical models, applications to real-world multivariate time series data, which are becoming available through progress in multi-site recording techniques, are important future work for further assessing the limits and applicability of this approach.</p>
</sec>
<sec id="s5">
<title>Author contributions</title>
<p>TH and YF designed and performed research. TH wrote the paper.</p>
</sec>
<sec id="s6">
<title>Funding</title>
<p>This work was partially supported by JSPS KAKENHI Grant Number 25280091.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>The authors are indebted to the reviewers for their valuable comments, which improved the manuscript.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barahona</surname> <given-names>M.</given-names></name> <name><surname>Pecora</surname> <given-names>L. M.</given-names></name></person-group> (<year>2002</year>). <article-title>Synchronization in small-world systems</article-title>. <source>Phys. Rev. Lett.</source> <volume>89</volume>:<fpage>054101</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.89.054101</pub-id><pub-id pub-id-type="pmid">12144443</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Barrat</surname> <given-names>A.</given-names></name> <name><surname>Barth&#x000E9;lemy</surname> <given-names>M.</given-names></name> <name><surname>Vespignani</surname> <given-names>A.</given-names></name></person-group> (<year>2008</year>). <source>Dynamical Processes on Complex Networks</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511791383</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barrat</surname> <given-names>A.</given-names></name> <name><surname>Weigt</surname> <given-names>M.</given-names></name></person-group> (<year>2000</year>). <article-title>On the properties of small-world network models</article-title>. <source>Eur. Phys. J. B</source> <volume>13</volume>, <fpage>547</fpage>&#x02013;<lpage>560</lpage>. <pub-id pub-id-type="doi">10.1007/s100510050067</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bassett</surname> <given-names>D.</given-names></name> <name><surname>Bullmore</surname> <given-names>E.</given-names></name></person-group> (<year>2006</year>). <article-title>Small-world brain networks</article-title>. <source>Neuroscientist</source> <volume>12</volume>, <fpage>512</fpage>&#x02013;<lpage>523</lpage>. <pub-id pub-id-type="doi">10.1177/1073858406293182</pub-id><pub-id pub-id-type="pmid">17079517</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bullmore</surname> <given-names>E.</given-names></name> <name><surname>Sporns</surname> <given-names>O.</given-names></name></person-group> (<year>2010</year>). <article-title>Complex brain networks: graph theoretical analysis of structural and functional systems</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>10</volume>, <fpage>186</fpage>&#x02013;<lpage>198</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2575</pub-id><pub-id pub-id-type="pmid">19190637</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chialvo</surname> <given-names>D. R.</given-names></name></person-group> (<year>2010</year>). <article-title>Emergent complex neural dynamics</article-title>. <source>Nat. Phys.</source> <volume>6</volume>, <fpage>744</fpage>&#x02013;<lpage>750</lpage>. <pub-id pub-id-type="doi">10.1038/nphys1803</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cover</surname> <given-names>T. M.</given-names></name> <name><surname>Thomas</surname> <given-names>J. A.</given-names></name></person-group> (<year>1991</year>). <source>Elements of Information Theory</source>. <publisher-loc>Hoboken, NJ</publisher-loc>: <publisher-name>John Wiley &#x00026; Sons, Inc.</publisher-name> <pub-id pub-id-type="doi">10.1002/0471200611</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Derrida</surname> <given-names>B.</given-names></name> <name><surname>Pomeau</surname> <given-names>Y.</given-names></name></person-group> (<year>1986</year>). <article-title>Random networks of automata: a simple annealed approximation</article-title>. <source>Europhys. Lett.</source> <volume>1</volume>, <fpage>45</fpage>&#x02013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.1209/0295-5075/1/2/001</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Downes</surname> <given-names>J. H.</given-names></name> <name><surname>Hammond</surname> <given-names>M. W.</given-names></name> <name><surname>Xydas</surname> <given-names>D.</given-names></name> <name><surname>Spencer</surname> <given-names>M. C.</given-names></name> <name><surname>Becerra</surname> <given-names>V. M.</given-names></name> <name><surname>Warwick</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>Emergence of a small-world functional network in cultured neurons</article-title>. <source>PLoS Comput. Biol.</source> <volume>8</volume>:<fpage>e1002522</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1002522</pub-id><pub-id pub-id-type="pmid">22615555</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fornito</surname> <given-names>A.</given-names></name> <name><surname>Bullmore</surname> <given-names>E. T.</given-names></name></person-group> (<year>2015</year>). <article-title>Connectomics: a new paradigm for understanding brain disease</article-title>. <source>Eur. Neuropsychopharmacol.</source> <volume>25</volume>, <fpage>733</fpage>&#x02013;<lpage>748</lpage>. <pub-id pub-id-type="doi">10.1016/j.euroneuro.2014.02.011</pub-id><pub-id pub-id-type="pmid">24726580</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Fujiki</surname> <given-names>Y.</given-names></name> <name><surname>Haruna</surname> <given-names>T.</given-names></name></person-group> (<year>2014</year>). <article-title>Hodge decomposition of information flow on complex networks</article-title>, in <source>Proceedings of 8th International Conference on Bio-inspired Information and Communications Technologies</source>, eds <person-group person-group-type="editor"><name><surname>Suzuki</surname> <given-names>J.</given-names></name> <name><surname>Nakano</surname> <given-names>T.</given-names></name></person-group> (<publisher-loc>Brussels</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>103</fpage>&#x02013;<lpage>112</lpage>.</citation>
</ref>
<ref id="B12">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Gershenson</surname> <given-names>C.</given-names></name></person-group> (<year>2003</year>). <article-title>Phase transitions in random Boolean networks with different updating schemes</article-title>. arXiv:nlin/0311008v1.</citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hlav&#x000E1;&#x0010D;kov&#x000E1;-Schindler</surname> <given-names>K.</given-names></name> <name><surname>Palu&#x00161;</surname> <given-names>M.</given-names></name> <name><surname>Vejmelka</surname> <given-names>M.</given-names></name> <name><surname>Bhattacharya</surname> <given-names>J.</given-names></name></person-group> (<year>2007</year>). <article-title>Causality detection based on information-theoretic approaches in time series analysis</article-title>. <source>Phys. Rep.</source> <volume>441</volume>, <fpage>1</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1016/j.physrep.2006.12.004</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Humphries</surname> <given-names>M. D.</given-names></name> <name><surname>Gurney</surname> <given-names>K.</given-names></name></person-group> (<year>2008</year>). <article-title>Network &#x02018;Small-World-Ness&#x02019;: a quantitative method for determining canonical network equivalence</article-title>. <source>PLoS ONE</source> <volume>3</volume>:<fpage>e0002051</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0002051</pub-id><pub-id pub-id-type="pmid">18446219</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>X.</given-names></name> <name><surname>Lim</surname> <given-names>L.-H.</given-names></name> <name><surname>Yao</surname> <given-names>Y.</given-names></name> <name><surname>Ye</surname> <given-names>Y.</given-names></name></person-group> (<year>2011</year>). <article-title>Statistical ranking and combinatorial Hodge theory</article-title>. <source>Math. Program. Ser. B</source> <volume>127</volume>, <fpage>203</fpage>&#x02013;<lpage>244</lpage>. <pub-id pub-id-type="doi">10.1007/s10107-010-0419-x</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>B. J.</given-names></name></person-group> (<year>2004</year>). <article-title>Performance of networks of artificial neurons: the role of clustering</article-title>. <source>Phys. Rev. E</source> <volume>69</volume>:<fpage>045101(R)</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.69.045101</pub-id><pub-id pub-id-type="pmid">15169053</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>K&#x000FC;rten</surname> <given-names>K. E.</given-names></name></person-group> (<year>1988</year>). <article-title>Critical phenomena in model neural networks</article-title>. <source>Phys. Lett. A</source> <volume>129</volume>, <fpage>157</fpage>&#x02013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1016/0375-9601(88)90135-1</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lago-Fern&#x000E1;ndez</surname> <given-names>L. F.</given-names></name> <name><surname>Huerta</surname> <given-names>R.</given-names></name> <name><surname>Corbacho</surname> <given-names>F.</given-names></name> <name><surname>Sig&#x000FC;enza</surname> <given-names>J. A.</given-names></name></person-group> (<year>2000</year>). <article-title>Fast response and temporal coherent oscillations in small-world networks</article-title>. <source>Phys. Rev. Lett.</source> <volume>84</volume>, <fpage>2758</fpage>&#x02013;<lpage>2761</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.84.2758</pub-id><pub-id pub-id-type="pmid">11017318</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Latora</surname> <given-names>V.</given-names></name> <name><surname>Marchiori</surname> <given-names>M.</given-names></name></person-group> (<year>2001</year>). <article-title>Efficient behavior of small-world networks</article-title>. <source>Phys. Rev. Lett.</source> <volume>87</volume>:<fpage>198701</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.87.198701</pub-id><pub-id pub-id-type="pmid">11690461</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lizier</surname> <given-names>J. T.</given-names></name> <name><surname>Pritam</surname> <given-names>S.</given-names></name> <name><surname>Prokopenko</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Information dynamics in small-world Boolean networks</article-title>. <source>Artif. Life</source> <volume>17</volume>, <fpage>293</fpage>&#x02013;<lpage>314</lpage>. <pub-id pub-id-type="doi">10.1162/artl_a_00040</pub-id><pub-id pub-id-type="pmid">21762020</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Markov</surname> <given-names>N. T.</given-names></name> <name><surname>Ercsey-Ravasz</surname> <given-names>M.</given-names></name> <name><surname>Van Essen</surname> <given-names>D. C.</given-names></name> <name><surname>Knoblauch</surname> <given-names>K.</given-names></name> <name><surname>Toroczkai</surname> <given-names>Z.</given-names></name> <name><surname>Kennedy</surname> <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>Cortical high-density counterstream architectures</article-title>. <source>Science</source> <volume>342</volume>:<fpage>1238406</fpage>. <pub-id pub-id-type="doi">10.1126/science.1238406</pub-id><pub-id pub-id-type="pmid">24179228</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miura</surname> <given-names>K.</given-names></name> <name><surname>Aoki</surname> <given-names>T.</given-names></name></person-group> (<year>2015a</year>). <article-title>Hodge-Kodaira decomposition of evolving neural networks</article-title>. <source>Neural Netw.</source> <volume>62</volume>, <fpage>20</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2014.05.021</pub-id><pub-id pub-id-type="pmid">24958507</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miura</surname> <given-names>K.</given-names></name> <name><surname>Aoki</surname> <given-names>T.</given-names></name></person-group> (<year>2015b</year>). <article-title>Scaling of Hodge-Kodaira decomposition distinguishes learning rules of neural networks</article-title>. <source>IFAC-PapersOnLIne</source> <volume>48</volume>, <fpage>175</fpage>&#x02013;<lpage>180</lpage>. <pub-id pub-id-type="doi">10.1016/j.ifacol.2015.11.032</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Muller</surname> <given-names>L.</given-names></name> <name><surname>Destexhe</surname> <given-names>A.</given-names></name> <name><surname>Rudolph-Lilith</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>Brain networks: small-worlds, after all?</article-title> <source>New J. Phys.</source> <volume>16</volume>:<fpage>105004</fpage>. <pub-id pub-id-type="doi">10.1088/1367-2630/16/10/105004</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Newman</surname> <given-names>M. E. J.</given-names></name></person-group> (<year>2010</year>). <source>Networks: An Introduction</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press Inc</publisher-name>. <pub-id pub-id-type="doi">10.1093/acprof:oso/9780199206650.001.0001</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oshima</surname> <given-names>H.</given-names></name> <name><surname>Odagaki</surname> <given-names>T.</given-names></name></person-group> (<year>2007</year>). <article-title>Storage capacity and retrieval time of small-world neural networks</article-title>. <source>Phys. Rev. E</source> <volume>76</volume>:<fpage>036114</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.76.036114</pub-id><pub-id pub-id-type="pmid">17930313</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poli</surname> <given-names>D.</given-names></name> <name><surname>Pastore</surname> <given-names>V. P.</given-names></name> <name><surname>Massobrio</surname> <given-names>P.</given-names></name></person-group> (<year>2015</year>). <article-title>Functional connectivity in <italic>in vitro</italic> neuronal assemblies</article-title>. <source>Front. Neural Circuits</source> <volume>9</volume>:<fpage>57</fpage>. <pub-id pub-id-type="doi">10.3389/fncir.2015.00057</pub-id><pub-id pub-id-type="pmid">26500505</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rohlf</surname> <given-names>T.</given-names></name></person-group> (<year>2008</year>). <article-title>Critical line in random-threshold networks with inhomogeneous thresholds</article-title>. <source>Phys. Rev. E</source> <volume>78</volume>:<fpage>066118</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.78.066118</pub-id><pub-id pub-id-type="pmid">19256916</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rohlf</surname> <given-names>T.</given-names></name> <name><surname>Bornholdt</surname> <given-names>S.</given-names></name></person-group> (<year>2002</year>). <article-title>Criticality in random threshold networks: annealed approximation and beyond</article-title>. <source>Physica A</source> <volume>310</volume>, <fpage>245</fpage>&#x02013;<lpage>259</lpage>. <pub-id pub-id-type="doi">10.1016/S0378-4371(02)00798-7</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schreiber</surname> <given-names>T.</given-names></name></person-group> (<year>2000</year>). <article-title>Measuring information transfer</article-title>. <source>Phys. Rev. Lett.</source> <volume>85</volume>, <fpage>461</fpage>&#x02013;<lpage>464</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.85.461</pub-id><pub-id pub-id-type="pmid">10991308</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sporns</surname> <given-names>O.</given-names></name> <name><surname>Zwi</surname> <given-names>J. D.</given-names></name></person-group> (<year>2004</year>). <article-title>The small world of the cerebral cortex</article-title>. <source>Neuroinformatics</source> <volume>2</volume>, <fpage>145</fpage>&#x02013;<lpage>162</lpage>. <pub-id pub-id-type="doi">10.1385/NI:2:2:145</pub-id><pub-id pub-id-type="pmid">15319512</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Szejka</surname> <given-names>A.</given-names></name> <name><surname>Mihaljev</surname> <given-names>T.</given-names></name> <name><surname>Drossel</surname> <given-names>B.</given-names></name></person-group> (<year>2008</year>). <article-title>The phase diagram of random threshold networks</article-title>. <source>New J. Phys.</source> <volume>10</volume>:<fpage>063009</fpage>. <pub-id pub-id-type="doi">10.1088/1367-2630/10/6/063009</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Telesford</surname> <given-names>Q. K.</given-names></name> <name><surname>Joyce</surname> <given-names>K. E.</given-names></name> <name><surname>Hayasaka</surname> <given-names>S.</given-names></name> <name><surname>Burdette</surname> <given-names>J. H.</given-names></name> <name><surname>Laurienti</surname> <given-names>P. J.</given-names></name></person-group> (<year>2011</year>). <article-title>The ubiquity of small-world networks</article-title>. <source>Brain Connect.</source> <volume>1</volume>, <fpage>367</fpage>&#x02013;<lpage>375</lpage>. <pub-id pub-id-type="doi">10.1089/brain.2011.0038</pub-id><pub-id pub-id-type="pmid">22432451</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Varshney</surname> <given-names>L. R.</given-names></name> <name><surname>Chen</surname> <given-names>B. L.</given-names></name> <name><surname>Paniagua</surname> <given-names>E.</given-names></name> <name><surname>Hall</surname> <given-names>D. H.</given-names></name> <name><surname>Chklovskii</surname> <given-names>D. B.</given-names></name></person-group> (<year>2011</year>). <article-title>Structural properties of the <italic>Caenorhabditis elegans</italic> neuronal network</article-title>. <source>PLoS Comput. Biol.</source> <volume>7</volume>:<fpage>e1001066</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1001066</pub-id><pub-id pub-id-type="pmid">21304930</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Watts</surname> <given-names>D. J.</given-names></name> <name><surname>Strogatz</surname> <given-names>S. H.</given-names></name></person-group> (<year>1998</year>). <article-title>Collective dynamics of &#x02018;small-world&#x02019; networks</article-title>. <source>Nature</source> <volume>393</volume>, <fpage>440</fpage>&#x02013;<lpage>442</lpage>. <pub-id pub-id-type="doi">10.1038/30918</pub-id><pub-id pub-id-type="pmid">9623998</pub-id></citation>
</ref>
</ref-list>
</back>
</article>