ORIGINAL RESEARCH article

Front. Comput. Neurosci., 05 October 2017
Volume 11 - 2017 | https://doi.org/10.3389/fncom.2017.00087

Dynamical Graph Theory Networks Methods for the Analysis of Sparse Functional Connectivity Networks and for Determining Pinning Observability in Brain Networks

Anke Meyer-Bäse1,2* Rodney G. Roberts3 Ignacio A. Illan1,4 Uwe Meyer-Bäse3 Marc Lobbes2 Andreas Stadlbauer5 Katja Pinker-Domenig1,6
  • 1Department of Scientific Computing, Florida State University, Tallahassee, FL, United States
  • 2Department of Radiology and Nuclear Medicine, Maastricht University Medical Center, Maastricht, Netherlands
  • 3Department of Electrical and Computer Engineering, Florida State University, Tallahassee, FL, United States
  • 4Department of Signal Theory and Communications, University of Granada, Granada, Spain
  • 5Department of Neurosurgery, University of Erlangen-Nürnberg, Erlangen, Germany
  • 6Memorial Sloan-Kettering Cancer Center, New York, NY, United States

Neuroimaging in combination with graph theory has been successful in analyzing the functional connectome. However, almost all analyses are performed based on static graph theory, and the derived quantitative graph measures can only describe a snapshot of the disease over time. Neurodegenerative disease evolution is poorly understood, and treatment strategies are consequently only of limited efficiency. Fusing modern dynamic graph network theory techniques and modeling strategies at different time scales with pinning observability of complex brain networks will lay the foundation for a transformational paradigm in neurodegenerative disease research regarding disease evolution at the patient level, treatment response evaluation, and revealing central mechanisms in a network that drive alterations in these diseases. We model and analyze brain networks as two-time-scale sparse dynamic graph networks with hubs (clusters) representing the fast sub-system and the interconnections between hubs the slow sub-system. Alterations in brain function as seen in dementia can be dynamically modeled by determining the clusters in which disturbance inputs have entered and the impact they have on the large-scale dementia dynamic system. Observing a small fraction of specific nodes in dementia networks such that the remaining nodes can be recovered is accomplished by the novel concept of pinning observability. In addition, how to control this complex network seems to be crucial in understanding the progressive abnormal neural circuits in many neurodegenerative diseases. Detecting the controlling regions in the network, which serve as key nodes to control the aberrant dynamics of the network to a desired state and thus influence the progressive abnormal behavior, will have a huge impact in understanding and developing therapeutic solutions and will also provide useful information about the trajectory of the disease. In this paper, we present the theoretical framework and derive the necessary conditions for (1) area aggregation and time-scale modeling in brain networks and for (2) pinning observability of nodes in dynamic graph networks. Simulation examples are given to illustrate the theoretical concepts.

1. Introduction

Novel mathematical paradigms such as graph theoretical techniques can capture the brain connectivity and its topology (Fornito et al., 2012; Giessing and Thiel, 2012; Zeng et al., 2012). New descriptors of complex networks are able to quantify induced changes in topology or network organization or serve as theory-driven biomarkers to be used in disease prediction at the level of the individual subject.

Most graph networks applied to brain research are static and cannot capture the dynamical processes governing the time evolution of neurodegenerative diseases. A new paradigm in brain research, dynamical graph networks, is therefore required to advance this field and overcome the obstacles posed by static graph theory in terms of disease prediction and evolution and the associated connectivity changes.

Biologically relevant brain networks represent complex large-scale dynamical systems capturing the interaction of a large number of subsystems. Standard analysis methods and control strategies become very difficult at this scale, and novel techniques need to be developed to (1) analyze the dynamical behavior of these large-scale systems and (2) observe their states.

To address the first issue, we employ a model simplification that results in a model of lower complexity, which is easier to handle and provides a simplified synthesis procedure for design problems at reduced computational cost. Balanced truncation is a popular method for model reduction since it is relatively simple and the quality of the reduced model is guaranteed. The interpretation of most balancing techniques is based on the concept of past and future energy. The most important contribution was the balancing of stable minimal linear systems (Moore, 1981). It takes a state-space point of view, employing the well-known controllability and observability Gramians, which are related to the past input energy (controllability) and the future output energy (observability). The idea behind transforming a system into balanced form is to easily detect and remove state components of the initial system to obtain a reduced-order model. The importance of a component is based on the Hankel singular values, which determine whether the output energy of a certain component is small, and thus difficult to observe, and whether the input energy needed to reach this state is large. While for linear systems finding a balancing coordinate transformation via solutions of the controllability and observability Lyapunov equations is quite easy, for nonlinear systems these equations are almost impossible to solve, and thus balancing is in general not a simple task (Scherpen, 1993; Lall et al., 2002). These techniques have been applied to the analysis of gene regulatory networks (Meyer-Bäse, 2008); however, they are not efficient for model reduction of large-scale networks since they involve the computationally expensive operation of matrix factorization. In addition, this method does not preserve the connection structure between subsystems and erases the neurological meaning of the subsystem state variables. Therefore, a network topology-preserving mechanism for model reduction is required for large-scale networks. We will present an area aggregation and time-scale modeling for sparse brain networks with densely interconnected hubs and externally sparse interconnections between these hubs (Tahmassebi et al., 2017). In Biyik and Arcak (2008) it was shown that the neurons in the hubs synchronize on the fast time-scale and, as aggregated neurons, determine the slow dynamics of the neural network. We derive a simplified dynamic two-time-scale representation for brain connectivity networks assuming linear connections between the nodes. The eigenvalues of the state matrices of the slow and fast systems provide important information about the dynamic evolution of brain connectivity networks at different stages of neurodegenerative diseases. The structural parameters of these networks, expressed by the node and area parameters, unveil changes in the sparsity patterns over the course of the disease.

The second issue, observability of neural states, can be addressed via synchronization. Synchronization plays an important role in the analysis of neural networks in neurodegenerative diseases. For many neurodegenerative diseases it is very important to obtain information about some neural states in order to recover the others. The new concept of “pinning observability,” first proposed in Yu et al. (2014), refers to observing a small number of neurons such that the states of the other neurons can be recovered. In contrast to the concept of pinning controllability (Chen et al., 2007; Liu et al., 2011; Song and Cao, 2010; Tang et al., 2012), the dynamics of the neurons can be heterogeneous. This property is extremely appealing for brain connectivity networks, where we aim to gain insight into the dynamics of several regions by observing only a few of them. We will derive for these networks a general criterion for synchronization and then decoupled conditions for pinning observability, taking into account their heterogeneous architecture.

In the present paper, we present a reduced-model approximation over time for large-scale brain networks, derive pinning observability conditions for competitive neural networks, and illustrate the theoretical analysis with examples.

2. Reduced-Model Approximation Over Time

Many brain connectivity networks exhibit a heterogeneous structure with densely linked nodes in an area but with sparse connections between these areas. The network is viewed as an interconnected graph with links between the areas which are viewed as nodes in the graph. Thus, the architecture can be described concisely by two main parameters (Chow and Kokotovic, 1985): the node parameter d and the area parameter δ. The node parameter is given as

$$d = \frac{c^E}{c^I} \ll 1 \qquad (1)$$

where $c^E$ is the number of external links of the node with the largest number of external links to nodes outside its area, and $c^I$ is the number of internal links of the node with the smallest number of internal links to nodes inside its area; $d$ needs to be small. The area parameter is given as

$$\delta = \frac{\gamma^E}{m\, c^I} \ll 1 \qquad (2)$$

where $\gamma^E$ is the number of external links of the area with the largest number of external links and $m$ is the minimal number of nodes found in an area.
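Both parameters can be computed directly from an adjacency matrix and an area assignment. The following is a minimal illustrative sketch (our own code, not from the paper; the function name and the NumPy formulation are assumptions):

```python
import numpy as np

def sparsity_parameters(adj, areas):
    """Node parameter d (Equation 1) and area parameter delta (Equation 2).

    adj: symmetric 0/1 adjacency matrix; areas: area label of each node.
    """
    adj, areas = np.asarray(adj), np.asarray(areas)
    same = areas[:, None] == areas[None, :]      # mask of intra-area pairs
    internal = (adj * same).sum(axis=1)          # internal links per node
    external = (adj * ~same).sum(axis=1)         # external links per node
    c_I = internal.min()                         # sparsest internal node
    c_E = external.max()                         # busiest external node
    labels = np.unique(areas)
    gamma_E = max(external[areas == a].sum() for a in labels)
    m = min((areas == a).sum() for a in labels)  # smallest area size
    return c_E / c_I, gamma_E / (m * c_I)        # d, delta
```

For the 8-node, 2-area network of Example 1 below, this yields d = 0.5 and δ = 0.25, matching the values reported there.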

While the node and area parameters describe the static behavior and the sparsity pattern of the network, a more interesting approach is to analyze the dynamical behavior of such a network. In order to obtain a reduced-model approximation, we view the brain connectivity graph as a structured representation with dense areas (clusters) and sparse interconnections between these areas (Biyik and Arcak, 2008). The time behavior of networks with densely linked neurons within an area but with sparsely connected areas can be described dynamically by a two-time-scale system, where the neurons within the same area synchronize on the fast time-scale because the dense internal links allow the neural potentials in the same area to quickly reach an equilibrium. During the fast time-scale the exchange with the other areas is slow because of the sparsity of the links; it becomes significant only over a longer period, thus leading to a slow time-scale. This coupled dynamics leads to a reduced-order model describing the long-term behavior of the overall network.

The dynamical formulation requires that the activation states within a group synchronize with each other and achieve a reference group velocity $\upsilon(t)$:

$$\lim_{t\to\infty} |x_i(t) - x_j(t)| = 0, \quad i,j = 1,\ldots,N$$
$$\lim_{t\to\infty} |\dot{x}_i - \upsilon(t)| = 0, \quad i = 1,\ldots,N \qquad (3)$$

The synaptic connections $d_{ik}$ are defined as

$$d_{ik} = \begin{cases} 1, & \text{if the } i\text{-th node is the positive end of the } k\text{-th link}\\ -1, & \text{if the } i\text{-th node is the negative end of the } k\text{-th link}\\ 0, & \text{otherwise} \end{cases} \qquad (4)$$

We assume a neural network of $N$ nodes and $M$ total links, thus yielding an $N \times M$ incidence matrix $D$ describing this network. The formulated objectives in Equation (3) are achieved by the following network architecture

$$\dot{x}_i = -\sum_{k=1}^{M} d_{ik} f_k(\theta_k) + \upsilon(t), \quad i = 1,\ldots,N \qquad (5)$$

where the difference variable $\theta_k$ is the difference between the positive and negative ends of the $k$-th link, i.e.,

$$\theta_k := \sum_{l=1}^{N} d_{lk} x_l = \begin{cases} x_i - x_j, & \text{if } i \text{ is the positive end,}\\ x_j - x_i, & \text{if } j \text{ is the positive end,} \end{cases} \qquad (6)$$

and $f_k(\theta_k)$ is uniformly monotone, fulfilling the sector condition $f_k(\theta_k)\theta_k > 0$ for all $\theta_k \in \mathbb{R}$.

In matrix representation this can be expressed as

$$\dot{x} = -D f(D^T x) + \mathbf{1}_N \upsilon(t) \qquad (7)$$

with $f(\theta) = [f_1(\theta_1), \ldots, f_M(\theta_M)]^T$ and $\mathbf{1}_N$ being the $N$-vector of ones.
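To illustrate Equation (7), the following minimal sketch simulates a 4-node path network; the incidence matrix, the choice $f_k = \tanh$ (monotone and sector-bounded), and the constant reference velocity $\upsilon(t) = 0.1$ are our illustrative assumptions, not data from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# incidence matrix D of a 4-node path graph (links 1-2, 2-3, 3-4);
# column k holds +1 at the positive end and -1 at the negative end
D = np.array([[ 1,  0,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1]], dtype=float)
v = lambda t: 0.1                       # reference group velocity

def rhs(t, x):
    theta = D.T @ x                     # link differences, Equation (6)
    return -D @ np.tanh(theta) + v(t)   # Equation (7)

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, -0.5, 0.3, 2.0])
print("final spread:", np.ptp(sol.y[:, -1]))  # -> ~0, the states synchronize
```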

We reorder the incidence matrix D as

$$D = [D^I \,|\, D^E] \qquad (8)$$

such that $D^I = \mathrm{diag}(D_1^I, \ldots, D_r^I)$ corresponds to the $r$ areas.

We assume an $N$-node network with $r$ internally dense but sparsely interconnected areas. Area $\alpha$ has $m_\alpha$ neurons, $\alpha = 1, 2, \cdots, r$, and the vector $x^\alpha = [x_1^\alpha \cdots x_{m_\alpha}^\alpha]^T$ contains all neural activities in area $\alpha$. Then we define the slow variable (Biyik and Arcak, 2008; Chow and Kokotovic, 1985) representing an aggregate area as

$$y_\alpha := \frac{\sum_{i=1}^{m_\alpha} x_i^\alpha}{m_\alpha} = \frac{1}{m_\alpha} u_\alpha^T x^\alpha, \qquad (9)$$

i.e., $y_\alpha$ is the average of the components of $x^\alpha$; in matrix representation, we obtain

$$y = M_a^{-1} U^T x \qquad (10)$$

with $M_a = \mathrm{diag}(m_1, m_2, \cdots, m_r)$ and $U = \mathrm{diag}(u_1, u_2, \cdots, u_r)$, where each $u_\alpha = \mathbf{1}_{m_\alpha}$ is an $m_\alpha$-vector of ones.

The fast variable $z^\alpha$ is given as a transformation of the differences between the activations of the neurons in the same area (Biyik and Arcak, 2008; Chow and Kokotovic, 1985)

$$z^\alpha = Q_\alpha x^\alpha \qquad (11)$$

where the $(m_\alpha - 1) \times m_\alpha$ matrix $Q_\alpha$ is a difference matrix. In Chow and Kokotovic (1985) $Q_\alpha$ is given by

$$Q_\alpha = \begin{pmatrix} -1 & 1 & 0 & \cdots & 0\\ -1 & 0 & 1 & \cdots & 0\\ \vdots & & & \ddots & \\ -1 & 0 & 0 & \cdots & 1 \end{pmatrix} \qquad (12)$$

while in Biyik and Arcak (2008) it is chosen to have orthonormal rows with the vector $\mathbf{1}_{m_\alpha}$ in its null space:

$$Q_\alpha = \begin{pmatrix} -1+(n-1)\upsilon & 1-\upsilon & -\upsilon & \cdots & -\upsilon\\ -1+(n-1)\upsilon & -\upsilon & 1-\upsilon & \cdots & -\upsilon\\ \vdots & & & \ddots & \\ -1+(n-1)\upsilon & -\upsilon & -\upsilon & \cdots & 1-\upsilon \end{pmatrix} \qquad (13)$$

where $n = m_\alpha$ and $\upsilon = (n - \sqrt{n})/(n(n-1))$. In this work we will use the latter version of $Q_\alpha$ given in Equation (13). This has the advantage that the pseudoinverse of $Q_\alpha$ is simply $Q_\alpha^T$.
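These properties are easy to verify numerically; the following sketch (the helper name is our own) builds $Q_\alpha$ per Equation (13) and checks the orthonormal rows, the null vector, and the pseudoinverse:

```python
import numpy as np

def Q_alpha(n):
    """Difference matrix of Equation (13) for an area with n nodes."""
    v = (n - np.sqrt(n)) / (n * (n - 1))
    Q = np.full((n - 1, n), -v)
    Q[:, 0] = -1 + (n - 1) * v                     # common first column
    Q[np.arange(n - 1), np.arange(1, n)] = 1 - v   # shifted diagonal
    return Q

Q = Q_alpha(4)
assert np.allclose(Q @ Q.T, np.eye(3))       # orthonormal rows
assert np.allclose(Q @ np.ones(4), 0)        # 1_n lies in the null space
assert np.allclose(np.linalg.pinv(Q), Q.T)   # pseudoinverse is simply Q^T
```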

In matrix form, with $z = [z^{1T}\, z^{2T} \cdots z^{rT}]^T$, we have the fast $(N-r)$-vector

$$z = Q x \qquad (14)$$

with $Q = \mathrm{diag}(Q_1, Q_2, \cdots, Q_r)$ being an $(N-r) \times N$ block diagonal matrix.

We have thus defined a transformation of the original neural activation $x$ into slow and fast variables (Biyik and Arcak, 2008)

$$\begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} M_a^{-1} U^T \\ Q \end{pmatrix} x \qquad (15)$$

Similarly to the matrix $Q$ describing the differences between the nodes in one area, we introduce the matrix $R$, an $(r-1) \times r$ matrix also with orthonormal rows and with null vector $\mathbf{1}_r$, that models the interarea dynamics

$$\tilde{y} = R y \qquad (16)$$

where $\tilde{y}$ is the area difference variable. For further computations, we define a new matrix $C$ as $C = R M_a^{-1} R^T$.

By splitting the nonlinear vector $f(\theta)$ from Equation (7) into two components and partitioning $U^T D$ and $QD$, we obtain the block matrix representations

$$U^T D = (0 \,|\, \Sigma), \quad f = \begin{pmatrix} f^I \\ f^E \end{pmatrix}, \quad QD = (A \,|\, B) \qquad (17)$$

where $\Sigma = U^T D^E$, $A$ is the matrix of the internal links over all areas, and $B$ corresponds to the external links. This yields a new singularly perturbed model

$$\dot{\tilde{y}} = -C R \Sigma f^E(\Sigma^T R^T \tilde{y} + B^T z)$$
$$\dot{z} = -A f^I(A^T z) - B f^E(\Sigma^T R^T \tilde{y} + B^T z) \qquad (18)$$

The above model (Equation 18) applies to large-scale brain networks for sufficiently small network parameters $\delta$ and $d$. The theoretical results of Biyik and Arcak (2008) state that the neurons in the same area synchronize on the fast time-scale, leading to a substitute aggregate neuron on the slow time-scale.

Since most brain connectivity graph networks are modeled as linear systems, we can derive a reduced-model approximation over time. We assume linear connections for the brain network, replace the nonlinearity $f$ by the identity function, and set $\upsilon(t) = 0$. Then $\dot{x} = Kx = (K^I + K^E)x$, with $K^E$ being the external and $K^I$ the internal connection matrix.

The new linear singularly perturbed system is given as

$$\begin{pmatrix} \dot{y} \\ \dot{z} \end{pmatrix} = \begin{pmatrix} \tilde{A}_{11} & \tilde{A}_{12} \\ \tilde{A}_{21} & \tilde{A}_{22} \end{pmatrix} \begin{pmatrix} y \\ z \end{pmatrix} \qquad (19)$$

where

$$\begin{pmatrix} \tilde{A}_{11} & \tilde{A}_{12} \\ \tilde{A}_{21} & \tilde{A}_{22} \end{pmatrix} = \begin{pmatrix} G K^E U & G K^E Q^T \\ Q K^E U & Q(K^I + K^E) Q^T \end{pmatrix} \qquad (20)$$

and $G = M_a^{-1} U^T$. Note that in Equation (20) we chose $Q$ using the $Q_\alpha$ matrices given in Equation (13). If we had chosen to use Equation (12) instead, the rows of $Q$ would not have been orthonormal, and the $Q^T$ in $\tilde{A}_{12}$ and $\tilde{A}_{22}$ would be replaced with $Q^+ = Q^T(QQ^T)^{-1}$.
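Assembling the block matrices of Equation (20) is mechanical once $K^I$, $K^E$, and the area sizes are given. The sketch below (the function names are our own, not the authors' code) mirrors the definitions above:

```python
import numpy as np
from scipy.linalg import block_diag

def Q_alpha(n):                                    # Equation (13)
    v = (n - np.sqrt(n)) / (n * (n - 1))
    Q = np.full((n - 1, n), -v)
    Q[:, 0] = -1 + (n - 1) * v
    Q[np.arange(n - 1), np.arange(1, n)] = 1 - v
    return Q

def slow_fast_blocks(KI, KE, sizes):
    """Return (A11~, A12~, A21~, A22~) of Equation (20)."""
    U = block_diag(*[np.ones((m, 1)) for m in sizes])
    G = np.linalg.inv(np.diag(sizes).astype(float)) @ U.T   # G = Ma^{-1} U^T
    Q = block_diag(*[Q_alpha(m) for m in sizes])
    return (G @ KE @ U, G @ KE @ Q.T,
            Q @ KE @ U, Q @ (KI + KE) @ Q.T)
```

Had the difference matrix of Equation (12) been used instead, `Q.T` would be replaced by `np.linalg.pinv(Q)`, in line with the remark above.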

We can determine the time-scale model by defining the fast and slow time-scales

$$t_f = c^I t \quad \text{and} \quad t_s = \delta\, t_f \qquad (21)$$

and rescaling the matrices $\tilde{A}_{ij}$ as

$$A_{11} = \frac{\tilde{A}_{11}}{c^I \delta}, \quad A_{12} = \frac{\tilde{A}_{12}}{c^I \delta}, \quad A_{21} = \frac{\tilde{A}_{21}}{c^I d}, \quad A_{22} = \frac{\tilde{A}_{22}}{c^I} \qquad (22)$$

leading to a new system

$$\frac{dy}{dt_s} = A_{11} y + A_{12} z$$
$$\delta \frac{dz}{dt_s} = d\, A_{21} y + A_{22} z \qquad (23)$$

The above results are summarized in the theorem proven in Chow and Kokotovic (1985).

Theorem 1 (Chow and Kokotovic, 1985). There exist $\delta^*$ and $d^*$ such that for all $0 < \delta \le \delta^*$, $0 < d \le d^*$, the system in Equation (23) has $r$ slow eigenvalues and $N - r$ fast eigenvalues. The fast and slow subsystems are given as

$$\frac{dy_s}{dt_s} = (A_{11} - d\, A_{12} A_{22}^{-1} A_{21})\, y_s = A_0 y_s, \quad y_s(0) = y(0)$$
$$\frac{dz_f}{dt_f} = A_{22} z_f, \quad z_f(0) = z(0) + d\, A_{22}^{-1} A_{21} y(0). \qquad (24)$$

In Chow and Kokotovic (1985) a simplified formulation of the slow system was presented, the so-called aggregate system, given as

$$M_a \frac{dy_s}{dt} = K_a y_s \qquad (25)$$

with $K_a = U^T K^E U$.

We illustrate the time-scale separation properties and the aggregation procedure first on a general example and then on structural and functional connectivity networks from healthy subjects and dementia patients. The reduced-order model delivers important dynamic parameters, derived from the state matrices of the slow and fast subsystems, that differ between healthy and dementia subjects. In addition, it determines sparsity parameters, based on the area and node parameters, which differ at each specific stage of the disease.

2.1. Examples

Example 1. Consider an 8-node brain network that is partitioned into two areas as shown in Figure 1.

The connection matrices for Figure 1 are

$$K_1^I = \begin{pmatrix} -3 & 1 & 1 & 1\\ 1 & -2 & 1 & 0\\ 1 & 1 & -3 & 1\\ 1 & 0 & 1 & -2 \end{pmatrix}, \quad K_2^I = \begin{pmatrix} -3 & 1 & 1 & 1\\ 1 & -3 & 1 & 1\\ 1 & 1 & -3 & 1\\ 1 & 1 & 1 & -3 \end{pmatrix}, \qquad (26)$$
$$K_{12}^E = \begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} = (K_{21}^E)^T, \qquad (27)$$
$$K_{11}^E = \mathrm{diag}(0, 0, -1, -1), \quad K_{22}^E = \mathrm{diag}(-1, 0, 0, -1). \qquad (28)$$

We have $c^I = 2$, $c^E = 1$, $d = 0.5$, $\gamma^E = 2$, and $\delta = 0.25$. Further, for the slow subsystem according to Equation (24), we have

$$\frac{dy_s}{dt_s} = A_0 y_s \qquad (29)$$

with $A_0 = \begin{pmatrix} -0.7634 & 0.7634\\ 0.7634 & -0.7634 \end{pmatrix}$, which has the two eigenvalues 0 and −1.5267. These are close to the eigenvalues 0 and −2 of the matrix $A_{11}$. Thus the long-term behavior of the system is correctly approximated by the slow subsystem.
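This computation can be reproduced from Equations (20), (22), (24), and (25) using only the quantities given above; a sketch (our own code, not the authors'), with the eigenvalues quoted in the comments being those reported in the text:

```python
import numpy as np
from scipy.linalg import block_diag

def Q_alpha(n):                                    # Equation (13)
    v = (n - np.sqrt(n)) / (n * (n - 1))
    Q = np.full((n - 1, n), -v)
    Q[:, 0] = -1 + (n - 1) * v
    Q[np.arange(n - 1), np.arange(1, n)] = 1 - v
    return Q

K1I = np.array([[-3, 1, 1, 1], [1, -2, 1, 0], [1, 1, -3, 1], [1, 0, 1, -2]])
K2I = np.array([[-3, 1, 1, 1], [1, -3, 1, 1], [1, 1, -3, 1], [1, 1, 1, -3]])
K12 = np.array([[0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]])
KI = block_diag(K1I, K2I).astype(float)
KE = np.block([[np.diag([0, 0, -1, -1]), K12],
               [K12.T, np.diag([-1, 0, 0, -1])]]).astype(float)

cI, d, delta = 2.0, 0.5, 0.25
U = block_diag(np.ones((4, 1)), np.ones((4, 1)))
G = np.diag([0.25, 0.25]) @ U.T                    # G = Ma^{-1} U^T
Q = block_diag(Q_alpha(4), Q_alpha(4))

A11 = (G @ KE @ U) / (cI * delta)                  # Equation (22)
A12 = (G @ KE @ Q.T) / (cI * delta)
A21 = (Q @ KE @ U) / (cI * d)
A22 = (Q @ (KI + KE) @ Q.T) / cI

A0 = A11 - d * A12 @ np.linalg.inv(A22) @ A21      # Equation (24)
print(np.linalg.eigvals(A11))                      # 0 and -2, as in the text
print(np.linalg.eigvals(A0))                       # 0 and -1.5267, as in the text
Ka = U.T @ KE @ U                                  # aggregate matrix, Equation (25)
```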

Figure 1. An 8-node, 2-area network.

Example 2. We apply the theoretical results to functional (FDG-PET) and structural (MRI) connectivity graphs (Ortiz et al., 2015) for control (CN), mild cognitive impairment (MCI), and Alzheimer's disease (AD) subjects. For the structural data, the connections in the graph show the inter-regional covariation of gray matter volumes in different areas, while for the functional data, the connections show correlation not in activity but in glucose uptake between different regions. Ortiz et al. (2015) considered only 42 of the 116 regions of the AAL atlas, located in the frontal, parietal, occipital, and temporal lobes. The nodes in the graphs represent the regions, while the links indicate whether a connection exists between these regions.

Figure 2 shows the clusters found on the functional data for (A) controls, (B) MCI, and (C) AD (Ortiz et al., 2015). We perform a time-scale modeling and area aggregation with two main areas on the three functional networks from Figure 2. For CN and MCI we can apply Theorem 1; however, for AD we are not able to obtain an area aggregation since the conditions of Theorem 1 are not satisfied.

Figure 2. Areas of the connectivity graph for functional data for (A) controls, (B) MCI, and (C) AD.

The results of the in-depth dynamical analysis are shown in Table 1. The controls show smaller node and area parameters than the MCI subjects. Most importantly, both the exact model and the rigid aggregate model of Equation (25) show smaller eigenvalues for the controls than for the MCI subjects.

Table 1. Area aggregation parameters and time-scale modeling for functional connectivity graphs.

Similarly, we perform a time-scale modeling and area aggregation with two main areas on the three structural networks from Figure 3. For CN and MCI we can apply Theorem 1; however, for AD we are again not able to obtain an area aggregation because the conditions are not met.

Figure 3. Areas of the connectivity graph for structural data for (A) controls, (B) MCI, and (C) AD.

Figure 3 shows the clusters found on the structural data for (A) controls, (B) MCI, and (C) AD (Ortiz et al., 2015). The results of the in-depth dynamical analysis are shown in Table 2. Similarly, the controls show smaller area parameters and smaller eigenvalues than the MCI subjects. However, the node parameter is larger in the case of the controls.

Table 2. Area aggregation parameters and time-scale modeling for structural connectivity graphs.

While the results obtained through static graph analysis revealed the loss of strong connections in AD and MCI compared to CN, the dynamic graph analysis reveals different slow modes between MCI and CN. The CN subjects have smaller eigenvalues than the MCI subjects for both functional and structural data, and those eigenvalues remain operative; the contribution of the larger eigenvalues decreases quickly over time. The range of the eigenvalues for each subject represents an important biomarker for disease prediction. It is worth noting that an area aggregation is not possible for the AD subjects in either the structural or the functional networks, since the conditions of Theorem 1 are not fulfilled. By providing area and node parameters, we are able to add static graph descriptors to the dynamic biomarkers. The reduced-order time-scale modeling shows a change in the sparsity pattern for MCI and AD patients compared to the CN subjects, with higher values for the node and area parameters.

It is worth mentioning that in Markov chains the slow or aggregate variable represents the probability of a Markov process being in a group of states.

3. Pinning Observability

Brain connectivity networks have a large number of nodes and connections, and it is effectively impossible to observe the states of all nodes. Instead, the network states are recovered by reaching synchronization between the original network and a reconstructed network. Pinning observability is introduced as a technique that reduces the number of nodes that must be observed while still allowing the states of the other nodes to be recovered. We give a new criterion for synchronization via pinning observability for nonlinear brain networks and derive decoupled conditions for pinning observability of brain connectivity networks.

We consider in the following the general neural network equation describing the temporal evolution of the neural activation states for the ith neuron of an N–neuron network:

$$\dot{x}_i = A_i x_i + \sum_{j=1}^{N} \tilde{d}_{ij} \bar{f}(x_j), \quad i = 1,\ldots,N \qquad (30)$$

where $x_i = (x_{i1}(t), \ldots, x_{in}(t))^T \in \mathbb{R}^n$ is the state vector of node $i$, $A_i \in \mathbb{R}^{n \times n}$ is a matrix, and $\bar{f}(x_i)$ is the neuron's output. $\tilde{d}_{ij}$ represents a synaptic connection parameter between the $i$th and the $j$th neuron; these parameters define the coupling matrix $\tilde{D} = (\tilde{d}_{ij})$.

Pinning observability applies feedback to only a small number $l$ of nodes; supposing the first $l$ nodes are selected, we obtain the pinned network

$$\dot{\tilde{x}}_i = A_i \tilde{x}_i + \sum_{j=1}^{N} \tilde{d}_{ij} \bar{f}(\tilde{x}_j) + u_i, \quad i = 1,\ldots,N \qquad (31)$$

where

$$g_i(x_1, \ldots, x_N) = \sum_{j=1}^{N} \tilde{d}_{ij} \bar{f}(x_j) \qquad (32)$$

and

$$u_i = -d_i(\tilde{x}_i(t) - x_i(t)), \quad i = 1,\ldots,l \qquad (33)$$

are $n$-dimensional linear feedback controllers with control gains $d_i > 0$, $i = 1, \cdots, l$, and $d_i = 0$ for $i = l+1, \cdots, N$.

By subtracting the original system Equation (30) from the pinned system Equation (31), we obtain the error dynamical system

$$\dot{e}_i(t) = (A_i - d_i I_n) e_i(t) + g_i(\tilde{x}_1(t), \ldots, \tilde{x}_N(t)) - g_i(x_1(t), \ldots, x_N(t)) \qquad (34)$$

and the error signal

$$e_i(t) = \tilde{x}_i(t) - x_i(t). \qquad (35)$$

The following assumptions and lemmas are needed to obtain the main result of the paper.

Assumption A. Assume a scale-free graph network and let $\tilde{D} = (\tilde{d}_{ij})_{N \times N}$ be the coupling configuration matrix. Then there exists a symmetric matrix $\Gamma$ such that

$$\sum_{i=1}^{N} (\hat{x}_i - \hat{w}_i)^T \big( g_i(\hat{x}_1(t), \ldots, \hat{x}_N(t)) - g_i(\hat{w}_1(t), \ldots, \hat{w}_N(t)) \big) \le \sum_{i=1}^{N} \sum_{j=1}^{N} \tilde{d}_{ij}\, (\hat{x}_i - \hat{w}_i)^T \Gamma (\hat{x}_j - \hat{w}_j). \qquad (36)$$

The following two lemmas will be useful.

Lemma 4. For any vectors $x, y \in \mathbb{R}^N$ and positive definite matrix $G \in \mathbb{R}^{N \times N}$, the following matrix inequality holds:

$$2x^T y \le x^T G x + y^T G^{-1} y. \qquad (37)$$

Lemma 5 (Schur complement; Boyd et al., 1994). The following linear matrix inequality (LMI)

$$\begin{pmatrix} Q(x) & S(x) \\ S(x)^T & R(x) \end{pmatrix} > 0 \qquad (38)$$

with $Q(x) = Q(x)^T$ and $R(x) = R(x)^T$ is equivalent to either of the following conditions:

$$(1)\;\; Q(x) > 0, \quad R(x) - S(x)^T Q(x)^{-1} S(x) > 0$$
$$(2)\;\; R(x) > 0, \quad Q(x) - S(x) R(x)^{-1} S(x)^T > 0 \qquad (39)$$

We can now state a result concerning the two networks Equations (30) and (31).

Theorem 6. Under Assumption A the two networks Equations (30) and (31) are globally synchronized if the following condition is satisfied

$$\frac{A + A^T}{2} - D \otimes I_n + \frac{\tilde{D} \otimes \Gamma + \tilde{D}^T \otimes \Gamma}{2} < 0 \qquad (40)$$

where $A = \mathrm{diag}(A_1, \ldots, A_N)$, $D = \mathrm{diag}(d_1, \ldots, d_l, 0, \ldots, 0)$, $\tilde{D} = (\tilde{d}_{ij})_{N \times N}$, and $I_n$ is the $n \times n$ identity matrix.

Proof. Consider the Lyapunov function candidate

$$V(t) = \frac{1}{2} \sum_{i=1}^{N} e_i^T(t)\, e_i(t). \qquad (41)$$

We take the derivative of V(t) along the trajectories of Equation (34) and obtain

$$\dot{V}(t) = \sum_{i=1}^{N} e_i^T(t)\, \dot{e}_i(t) = \sum_{i=1}^{N} e_i^T(t) \big( (A_i - d_i I_n) e_i(t) + g_i(\tilde{x}_1(t), \ldots, \tilde{x}_N(t)) - g_i(x_1(t), \ldots, x_N(t)) \big). \qquad (42)$$

From Assumption A we have

$$\sum_{i=1}^{N} e_i^T(t) \big( g_i(\tilde{x}_1, \ldots, \tilde{x}_N) - g_i(x_1, \ldots, x_N) \big) \le \sum_{i=1}^{N} \sum_{j=1}^{N} \tilde{d}_{ij}\, e_i^T(t) \Gamma e_j(t). \qquad (43)$$

Substituting Equation (43) into Equation (42), we have

$$\dot{V}(t) \le \sum_{i=1}^{N} e_i^T(t) (A_i - d_i I_n) e_i(t) + \sum_{i=1}^{N} \sum_{j=1}^{N} \tilde{d}_{ij}\, e_i^T(t) \Gamma e_j(t) = e^T(t) \left( \frac{A + A^T}{2} - D \otimes I_n + \tilde{D} \otimes \Gamma \right) e(t). \qquad (44)$$

Since $e^T(\tilde{D} \otimes \Gamma)e = e^T\big((\tilde{D} \otimes \Gamma + \tilde{D}^T \otimes \Gamma)/2\big)e$, the condition of Equation (40) yields $\dot{V}(t) < 0$, which completes the proof.
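The condition of Equation (40) is a single matrix inequality and can be checked via the largest eigenvalue of the symmetric left-hand side. The sketch below uses toy data (two 2-dimensional nodes, $\Gamma = I$, pinning only node 1), all of which are illustrative assumptions rather than values from the paper:

```python
import numpy as np
from scipy.linalg import block_diag

n = 2                                              # node state dimension
A = block_diag(np.array([[0.0, 1.0], [-1.0, -0.5]]),
               np.array([[-0.2, 0.3], [-0.3, -0.2]]))   # A = diag(A_1, A_2)
Dtil = np.array([[-1.0, 1.0], [1.0, -1.0]])        # coupling matrix D~
Gamma = np.eye(n)                                  # symmetric Gamma (Assumption A)
D = np.diag([1.5, 0.0])                            # pin node 1 with gain d_1 = 1.5

M = ((A + A.T) / 2 - np.kron(D, np.eye(n))
     + (np.kron(Dtil, Gamma) + np.kron(Dtil.T, Gamma)) / 2)
print("synchronizes:", np.linalg.eigvalsh(M).max() < 0)  # Equation (40)
```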

As shown in Yu et al. (2014), it is possible to identify the number of nodes that need to be observed. The following corollary gives a condition to check, for each node individually, whether it needs to be controlled, without involving the other nodes.

Corollary 7. Under Assumption A, the two networks Equations (30) and (31) are globally synchronized if there exist constants $c_i > 0$, $i = 1, \ldots, N$, such that the following condition is satisfied:

$$\frac{A_i + A_i^T}{2} - d_i I_n + \tilde{d}_{ii} \Gamma + \frac{c_i}{2} \sum_{j=1, j \ne i}^{N} |\tilde{d}_{ij}|\, \Gamma \Gamma^T + \frac{1}{2} \sum_{j=1, j \ne i}^{N} \frac{1}{c_j} |\tilde{d}_{ji}|\, I_n < 0 \qquad (45)$$

The proof is based on the same Lyapunov function as in the theorem above.

Thus we are able to give a much simpler condition with a fixed constant $c$: if $d_i = 0$ and the condition given in Equation (46) below is satisfied for node $i$, then node $i$ does not need to be controlled. Otherwise, if this condition is not satisfied for node $i$, then the node should be controlled.

$$\Xi_i(c) = \begin{pmatrix} \dfrac{A_i + A_i^T}{2} - d_i I_n + \tilde{d}_{ii} I_n + \dfrac{c}{2} \sum_{j=1, j \ne i}^{N} \tilde{d}_{ij} I_n & \sum_{j=1, j \ne i}^{N} |\tilde{d}_{ij}|\, I_n \\ \sum_{j=1, j \ne i}^{N} |\tilde{d}_{ij}|\, I_n & -2c\, I_n \end{pmatrix} < 0, \quad i = 1, 2, \ldots, N. \qquad (46)$$

The above equation provides a simple way to find the few nodes that have to be controlled such that synchronization is reached.

The following examples are given to show the application of the theory to simple neural networks. We illustrate the concept of pinning observability first on a general example describing a network with nonlinear coupling and then on a network processing olfactory stimuli with linear coupling. Both examples employ the simplified condition of Equation (46) for pinning observability developed in the above theoretical framework.

3.1. Examples

Example 3. In the following we give a numerical example to elucidate the theoretical results in pinning observability.

The simplest example consists of three neurons. Let $N = 3$, $a_1 = 2$, $a_2 = 1.6$, $a_3 = 0.5$, $c = 1$, $$\tilde{D} = \begin{pmatrix} 0.7 & -0.2 & -0.1 \\ -0.2 & 0.5 & -0.3 \\ -0.1 & -0.3 & 0.8 \end{pmatrix},$$ and let the nonlinearity $f$ be a sigmoid function with $K = 1$.

The condition to be tested for each neuron is given by Equation (46).

If the condition is satisfied for a fixed parameter $c$ with $d_i = 0$, then neuron $i$ does not need to be controlled. Node 3 is the one that needs to be controlled, with control gain $d_3 = 1.5$.
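Under the reading that the node dynamics are $\dot{x}_i = -a_i x_i + \sum_j \tilde{d}_{ij} \bar{f}(x_j)$ and that the unit sigmoid slope gives $\Gamma = 1$ in scalar form (both our assumptions, not the authors' stated code), the per-node test of Equation (45) with $c = 1$ reproduces this conclusion:

```python
import numpy as np

a = np.array([2.0, 1.6, 0.5])                      # A_i = -a_i (assumption)
Dt = np.array([[ 0.7, -0.2, -0.1],
               [-0.2,  0.5, -0.3],
               [-0.1, -0.3,  0.8]])
c = 1.0

def node_test(i, d_i):
    """Scalar left-hand side of Equation (45) with Gamma = 1."""
    row = np.abs(Dt[i]).sum() - abs(Dt[i, i])      # sum_{j!=i} |d~_ij|
    col = np.abs(Dt[:, i]).sum() - abs(Dt[i, i])   # sum_{j!=i} |d~_ji|
    return -a[i] - d_i + Dt[i, i] + (c / 2) * row + col / (2 * c)

print([node_test(i, 0.0) for i in range(3)])  # [-1.0, -0.6, 0.7]: only node 3 fails
print(node_test(2, 1.5))                      # -0.8 < 0: pinned with gain d_3 = 1.5
```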

Example 4. The processing of olfactory stimuli has been shown to involve a network of cortical and subcortical regions (Nigri et al., 2013). The resulting connectivity graph in fMRI studies shows a small number of hubs, suggesting that these hubs have a predominant role in information gathering. Applying the theory of pinning observability to this network reveals that there are few nodes in the graph that can be controlled, and that these nodes are, surprisingly, not hubs but nodes with sparse connections. Figure 4 shows the distribution of these controlling nodes in the olfactory network.

Figure 4. Nodes that can be controlled in the connectivity graph during olfactory stimulation, marked as donuts (Nigri et al., 2013).

4. Discussion

In this paper, we introduced novel dynamic graph theory techniques for the analysis of the dynamical behavior of brain connectivity networks. We applied the new concepts of time-scale modeling for sparse networks and pinning observability to brain networks of heterogeneous architecture. We considered graphs with densely linked nodes within an area but with sparse connections between these areas. We have shown that the nodes in a dense area synchronize on the fast time-scale, while the dense areas become aggregate nodes on the slow time-scale. For the time-scale modeling, we have derived new local models that describe the fast and slow dynamics of brain connectivity networks assuming linear connections between the nodes. This new paradigm provides us with important disease descriptors showing changes over the disease trajectory, such as the modes of the dynamic system and the sparsity patterns.

Observing a small number of nodes in a brain connectivity network and recovering the states of the others is of major interest in a large-scale network. This was achieved through pinning observability. We formulated a new criterion for synchronization for nonlinear brain networks and derived decoupled, simplified conditions for determining the small number of observed states in the network. Examples were given to elucidate the theoretical results for both new concepts.

While static graph theory shows the changes in graph measures at certain points in time and the differences between disease and normal control, the derived results may have important implications for understanding and controlling the evolution of neurodegenerative diseases, which may further lead to better therapeutic interventions. By describing the dynamics of the aggregate areas through the resulting time-scale modeling and by determining the observable nodes in a brain network, a new research avenue is opened that allows a more detailed study of the differences between disease and healthy groups and provides a wider variety of characteristic parameters over time for those groups.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. AM: manuscript writing, research, and revision; RR, IAI, UM, ML, AS, and KP: research and revision.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We wish to thank Elena Gonzalez Avidad for her design contribution in this work. We acknowledge funding through a Fulbright Award and Marie Sklodowska-Curie actions (MSCA-IF-GF-656886).

References

Biyik, E., and Arcak, M. (2008). Area aggregation and time-scale modeling for sparse nonlinear networks. Sys. Cont. Lett. 57, 142–149. doi: 10.1016/j.sysconle.2007.08.003

Boyd, S., El Ghaoui, L., Feron, E., and Balakrishnan, V. (1994). Linear Matrix Inequalities in System and Control Theory. SIAM.

Chow, J. H., and Kokotovic, P. (1985). Time scale modeling of sparse dynamic networks. IEEE Trans. Automat. Cont. 30, 714–722. doi: 10.1109/TAC.1985.1104055

Chen, T., Liu, X., and Lu, W. (2007). Pinning complex networks by a single controller. IEEE Trans. Circ. Syst. I 54, 1317–1326. doi: 10.1109/TCSI.2007.895383

Fornito, A., Zalesky, A., Pantelis, C., and Bullmore, E. (2012). Schizophrenia, neuroimaging and connectomics. Neuroimage 62, 2296–2314. doi: 10.1016/j.neuroimage.2011.12.090

Giessing, C., and Thiel, C. (2012). Pro-cognitive drug effects modulate functional brain network organization. Front. Behav. Neurosci. 6, 1–9. doi: 10.3389/fnbeh.2012.00053

Lall, S., Marsden, J., and Glavaski, S. (2002). A subspace approach to balanced truncation for model reduction of nonlinear control systems. Int. J. Robust Nonlin. Control. 12, 519–535. doi: 10.1002/rnc.657

Liu, Y. Y., Slotine, J. J., and Barabasi, A. (2011). Controllability of complex networks. Nature 473, 167–173. doi: 10.1038/nature10011

Meyer-Bäse, A. (2008). Gene regulatory networks simplified by nonlinear balanced truncation. SPIE Proc. Ser. 6979:69790C. doi: 10.1117/12.777292

Moore, B. (1981). Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Auto. Cont. 26, 17–32. doi: 10.1109/TAC.1981.1102568

Nigri, A., Ferraro, S., D'incerti, L., Critchley, H., Bruzzone, M., and Minati, L. (2013). Connectivity of the amygdala, piriform, and orbitofrontal cortex during olfactory stimulation: a functional MRI study. Neuroreport 24, 171–175. doi: 10.1097/WNR.0b013e32835d5d2b

Ortiz, A., Munilla, J., Alvarez-Illan, I., Gorriz, M., and Ramirez, J. (2015). Exploratory graphical models of functional and structural connectivity patterns for Alzheimer's disease diagnosis. Front. Comput. Neurosci. 9:132. doi: 10.3389/fncom.2015.00132

Song, Q., and Cao, J. (2010). On pinning synchronization of directed and undirected complex dynamical networks. IEEE Trans. Circ. Sys. I 57, 672–680. doi: 10.1109/TCSI.2009.2024971

Scherpen, J. (1993). Balancing for nonlinear systems. Sys. Cont. Lett. 21, 143–153. doi: 10.1016/0167-6911(93)90117-O

Tahmassebi, A., Pinker-Domenig, K., Wengert, G., Lobbes, M., Stadlbauer, A., Romero, F. J., et al. (2017). “Dynamical graph theory networks techniques for the analysis of sparse connectivity networks in dementia,” in Proceeding of SPIE 10216, Smart Biomedical and Physiological Sensor Technology XIV (Anaheim, CA). doi: 10.1117/12.2263555

Tang, Y., Wang, Z., Gao, H., Swift, S., and Kurths, J. (2012). A constrained evolutionary computation method for detecting controlling regions of cortical networks. IEEE/ACM Trans. Comput. Biol. Bioinformat. 9, 1569–1581. doi: 10.1109/TCBB.2012.124

Yu, W., Wen, G., Lü, L., and Yu, X. (2014). Pinning observability in complex networks. IET Cont. Theory Appl. 8, 2136–2144. doi: 10.1049/iet-cta.2013.0969

Zeng, L., Shen, H., Liu, L., Wang, L., Li, B., Fang, P., et al. (2012). Identifying major depression using whole-brain functional connectivity: a multivariate pattern analysis. Brain 135, 1498–1507. doi: 10.1093/brain/aws059

Keywords: neurodegenerative disease, singular perturbations, area aggregation, multi-time-scale brain network, neural network, synchronization, pinning observability

Citation: Meyer-Bäse A, Roberts RG, Illan IA, Meyer-Bäse U, Lobbes M, Stadlbauer A and Pinker-Domenig K (2017) Dynamical Graph Theory Networks Methods for the Analysis of Sparse Functional Connectivity Networks and for Determining Pinning Observability in Brain Networks. Front. Comput. Neurosci. 11:87. doi: 10.3389/fncom.2017.00087

Received: 18 November 2016; Accepted: 11 September 2017;
Published: 05 October 2017.

Edited by:

Concha Bielza, Universidad Politécnica de Madrid (UPM), Spain

Reviewed by:

Andres Ortiz, University of Málaga, Spain
Bao Ge, Shaanxi Normal University, China

Copyright © 2017 Meyer-Bäse, Roberts, Illan, Meyer-Bäse, Lobbes, Stadlbauer and Pinker-Domenig. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Anke Meyer-Bäse, ameyerbaese@fsu.edu
