
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 19 January 2018
Sec. Mathematics of Computation and Data Science
Volume 3 - 2017 | https://doi.org/10.3389/fams.2017.00028

When Is Network Lasso Accurate?

  • Department of Computer Science, Aalto University, Espoo, Finland

The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method allows graph signals to be learned from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee that the network Lasso, for a particular loss function, delivers an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.

1. Introduction

In many applications, ranging from image processing and social networks to bioinformatics, the observed datasets carry an intrinsic network structure. Such datasets can be represented conveniently by signals defined over a “data graph” which models the network structure inherent to the dataset [1, 2]. The nodes of this data graph represent individual data points which are labeled by some quantity of interest, e.g., the class membership in a classification problem. We represent this label information as a graph signal whose value for a particular node is given by its label [1, 3–8]. This graph signal representation of datasets allows efficient methods from graph signal processing (GSP) to be applied; these methods are obtained, in turn, by extending established methods (e.g., fast filtering and transforms) from discrete time signal processing (over chain graphs) to arbitrary graphs [9–11].

The resulting graph signals are typically clustered, i.e., these signals are nearly constant over well-connected subsets of nodes (clusters) in the data graph. Exploiting this clustering property enables the accurate recovery of graph signals from a few noisy samples. In particular, using the total variation to measure how well a graph signal conforms with the underlying cluster structure, the authors of Hallac et al. [12] obtain the network Lasso (nLasso) by adapting the well-known Lasso estimator which is widely used for learning sparse models [13, 14]. The nLasso can be interpreted as an instance of the regularized empirical risk minimization principle, using the total variation of a graph signal for regularization. Some applications where the use of nLasso based methods has proven beneficial include housing price prediction and personalized medicine [12, 15].

A scalable implementation of the nLasso has been obtained via the alternating direction method of multipliers (ADMM) [16]. However, the authors of Boyd et al. [16] do not discuss conditions on the underlying network structure which ensure success of the network Lasso. We close this gap in the understanding of the performance of network Lasso, by deriving sufficient conditions on the data graph (cluster) structure and sampling set such that nLasso is accurate. To this end, we introduce a simple model for clustered graph signals which are constant over well connected groups or clusters of nodes. We then define the notion of resolving sampling sets, which relates the cluster structure of the data graph to the sampling set. Our main contribution is an upper bound on the estimation error obtained from nLasso when applied to resolving sampling sets. This upper bound depends on two numerical parameters which quantify the connectivity between sampled nodes and cluster boundaries.

Much of the existing work on recovery conditions and methods for graph signal recovery (e.g., [17–22]) relies on spectral properties of the data graph Laplacian matrix. In contrast, our approach is based directly on the connectivity properties of the underlying network structure. The works closest to ours are Sharpnack et al. [23] and Wang et al. [24], which provide sufficient conditions such that a special case of the nLasso (referred to as the “edge Lasso”) accurately recovers piece-wise constant (or clustered) graph signals from noisy observations. However, these works require access to fully labeled datasets, while we consider datasets which are only partially labeled (as is typical for machine learning applications where label information is costly).

1.1. Outline

The problem setting considered is formalized in section 2. In particular, we show how to formulate the problem of learning a clustered graph signal from a small number of signal samples as a convex optimization problem, which underlies the nLasso method. Our main result, an upper bound on the estimation error of the nLasso, is stated in section 3. Numerical experiments which illustrate our theoretical findings are discussed in section 4.

1.2. Notation

We will conform to standard notation of linear algebra as used, e.g., in Golub and Van Loan [25]. For a binary variable b, we denote its negation as b̄.

2. Problem Formulation

We consider datasets which are represented by a network model, i.e., a data graph G = (V, E, W) with node set V = {1, …, N}, edge set E, and weight matrix W ∈ ℝ_{+}^{N×N}. The nodes V of the data graph represent individual data points. For example, the node i ∈ V might represent a (super-)pixel in image processing, a neuron of a neural network [26] or a social network user profile [27].

Many applications naturally suggest a notion of similarity between individual data points, e.g., the profiles of befriended social network users or grayscale values of neighboring image pixels. These domain-specific notions of similarity are represented by the edges of the data graph G, i.e., nodes i, j ∈ V representing similar data points are connected by an undirected edge {i,j} ∈ E. We denote the neighborhood of the node i ∈ V by N(i) := {j ∈ V : {i,j} ∈ E}. It will be convenient to associate with each undirected edge {i, j} a pair of directed edges, i.e., (i, j) and (j, i). With a slight abuse of notation we will treat the elements of the edge set E either as undirected edges {i, j} or as pairs of two directed edges (i, j) and (j, i).

In some applications it is possible to quantify the extent to which data points are similar, e.g., via the physical distance between neighboring sensors in a wireless sensor network application [28]. Given two similar data points i, j ∈ V, which are connected by an edge {i,j} ∈ E, we quantify the strength of their connection by the edge weight W_{i,j} > 0, which we collect in the symmetric weight matrix W ∈ ℝ_{+}^{N×N}. The absence of an edge between nodes i, j ∈ V is encoded by a zero weight W_{i,j} = 0. Thus the edge structure of the data graph G is fully specified by the support (locations of the non-zero entries) of the weight matrix W.

2.1. Graph Signals

Beside the network structure, encoded in the data graph G, datasets typically also contain additional labeling information. We represent this additional label information by a graph signal defined over G. A graph signal x[·] is a mapping V → ℝ which associates every node i ∈ V with the signal value x[i] ∈ ℝ (which might represent a label characterizing the data point). We denote the set of all graph signals defined over a graph G = (V, E, W) by ℝ^{V}.

Many machine learning methods for network structured data rely on a “cluster hypothesis” [4]. In particular, we assume that the graph signal x[·] representing the label information of a dataset conforms with the cluster structure of the underlying data graph. Thus, any two nodes i, j ∈ V within a well-connected region (“cluster”) of the data graph tend to have similar signal values, i.e., x[i] ≈ x[j]. Two important application domains where this cluster hypothesis has been applied successfully are digital signal processing, where time samples at adjacent time instants are strongly correlated for a sufficiently high sampling rate (cf. Figure 1A), and the processing of natural images, whose nearby pixels tend to have similar color values (cf. Figure 1B). The cluster hypothesis is also often verified in social networks, where the clusters are cliques of individuals having similar properties (cf. Figure 1C and Newman [29, Chap. 3]).


Figure 1. Graph signals defined over (A) a chain graph (representing, e.g., discrete time signals), (B) a grid graph (representing, e.g., 2D-images) and (C) a general graph (representing, e.g., social network data), whose edges {i,j} ∈ E are annotated with the edge weights W_{i,j}.

In what follows, we quantify the extent to which a graph signal x[·] ∈ ℝ^{V} conforms with the cluster structure of the data graph G = (V, E, W) using its total variation (TV)

\|x[\cdot]\|_{\mathrm{TV}} := \sum_{\{i,j\} \in \mathcal{E}} W_{i,j} \, |x[j] - x[i]|.    (1)

For a subset of edges S ⊆ E, we use the shorthand

\|x[\cdot]\|_{\mathcal{S}} := \sum_{\{i,j\} \in \mathcal{S}} W_{i,j} \, |x[j] - x[i]|.    (2)
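As an illustration of Equations (1) and (2), the TV of a graph signal can be evaluated directly from the weight matrix. The following minimal sketch is our own illustration (not part of the original article; the names W and x are chosen here), using NumPy:

```python
import numpy as np

def total_variation(W, x, edges=None):
    """Total variation (Equation 1) of the graph signal x for the graph with
    symmetric weight matrix W; if a list of undirected edges is passed via
    `edges`, only those edges are summed, i.e., Equation (2)."""
    N = W.shape[0]
    if edges is None:
        edges = [(i, j) for i in range(N) for j in range(i + 1, N) if W[i, j] > 0]
    return sum(W[i, j] * abs(x[j] - x[i]) for i, j in edges)

# Example: chain graph with 4 nodes and a signal that jumps once.
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
x = np.array([1.0, 1.0, 5.0, 5.0])
print(total_variation(W, x))  # 4.0, contributed entirely by the edge {1, 2}
```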

For a supervised machine learning application, the signal values x[i] might represent class membership in a classification problem or the target (output) value in a regression problem. For the house price example considered in Hallac et al. [12], the vector-valued graph signal x[i] corresponds to a regression weight vector for a local pricing model (used for the house market in a limited geographical area represented by the node i).

Consider a partition F = {C_1, …, C_{|F|}} of the data graph G into |F| disjoint subsets C_l of nodes (“clusters”) such that V = ⋃_{l=1}^{|F|} C_l. We associate a subset C ⊆ V of nodes with a particular “indicator” graph signal

\mathcal{I}_{\mathcal{C}}[i] := \begin{cases} 1 & \text{if } i \in \mathcal{C} \\ 0 & \text{else.} \end{cases}    (3)

A simple model of clustered graph signals is then obtained by piece-wise constant or clustered graph signals of the form

x[i] = \sum_{l=1}^{|\mathcal{F}|} a_{l} \, \mathcal{I}_{\mathcal{C}_{l}}[i].    (4)

In Figure 2, we depict a clustered graph signal for a chain graph with 10 nodes which are partitioned into two clusters: C1 and C2.


Figure 2. A clustered graph signal x[i] = a_1 I_{C_1}[i] + a_2 I_{C_2}[i] (cf. Equation 4) defined over a chain graph which is partitioned into two equal-sized clusters C_1 and C_2 consisting of consecutive nodes. The edges connecting nodes within the same cluster have weight 1, while the single edge connecting nodes from different clusters has weight 1/2.

It will be convenient to define, for a given partition F, its boundary ∂F ⊆ E as the set of edges {i,j} ∈ E which connect nodes i ∈ C_a and j ∈ C_b from different clusters, i.e., with C_a ≠ C_b. With a slight abuse of notation, we will use the same symbol ∂F also to denote the set of nodes which are connected to a node from another cluster.

The TV of a clustered graph signal of the form (Equation 4) can be upper bounded as

\|x[\cdot]\|_{\mathrm{TV}} \leq 2 \max_{l \in \{1,\ldots,|\mathcal{F}|\}} |a_{l}| \sum_{\{i,j\} \in \partial\mathcal{F}} W_{i,j}.    (5)

Thus, for a partition F with small weighted boundary ∑_{{i,j} ∈ ∂F} W_{i,j}, the associated clustered graph signals (Equation 4) have small TV ‖x[·]‖_TV due to Equation (5).

The signal model (Equation 4), which has also been used in Sharpnack et al. [23] and Wang et al. [24], is closely related to the stochastic block model (SBM) [30]. Indeed, the SBM is obtained from Equation (4) by choosing the coefficients a_l uniquely for each cluster, i.e., a_l ∈ {1, …, |F|}. Moreover, the SBM provides a generative (stochastic) model for the edges within and between the clusters C_l.

We highlight that the clustered signal model (Equation 4) is somewhat dual to the model of band-limited graph signals [1, 4–7, 17, 19]. The model of band-limited graph signals is obtained from the subspaces spanned by the eigenvectors of the graph Laplacian corresponding to the smallest (in magnitude) eigenvalues, i.e., the low-frequency components. Such band-limited graph signals are smooth in the sense of small values of the Laplacian quadratic form [31]

\sum_{\{i,j\} \in \mathcal{E}} W_{i,j} (x[j] - x[i])^{2} = \mathbf{x}^{T} \mathbf{L} \mathbf{x}.    (6)

Here, we used the vector representation x = (x[1], …, x[N])T of the graph signal x[·] and the graph Laplacian matrix L ∈ ℝN×N defined element-wise as

L_{i,j} = \begin{cases} \sum_{k \in \mathcal{V}} W_{i,k} & \text{if } i = j \\ -W_{i,j} & \text{otherwise.} \end{cases}    (7)

A band-limited graph signal x[·] is characterized by a clustering (within a small bandwidth) of its graph Fourier transform (GFT) coefficients [22]

\tilde{x}[l] := \mathbf{u}_{l}^{T} \mathbf{x}, \quad \text{for } l = 1, \ldots, N,    (8)

with the orthonormal eigenvectors {u_l}_{l=1}^{N} of the graph Laplacian matrix L. In particular, by the spectral decomposition of the psd graph Laplacian matrix L (cf. Equation 7), we have L = UΛU^{H} with U = (u_1, …, u_N) and the diagonal matrix Λ having the non-negative eigenvalues λ_l of L (in increasing order) on its diagonal.
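To make Equations (6)–(8) concrete, the following sketch (our own illustration; the chain-graph parameters mirror the example discussed below) computes the Laplacian, its eigendecomposition, and the GFT coefficients with NumPy:

```python
import numpy as np

def graph_laplacian(W):
    """Graph Laplacian (Equation 7): diagonal entries sum_k W[i, k], off-diagonal -W[i, j]."""
    return np.diag(W.sum(axis=1)) - W

def gft(W, x):
    """GFT coefficients (Equation 8) w.r.t. the orthonormal Laplacian eigenvectors;
    numpy.linalg.eigh returns the eigenvalues in increasing order (low frequencies first)."""
    eigvals, U = np.linalg.eigh(graph_laplacian(W))
    return U.T @ x, eigvals

# Chain graph with N = 100 nodes and a clustered signal as in Figures 2 and 3.
N = 100
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
W[49, 50] = W[50, 49] = 0.5                  # weaker edge connecting the two clusters
x0 = np.concatenate([np.ones(50), 5.0 * np.ones(50)])
x_tilde, eigvals = gft(W, x0)
# The magnitudes |x_tilde[l]| are spread over the whole spectrum (cf. Figure 3).
```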

In contrast to band-limited graph signals, a clustered graph signal of the form (Equation 4) will typically have GFT coefficients which are spread out over the entire (graph) frequency range. Moreover, while band-limited graph signals are characterized by having a sparse GFT, a clustered graph signal of the form (Equation 4) has a dense (non-sparse) GFT in general. On the other hand, while a clustered graph signal of the form (Equation 4) has sparse signal differences {x[i] − x[j]}_{{i,j} ∈ E}, the signal differences of a band-limited graph signal are dense (non-sparse).

Let us illustrate the duality between the clustered graph signal model (Equation 4) and the model of band-limited graph signals (cf. [7, 17]) by considering a dataset representing a finite length segment of a time series. The data graph G_0 underlying this time series data is chosen as a chain graph (cf. Figure 2), consisting of N = 100 nodes which represent the individual time samples. The time series is partitioned into two clusters C_1, C_2, each cluster consisting of 50 consecutive nodes (time samples). We model the correlations between successive time samples using edge weight W_{i,j} = 1 for data points i, j belonging to the same cluster and a smaller weight W_{i,j} = 1/2 for the single edge {i, j} connecting the two clusters C_1 and C_2.

A clustered graph signal (time series) x_0[i] = a_1 I_{C_1}[i] + a_2 I_{C_2}[i] (cf. Equation 4) defined over G_0 is characterized by very sparse signal differences {x_0[i] − x_0[j]}_{{i,j} ∈ E}. Indeed, the signal difference x_0[i] − x_0[j] of the clustered graph signal x_0[·] is non-zero only for the single edge {i, j} which connects C_1 and C_2. In stark contrast, the GFT of x_0[·] is spread out over the entire (graph) frequency range (cf. Figure 3), i.e., the graph signal x_0[·] does not conform with the band-limited signal model.


Figure 3. The magnitudes of the GFT coefficients x~[l] (cf. Equation 8) of a clustered graph signal x0[·] defined over a chain graph (cf. Figure 2).

On the other hand, we illustrate in Figure 4 a graph signal x_BL[·] with GFT coefficients x̃_BL[l] = 1 (cf. Equation 8) for l = 1, 2 and x̃_BL[l] = 0 otherwise. Thus, the graph signal is clearly band-limited (it has only two non-zero GFT coefficients) but the signal differences x_BL[i] − x_BL[j] across the edges {i,j} ∈ E are clearly non-sparse.


Figure 4. A band-limited graph signal defined over a chain graph with N = 100.

2.2. Recovery via nLasso

Given a dataset with data graph G = (V, E, W), we aim at recovering a graph signal x[·] ∈ ℝ^{V} from its noisy values

y[i]=x[i]+e[i]    (9)

provided on a (small) sampling set

\mathcal{M} := \{i_{1}, \ldots, i_{M}\} \subseteq \mathcal{V}.    (10)

Typically M ≪ N, i.e., the sampling set is a small subset of all nodes in the data graph G.

The recovered graph signal x^[·] should incur only a small empirical (or training) error

\hat{E}(\hat{x}[\cdot]) := \sum_{i \in \mathcal{M}} |\hat{x}[i] - y[i]|.    (11)

Note that the definition (Equation 11) of the empirical error involves the ℓ1-norm of the deviations x̂[i] − y[i] between recovered and measured signal samples. This is different from the error criterion used in the ordinary Lasso, i.e., the squared-error loss ∑_{i ∈ M} (x̂[i] − y[i])² [32]. The definition (Equation 11) is beneficial for applications with measurement errors e[i] (cf. Equation 9) having mainly small values except for a few large outliers [18, 33]. However, in contrast to the plain Lasso, the error function in Equation (11) does not satisfy a restricted strong convexity property [34], which might be detrimental for the convergence speed of the resulting recovery methods (cf. section 4).

In order to recover a clustered graph signal with a small TV ‖x̂[·]‖_TV (cf. Equation 5) from the noisy signal samples {y[i]}_{i ∈ M}, it is sensible to consider the recovery problem

\hat{x}[\cdot] \in \arg\min_{\tilde{x}[\cdot] \in \mathbb{R}^{\mathcal{V}}} \hat{E}(\tilde{x}[\cdot]) + \lambda \|\tilde{x}[\cdot]\|_{\mathrm{TV}}.    (12)

This recovery problem amounts to a convex optimization problem [35], which, as the notation already indicates, might have multiple solutions x̂[·] (which form a convex set). In what follows, we derive conditions on the sampling set M such that any solution x̂[·] of Equation (12) allows accurate recovery of a clustered graph signal x[·] of the form (Equation 4).

Any graph signal obtained from Equation (12) balances the empirical error Ê(x̂[·]) with the TV ‖x̂[·]‖_TV in an optimal manner. The parameter λ in Equation (12) allows us to trade off a small empirical error against the extent to which the resulting signal is clustered, i.e., has a small TV. In particular, choosing a small value for λ enforces the solutions of Equation (12) to yield a small empirical error, whereas choosing a large value for λ enforces the solutions of Equation (12) to have small TV. Our analysis in section 3 provides a selection criterion for the parameter λ which is based on the location of the sampling set M (cf. Equation 10) and the partition F underlying the clustered graph signal model (Equation 4). Alternatively, for sufficiently large sampling sets one might choose λ using a cross-validation procedure [13].

Note that the recovery problem (Equation 12) is a particular instance of the generic nLasso problem studied in Hallac et al. [12]. There exist efficient convex optimization methods for solving the nLasso problem (Equation 12) (cf. [36] and the references therein). In particular, the alternating direction method of multipliers (ADMM) has been applied to the nLasso problem in Hallac et al. [12] to obtain a scalable learning algorithm which can cope with massive heterogeneous datasets.
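For small and moderately sized graphs, the nLasso problem (Equation 12) can also be prototyped with a generic convex solver instead of a dedicated ADMM implementation. The following sketch uses CVXPY (our own choice of tool, not the implementation used in this paper; the inputs W, sampled, y, and lam are hypothetical placeholders):

```python
import cvxpy as cp

def nlasso(W, sampled, y, lam):
    """Prototype solver for the nLasso problem (Equation 12).
    W: symmetric weight matrix (NumPy array); sampled: list of sampled node indices;
    y: noisy samples y[i] in the same order; lam: regularization parameter."""
    N = W.shape[0]
    x = cp.Variable(N)
    edges = [(i, j) for i in range(N) for j in range(i + 1, N) if W[i, j] > 0]
    emp_err = cp.sum(cp.abs(x[sampled] - y))                      # Equation (11)
    tv = sum(W[i, j] * cp.abs(x[j] - x[i]) for i, j in edges)     # Equation (1)
    cp.Problem(cp.Minimize(emp_err + lam * tv)).solve()
    return x.value
```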

3. When Is Network Lasso Accurate?

The accuracy of graph signal recovery methods based on the nLasso problem (Equation 12) depends on how close the solutions x̂[·] of Equation (12) are to the true underlying graph signal x[·] ∈ ℝ^{V}. In what follows, we present a condition which guarantees that any solution x̂[·] of Equation (12) is close to the underlying graph signal x[·] whenever the latter is clustered in the sense of Equation (4).

A main contribution of this paper is the insight that the accuracy of nLasso methods, aiming at solving Equation (12), depends on the topology of the underlying data graph via the existence of certain flows with demands [37]. Given a data graph G, we define a flow on it as a mapping h[·]: V × V → ℝ which assigns each directed edge (i, j) the value h(i, j), which can be interpreted as the amount of some quantity flowing through the edge (i, j) [37]. A flow with demands has to satisfy the conservation law

\sum_{j \in \mathcal{N}(i)} \big( h(j,i) - h(i,j) \big) = d[i], \quad \text{for any } i \in \mathcal{V},    (13)

with a prescribed demand d[i] for each node iV. Moreover, we require flows to satisfy the capacity constraints

|h(i,j)| \leq W_{i,j} \quad \text{for any edge } (i,j) \in \mathcal{E} \setminus \partial\mathcal{F}.    (14)

Note that the capacity constraint (Equation 14) applies only to intra-cluster edges and does not involve the boundary edges ∂F. The flow values h(i, j) at the boundary edges (i,j) ∈ ∂F take a special role in the following definition of the notion of resolving sampling sets.

Definition 1. Consider a dataset with data graph G = (V, E, W) which contains the sampling set M ⊆ V. The sampling set M resolves a partition F = {C_1, …, C_{|F|}} with constants K and L if, for any b_{i,j} ∈ {0, 1} with {i,j} ∈ ∂F, there exists a flow h[·] on G (cf. Equations 13, 14) with

h(i,j) = b_{i,j} \cdot L \cdot W_{i,j}, \qquad h(j,i) = \bar{b}_{i,j} \cdot L \cdot W_{i,j}    (15)

for every boundary edge {i,j} ∈ ∂F and demands (cf. Equation 13) satisfying

|d[i]| \leq K \ \text{ for } i \in \mathcal{M}, \quad \text{and} \quad d[i] = 0 \ \text{ for } i \in \mathcal{V} \setminus \mathcal{M}.    (16)

This definition requires the nodes of a resolving sampling set to be sufficiently well connected with every boundary edge {i,j} ∈ ∂F. In particular, we could think of injecting (absorbing) certain amounts of flow into (from) the data graph at the sampled nodes. At each sampled node i ∈ M, we can inject (absorb) a flow of level at most K (cf. Equation 16). The injected (absorbed) flow has to be routed from the sampled nodes via the intra-cluster edges to each boundary edge such that it carries a flow value L · W_{i,j}. Clearly, this is only possible if paths of sufficient capacity between sampled nodes and boundary edges are available.

The definition of resolving sampling sets is quantitative as it involves the numerical constants K and L. Our main result stated below is an upper bound on the estimation error of nLasso methods which depends on the value of these constants. It will turn out that resolving sampling sets with small values of K and large values of L are beneficial for the ability of nLasso to recover the entire graph signal from noisy samples observed on the sampling set. However, the constants K and L are coupled via the flow h[·] used in Definition 1, e.g., the constant K always has to satisfy

K \geq \max_{\{i,j\} \in \partial\mathcal{F}} L \, W_{i,j}.    (17)

Thus, the minimum possible value for K depends on the values of the edge weights W_{i,j} of the data graph. Moreover, the largest admissible value for L depends on the precise connectivity of the sampled nodes with the boundary edges ∂F. Indeed, Definition 1 requires routing an amount of flow given by L·W_{i,j} from each boundary edge {i,j} ∈ ∂F to the sampled nodes in M (while satisfying the capacity constraints, Equation 14).
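For small toy graphs, the flow condition of Definition 1 can be checked by brute force over all sign patterns. The sketch below is our own reformulation (not from the paper): the prescribed boundary flows (Equation 15) are converted into node imbalances, the demand bound K of Equation (16) is modeled by capacity-K arcs to an auxiliary node, and the feasibility of the remaining flow problem (Equations 13, 14) is tested with NetworkX (for numerical robustness, integer-scaled weights are advisable).

```python
import itertools
import networkx as nx

def resolves(nodes, intra_edges, boundary_edges, sampled, K, L):
    """Brute-force check of Definition 1 (exponential in the number of boundary
    edges, so only usable for tiny graphs). Edge lists contain undirected
    edges as triples (i, j, weight)."""
    for signs in itertools.product([0, 1], repeat=len(boundary_edges)):
        # Fixed boundary flows (Equation 15) become a net inflow r[v] at each node.
        r = {v: 0.0 for v in nodes}
        for b, (i, j, w) in zip(signs, boundary_edges):
            src, dst = (i, j) if b else (j, i)     # h(src, dst) = L * w, reverse flow 0
            r[src] -= L * w
            r[dst] += L * w
        # Remaining feasibility problem on the intra-cluster edges (Equation 14):
        # required net inflow at v is -r[v]; sampled nodes may deviate by up to K
        # (Equation 16), modeled via an auxiliary node with capacity-K arcs.
        H = nx.DiGraph()
        for v in nodes:
            H.add_node(v, demand=-r[v])            # demand = inflow minus outflow
        H.add_node("aux", demand=0.0)
        for i, j, w in intra_edges:
            H.add_edge(i, j, capacity=w, weight=0)
            H.add_edge(j, i, capacity=w, weight=0)
        for v in sampled:
            H.add_edge(v, "aux", capacity=K, weight=0)
            H.add_edge("aux", v, capacity=K, weight=0)
        try:
            nx.min_cost_flow(H)                    # raises NetworkXUnfeasible if no flow exists
        except nx.NetworkXUnfeasible:
            return False
    return True
```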

In order to make (the somewhat abstract) Definition 1 more transparent, let us state an easy-to-check sufficient condition for a sampling set M such that it resolves a given partition F.

Lemma 2. Consider a partition F = {C_1, …, C_{|F|}} of the data graph G which contains the sampling set M ⊆ V. If each boundary edge {i,j} ∈ ∂F with i ∈ C_a, j ∈ C_b is connected to sampled nodes, i.e., {m,i} ∈ E and {n,j} ∈ E with m ∈ M ∩ C_a, n ∈ M ∩ C_b, and weights W_{m,i}, W_{n,j} ≥ L·W_{i,j}, then the sampling set M resolves the partition F with constants L and

K = L \cdot \max_{i \in \mathcal{V}} |\mathcal{N}(i) \cap \partial\mathcal{F}|.    (18)
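The condition of Lemma 2 is straightforward to check programmatically. A possible sketch (our own helper, with a hypothetical node-to-cluster map cluster_of; the set ∂F in Equation (18) is read as the set of nodes incident to a boundary edge):

```python
def lemma2_constant(W, cluster_of, sampled, L):
    """Check the sufficient condition of Lemma 2 and return the constant K from
    Equation (18), or None if the condition fails for some boundary edge.
    W: symmetric weight matrix; cluster_of[i]: cluster index of node i;
    sampled: set of sampled nodes; L: desired constant L."""
    N = W.shape[0]
    neighbors = lambda v: [u for u in range(N) if W[v, u] > 0]
    boundary = [(i, j) for i in range(N) for j in range(i + 1, N)
                if W[i, j] > 0 and cluster_of[i] != cluster_of[j]]
    for i, j in boundary:
        ok_i = any(m in sampled and cluster_of[m] == cluster_of[i]
                   and W[m, i] >= L * W[i, j] for m in neighbors(i))
        ok_j = any(n in sampled and cluster_of[n] == cluster_of[j]
                   and W[n, j] >= L * W[i, j] for n in neighbors(j))
        if not (ok_i and ok_j):
            return None
    boundary_nodes = {v for e in boundary for v in e}
    # Equation (18): K = L * max_i |N(i) ∩ ∂F|, with ∂F read as a node set.
    return L * max(sum(1 for u in neighbors(i) if u in boundary_nodes) for i in range(N))
```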

In Figure 1C we depict a data graph consisting of two clusters F = {C_1, C_2}. The data graph contains the sampling set M = {m, n} which resolves the partition F with constants K = L = 4 according to Lemma 2.

The sufficient condition provided by Lemma 2 can be used to guide the choice of the sampling set M. In particular, Lemma 2 suggests to sample more densely near the boundary edges ∂F which connect different clusters. This rationale allows us to cope with applications where the underlying partition F is unknown. In particular, we could use highly scalable local clustering methods (cf. [38]) to find the cluster boundaries ∂F and then select the sampled nodes in their vicinity. Another approach to cope with a lack of information about F is based on using random walks to identify the subset of nodes with a large boundary, which are then sampled more densely [39].

We now state our main result: solutions of the nLasso problem (Equation 12) allow accurate recovery of the true underlying clustered graph signal x[·] (conforming with the partition F, cf. Equation 4) from the noisy measurements (Equation 9) whenever the sampling set M resolves the partition F.

Theorem 3. Consider a clustered graph signal x[·] of the form (Equation 4), with underlying partition F = {C_1, …, C_{|F|}} of the data graph into disjoint clusters C_l. We observe the noisy signal values y[i] at the sampled nodes M ⊆ V (cf. Equation 9). If the sampling set M resolves the partition F with parameters K > 0, L > 1, any solution x̂[·] of the nLasso problem (Equation 12) with λ := 1/K satisfies

\|\hat{x}[\cdot] - x[\cdot]\|_{\mathrm{TV}} \leq \big( K + 4/(L-1) \big) \sum_{i \in \mathcal{M}} |e[i]|.    (19)

Thus, if the sampling set M is chosen such that it resolves the partition F = {C_1, …, C_{|F|}} (cf. Definition 1), nLasso methods (cf. Equation 12) recover a clustered graph signal x[·] (cf. Equation 4) with an accuracy which is determined by the level of the measurement noise e[i] (cf. Equation 9).

Let us highlight that knowledge of the partition F underlying the clustered graph signal model (Equation 4) is only needed for the analysis of nLasso methods leading to Theorem 3. In contrast, the actual implementation of nLasso methods based on Equation (12) does not require any knowledge of the underlying partition. What is more, if the true underlying graph signal x[·] is clustered according to Equation (4) with different signal values a_l for different clusters C_l, the solutions of the nLasso problem (Equation 12) could be used for determining the clusters C_l which constitute the partition F.

We also note that the bound (Equation 19) characterizes the recovery error in terms of the semi-norm ‖x̂[·] − x[·]‖_TV, which is agnostic toward a constant offset in the recovered graph signal x̂[·]. In particular, having a small value of ‖x̂[·] − x[·]‖_TV does not, in general, imply a small squared error ∑_{i ∈ V} (x̂[i] − x[i])², as there might be an arbitrarily large constant offset contained in the nLasso solution x̂[·].

However, if the error ‖x̂[·] − x[·]‖_TV is sufficiently small, we might be able to identify the boundary edges {i,j} ∈ ∂F of the partition F underlying a clustered graph signal of the form (Equation 4).

Indeed, for a clustered graph signal of the form (Equation 4), the signal difference x[i] − x[j] across edges is non-zero only for boundary edges {i,j} ∈ ∂F. Let us assume that the signal differences of x[·] across boundary edges {i,j} ∈ ∂F are lower bounded by some positive constant η > 0 and that the nLasso error satisfies ‖x̂[·] − x[·]‖_TV < η/2. As can be verified easily, we can then perfectly recover the boundary ∂F of the partition F = {C_1, …, C_{|F|}} as precisely those edges {i,j} ∈ E for which |x̂[i] − x̂[j]| ≥ η/2. Given the boundary ∂F, we can recover the partition F and, in turn, average the noisy observations y[i] over all sampled nodes i ∈ M belonging to the same cluster. This simple post-processing of the nLasso estimate x̂[i] is summarized in Algorithm 1.


Algorithm 1. Post-Processing for nLasso.
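Algorithm 1 is only available as a figure in the original article; the following sketch is our own reconstruction of the post-processing step described in the preceding paragraph (threshold the nLasso estimate to detect the boundary, recover the clusters as connected components, then average the observed samples within each cluster):

```python
import numpy as np
import networkx as nx

def postprocess(W, x_hat, sampled, y, eta):
    """Post-processing of an nLasso estimate x_hat (cf. Algorithm 1, reconstructed
    here from the surrounding text). Returns a piecewise constant signal obtained
    by averaging the noisy samples y within each detected cluster."""
    N = W.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(N))
    for i in range(N):
        for j in range(i + 1, N):
            # keep only edges across which the estimate is (nearly) constant
            if W[i, j] > 0 and abs(x_hat[i] - x_hat[j]) < eta / 2:
                G.add_edge(i, j)
    x_tilde = np.array(x_hat, dtype=float)
    y_of = dict(zip(sampled, y))
    for cluster in nx.connected_components(G):
        vals = [y_of[i] for i in cluster if i in y_of]
        if vals:                     # average the samples observed in this cluster
            x_tilde[list(cluster)] = np.mean(vals)
    return x_tilde
```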

Lemma 4. Consider the setting of Theorem 3 involving a clustered graph signal x[·] of the form (Equation 4) with coefficients a_l satisfying |a_l − a_{l′}| > η for l ≠ l′ with a known positive threshold η > 0. We observe noisy signal samples y[i] (cf. Equation 9) over the sampling set M with a bounded error |e[i]| ≤ ε. If the sampling set M resolves the partition F with parameters K > 0, L > 1 such that

\big( K + 4/(L-1) \big) \sum_{i \in \mathcal{M}} |e[i]| < \eta/2,    (20)

then the signal x̃[·] delivered by Algorithm 1 satisfies

\sum_{i \in \mathcal{V}} \big( \tilde{x}[i] - x[i] \big)^{2} \leq N \varepsilon^{2}.    (21)

4. Numerical Experiments

In order to illustrate the theoretical findings of section 3, we report the results of some illustrative numerical experiments involving the recovery of clustered graph signals of the form (Equation 4) from a small number of noisy measurements (Equation 9). To this end, we implemented an iterative ADMM method [16] for solving the nLasso problem (Equation 12). We applied the resulting semi-supervised learning algorithm to two synthetically generated data sets. The first data set represents a time series, which can be represented as a graph signal over a chain graph. The nodes of the chain graph, which represent the discrete time instants, are partitioned evenly into clusters of consecutive nodes. A second experiment is based on data sets generated using a recently proposed generative model for complex networks.

4.1. Chain Graph

Our first experiment is based on a graph signal defined over a chain graph G_chain (cf. Figure 2) with N = 10^5 nodes V = {1, 2, …, N}, connected by N − 1 undirected edges. The nodes of the data graph G_chain are partitioned into N/10 equal-sized clusters C_l, l = 1, …, N/10, each constituted by 10 consecutive nodes. The intrinsic clustering structure of the chain graph G_chain matches the partition F_chain = {C_l}_{l=1}^{N/10} via the edge weights W_{i,j}. In particular, the weights of the edges connecting nodes within the same cluster are chosen i.i.d. according to W_{i,j} ∼ |N(2, 1/4)| (i.e., the absolute value of a Gaussian random variable with mean 2 and variance 1/4). The weights of the edges connecting nodes from different clusters are chosen i.i.d. according to W_{i,j} ∼ |N(1, 1/4)|.

We then generate a clustered graph signal x[·] of the form (Equation 4) with coefficients a_l ∈ {1, 5}, where the coefficients a_l and a_{l+1} of consecutive clusters C_l and C_{l+1} are different. The graph signal x[·] is observed via noisy samples y[i] (cf. Equation 9 with e[i] ∼ N(0, 1/4)) obtained for the nodes i ∈ V belonging to a sampling set M. We consider two different choices for the sampling set, i.e., M = M_1 and M = M_2. Both choices contain the same number of nodes, i.e., |M_1| = |M_2| = 2·10^4. The sampling set M_1 contains neighbors of the cluster boundaries ∂F_chain and conforms to Lemma 2 with constants K = 5.39 and L = 2 (which have been determined numerically). In contrast, the sampling set M_2 is obtained by selecting nodes uniformly at random from V, thereby completely ignoring the cluster structure F_chain of G_chain.
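The synthetic chain-graph data just described can be generated along the following lines (a sketch with our own variable names; the boundary-aware sampling set M_1 additionally requires picking neighbors of the cluster boundaries as in Lemma 2, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
N, cluster_size = 10**5, 10
n_clusters = N // cluster_size

# chain graph weights: |N(2, 1/4)| inside clusters, |N(1, 1/4)| across clusters
W_edge = np.abs(rng.normal(2.0, 0.5, size=N - 1))
boundary_idx = np.arange(cluster_size - 1, N - 1, cluster_size)  # edges {i, i+1} across clusters
W_edge[boundary_idx] = np.abs(rng.normal(1.0, 0.5, size=boundary_idx.size))

# clustered signal: alternating coefficients 1 and 5 on consecutive clusters
a = np.where(np.arange(n_clusters) % 2 == 0, 1.0, 5.0)
x = np.repeat(a, cluster_size)

# noisy samples on a random sampling set M2 (M1 is chosen near the boundary edges)
M2 = rng.choice(N, size=2 * 10**4, replace=False)
y = x[M2] + rng.normal(0.0, 0.5, size=M2.size)
```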

The noisy measurements y[i] are then input to an ADMM implementation for solving the nLasso problem (Equation 12) with λ = 1/K. We run ADMM for a fixed number of 300 iterations and using ADMM-parameter ρ = 0.01 [16]. In Figure 5 we illustrate the recovered graph signals (over the first 100 nodes of the chain graph) x^[·], obtained from noisy signal samples over either sampling set M1 or M2.


Figure 5. Clustered graph signal x[·] along with the recovered graph signals obtained from noisy signal samples over the sampling sets M_1 (Lemma 2) and M_2 (random).

As evident from Figure 5, the recovered signal obtained when using the sampling set M_1, which takes the partition F_chain into account, resembles the original graph signal x[·] better than the one obtained when using the randomly selected sampling set M_2. The favorable performance of M_1 is also reflected in the empirical normalized mean squared errors (NMSE) between the true and recovered graph signals, which are NMSE_{M_1} = 3.3·10^{-2} and NMSE_{M_2} = 2.192·10^{-1}, respectively.

We have repeated the above experiment with the same parameters but considering noiseless signal samples y[i] for both sampling sets M_1 and M_2. The recovered graph signals x̂[·] for the first 100 nodes of the chain are presented in Figure 6. It can be observed that the recovery based on the sampling set M_1 (conforming to the partition F_chain) closely resembles the original graph signal x[·], as expected according to our upper bound in Equation (19). The NMSE obtained after running ADMM for 300 iterations for solving the nLasso problem (Equation 12) are NMSE_{M_1} = 7.5·10^{-6} and NMSE_{M_2} = 1.475·10^{-1}, respectively.


Figure 6. Clustered graph signal x[·] along with the recovered graph signals obtained from noiseless signal samples over sampling set M1 (Lemma 2) and M2 (random). The noiseless signal samples y[i] = x[i] are marked with dots.

4.2. Complex Network

In this second experiment, we generate a data graph G_lfr using the generative model introduced by Lancichinetti et al. [40], in what follows referred to as the LFR model. The LFR model aims at imitating some key characteristics of real-world networks such as power law distributions of node degrees and community sizes. The data graph G_lfr contains a total of N = 10^5 nodes which are partitioned into 1,399 clusters, F_lfr = {C_1, …, C_1399}. The nodes V of G_lfr are connected by a total of 9.45·10^5 undirected edges E.

The edge weights W_{i,j}, which are also provided by the LFR model, conform to the cluster structure of G_lfr, i.e., intra-cluster edges {i,j} ∈ E with i, j ∈ C_l have larger weights compared to inter-cluster edges {i,j} ∈ E with i ∈ C_l and j ∉ C_l. Given the data graph G_lfr and partition F_lfr we generate a clustered graph signal according to Equation (4) as x[i] = ∑_{j=1}^{1399} a_j I_{C_j}[i] with coefficients a_j chosen i.i.d. according to a uniform distribution U(1, 50).

We then try to recover the entire graph signal x[·] by solving the nLasso problem (Equation 12) using noisy measurements y[i], according to Equation (9) with i.i.d. measurement noise e[i] ∼ N(0, 1/4), obtained at the nodes in a sampling set M. As in section 4.1, we consider two different choices M_1 and M_2 for the sampling set which both contain the same number of nodes, i.e., |M_1| = |M_2| = 10^4. The nodes in the sampling set M_1 are selected according to Lemma 2, i.e., by choosing nodes which are well connected (close) to the boundary edges ∂F_lfr which connect different clusters of the partition F_lfr. In contrast, the sampling set M_2 is constructed by selecting nodes uniformly at random, i.e., the partition F_lfr is not taken into account.

In order to construct the sampling set M_1, we first sorted the edges {i,j} ∈ E of the data graph G_lfr in ascending order according to their edge weight W_{i,j}. We then iterate over the edges in this order, starting with the edge having the smallest weight, and for each edge {i,j} ∈ E we select the neighboring nodes of i and j with highest degree and add them to M_1, if they are not already included there. This process continues until the sampling set M_1 has reached the prescribed size of 10^4. Using Lemma 2, we then verified numerically that the sampling set M_1 resolves F_lfr with constants K = 142.6 and L = 2 (cf. Definition 1).
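The construction of M_1 just described can be sketched as follows (our own rendering; it assumes the LFR graph is available as a weighted networkx graph G):

```python
import networkx as nx

def boundary_guided_sampling(G, budget):
    """Greedy construction of the sampling set M1 as described above:
    visit edges in ascending order of weight (small weights tend to be
    inter-cluster edges) and add, for each endpoint, its highest-degree
    neighbor until the budget is exhausted."""
    M1 = set()
    edges = sorted(G.edges(data="weight"), key=lambda e: e[2])
    for i, j, _ in edges:
        for v in (i, j):
            cand = max(G.neighbors(v), key=G.degree)   # highest-degree neighbor of v
            M1.add(cand)
            if len(M1) >= budget:
                return M1
    return M1
```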

The measurements y[i] collected for the sampling sets M_1 and M_2 are fed into the ADMM algorithm (using parameter ρ = 1/100) for solving the nLasso problem (Equation 12) with λ = 1/K. The evolution of the NMSE achieved by the ADMM output for an increasing number of iterations is shown in Figure 7. According to Figure 7, the signal recovered from the sampling set M_1 approximates the true graph signal x[·] more closely than when using the sampling set M_2. The NMSE achieved after 300 iterations of ADMM is NMSE_{M_1} = 1.56·10^{-2} and NMSE_{M_2} = 4.25·10^{-2}, respectively.


Figure 7. Evolution of the NMSE achieved by increasing number of nLasso-ADMM iterations when using sampling set M1 or M2, respectively.

Finally, we compare the recovery accuracy of nLasso to that of plain label propagation (LP) [41], which relies on a band-limited signal model (cf. section 2.1). In particular, LP quantifies signal smoothness by the Laplacian quadratic form (Equation 6) instead of the total variation (Equation 1), which underlies nLasso (Equation 12). The signals recovered after running the LP algorithm for 300 iterations for the two sampling sets M_1 and M_2 incur an NMSE of NMSE_{M_1} = 3.1·10^{-2} and NMSE_{M_2} = 7.43·10^{-2}, respectively. Thus, the signals recovered using nLasso are more accurate than those obtained by LP, as illustrated in Figure 8. However, our results indicate that LP also benefits from using the sampling set M_1, whose construction is guided by our theoretical findings (cf. Lemma 2).


Figure 8. Evolution of the NMSE achieved by increasing number of nLasso-ADMM iterations and LP iterations. Both algorithms are fed with the same signal samples obtained either over sampling set M1 or M2, respectively.

5. Proofs

The high-level idea behind the proof of Theorem 3 is to adapt the concept of compatibility conditions, which has been championed for the analysis of Lasso-type estimators [32], to the network setting. Our main technical contribution is to verify the compatibility condition for a sampling set M which resolves the partition F underlying the signal model (Equation 4) (cf. Lemma 6 below).

5.1. The Network Compatibility Condition

As an intermediate step toward proving Theorem 3, we adapt the compatibility condition [42], which was introduced to analyze Lasso methods for learning sparse signals, to the clustered graph signal model (Equation 4). In particular, we define the network compatibility condition for sampling graph signals with small total variation (cf. Equation 1).

Definition 5. Consider a data graph G = (V, E, W) whose nodes V are partitioned into disjoint clusters F = {C_1, …, C_{|F|}}. A sampling set M ⊆ V is said to satisfy the network compatibility condition, with constants K, L > 0, if

K \sum_{i \in \mathcal{M}} |z[i]| + \|z[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}} \geq L \, \|z[\cdot]\|_{\partial\mathcal{F}}    (22)

for any graph signal z[·] ∈ ℝ^{V}.
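The network compatibility condition (Equation 22) must hold for all graph signals z[·] and can therefore not be verified by enumeration; a quick randomized check is nevertheless useful as a necessary (sanity) test in experiments. A sketch under our own conventions:

```python
import numpy as np

def compatibility_check(W, cluster_of, sampled, K, L, trials=1000, seed=0):
    """Evaluate both sides of Equation (22) for random test signals z.
    Returns False if a violating z is found (the condition then fails);
    returning True only means that no counterexample was found."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    edges = [(i, j) for i in range(N) for j in range(i + 1, N) if W[i, j] > 0]
    intra = [(i, j) for i, j in edges if cluster_of[i] == cluster_of[j]]
    bound = [(i, j) for i, j in edges if cluster_of[i] != cluster_of[j]]
    for _ in range(trials):
        z = rng.normal(size=N)
        lhs = K * np.abs(z[list(sampled)]).sum() \
              + sum(W[i, j] * abs(z[j] - z[i]) for i, j in intra)
        rhs = L * sum(W[i, j] * abs(z[j] - z[i]) for i, j in bound)
        if lhs < rhs:
            return False
    return True
```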

It turns out that any sampling set M which resolves the partition F = {C_1, …, C_{|F|}} with constants K and L (cf. Definition 1) also satisfies the network compatibility condition (Equation 22) with the same constants.

Lemma 6. Any sampling set M which resolves the partition F with parameters K, L > 0 satisfies the network compatibility condition with parameters K, L.

Proof: Let us consider an arbitrary but fixed graph signal z[·] ∈ ℝ^{V}. Since the sampling set M resolves the partition F, there exists a flow h[·] on G with (cf. Definition 1)

\sum_{j \in \mathcal{N}(i)} h(j,i) - \sum_{j \in \mathcal{N}(i)} h(i,j) = 0 \quad \text{for all } i \notin \mathcal{M},
\Big| \sum_{j \in \mathcal{N}(i)} h(j,i) - \sum_{j \in \mathcal{N}(i)} h(i,j) \Big| \leq K \quad \text{for all } i \in \mathcal{M},
|h(i,j)| \leq W_{i,j} \quad \text{for } (i,j) \notin \partial\mathcal{F},
h(i,j) \cdot h(j,i) = 0 \quad \text{for } \{i,j\} \in \partial\mathcal{F}.    (23)

Moreover, due to Equation (15), we have the important identity

\big( h(i,j) - h(j,i) \big) \big( z[i] - z[j] \big) = L \, W_{i,j} \, |z[i] - z[j]|    (24)

which holds for all boundary edges {i,j} ∈ ∂F. This yields, in turn,

L \, \|z[\cdot]\|_{\partial\mathcal{F}} \overset{(2)}{=} \sum_{\{i,j\} \in \partial\mathcal{F}} |z[i] - z[j]| \, L \, W_{i,j} \overset{(24)}{=} \sum_{(i,j) \in \partial\mathcal{F}} \big( z[i] - z[j] \big) h(i,j).    (25)

Since E = ∂F∪(E\∂F), we can develop (Equation 25) as

L \, \|z[\cdot]\|_{\partial\mathcal{F}} = \sum_{(i,j) \in \mathcal{E}} \big( z[i] - z[j] \big) h(i,j) - \sum_{(i,j) \in \mathcal{E} \setminus \partial\mathcal{F}} \big( z[i] - z[j] \big) h(i,j)
= \sum_{i \in \mathcal{V}} z[i] \sum_{j \in \mathcal{N}(i)} \big( h(j,i) - h(i,j) \big) - \sum_{(i,j) \in \mathcal{E} \setminus \partial\mathcal{F}} \big( z[i] - z[j] \big) h(i,j)
\overset{(23)}{\leq} K \sum_{i \in \mathcal{M}} |z[i]| + \sum_{\{i,j\} \in \mathcal{E} \setminus \partial\mathcal{F}} |z[i] - z[j]| \, W_{i,j} = K \sum_{i \in \mathcal{M}} |z[i]| + \|z[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}},    (26)

which verifies (Equation 22).    

The next result shows that if the sampling set satisfies the network compatibility condition, any solution of the nLasso problem (Equation 12) allows accurate recovery of a clustered graph signal (cf. Equation 4).

Lemma 7. Consider a clustered graph signal x[·] of the form (Equation 4) defined on the data graph G = (V, E, W) whose nodes V are partitioned into the clusters F = {C_1, …, C_{|F|}}. We observe the noisy signal values y[i] at the sampled nodes M ⊆ V (cf. Equation 9). If the sampling set M satisfies the network compatibility condition with constants L > 1, K > 0, then any solution of the nLasso problem (Equation 12), for the choice λ := 1/K, satisfies

\|\hat{x}[\cdot] - x[\cdot]\|_{\mathrm{TV}} \leq \big( K + 4/(L-1) \big) \sum_{i \in \mathcal{M}} |e[i]|.    (27)

Proof: Consider a solution x^[·] of the nLasso problem (Equation 12) which is different from the true underlying clustered signal x[·] (cf. Equation 4). We must have (cf. Equation 9)

\sum_{i \in \mathcal{M}} |\hat{x}[i] - y[i]| + \lambda \|\hat{x}[\cdot]\|_{\mathrm{TV}} \leq \sum_{i \in \mathcal{M}} |e[i]| + \lambda \|x[\cdot]\|_{\mathrm{TV}},    (28)

since otherwise the true underlying signal x[·] would achieve a smaller objective value in Equation (12) which, in turn, would contradict the premise that x^[·] is optimal for the problem (Equation 12).

Let us denote the difference between the solution x̂[·] of Equation (12) and the true underlying clustered signal x[·] by x̃[·] := x̂[·] − x[·]. Since x[·] satisfies Equation (4),

\|x[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}} = 0, \quad \text{and} \quad \|\tilde{x}[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}} = \|\hat{x}[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}}.    (29)

Applying the decomposition property of the semi-norm || · ||TV to Equation (28) yields

\sum_{i \in \mathcal{M}} |\hat{x}[i] - y[i]| + \lambda \|\hat{x}[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}} \leq \sum_{i \in \mathcal{M}} |e[i]| + \lambda \|x[\cdot]\|_{\partial\mathcal{F}} - \lambda \|\hat{x}[\cdot]\|_{\partial\mathcal{F}}.    (30)

Therefore, using Equation (29) and the triangle inequality,

\sum_{i \in \mathcal{M}} |\hat{x}[i] - y[i]| + \lambda \|\tilde{x}[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}} \leq \lambda \|\tilde{x}[\cdot]\|_{\partial\mathcal{F}} + \sum_{i \in \mathcal{M}} |e[i]|.    (31)

Since ∑_{i ∈ M} |x̂[i] − y[i]| ≥ 0, Equation (31) yields

\lambda \|\tilde{x}[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}} \leq \lambda \|\tilde{x}[\cdot]\|_{\partial\mathcal{F}} + \sum_{i \in \mathcal{M}} |e[i]|,    (32)

i.e., for sufficiently small measurement noise e[i], the signal differences of the recovery error x̃[·] = x̂[·] − x[·] cannot be concentrated on the edges within the clusters C_l. Moreover, using

\sum_{i \in \mathcal{M}} |\hat{x}[i] - y[i]| \overset{(9)}{=} \sum_{i \in \mathcal{M}} |\hat{x}[i] - x[i] - e[i]| \geq \sum_{i \in \mathcal{M}} |\tilde{x}[i]| - \sum_{i \in \mathcal{M}} |e[i]|,    (33)

the inequality Equation (31) becomes

\sum_{i \in \mathcal{M}} |\tilde{x}[i]| + \lambda \|\tilde{x}[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}} \leq \lambda \|\tilde{x}[\cdot]\|_{\partial\mathcal{F}} + 2 \sum_{i \in \mathcal{M}} |e[i]|.    (34)

Thus, since the sampling set M satisfies the network compatibility condition, we can apply Equation (22) to x~[·] yielding

\sum_{i \in \mathcal{M}} |\tilde{x}[i]| + (1/K) \|\tilde{x}[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}} \geq (1/K) \, L \, \|\tilde{x}[\cdot]\|_{\partial\mathcal{F}}.    (35)

Inserting Equation (35) into Equation (34), with λ = 1/K, yields

\lambda (L - 1) \|\tilde{x}[\cdot]\|_{\partial\mathcal{F}} \leq 2 \sum_{i \in \mathcal{M}} |e[i]|.    (36)

Combining Equations (32) and (36) yields

\|\tilde{x}[\cdot]\|_{\mathrm{TV}} = \|\tilde{x}[\cdot]\|_{\mathcal{E} \setminus \partial\mathcal{F}} + \|\tilde{x}[\cdot]\|_{\partial\mathcal{F}} \overset{(32)}{\leq} 2 \|\tilde{x}[\cdot]\|_{\partial\mathcal{F}} + (1/\lambda) \sum_{i \in \mathcal{M}} |e[i]| \overset{(36)}{\leq} \frac{1 + 4\lambda/(L-1)}{\lambda} \sum_{i \in \mathcal{M}} |e[i]|.    (37)

  

5.2. Proof of Theorem 3

Combine Lemma 6 with Lemma 7.

6. Conclusions

Given a known cluster structure of the data graph, we introduced the notion of resolving sampling sets. A sampling set resolves a cluster structure if there exists a sufficiently large network flow between the sampled nodes, with prescribed flow values over the boundary edges which connect different clusters. Loosely speaking, this requires choosing the sampling set mainly in the boundary regions between different clusters in the data graph. Thus, we can leverage efficient clustering methods for identifying the cluster boundary regions in order to find sampling sets which resolve the intrinsic cluster structure of the network underlying a dataset.

Verifying whether a particular sampling set resolves a given partition requires considering all possible sign patterns for the boundary edges, which is intractable for large graphs. An important avenue for follow-up work is to investigate whether resolving sampling sets can be characterized more easily using probabilistic models for the underlying network structure and sampling sets. Moreover, we plan to extend our analysis to nLasso methods using other loss functions, e.g., the squared error loss and also the logistic loss function in the context of classification problems.

Author Note

Parts of the work underlying this paper have been presented in Mara and Jung [43]. A preprint of this manuscript is available at https://arxiv.org/abs/1704.02107 [44].

Author Contributions

AJ initiated the research and provided the first proofs of the main results. NT helped with proofreading and pointed out some typos in the proofs. AM took care of the numerical experiments.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors are grateful to Madelon Hulsebos for a careful proof-reading of an early manuscript. Moreover, the constructive comments of reviewers are appreciated sincerely. This manuscript is available as a pre-print at the following address: https://arxiv.org/abs/1704.02107. Copyright of this pre-print version rests with the authors.

References

1. Sandryhaila A, Moura JMF. Classification via regularization on graphs. In 2013 IEEE Global Conference on Signal and Information Processing. Austin, TX (2013). p. 495–8. doi: 10.1109/GlobalSIP.2013.6736923


2. Chen S, Sandryhaila A, Moura JMF, Kovačević J. Signal recovery on graphs: variation minimization. IEEE Trans Signal Process. (2015) 63:4609–24. doi: 10.1109/TSP.2015.2441042


3. Bishop CM. Pattern Recognition and Machine Learning. New York, NY: Springer (2006).


4. Chapelle O, Schölkopf B, Zien A, (eds.). Semi-Supervised Learning. Cambridge, MA: The MIT Press (2006). doi: 10.7551/mitpress/9780262033589.001.0001


5. Zhou D, Schölkopf B. A regularization framework for learning from graph data. In: ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields, vol. 15. Banff (2004). p. 67–8.


6. Gadde A, Anis A, Ortega A. Active semi-supervised learning using sampling theory for graph signals. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '14 (2014). p. 492–501. doi: 10.1145/2623330.2623760


7. Ando RK, Zhang T. Learning on graph with laplacian regularization. In: Advances in Neural Information Processing Systems. Vancouver, BC (2007).


8. Belkin M, Niyogi P, Sindhwani V. Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. J Mach Lear Res. (2006) 7:2399–434.


9. Sandryhaila A, Moura JMF. Big data analysis with signal processing on graphs: representation and processing of massive data sets with irregular structure. IEEE Signal Process Mag. (2014) 31:80–90. doi: 10.1109/MSP.2014.2329213


10. Shuman DI, Narang SK, Frossard P, Ortega A, Vandergheynst P. The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process Mag. (2013) 30:83–98. doi: 10.1109/MSP.2012.2235192


11. Narang SK, Gadde A, Ortega A. Signal processing techniques for interpolation in graph structured data. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (2013). p. 5445–9. doi: 10.1109/ICASSP.2013.6638704


12. Hallac D, Leskovec J, Boyd S. Network Lasso: clustering and optimization in large graphs. In: Proceedings of SIGKDD (2015). p. 387–96. doi: 10.1145/2783258.2783313


13. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. Springer Series in Statistics. New York, NY: Springer (2001).

14. Hastie T, Tibshirani R, Wainwright M. Statistical Learning with Sparsity. The Lasso and its Generalizations. Boca Raton FL: CRC Press (2015).

15. Yamada M, Koh T, Iwata T, Shawe-Taylor J, Kaski S. Localized lasso for high-dimensional regression. In: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. Fort Lauderdale, FL (2017). p. 325–33.

16. Boyd S, Parikh N, Chu E, Peleato B, Eckstein J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. vol. 3 of Foundations and Trends in Machine Learning. Hanover, MA: Now Publishers (2010).

17. Romero D, Ma M, Giannakis GB. Kernel-based reconstruction of graph signals. IEEE Trans Signal Process. (2017) 65:764–78. doi: 10.1109/TSP.2016.2620116


18. Tsitsvero M, Barbarossa S, Lorenzo PD. Signals on graphs: uncertainty principle and sampling. IEEE Trans Signal Process. (2016) 64:4845–60. doi: 10.1109/TSP.2016.2573748


19. Chen S, Varma R, Sandryhaila A, Kovačević J. Discrete signal processing on graphs: sampling theory. IEEE Trans Signal Process. (2015) 63:6510–23. doi: 10.1109/TSP.2015.2469645


20. Chen S, Varma R, Singh A, Kovačević J. Signal recovery on graphs: fundamental limits of sampling strategies. IEEE Trans Signal Inform Process Over Netw. (2016) 2:539–54. doi: 10.1109/TSIPN.2016.2614903


21. Segarra S, Marques AG, Leus G, Ribeiro A. Reconstruction of graph signals through percolation from seeding nodes. IEEE Trans Signal Process. (2016) 64:4363–78. doi: 10.1109/TSP.2016.2552510


22. Wang X, Liu P, Gu Y. Local-set-based graph signal reconstruction. IEEE Trans Signal Process. (2015) 63:2432–44. doi: 10.1109/TSP.2015.2411217


23. Sharpnack J, Rinaldo A, Singh A. Sparsistency of the Edge Lasso over Graphs. AIStats (JMLR WCP). La Palma (2012).

24. Wang YX, Sharpnack J, Smola AJ, Tibshirani RJ. Trend filtering on graphs. J Mach Lear Res. (2016) 17:1–41. Available online at: http://jmlr.org/papers/v17/15-147.html

25. Golub GH, Van Loan CF. Matrix Computations. 3rd Edn. Baltimore, MD: Johns Hopkins University Press (1996).

26. Goodfellow I, Bengio Y, Courville A. Deep Learning. Cambridge, MA: MIT Press (2016).

27. Cui S, Hero A, Luo ZQ, Moura JMF, (eds.). Big Data Over Networks. Cambridge, UK: Cambridge University Press (2016).

28. Zhu X, Rabbat M. Graph spectral compressed sensing for sensor networks. In: Proceedings of IEEE ICASSP 2012. Kyoto (2012). p. 2865–8.

29. Newman MEJ. Networks: An Introduction. New York, NY: Oxford University Press (2010).

30. Mossel E, Neeman J, Sly A. Stochastic block models and reconstruction. ArXiv e-prints. (2012).

31. Bapat RB. Graphs and Matrices. London: Springer-Verlag (2014).

32. Bühlmann P, van de Geer S. Statistics for High-Dimensional Data. New York, NY: Springer (2011).

33. Chambolle A, Pock T. An introduction to continuous optimization for imaging. Acta Numer. (2016) 25:161–319. doi: 10.1017/S096249291600009X


34. Agarwal A, Negahban S, Wainwright MJ. Fast global convergence of gradient methods for high-dimensional statistical recovery. Ann Stat. (2012) 40:2452–82. doi: 10.1214/12-AOS1032


35. Boyd S, Vandenberghe L. Convex Optimization. Cambridge: Cambridge University Press (2004).

36. Zhu Y. An augmented ADMM algorithm with application to the generalized lasso problem. J Comput Graph Stat. (2017) 26:195–204. doi: 10.1080/10618600.2015.1114491


37. Kleinberg J, Tardos E. Algorithm Design. New York, NY: Addison Wesley (2006).

38. Spielman DA, Teng SH. A local clustering algorithm for massive graphs and its application to nearly-linear time graph partitioning. arXiv:0809.3232 (2008).

39. Basirian S, Jung A. Random walk sampling for big data over networks. In: Proceedings of International Conference on Sampling Theory and Applications. Tallinn (2017).

40. Lancichinetti A, Fortunato S, Radicchi F. Benchmark graphs for testing community detection algorithms. Phys Rev E (2008) 78:046110. doi: 10.1103/PhysRevE.78.046110


41. Zhu X, Ghahramani Z. Learning from Labeled and Unlabeled Data with Label Propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University (2002).

42. van de Geer SA, Bühlmann P. On the conditions used to prove oracle results for the Lasso. Electron J Stat. (2009) 3:1360–92. doi: 10.1214/09-EJS506


43. Mara A, Jung A. Recovery conditions and sampling strategies for network lasso. In: Proceedings of 51st Asilomar Conference Signals, Systems, Computers. Pacific Grove, CA (2017).

44. Jung A, Tran NQ, Mara A. When is network lasso accurate? arXiv:170402107 (2017).

Keywords: compressed sensing, big data, semi-supervised learning, complex networks, convex optimization, clustering

Citation: Jung A, Tran N and Mara A (2018) When Is Network Lasso Accurate? Front. Appl. Math. Stat. 3:28. doi: 10.3389/fams.2017.00028

Received: 09 October 2017; Accepted: 28 December 2017;
Published: 19 January 2018.

Edited by:

Juergen Prestin, University of Lübeck, Germany

Reviewed by:

Katerina Hlavackova-Schindler, University of Vienna, Austria
Valeriya Naumova, Simula Research Laboratory, Norway

Copyright © 2018 Jung, Tran and Mara. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alexander Jung, alexander.jung@aalto.fi
