On the construction of sparse matrices from expander graphs

We revisit the asymptotic analysis of the probabilistic construction of adjacency matrices of expander graphs proposed in [4]. With sharper bounds we derive a reduced sample complexity for the number of nonzeros per column of these matrices, precisely $d = \mathcal{O}\left(\log_s(N/s) \right)$, as opposed to the standard $d = \mathcal{O}\left(\log(N/s) \right)$. This gives insight into why small values of $d$ performed well in numerical experiments involving such matrices. Furthermore, we derive quantitative sampling theorems for our constructions, which show our construction outperforming the existing state of the art. We also use our results to compare the performance of sparse recovery algorithms in which these matrices are used for linear sketching.


Introduction
Sparse binary matrices, say $A \in \{0,1\}^{n \times N}$ with $n \ll N$, are widely used in applications including graph sketching [1,21], network tomography [32,14], data streaming [31,25], breaking privacy of databases via aggregate queries [19], compressed imaging of intensity patterns [16], and more generally combinatorial compressed sensing [15,33,28,8,29,30], linear sketching [25], and group testing [18,22]. In all these areas we are interested in the case $n \ll N$, where $A$ is used as an efficient encoder of sparse signals $x \in \mathbb{R}^N$ with sparsity $s \ll n$, and where such matrices are known to preserve the $\ell_1$ distance of sparse vectors [7]. Conditions that guarantee that a given encoder $A$, also referred to as a sensing matrix in compressed sensing, performs well typically include the nullspace, coherence, and restricted isometry conditions; see [20] and references therein. The goal is for $A$ to satisfy one or more of these conditions with the minimum possible number of measurements $n$. For uniform guarantees over all $A$, it has been established that $n$ has to be $\Omega(s^2)$, but with high probability on the draw of a random $A$, $n$ can be $\mathcal{O}(s \log(N/n))$ for $A$ with entries drawn from a sub-gaussian distribution; see [20] for a review of such results. Matrices with entries drawn from a Bernoulli distribution fall in the sub-gaussian family, but these are dense, as opposed to the sparse binary matrices considered here. For computational advantages, such as faster application and smaller storage, it is advantageous to use a sparse $A$ in applications [7,4,29].
Herein we consider the $n$ achievable when $A$ is an adjacency matrix of an expander graph [7] (expander graphs will be defined in the next section). The construction of such matrices can hence be construed as either a linear algebra problem or, equivalently, a graph theory one (in this manuscript we focus on the linear algebra discourse). There has been significant research on expander graphs in pure mathematics and theoretical computer science; see [24] and references therein. Both deterministic and probabilistic constructions of expander graphs have been suggested. The best known deterministic constructions achieve $n = \mathcal{O}(s^{1+\alpha})$ for $\alpha > 0$ [23]. On the other hand, random constructions, first proven in [5], achieve the optimal $n = \mathcal{O}(s \log(N/s))$, precisely $n = \mathcal{O}(sd)$ with $d = \mathcal{O}(\log(N/s))$, where $d$ is the left degree of the expander graph and also the number of ones in each column of $A$, to be defined in the next section. However, to the best of our knowledge, it was [4] that proposed a probabilistic construction that is not only optimal but also more suitable for making quantitative statements where such matrices are applied.
This work follows the probabilistic construction proposed in [4] but, with a more careful computation of the bounds, is able to achieve $n = \mathcal{O}(s \log(N/s))$ with $d = \mathcal{O}\left(\frac{\log(N/s)}{\log s}\right)$. We retain the complexity of $n$ but obtain a smaller complexity for $d$, which is novel. Related results with a similar $d$ were derived in [27,2], but for structured sparse signals in the framework of model-based compressed sensing or sketching. In that framework one has second-order information about $x$, beyond simple sparsity, which is first-order information about $x$. It is thus expected, and established, that it is possible to get a smaller $n$ and hence a smaller $d$. Arguably, such a small complexity for $d$ justifies in hindsight fixing $d$ to a small number in simulations with such $A$, as in [4,2,29], to mention a few.
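As a rough numerical illustration of the gap between the two degree complexities, the sketch below compares $\log(N/s)$ with $\log_s(N/s) = \log(N/s)/\log s$. The constants hidden in the $\mathcal{O}(\cdot)$ are set to 1 purely for illustration; the example sizes are arbitrary.

```python
import math

def d_standard(N, s):
    # standard degree complexity d = O(log(N/s)), constant taken as 1
    return math.log(N / s)

def d_reduced(N, s):
    # reduced complexity derived here: d = O(log_s(N/s)) = O(log(N/s)/log s)
    return math.log(N / s) / math.log(s)

N, s = 10**6, 100
print(d_standard(N, s))  # log(10^4) ≈ 9.21
print(d_reduced(N, s))   # 9.21 / log(100) = 2.0
```

Even at these modest sizes the reduced complexity is several times smaller, consistent with small fixed values of $d$ working well in practice.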
The results derived here are asymptotic, though finite-dimensional bounds follow directly. We focus on the question of for which ratios of the problem dimensions $(s, n, N)$ these results hold. There is an almost standard way of interrogating such a question, i.e. phase transitions, probably introduced to the compressed sensing literature by [17]. In other words, we derive sampling theorems, numerically depicted by phase transition plots, identifying the regions of the problem size space for which our construction holds. This is similar to what was done in [4], but for comparison purposes we include phase transition plots for the probabilistic constructions of [11,6]. The plots show improvement over these earlier works. Furthermore, we show implications of our results for compressed sensing by using them, within the phase transition framework, to compare the performance of selected combinatorial compressed sensing algorithms, as is done in [4,29].
The manuscript is organized as follows. Section 1 gives the introduction, while Section 2 sets the notation and defines some useful terms. The main results are stated in Section 3 and the details of the construction are given in Section 4. This is followed by a discussion in Section 5 of our results, comparing them to existing results and using them to compare the performance of some combinatorial compressed sensing algorithms. In Section 6 we state the remaining proofs of the theorems, lemmas, corollaries, and propositions used in this manuscript. After this section is the conclusion in Section 7. We include an appendix in Section 8, where we summarize key relevant material from [4] and show the derivation of some bounds used in the proofs.

Notation
Scalars will be denoted by lowercase letters (e.g. $k$), vectors by lowercase boldface letters (e.g. $x$), sets by uppercase calligraphic letters (e.g. $\mathcal{S}$) and matrices by uppercase boldface letters (e.g. $A$). The cardinality of a set $\mathcal{S}$ is denoted by $|\mathcal{S}|$ and $[N] := \{1, \ldots, N\}$. Given $\mathcal{S} \subseteq [N]$, its complement is denoted by $\mathcal{S}^c := [N] \setminus \mathcal{S}$ and $x_{\mathcal{S}}$ is the restriction of $x \in \mathbb{R}^N$ to $\mathcal{S}$, i.e. $(x_{\mathcal{S}})_i = x_i$ if $i \in \mathcal{S}$ and $0$ otherwise. For a matrix $A$, the restriction of $A$ to the columns indexed by $\mathcal{S}$ is denoted by $A_{\mathcal{S}}$. For a graph, $\Gamma(\mathcal{S})$ denotes the set of neighbors of $\mathcal{S}$, that is the nodes connected to the nodes in $\mathcal{S}$, and $e_{ij} = (x_i, y_j)$ represents an edge connecting node $x_i$ to node $y_j$. The $\ell_p$ norm of a vector $x \in \mathbb{R}^N$ is defined as $\|x\|_p := \left( \sum_{i=1}^N |x_i|^p \right)^{1/p}$.
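The notation above can be made concrete with a small, entirely hypothetical example; the vector, index set and matrix below are made up for illustration only.

```python
import numpy as np

# Toy sizes: N = 6 columns (left nodes), n = 4 rows (right nodes), d = 2 ones per column.
x = np.array([3.0, 0.0, -1.5, 0.0, 2.0, 0.0])
S = [0, 4]                      # an index set S ⊆ [N] (0-based here)

# x_S: restriction of x to S, with entries off S set to zero
x_S = np.zeros_like(x)
x_S[S] = x[S]

# A binary matrix with 2 ones per column; A_S keeps only the columns indexed by S
A = np.array([[1, 0, 1, 0, 0, 1],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]])
A_S = A[:, S]

# Γ(S): the neighbours of S, i.e. the rows hit by at least one column in S
Gamma_S = sorted(int(i) for i in np.nonzero(A_S.sum(axis=1))[0])
print(Gamma_S)  # [0, 1, 3]
```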

Definitions
Below we give formal definitions that will be used in this manuscript.

Definition 2.1 ($\ell_p$-norm restricted isometry property). A matrix $A$ satisfies the $\ell_p$-norm restricted isometry property (RIP-$p$) of order $s$ and constant $\delta_s < 1$ if, for all $s$-sparse vectors $x$,
$$(1 - \delta_s)\|x\|_p^p \leq \|Ax\|_p^p \leq (1 + \delta_s)\|x\|_p^p.$$
The most popular case is RIP-2, first proposed in [12]. Typically, when RIP is mentioned without qualification, it means RIP-2; in the discourse of this work, though, RIP-1 is the most relevant. The RIP says that $A$ is a near-isometry, and it is a sufficient condition to guarantee exact sparse recovery in the noiseless setting (i.e. $y = Ax$), or recovery up to some error bound, also referred to as an optimality condition, in the noisy setting (i.e. $y = Ax + e$, where $e$ is a bounded noise vector). We define the optimality condition more precisely below.
Definition 2.2 (Optimality condition). Given $y = Ax + e$ and $\widehat{x} = \Delta(Ax + e)$ for a reconstruction algorithm $\Delta$, the optimal error guarantee is
$$\|\widehat{x} - x\|_p \leq C_1 s^{1/p - 1/q}\, \sigma_s(x)_q + C_2 \|e\|_p, \qquad (2)$$
where $C_1, C_2 > 0$ depend only on the RIP constant (RIC), i.e. $\delta_s$, and not on the problem size, $1 \leq q \leq p \leq 2$, and $\sigma_s(x)_q$ denotes the error of the best $s$-term approximation in the $\ell_q$-norm, that is
$$\sigma_s(x)_q := \min_{\|z\|_0 \leq s} \|x - z\|_q.$$
Equation (2) is also referred to as the $\ell_p/\ell_q$ optimality condition (or error guarantee). Ideally we would like $\ell_2/\ell_2$, but the best provable is $\ell_2/\ell_1$ [12]; weaker than this is $\ell_1/\ell_1$ [7], which is what is possible with the $A$ considered in this work.
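The best $s$-term approximation error $\sigma_s(x)_q$ is straightforward to compute, since the minimum is attained by keeping the $s$ largest-magnitude entries of $x$. A minimal sketch:

```python
import numpy as np

def sigma_s(x, s, q):
    """Error of the best s-term approximation of x in the l_q norm:
    sigma_s(x)_q = min over s-sparse z of ||x - z||_q, attained by
    keeping the s entries of x with largest magnitude."""
    idx = np.argsort(np.abs(x))[::-1][:s]   # indices of the s largest entries
    r = x.copy()
    r[idx] = 0.0                            # residual after the best s-term fit
    return np.linalg.norm(r, ord=q)

x = np.array([5.0, -3.0, 0.5, 0.25, 0.0])
print(sigma_s(x, 2, 1))   # tail 0.5 + 0.25 in the l1 norm: 0.75
```

For an exactly $s$-sparse $x$, $\sigma_s(x)_q = 0$ and the noiseless guarantee in (2) reduces to exact recovery.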
To aid translation between the terminology of graph theory and that of linear algebra, we define the set of neighbors in both notations. Let $G = ([N], [n], E)$ be a left-regular bipartite graph with $N$ left vertices, $n$ right vertices, a set of edges $E$ and left degree $d$. If, for any $\epsilon \in (0, 1/2)$ and any $\mathcal{S} \subset [N]$ of size $|\mathcal{S}| \leq s$, we have that $|\Gamma(\mathcal{S})| \geq (1 - \epsilon)d|\mathcal{S}|$, then $G$ is referred to as an $(s, d, \epsilon)$-expander graph.
The $\epsilon$ is referred to as the expansion coefficient of the graph. An $(s, d, \epsilon)$-expander graph, also called an unbalanced expander graph [7] or a lossless expander graph [13], is a highly connected bipartite graph. We denote the ensemble of $n \times N$ binary matrices with $d$ ones per column by $B(N, n; d)$, or just $B$ to simplify notation. We will also denote the ensemble of $n \times N$ adjacency matrices of $(s, d, \epsilon)$-expander graphs by $E(N, n; s, d, \epsilon)$, or simply $E$.

Theorem 3.1. Consider $\epsilon \in (0, 1/2)$ and let $d, s, n, N \in \mathbb{N}$. For a random draw of an $n \times N$ matrix $A$ from $B$, i.e. for each column of $A$ uniformly assign ones in $d$ out of $n$ positions, as $(s, n, N) \to \infty$ while $s/n \in (0, 1)$ and $n/N \in (0, 1)$, with probability approaching 1 exponentially the matrix $A \in E$ with $n = \mathcal{O}(s \log(N/s))$ and $d = \mathcal{O}(\log_s(N/s))$.

The proof of this theorem is found in Section 6.1. It is worth emphasizing that the complexity of $d$ is novel and is the main contribution of this work.
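The random draw described in Theorem 3.1, and the expansion property it guarantees, can be checked empirically on toy sizes; the parameters below are arbitrary illustrations, far from the asymptotic regime of the theorem.

```python
import itertools, random

# Draw A from B(N, n; d) by placing d ones uniformly in each column, then check
# the expansion |Γ(S)| >= (1 - ε) d |S| exhaustively over all small sets S.
random.seed(1)
N, n, d, s, eps = 12, 9, 3, 2, 1/3

# each column is a uniformly random d-subset of [n]; supports[j] = Γ({j})
supports = [frozenset(random.sample(range(n), d)) for _ in range(N)]

def is_expander(supports, s, d, eps):
    for size in range(1, s + 1):
        for S in itertools.combinations(range(len(supports)), size):
            neighbours = set().union(*(supports[j] for j in S))
            if len(neighbours) < (1 - eps) * d * size:
                return False
    return True

print(is_expander(supports, s, d, eps))
```

The exhaustive check over all $\binom{N}{\leq s}$ sets is only feasible for toy sizes; the theorem is precisely about avoiding such verification by guaranteeing expansion with high probability over the draw.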
The proof of this lemma is given in Section 6.2. The phase transition function $\rho_{BT}(\delta; d, \epsilon)$ turns out to be significantly higher than those derived from existing probabilistic constructions; hence our results are a significant improvement over earlier works. This will be graphically demonstrated with numerical simulations in Section 5.

Construction
The standard probabilistic construction is, for each column of $A$, to uniformly assign ones in $d$ out of $n$ positions; while the standard approach to derive the probability bounds is to randomly select $s$ columns of $A$ indexed by $\mathcal{S}$, compute the probability that $|\mathcal{A}_{\mathcal{S}}| < (1 - \epsilon)ds$, and then take a union bound over all sets $\mathcal{S}$ of size $s$. Our work in [4] computed smaller bounds than previous works based on a dyadic splitting of $\mathcal{S}$ and derived the following bound. We have changed the notation and format of Theorem 1.6 in [4] slightly to be consistent with the notation and format of this manuscript.
with $a_1 := d$, and the functions defined as
$$\Psi_n(a_s, \ldots, a_1) = \frac{1}{n}\left[3s\log(5d) + \sum_{i \in \Omega} \frac{s}{2i}\,\psi_i\right], \qquad (7)$$
where $\psi_i$ is given by (8), $H(\cdot)$ is the Shannon entropy in base $e$ logarithms, and the index set is $\Omega = \{2^j\}_{j=0}^{\log_2(s)-1}$.
a) If no restriction is imposed on $a_s$, then the $a_i$ for $i > 1$ take on their expected values $\widehat{a}_i$.
b) If $a_s$ is restricted to be less than $\widehat{a}_s$, then the $a_i$ for $i > 1$ are the unique solutions to the polynomial system (10) with $a_{2i} \geq a_i$ for each $i$.
In this work, based on the same approach as in [4], we derive new expressions for the p n (s, d) and Ψ n (a s , . . . , a 1 ) in Theorem 4.1, i.e. (6) and (7) respectively, and provide a simpler bound for the improved expression of Ψ n (a s , . . . , a 1 ).
$$\Psi_n(a_s, \ldots, a_1) = \frac{1}{n}\left[\frac{3\log 2}{2}\log_2^2 s + \log_2 s - \frac{3}{2}\log a_s + \sum_{i \in \Omega} \frac{s}{2i}\,\psi_i\right] \qquad (12)$$
The proof of the lemma is given in Section 6.3. Asymptotically, the argument of the exponential term in the probability bound (5) of Theorem 4.1, i.e. $\Psi_n(a_s, \ldots, a_1)$ in (12), is more important than the polynomial $p_n(s, d)$ in (11), since the exponential factor dominates the polynomial factor. The significance of the lemma is that $\Psi_n(a_s, \ldots, a_1)$ in (12) is smaller than $\Psi_n(a_s, \ldots, a_1)$ in (7), since $\frac{3\log 2}{2}\log_2^2 s + \log_2 s - \frac{3}{2}\log a_s$ in (12) is asymptotically smaller than $3s\log(5d)$ in (7): the former is $\mathcal{O}(\mathrm{polylog}\, s)$ while the latter is $\mathcal{O}(s)$, since we consider $a_s = \mathcal{O}(s)$.
Recall that we are interested in computing $\mathrm{Prob}(|\mathcal{A}_{\mathcal{S}}| \leq a_s)$ when $a_s = (1 - \epsilon)ds$. This means having to solve the polynomial equation (10) in order to compute as small a bound on $\Psi_n((1 - \epsilon)ds, \ldots, d)$ as possible. We derive an asymptotic solution of (10) for $a_s = (1 - \epsilon)ds$ and use that solution to get the following bounds.
Theorem 4.2. Consider $d, s, n, N \in \mathbb{N}$ and fix $\mathcal{S}$ with $|\mathcal{S}| \leq s$. For $\eta > 0$, $\beta \geq 1$, and $\epsilon \in (0, 1/2)$, let an $n \times N$ matrix $A$ be drawn from $B$; then the bound (13) holds, where the quantities therein are given by (14) and (15).

The proof of this theorem is also found in Section 6.5. Since Theorem 4.2 holds for a fixed $\mathcal{S}$ of size at most $s$, if we want it to hold for all $\mathcal{S}$ of size at most $s$, we take a union bound over all $\mathcal{S}$ of size at most $s$. This leads to the following probability bound.

Theorem 4.3. Consider $d, s, n, N \in \mathbb{N}$ and all $\mathcal{S}$ with $|\mathcal{S}| \leq s$. For $\tau > 0$ and $\epsilon \in (0, 1/2)$, let an $n \times N$ matrix $A$ be drawn from $B$; then the bound (16) holds, where the quantities therein are given by (17) and (18).

Proof. Applying the union bound over all $\mathcal{S}$ of size at most $s$ to (13) leads to the following.
We then use the upper bound (143) to bound the combinatorial term $\binom{N}{s}$ in (19). After some algebraic manipulation we separate the polynomial term, given in (17), from the exponential term, whose exponent comes from (15). The $o(N)$ term decays to zero with $N$; it results from dividing the polylogarithmic terms in $s$ of (15) by $N$, and $\tau = \eta\beta^{-1}(\beta - 1)$ as in (15). This concludes the proof.
The next corollary follows easily from Theorem 4.3 and is equivalent to Theorem 3.1. Its statement is that if the conditions therein hold, then the probability that the cardinality of the set of neighbors of any $\mathcal{S}$ with $|\mathcal{S}| \leq s$ is less than $(1 - \epsilon)ds$ goes to zero as the dimensions of $A$ grow. Conversely, the probability that the cardinality of the set of neighbors of any $\mathcal{S}$ with $|\mathcal{S}| \leq s$ is greater than $(1 - \epsilon)ds$ goes to one as the dimensions of $A$ grow, implying that $A$ is the adjacency matrix of an $(s, d, \epsilon)$-expander graph.

Corollary. Consider $\epsilon \in (0, 1/2)$ and $d, s, n, N \in \mathbb{N}$, with $c_d, c_n > 0$. Let an $n \times N$ matrix $A$ be drawn from $B$; then, in the proportional growth asymptotics, $A \in E$ with probability approaching one exponentially.

Proof. It suffices to focus on the exponent of (16), more precisely on its bound. We can ignore the $o(N)$ term, as this goes to zero as $N$ grows, and show that the remaining sum is negative. Hence we can further focus on the sum in the square brackets and find conditions on $d$ that make it negative; we require (25) to hold. Recall that $\tau = \eta\beta^{-1}(\beta - 1)$ and that $\beta$ is a function of $\epsilon$, with $\beta(\epsilon) \approx 1 + \epsilon$. Therefore $\tau$ is a function of $\epsilon$ with $\tau(\epsilon) = \mathcal{O}(\epsilon)$; hence there exists a $c_d > 0$ for (25) to hold. With regard to the complexity of $n$, we go back to the right-hand side (RHS) of (24) and substitute $\frac{C_d \log(N/s)}{\epsilon \log(s/2)}$, with $C_d > 0$, for $d$ in the RHS of (24) to get the following.
Now we assume $n = C_n s \log(N/s)$, with $C_n > 0$, and substitute this into (26) to get the following.
Again, since $\tau(\epsilon) = \mathcal{O}(\epsilon)$, there exists a $c_n > 0$ for (28) to hold. The bound on $n$ in (28) agrees with our earlier assumption, thus concluding the proof.

Comparison to other constructions
In addition to being the first probabilistic construction of adjacency matrices of expander graphs with such a small degree, our results compare favorably, quantitatively, to existing probabilistic constructions. We use the standard tool of phase transitions to compare our construction to the construction proposed in [6] and those proposed in [11]. The phase transition curve $\rho_{BT}(\delta; d, \epsilon)$ we derived in Lemma 3.1 is the $\rho$ that solves the following equation.
where $c_n > 0$ is as in (28). Equation (29) comes from taking the limit, in the proportional growth asymptotics, of the bound in (18), setting that to zero, and simplifying. Similarly, for any $\mathcal{S}$ with $|\mathcal{S}| \leq s$, Berinde in [6] derived the following bound on the set of neighbours of $\mathcal{S}$, i.e. $\mathcal{A}_{\mathcal{S}}$.
We then express the bound in (30) as the product of a polynomial term and an exponential term. A bound of the exponent is carefully derived as in the derivations above. We set the limit, in the proportional growth asymptotics, of this bound to zero and simplify to get the following.
We refer to the $\rho$ that solves (31) as the phase transition for the construction proposed by Berinde in [6] and denote this $\rho$ (the phase transition function) by $\rho_{BI}(\delta; d, \epsilon)$. Another probabilistic construction was proposed by Buhrman et al. in [11]. In conformity with the notation used in this manuscript, their bound is equivalent to the following, also stated in a similar form by Indyk and Razenshteyn in [27].
where ν > 0. We again express the bound in (32) as the product of a polynomial term and an exponential term. A bound of the exponent is carefully derived as in the derivations above. We set the limit, in the proportional growth asymptotics, of this bound to zero and simplify to get the following.
Similarly, we refer to the $\rho$ that solves (33) as the phase transition for the construction proposed by Buhrman et al. in [11] and denote it by $\rho_{BM}(\delta; d, \epsilon)$. We compute numerical solutions to (29), (31), and (33) to derive the phase transitions $\rho_{BT}(\delta; d, \epsilon)$, $\rho_{BI}(\delta; d, \epsilon)$, and $\rho_{BM}(\delta; d, \epsilon)$, respectively. These are plotted in the left panel of Figure 1. It is clear that our construction has a much higher phase transition than the others. Recall that the phase transition curves in these plots depict the construction of adjacency matrices of $(s, d, \epsilon)$-expanders with high probability for ratios of $s$, $n$ and $N$ (since $\rho := s/n$ and $\delta := n/N$) below the curve, and the failure of this construction with high probability for ratios above the curve. Essentially, the larger the area under the curve, the better.
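The critical $\rho$ solving equations such as (29), (31) and (33) is typically found numerically. A minimal bisection sketch follows; the stand-in exponent function used here is a made-up placeholder, not any of the actual equations, which involve quantities not reproduced in this snippet.

```python
from typing import Callable

def phase_transition(exponent: Callable[[float], float],
                     lo: float = 1e-6, hi: float = 1 - 1e-6,
                     iters: int = 60) -> float:
    """Bisection for the critical rho at which the net exponent changes sign:
    a negative exponent means the construction succeeds with high probability,
    a positive one means it fails."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if exponent(mid) < 0:
            lo = mid  # still in the success region, push rho upward
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Stand-in exponent, NOT equation (29): the success region is rho < 0.3.
f = lambda rho: rho - 0.3
print(phase_transition(f))  # converges to roughly 0.3
```

Sweeping such a solver over a grid of $\delta \in (0, 1)$ produces the phase transition curves plotted in Figure 1.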
Remark 5.1. It is easy to see that $\rho_{BI}(\delta; d, \epsilon)$ is a special case of $\rho_{BM}(\delta; d, \epsilon)$, since the two phase transitions coincide, or equivalently (31) and (33) become the same, when $\nu = e^{-1}$. One could argue that Berinde's derivation in [6] suffers from over-counting. Given that this work improves on our work in [4] in terms of the simplicity of computing $\rho_{BT}(\delta; d, \epsilon)$, for completeness we compare our new phase transition, denoted $\rho_{BT2}(\delta; d, \epsilon)$, to our previous one, denoted $\rho_{BT1}(\delta; d, \epsilon)$, in the right panel of Figure 1. Each computation of $\rho_{BT1}(\delta; d, \epsilon)$ requires the specification of $N$, which is not needed in the computation of $\rho_{BT2}(\delta; d, \epsilon)$, hence the simplification. However, the simplification led to a lower phase transition, as expected, which is confirmed by the plots in the right panel of Figure 1.
Remark 5.2. These simulations also inform us about the size of $c_n$. Observe from the plots of $\rho_{BT1}(\delta; d, \epsilon)$ and $\rho_{BT2}(\delta; d, \epsilon)$ that the smaller the value of $c_n$, the higher the phase transition; but since $\rho_{BT2}(\delta; d, \epsilon)$ has to be a lower bound on $\rho_{BT1}(\delta; d, \epsilon)$, for values of $c_n$ much smaller than $2/3$ the lower bound will fail to hold. This informed the choice of $c_n = 2$ in the plot of $\rho_{BT}(\delta; d, \epsilon)$ in the left panel of Figure 1.

Implications for combinatorial compressed sensing
When the sensing matrices are restricted to the sparse binary matrices considered in this manuscript, compressed sensing is usually referred to as combinatorial compressed sensing, a term introduced in [7] and used extensively in [29,30]. In this setting compressed sensing is more or less equivalent to linear sketching. The implications of our results for combinatorial compressed sensing are two-fold. One concerns the $\ell_1$-norm RIP, which we denote RIP-1; the second is the comparison of the performance of recovery algorithms for combinatorial compressed sensing.

RIP-1
As can be seen from (2), the recovery errors in compressed sensing depend on the RIC, i.e. $\delta_s$. The following lemma, deduced from Theorem 1 of [7], shows that a scaled $A$ drawn from $E$ has RIP-1 with $\delta_s = 2\epsilon$.
Lemma 5.1. Consider $\epsilon \in (0, 1/2)$ and let $A$ be drawn from $E$; then $\Phi = A/d$ satisfies the following RIP-1 condition for all $s$-sparse vectors $x$:
$$(1 - 2\epsilon)\|x\|_1 \leq \|\Phi x\|_1 \leq \|x\|_1. \qquad (34)$$
The interested reader is referred to the proof of Theorem 1 in [7] for the proof of this lemma. Key to Lemma 5.1 holding is the existence of $(s, d, \epsilon)$-expander graphs; hence one can draw corollaries from our results on this.
Corollary 5.1. Consider $\epsilon \in (0, 1/2)$ and let $d, s, n, N \in \mathbb{N}$. In the proportional growth asymptotics, with a random draw of an $n \times N$ matrix $A$ from $B$, the matrix $\Phi := A/d$ has RIP-1 with probability approaching 1 exponentially if (35) holds.

Proof. Note that the upper bound of (34) holds trivially for any $\Phi = A/d$ where $A$ has $d$ ones per column, i.e. $A \in B$. But for the lower bound of (34) to hold for any $\Phi = A/d$, we need $A$ to be an $(s, d, \epsilon)$-expander matrix, i.e. $A \in E$. Note that the event $|\mathcal{A}_{\mathcal{S}}| \geq (1 - \epsilon)ds$, for a fixed $\mathcal{S}$ with $|\mathcal{S}| \leq s$, is the event that expansion holds on $\mathcal{S}$; for $A$ to be in $E$, we need expansion for all sets $\mathcal{S}$ with $|\mathcal{S}| \leq s$. The probability in (36) going to 1 exponentially in the proportional growth asymptotics, i.e. the existence of $A \in E$ with parameters as given in (35), is what is stated in Theorem 3.1. Therefore the rest of the proof follows from the proof of Theorem 3.1, concluding the proof of the corollary.
Notably, Lemma 5.1 holds with $\Phi$ having a much smaller number of nonzeros per column due to our construction. Moreover, we can derive sampling theorems for which Lemma 5.1 holds, as follows.
Proof. The proof of this corollary follows from the proof of Corollary 5.1 above, and it is related to the proof of Lemma 3.1 in the same way that the proof of Corollary 5.1 is related to the proof of Theorem 3.1. The details are thus omitted.
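The trivial direction of Lemma 5.1 can be sanity-checked numerically: for any $A \in B$, each column of $\Phi = A/d$ has unit $\ell_1$ norm, so $\|\Phi x\|_1 \leq \|x\|_1$ by the triangle inequality, with no expansion needed. A small sketch with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 20, 12, 4

# Draw A from B(N, n; d): d ones placed uniformly in each column.
A = np.zeros((n, N))
for j in range(N):
    A[rng.choice(n, size=d, replace=False), j] = 1.0

Phi = A / d                     # each column of Phi sums to 1
x = rng.standard_normal(N)

# Upper bound of the RIP-1 condition (34): ||Phi x||_1 <= ||x||_1 for any A in B.
assert np.linalg.norm(Phi @ x, 1) <= np.linalg.norm(x, 1) + 1e-12
print("upper bound holds")
```

The lower bound $(1 - 2\epsilon)\|x\|_1 \leq \|\Phi x\|_1$ is the non-trivial direction: it requires $A \in E$, i.e. the expansion property, exactly as the proof of Corollary 5.1 argues.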

Performance of algorithms
We wish to compare the performance of selected combinatorial compressed sensing algorithms in terms of the problem sizes $(s, n, N)$ for which these algorithms can reconstruct sparse/compressible signals up to their respective error guarantees. The comparison is typically done in the framework of phase transitions, which depict a boundary curve such that ratios of problem sizes above the curve are recovered with probability approaching 0 exponentially, while problem sizes below the curve are recovered with probability approaching 1 exponentially. The list of combinatorial compressed sensing algorithms includes Expander Matching Pursuit (EMP) [26], Sparse Matching Pursuit (SMP) [9], Sequential Sparse Matching Pursuit (SSMP) [8], Left Degree Dependent Signal Recovery (LDDSR) [33], Expander Recovery (ER) [28], Expander Iterative Hard-Thresholding (EIHT) [20, Section 13.4], and Expander $\ell_0$-decoding (ELD), with both serial and parallel versions [29]. For reasons similar to those used in [4,29], we selected four algorithms from this list: (i) SSMP, (ii) ER, (iii) EIHT, (iv) ELD. Descriptions of these algorithms are skipped here, but the interested reader is referred to the original papers or to their summarized details in [4,29]. We were also curious as to how the performance of $\ell_1$-minimization ($\ell_1$-min) compares to these selected combinatorial compressed sensing algorithms, since $\ell_1$-min can be used to solve the combinatorial problem solved by these algorithms; see [7, Theorem 3].
The phase transitions are based on conditions on the RIC of the sensing matrices used. Consequent to Lemma 5.1, these become conditions on the expansion coefficient (i.e. $\epsilon$) of the underlying $(s, d, \epsilon)$-expander graphs of the sparse sensing matrices used. Where this condition on $\epsilon$ is not explicitly given, it is easily deducible from the recovery guarantees given for each algorithm. The conditions are summarized in the table below.

Algorithm      Theoretical                            Computational
SSMP [8]       $\epsilon_k < 1/16$, $k = (c+1)s$, $c > 1$    $\epsilon_k = 1/16 - e$, $k = \lceil(2+e)s\rceil$, $k = 3s$
ER [28]        $\epsilon_k < 1/4$, $k = 2s$                  $\epsilon_k = 1/4 - e$, $k = 2s$
EIHT [20]      $\epsilon_k < 1/12$, $k = 3s$                 $\epsilon_k = 1/12 - e$, $k = 3s$
ELD [29]       $\epsilon_k \leq 1/4$, $k = s$                $\epsilon_k = 1/4$, $k = s$
$\ell_1$-min [7]  $\epsilon_k < 1/6$, $k = 2s$

The theoretical values are those found in the references given in the table, while the computational values are those we used in our numerical experiments to compute the phase transition curves of the algorithms. The value of $e$ was set to $10^{-15}$, to make the $\epsilon_k$ as large as possible under the given conditions. With these values we computed the phase transitions in Figure 2. The two figures are the same except for the different sparsity values used. The performance of the algorithms in this framework is thus ranked as follows: ELD, ER, $\ell_1$-min, EIHT, and SSMP.

Remark 5.3. We point out that there are many ways to compare the performance of algorithms; this is just one. For instance, one can compare runtime complexities or actual computational runtimes as in [29]; phase transitions at different probabilities (here the probability of recovery is 1, but this could be set to something else, like 1/2 in the simulations in [29]); one could also compare the number of iterations and the iteration cost, as was also done in [29].

with $c_d, c_n > 0$, hence concluding the proof.

Lemma 3.1
The phase transition curve $\rho_{BT}(\delta; d, \epsilon)$ is based on the bound of the exponent of (16), which is (37). In the proportional growth asymptotics, $(s, n, N) \to \infty$ while $s/n \to \rho \in (0, 1)$ and $n/N \to \delta \in (0, 1)$. This implies that $o(N) \to 0$ and (37) becomes (38), where $c_n > 0$ is as in (28). If (38) is negative, then as the problem size grows we have expansion with high probability. Therefore, setting (38) to zero and solving for $\rho$ gives a critical $\rho$ below which (38) is negative and above which it is positive. The critical $\rho$ is the phase transition, i.e. $\rho_{BT}(\delta; d, \epsilon)$, parameterized by the $\gamma$ in the lemma. This concludes the proof.

Lemma 4.1
By the dyadic splitting proposed in [4], we let (41) and therefore (42). In a slight abuse of notation, we write $\{l_j\}_{j \in [m]}$ to denote applying the sum $m$ times; we also drop the limits of the summation indices henceforth. Now we use Lemma 8.1 in Appendix 8.1 to simplify (42) as follows.
Now we proceed with the splitting; note that (43) stopped at the first level. At the next level, the second, we will have $q_2$ sets with $Q_2$ columns and $r_2$ sets with $R_2$ columns, which leads to the following expression.
We continue this splitting of each instance of $\mathrm{Prob}(\cdot)$ for $\lceil \log_2 s \rceil - 1$ levels, until reaching sets with single columns where, by construction, the probability that the single column has $d$ nonzeros is one. Note that at this point we drop the subscripts $j_i$, as they are no longer needed. This process gives a complicated product of nested sums of $P_n(\cdot)$, which we express as (45). Using the expression for $P_n(\cdot)$ in (133) of Lemma 8.2 (Appendix 8.1), we bound (45) by bounding each $P_n(\cdot)$ as in (134) with a product of a polynomial, $\pi(\cdot)$, and an exponential with exponent $\psi_n(\cdot)$.
Using Lemma 8.4 in Appendix 8.1 we maximize the $\psi_n(\cdot)$, and hence the exponentials, in (46). We maximize each $\psi_n(\cdot)$ by choosing $l_{(\cdot)}$ to be $a_{(\cdot)}$. Then (46) is upper bounded by the following.
where ψ i is given by (8). The upper bound of Π (l s , . . . , l 2 , d) is given by the following proposition.
Properly aligning the $\pi(\cdot)$ with their relevant summations simplifies the right-hand side (RHS) of (57) to the following. We use the RHS of (59) to upper bound each term in (58), leading to the following bound. For each $Q_i$ and $R_i$, $i = 1, \ldots, \lceil \log_2 s \rceil - 2$, we have $\lceil \log_2 s \rceil - 2$ pairs plus one $Q_0$; hence (60) simplifies to the following.
From (61) to (62) we upper bound each sum by taking the largest possible value of $l_{(\cdot)}$, which is $a_{(\cdot)}$, and multiplying it by the total number of terms in the summation, given by Lemma 8.1 in Appendix 8.1.
We next upper bound the following two terms of (63).
From (66) to (67) we used the following upper bound.

Theorem 4.2
The following lemma is a key input in this proof.
Lemma 6.1. Let $0 < \alpha \leq 1$, and $\varepsilon_n > 0$ be such that $\varepsilon_n \to 0$ as $n \to \infty$. Then, for $a_s < \widehat{a}_s$, the bound below holds.

The proof of the lemma is found in Section 6.6. Recall that
$$\Psi_n(a_s, \ldots, a_1) = \frac{1}{n}\left[\frac{3\log 2}{2}\log_2^2 s + \log_2 s - \frac{3}{2}\log a_s + \sum_{i \in \Omega} \frac{s}{2i}\,\psi_i\right],$$
where $\psi_i$ is given in (72). We use Lemma 6.1 to upper bound $\psi_i$ in (72) away from zero from above as $n \to \infty$. We formalize this bound in the following proposition.

Proposition 6.2. Let $\eta > 0$ and $\beta > 1$ be as defined in Lemma 6.1. Then the following bound holds.

The proof of Proposition 6.2 is found in Section 6.7. Using the bound on $\psi_i$ in Proposition 6.2, we upper bound $\Psi_n(a_s, \ldots, a_1)$ as follows.
$$\Psi_n(a_s, \ldots, a_1) \leq \frac{1}{n}\left[\frac{3\log 2}{2}\log_2^2 s + \log_2 s - \frac{3}{2}\log a_s\right] + \cdots \qquad (75)$$
Then, setting $a_s = (1 - \epsilon)ds$ and substituting into (75), the factor multiplying $\frac{1}{n}$ becomes (79). The last two terms of (79) become polynomial in $s$, $d$ and $\epsilon$ when exponentiated; hence they are incorporated into $p_n(s, d, \epsilon)$ in (14). The first two terms of (79) grow faster than a polynomial in $s$, $d$ and $\epsilon$ when exponentiated; hence they replace, in (75), the factor multiplying $\frac{1}{n}$, and (79) is modified accordingly. The factor $\sum_{i \in \Omega} \frac{s}{2i} \cdot \frac{a_i}{n}$ in (83) is lower bounded as follows; see the proof in Section 6.8.

Lemma 6.1
Recall that we have a formula for the expected values of the $a_i$, namely $\widehat{a}_i$, which follows a relatively simple formula, and the coupled system of cubics for when the final $a_s$ is constrained to be less than $\widehat{a}_s$. To simplify the indexing in (86), observe that if $i = 2^j$ for a fixed $j$, then $2i = 2^{j+1}$ and $4i = 2^{j+2}$. Therefore it suffices to use the indices $a_j$, $a_{j+1}$, and $a_{j+2}$ rather than $a_i$, $a_{2i}$, and $a_{4i}$. Moving the second two terms in (86) to the right and dividing by the quadratic multiples, we get the relation (87), which is the same expression on the right and left but with $j$ increased by one on the left. This implies that the fraction is independent of $j$, so (88) holds for some constant $c$ independent of $j$ (though not necessarily of $n$). This is in fact the relation (85) if we set $c$ equal to $-1/n$. One can then ask how $c$ behaves if we fix the final $a_s$. Moreover, (88) is equivalent to (89), which inductively leads to
$$c a_l + 1 = (c a_0 + 1)^{2^l}, \quad l > 0, \qquad (90)$$
so that one has a relation for the $l$-th stage in terms of the first stage. Note that this does not require $a_s$ to be fixed: (90) is how one simply computes all $a_l$ for $l > 0$ once one has $a_0$ and $c$. The point is that, to match the given $a_s$, one has to select $c$ appropriately. So the way we calculate $c$ is by knowing $a_0$ and $a_s$, then solving (90) at the level $l$ for which $2^l = s$. Unfortunately there is no easy way to solve for $c$ in (90), so we need an asymptotic approximation. Let us assume that $a_l$ is close to $\widehat{a}_l$, and do an asymptotic expansion in terms of the difference from $\widehat{a}_l$.
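The closed form (90) and the level-to-level squaring relation it encodes can be verified numerically. The sketch below, with arbitrary illustrative values of $n$ and $d$, defines $a_l$ through (90) with $c = -1/n$ (the unconstrained case) and checks that consecutive levels satisfy $c\,a_{j+1} + 1 = (c\,a_j + 1)^2$.

```python
# Define a_l through c*a_l + 1 = (c*a_0 + 1)^(2^l), then check the squaring
# recurrence c*a_{j+1} + 1 = (c*a_j + 1)^2 used in the proof of Lemma 6.1.
n, d = 10**4, 8
c = -1.0 / n                    # the unconstrained (expected-value) case
a = [d]                         # a_0 = d, i.e. a_1 in the standard notation
for l in range(1, 11):
    a.append(((c * d + 1) ** (2 ** l) - 1) / c)

for j in range(10):
    lhs = c * a[j + 1] + 1
    rhs = (c * a[j] + 1) ** 2
    assert abs(lhs - rhs) < 1e-12
print("recurrence verified")
```

This confirms that once $a_0$ and $c$ are known, all later $a_l$ follow from (90), so fixing $a_s$ amounts to selecting $c$, exactly as the proof proceeds.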
To simplify things a bit, let us insert $a_0 = d$ (since $a_0$ is $a_1$ in our standard notation) and then insert what we know for $\widehat{a}_l$. For $\widehat{a}_l$ we have $c = -n^{-1}$; see (85). We then have, from (90), the corresponding relation. So we write $a_l = (1 - \varepsilon_n)\widehat{a}_l$ and consider the case $\varepsilon_n \to 0$ as $n \to \infty$. The point of this is that instead of working with $\widehat{a}_l$ we can now work in terms of $\varepsilon_n$. Setting $a_l = (1 - \varepsilon_n)\widehat{a}_l$ gives (92). We now solve for $c$ as a function of $\varepsilon_n$ and $d$. As $\varepsilon_n$ goes to zero we should have $c$ converging to $-n^{-1}$. Let $\alpha n = 2^l$, for $0 < \alpha \leq 1$, and $c = -\beta(\varepsilon_n, d)/n$; then, dropping the arguments of $\beta(\cdot, \cdot)$, (92) becomes (93). Multiplying through by $\beta/n$ and performing the change of variables $k = \alpha n$, (93) becomes (94). The left-hand side of (94) simplifies to (95), and the right-hand side of (94) simplifies to (96). Matching powers of $k$ in (95) and (96) for $k^0$ and $k^{-1}$ yields the following.
Both of which respectively simplify to the following.
Multiply (99) by $\beta$ and subtract the two equations, (99) and (100), to get the following. This yields the solutions for $\beta$. To be consistent with what $c$ ought to be as $\varepsilon_n \to 0$, we choose the root consistent with this limit, as required, concluding the proof.

Proposition 6.2
We use Lemma 6.1 to express $\psi_i$ in (72) as follows. Note that this holds for the regimes of small $s/n$ considered. We need the following expressions for the Shannon entropy and its first and second derivatives.
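These are the standard natural-logarithm entropy formulas, $H(x) = -x\log x - (1-x)\log(1-x)$, $H'(x) = \log\frac{1-x}{x}$ and $H''(x) = -\frac{1}{x(1-x)}$; a quick finite-difference check:

```python
import math

def H(x):    # Shannon entropy in base e logarithms
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def dH(x):   # first derivative: H'(x) = log((1-x)/x)
    return math.log((1 - x) / x)

def d2H(x):  # second derivative: H''(x) = -1/(x(1-x)), strictly negative
    return -1.0 / (x * (1 - x))

x = 0.3
h = 1e-6
assert abs((H(x + h) - H(x - h)) / (2 * h) - dH(x)) < 1e-6
h2 = 1e-5
assert abs((H(x + h2) - 2 * H(x) + H(x - h2)) / h2**2 - d2H(x)) < 1e-4
print("derivatives match finite differences")
```

Since $H'' < 0$ on $(0, 1)$, $H$ is strictly concave and $H'$ is strictly decreasing; this is what drives the mean value theorem bounds that follow.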
From (106) we deduce the following ordering. To simplify notation, let $x_1 = \frac{a_i}{n} \cdot \frac{1 + c a_i}{1 - \frac{a_i}{n}}$, $x_2 = \frac{a_i}{n}$, and $x_3 = -c a_i$, which implies that $x_1 \leq x_2 \leq x_3 \leq 1/2$. Therefore, from (110), we have the following. Observe that the expression in the square brackets on the right-hand side of (116) is zero, which implies (117); this is easy to check by substituting the values of $x_1$, $x_2$, and $x_3$. So instead of the bound (115), we alternatively upper bound (114) as follows, where $\xi_{21} \in (x_1, x_2)$ and $\xi_{32} \in (x_2, x_3)$, which implies $\xi_{21} < \xi_{32}$ and $H'(\xi_{21}) > H'(\xi_{32})$.
where $\eta = \xi_{32} - \xi_{21} > 0$, and the last bound is due to the fact that $x_3 > \xi_{31}$. Going back to our normal notation, we rewrite the bound (124) as follows.
This concludes the proof.

Inequality 84
The series bound (84) is derived as follows.
These are the required bounds, hence concluding the proof.

Conclusion
We considered the construction of sparse matrices that are invaluable for dimensionality reduction, with applications in diverse fields. These sparse matrices are computationally more efficient than their dense counterparts also used for dimensionality reduction. Our construction is probabilistic, based on the dyadic splitting method we introduced in [4]. By a better approximation of the bounds we achieve a novel result, namely a reduced complexity for the sparsity per column of these matrices; precisely, a complexity that is the state of the art divided by $\log s$, where $s$ is the intrinsic dimension of the problem. Our approach is one of the few that give quantitative sampling theorems for the existence of such sparse matrices. Moreover, in the phase transition framework, our construction compares favorably to existing probabilistic constructions. We are also able to compare the performance of combinatorial compressed sensing algorithms by comparing their phase transition curves. This is one perspective on algorithm comparison, among others such as runtime and iteration complexities.
Evidently, our results hold true for the construction of expander graphs, which is a graph theory problem of interest to communities in theoretical computer science and pure mathematics.
ACKNOWLEDGMENT BB acknowledges the support from the funding by the German Federal Ministry of Education and Research (BMBF) for the German Research Chair at AIMS South Africa, funding for which is administered by Alexander von Humboldt Foundation (AvH). JT acknowledges support from The Alan Turing Institute under the EPSRC grant EP/N510129/1.

Key relevant results from [4]
In order to make this manuscript self-contained, we include in this section key relevant lemmas, corollaries and definitions from [4].

Lemma 8.1 (Lemma 2.5, [4]). Let $\mathcal{S}$ be an index set of cardinality $s$. For any level $j$ of the dyadic splitting, $j = 0, \ldots, \lceil \log_2 s \rceil - 1$, the set $\mathcal{S}$ is decomposed into disjoint sets each having cardinality $Q_j = \lceil s/2^j \rceil$ or $R_j = Q_j - 1$. If $q_j$ sets have cardinality $Q_j$ and $r_j$ sets have cardinality $R_j$, then
$$q_j = s - 2^j \left\lceil \frac{s}{2^j} \right\rceil + 2^j, \quad \text{and} \quad r_j = 2^j - q_j. \qquad (132)$$
Also let $B_1$ and $B_2$ be drawn uniformly at random, independent of each other, and define $P_n(\cdot, \cdot, \cdot)$ as in (133).

Definition 8.1. $P_n(x, y, z)$ defined in (133) satisfies the upper bound
$$P_n(x, y, z) \leq \pi(x, y, z) \exp(\psi_n(x, y, z)) \qquad (134)$$
with bounds of $\pi(x, y, z)$ given in Lemma 8.3.
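The counts in (132) can be verified directly: at every level, the $q_j$ sets of size $Q_j$ and $r_j$ sets of size $R_j$ must partition $\mathcal{S}$ exactly. A small sketch (the value $s = 13$ is an arbitrary example):

```python
import math

def split_counts(s, j):
    """Level-j dyadic splitting of an index set of size s (Lemma 8.1):
    q_j sets of size Q_j = ceil(s/2^j) and r_j sets of size R_j = Q_j - 1."""
    Q = math.ceil(s / 2**j)
    q = s - 2**j * Q + 2**j
    r = 2**j - q
    return Q, q, r

s = 13
for j in range(4):
    Q, q, r = split_counts(s, j)
    # the q_j + r_j = 2^j pieces at level j must tile S exactly
    assert q + r == 2**j and q * Q + r * (Q - 1) == s
print("dyadic splitting counts consistent")
```

When $2^j$ divides $s$, all $2^j$ pieces have equal size $s/2^j$ and $r_j = 0$, matching the formula.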
The following bound, used in [4], is deducible from the asymptotic series for the logarithm of Stirling's approximation to the factorial.
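The bound itself, (143), is not reproduced in this snippet, but a classical Stirling-type estimate of the same kind, $\log\binom{n}{k} \leq n H(k/n)$ with $H$ in nats, can be checked numerically; it is offered here as an illustration of this style of bound, not necessarily the exact inequality (143).

```python
import math

def log_binom(n, k):
    # exact log of the binomial coefficient via the log-gamma function
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def entropy_bound(n, k):
    # classical Stirling-based bound: log C(n, k) <= n * H(k/n), H in nats
    p = k / n
    return n * (-p * math.log(p) - (1 - p) * math.log(1 - p))

for n, k in [(100, 5), (1000, 37), (50, 25)]:
    assert log_binom(n, k) <= entropy_bound(n, k)
print("entropy upper bound on binomials verified")
```

Bounds of this type are what turn the combinatorial term $\binom{N}{s}$ of the union bound into the exponential expressions analyzed in the proofs.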

Inequality 64
By Lemma 8.1, the left hand side (LHS) of (64) is equal to the following.
We simplify (144) to get the following.

Inequality 65
Again by Lemma 8.1, the left-hand side (LHS) of (65), i.e. $(a_{Q_0} a_{Q_1} a_{R_1} a_{Q_2} a_{R_2} a_{Q_3} a_{R_3} \cdots a_3 a_2)^{1/2}$, is equal to the following.
In (153) we used the fact that $2^{\log_2 s - 2}$ is a lower bound on $2^{\lceil \log_2 s \rceil - 2}$. We fix $a_s = (1 - \epsilon)ds =: cs$ and we require expansion to hold for all $|\mathcal{S}| \leq s$, i.e. $a_{s'} = cs'$ for all $s' \leq s$. Thus we can rewrite (153) as follows. In total we have twice $(\log_2 s - 2)$ plus one factors of $a_s$. We use this and the fact that $c/a_s = 1/s$ to simplify (156) to (157), which further simplifies to (158) by rearranging the terms in (157).