
Original Research ARTICLE

Front. Appl. Math. Stat., 18 December 2018 | https://doi.org/10.3389/fams.2018.00062

Randomized Distributed Mean Estimation: Accuracy vs. Communication

  • 1School of Mathematics, The University of Edinburgh, Edinburgh, United Kingdom
  • 2Moscow Institute of Physics and Technology, Dolgoprudny, Russia
  • 3King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

We consider the problem of estimating the arithmetic average of a finite collection of real vectors stored in a distributed fashion across several compute nodes subject to a communication budget constraint. Our analysis does not rely on any statistical assumptions about the source of the vectors. This problem arises as a subproblem in many applications, including reduce-all operations within algorithms for distributed and federated optimization and learning. We propose a flexible family of randomized algorithms exploring the trade-off between expected communication cost and estimation error. Our family contains the full-communication and zero-error method on one extreme, and an ϵ-bit communication and O(1/(ϵn)) error method on the opposite extreme. In the special case where we communicate, in expectation, a single bit per coordinate of each vector, we improve upon existing results by obtaining O(r/n) error, where r is the number of bits used to represent a floating point value.

1. Introduction

We address the problem of estimating the arithmetic mean of n vectors, X1, …, Xn ∈ ℝd, stored in a distributed fashion across n compute nodes, subject to a constraint on the communication cost.

In particular, we consider a star network topology with a single server at the centre and n nodes connected to it. All nodes send an encoded (possibly via a lossy randomized transformation) version of their vector to the server, after which the server performs a decoding operation to estimate the true mean

X := (1/n) ∑_{i=1}^n Xi.

The purpose of the encoding operation is to compress the vector so as to save on communication cost, which is typically the bottleneck in practical applications.

To better illustrate the setup, consider the naive approach in which all nodes send the vectors without performing any encoding operation, followed by the application of a simple averaging decoder by the server. This results in zero estimation error at the expense of maximum communication cost of ndr bits, where r is the number of bits needed to communicate a single floating point entry/coordinate of Xi.

This operation appears as a computational primitive in numerous cases, and the communication cost can be reduced at the expense of accuracy. Our proposal for balancing accuracy and communication is relevant in practice for any application that uses the MPI_Gather or MPI_Allgather routines [1], or their conceptual variants, for efficient implementation and can tolerate inexactness in computation, such as many algorithms for distributed optimization.

1.1. Background and Contributions

The distributed mean estimation problem was recently studied in a statistical framework where it is assumed that the vectors Xi are independent and identically distributed samples from some specific underlying distribution. In such a setup, the goal is to estimate the true mean of the underlying distribution [2–5]. These works formulate lower and upper bounds on the communication cost needed to achieve the minimax optimal estimation error.

In contrast, we do not make any statistical assumptions on the source of the vectors, and study the trade-off between expected communication costs and mean square error of the estimate. Arguably, this setup is a more robust and accurate model of the distributed mean estimation problems arising as subproblems in applications such as reduce-all operations within algorithms for distributed and federated optimization [6–10]. In these applications, the averaging operations need to be done repeatedly throughout the iterations of a master learning/optimization algorithm, and the vectors {Xi} correspond to updates to a global model/variable. In such cases, the vectors evolve throughout the iterative process in a complicated pattern, typically approaching zero as the master algorithm converges to optimality. Hence, their statistical properties change, which renders fixed statistical assumptions invalid in practice.

For instance, when training a deep neural network model in a distributed environment, the vector Xi corresponds to a stochastic gradient based on a minibatch of data stored on node i. In this setup we do not have any useful prior statistical knowledge about the high-dimensional vectors to be aggregated. It has recently been observed that when communication cost is high, which is typically the case for commodity clusters, and even more so in a federated optimization framework, it can be very useful to sacrifice on estimation accuracy in favor of reduced communication [11, 12].

In this paper we propose a parametric family of randomized methods for estimating the mean X, with parameters being a set of probabilities pij for i = 1, …, n and j = 1, 2, …, d and node centers μi ∈ ℝ for i = 1, 2, …, n. The exact meaning of these parameters is explained in section 3. By varying the probabilities, at one extreme, we recover the exact method described, enjoying zero estimation error at the expense of full communication cost. At the opposite extreme are methods with arbitrarily small expected communication cost, which is achieved at the expense of suffering an exploding estimation error. Practical methods appear somewhere on the continuum between these two extremes, depending on the specific requirements of the application at hand. Suresh et al. [13] propose a method combining a pre-processing step via a random structured rotation, followed by randomized binary quantization. Their quantization protocol arises as a suboptimal special case of our parametric family of methods1.

To illustrate our results, consider the special case presented in Example 7, in which we choose to communicate a single bit per element of Xi only. We then obtain an O((r/n)·R) bound on the mean square error, where r is the number of bits used to represent a floating point value, and R = (1/n) ∑_{i=1}^n ‖Xi − μi·1‖², with μi ∈ ℝ being the average of elements of Xi, and 1 the all-ones vector in ℝd. Note that this bound improves upon the performance of the method of Suresh et al. [13] in two aspects. First, the bound is independent of d, improving from logarithmic dependence, as stated in Remark 4 in detail. Further, due to a preprocessing rotation step, their method requires O(d log d) time to be implemented on each node, while our method is linear in d. This and other special cases are summarized in Table 1 in section 5.

Table 1. Summary of achievable communication cost and estimation error, for various choices of probability p.

While the above already improves upon the state of the art, the improved results are in fact obtained for a suboptimal choice of the parameters of our method (constant probabilities pij, and node centers fixed to the mean μi). One can decrease the MSE further by optimizing over the probabilities and/or node centers (see section 6). However, apart from a very low communication cost regime in which we have a closed form expression for the optimal probabilities, the problem needs to be solved numerically, and hence we do not have expressions for how much improvement is possible. We illustrate the effect of fixed and optimal probabilities on the trade-off between communication cost and MSE experimentally on a few selected datasets in section 6 (see Figure 1).

Figure 1. Trade-off curves between communication cost and estimation error (MSE) for four protocols. The plots correspond to vectors Xi drawn in an i.i.d. fashion from Gaussian, Laplace, and χ2 distributions, from left to right. The black cross marks the performance of binary quantization (Example 4).

Remark 1. Since the initial version of this work, an updated version of Suresh et al. [13] contains a rate similar to Example 7, using variable length coding. That work also formulates lower bounds, which are attained by both their and our results. Other works that were published since, such as [14, 15], propose algorithms that can also be represented as a particular choice of protocols α, β, γ, demonstrating the versatility of our proposal.

1.2. Outline

In section 2 we formalize the concepts of encoding and decoding protocols. In section 3 we describe a parametric family of randomized (and unbiased) encoding protocols and give a simple formula for the mean squared error. Subsequently, in section 4 we formalize the notion of communication cost, and describe several communication protocols, which are optimal under different circumstances. We give simple instantiations of our protocol in section 5, illustrating the trade-off between communication costs and accuracy. In section 6 we address the question of the optimal choice of parameters of our protocol. Finally, in section 7 we comment on possible extensions we leave for future work.

2. Three Protocols

In this work we consider (randomized) encoding protocols α, communication protocols β, and decoding protocols γ, using which the averaging is performed inexactly as follows. Node i computes a (possibly stochastic) estimate of Xi using the encoding protocol, which we denote Yi = α(Xi) ∈ ℝd, and sends it to the server using communication protocol β. By β(Yi) we denote the number of bits that need to be transferred under β. The server then estimates X by applying the decoding protocol γ to the received encodings:

Y := γ(Y1, …, Yn).

The objective of this work is to study the trade-off between the (expected) number of bits that need to be communicated, and the accuracy of Y as an estimate of X.
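To make the pipeline concrete, here is a minimal Python sketch of the three protocols (the function names and the bit-cost convention r = 32 are our illustrative assumptions, not from the paper), using the trivial identity encoder and the averaging decoder formalized in the examples below:

```python
import numpy as np

def encode_identity(x):
    # alpha: the trivial lossless encoder, Y_i = alpha(X_i) = X_i
    return x

def cost_naive(y, r=32):
    # beta: bits needed to send Y_i as d full floating point values
    return y.size * r

def decode_average(ys):
    # gamma: the averaging decoder, Y = (1/n) sum_i Y_i
    return sum(ys) / len(ys)

rng = np.random.default_rng(0)
n, d, r = 4, 8, 32
X = [rng.standard_normal(d) for _ in range(n)]

Y = decode_average([encode_identity(x) for x in X])
total_bits = sum(cost_naive(x, r) for x in X)

assert np.allclose(Y, sum(X) / n)   # zero estimation error ...
assert total_bits == n * d * r      # ... at maximum communication cost n*d*r
```

This is exactly the naive baseline from the introduction: exact averaging at the full cost of ndr bits.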

In this work we focus on encoders which are unbiased, in the following sense.

Definition 2.1 (Unbiased and Independent Encoder): We say that encoder α is unbiased if Eα[α(Xi)] = Xi for all i = 1, 2, …, n. We say that it is independent, if α(Xi) is independent from α(Xj) for all ij.

Example 1 (Identity Encoder). A trivial example of an encoding protocol is the identity function: α(Xi) = Xi. It is both unbiased and independent. However, this encoder does not lead to any savings in communication.

Other examples of unbiased and independent encoders include the protocols introduced in section 3, as well as other existing techniques [12, 14, 15].

We now formalize the notion of accuracy of estimating X via Y. Since Y can be random, the notion of accuracy will naturally be probabilistic.

Definition 2.2 (Estimation Error / Mean Squared Error): The mean squared error of protocol (α, γ) is the quantity

MSEα,γ(X1, …, Xn) = Eα,γ[‖Y − X‖²] = Eα,γ[‖γ(α(X1), …, α(Xn)) − X‖²].

To illustrate the above concept, we now give a few examples:

Example 2 (Averaging Decoder). If γ is the averaging function, i.e., γ(Y1, …, Yn) = (1/n) ∑_{i=1}^n Yi, then

MSEα,γ(X1, …, Xn) = (1/n²) Eα[‖∑_{i=1}^n (α(Xi) − Xi)‖²].

The next example generalizes the identity encoder and averaging decoder.

Example 3 (Linear Encoder and Inverse Linear Decoder). Let A : ℝd → ℝd be linear and invertible. Then we can set Yi = α(Xi) := AXi and γ(Y1, …, Yn) := A⁻¹((1/n) ∑_{i=1}^n Yi). If A is random, then α and γ are random (e.g., a structured random rotation, see [16]). Note that

γ(Y1, …, Yn) = (1/n) ∑_{i=1}^n A⁻¹Yi = (1/n) ∑_{i=1}^n Xi = X,

and hence the MSE of (α, γ) is zero.

We shall now prove a simple result for unbiased and independent encoders used in subsequent sections.

Lemma 2.3 (Unbiased and Independent Encoder + Averaging Decoder): If the encoder α is unbiased and independent, and γ is the averaging decoder, then

MSEα,γ(X1, …, Xn) = (1/n²) ∑_{i=1}^n Eα[‖Yi − Xi‖²] = (1/n²) ∑_{i=1}^n Varα[α(Xi)].

Proof. Note that Eα[Yi] = Xi for all i. We have

MSEα(X1, …, Xn) = Eα[‖Y − X‖²]
                = (1/n²) Eα[‖∑_{i=1}^n (Yi − Xi)‖²]
                =(*) (1/n²) ∑_{i=1}^n Eα[‖Yi − Eα[Yi]‖²]
                = (1/n²) ∑_{i=1}^n Varα[α(Xi)],

where (*) follows from unbiasedness and independence.      

One may wish to define the encoder as a combination of two or more separate encoders: α(Xi) = α21(Xi)). See Suresh et al. [13] for an example where α1 is a random rotation and α2 is binary quantization.

3. A Family of Randomized Encoding Protocols

Let X1, …, Xn ∈ ℝd be given. We shall write Xi = (Xi(1), …, Xi(d)) to denote the entries of vector Xi. In addition, with each i we also associate a parameter μi ∈ ℝ. We refer to μi as the center of data at node i, or simply as node center. For now, we assume these parameters are fixed. As a special case, we recover for instance classical binary quantization, see section 5.1. We shall comment on how to choose the parameters optimally in section 6.

We shall define the support of α on node i to be the set Si := {j : Yi(j) ≠ μi}. We now define two parametric families of randomized encoding protocols. The first results in Si of random size, the second in Si of a fixed size.

3.1. Encoding Protocol With Variable-Size Support

With each pair (i, j) we associate a parameter 0 < pij ≤ 1, representing a probability. The collection of parameters {pij, μi} defines an encoding protocol α as follows:

Yi(j) = { Xi(j)/pij − ((1 − pij)/pij)·μi,   with probability pij,
        { μi,                               with probability 1 − pij.    (1)

Remark 2. Enforcing the probabilities to be positive, as opposed to non-negative, leads to vastly simplified notation in what follows. However, it is more natural to allow pij to be zero, in which case we have Yi(j) = μi with probability 1. This raises issues such as potential lack of unbiasedness, which can be resolved, but only at the expense of a larger-than-reasonable notational overload.

In the rest of this section, let γ be the averaging decoder (Example 2). Since γ is fixed and deterministic, we shall for simplicity write Eα[·] instead of Eα, γ[·]. Similarly, we shall write MSEα(·) instead of MSEα, γ(·).

We now prove two lemmas describing properties of the encoding protocol α. Lemma 3.1 states that the protocol yields an unbiased estimate of the average X and Lemma 3.2 provides the expected mean square error of the estimate.

Lemma 3.1 (Unbiasedness): The encoder α defined in (1) is unbiased. That is, Eα[α(Xi)] = Xi for all i. As a result, Y is an unbiased estimate of the true average: Eα[Y] = X.

Proof. Due to linearity of expectation, it is enough to show that Eα[Y(j)] = X(j) for all j. Since Y(j) = (1/n) ∑_{i=1}^n Yi(j) and X(j) = (1/n) ∑_{i=1}^n Xi(j), it suffices to show that Eα[Yi(j)] = Xi(j):

Eα[Yi(j)] = pij (Xi(j)/pij − ((1 − pij)/pij)·μi) + (1 − pij)·μi = Xi(j),

and the claim is proved.       

Lemma 3.2 (Mean Squared Error): Let α = α(pij, μi) be the encoder defined in (1). Then

MSEα(X1, …, Xn) = (1/n²) ∑_{i,j} (1/pij − 1)(Xi(j) − μi)².    (2)

Proof. Using Lemma 2.3, we have

MSEα(X1, …, Xn) = (1/n²) ∑_{i=1}^n Eα[‖Yi − Xi‖²]
                = (1/n²) ∑_{i=1}^n Eα[∑_{j=1}^d (Yi(j) − Xi(j))²]
                = (1/n²) ∑_{i=1}^n ∑_{j=1}^d Eα[(Yi(j) − Xi(j))²].    (3)

For any i, j we further have

Eα[(Yi(j) − Xi(j))²] = pij (Xi(j)/pij − ((1 − pij)/pij)·μi − Xi(j))² + (1 − pij)(μi − Xi(j))²
                     = ((1 − pij)²/pij)(Xi(j) − μi)² + (1 − pij)(μi − Xi(j))²
                     = ((1 − pij)/pij)(Xi(j) − μi)².

It suffices to substitute the above into (3).       
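The encoder (1) is straightforward to simulate; the sketch below (our illustration, with hypothetical function and variable names) checks Lemma 3.1 and formula (2) numerically for a single node (n = 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, p, mu):
    # Protocol (1): coordinate j is kept (and rescaled) with probability p[j],
    # otherwise it is replaced by the node center mu.
    keep = rng.random(x.size) < p
    y = np.full(x.size, mu)
    y[keep] = x[keep] / p[keep] - (1 - p[keep]) / p[keep] * mu
    return y

d = 6
x = rng.standard_normal(d)
p = np.full(d, 0.25)
mu = x.mean()

samples = np.stack([encode(x, p, mu) for _ in range(100000)])

# Lemma 3.1: the encoder is unbiased
assert np.allclose(samples.mean(axis=0), x, atol=0.05)

# Formula (2) with n = 1: MSE = sum_j (1/p_j - 1) * (x_j - mu)^2
theory = ((1 / p - 1) * (x - mu) ** 2).sum()
empirical = ((samples - x) ** 2).sum(axis=1).mean()
assert abs(empirical - theory) / theory < 0.05
```

With pij = 0.25, each retained coordinate is scaled by 1/pij = 4, which is exactly what makes the estimate unbiased.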

3.2. Encoding Protocol With Fixed-Size Support

Here we propose an alternative encoding protocol, one with deterministic support size. As we shall see later, this results in deterministic communication cost.

Let σk(d) denote the set of all subsets of {1, 2, …, d} containing k elements. The protocol α, with a single integer parameter k, works as follows: First, each node i samples Di ∈ σk(d) uniformly at random, and then sets

Yi(j) = { (d/k)·Xi(j) − ((d − k)/k)·μi,   if j ∈ Di,
        { μi,                             otherwise.    (4)

Note that by design, the size of the support of Yi is always k, i.e., |Si| = k. Naturally, we can expect this protocol to perform practically the same as protocol (1) with pij = k/d for all i, j. Lemma 3.4 indeed suggests this is the case. While this protocol admits a more efficient communication protocol (as we shall see in section 4.4), protocol (1) enjoys a larger parameter space, ultimately leading to better MSE. We comment on this trade-off in subsequent sections.
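For illustration (our sketch; names are hypothetical), protocol (4) and its properties, stated below as Lemmas 3.3 and 3.4, can be checked numerically in the same way as for protocol (1):

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_fixed_support(x, k, mu):
    # Protocol (4): sample a uniformly random k-subset D_i of the coordinates;
    # rescale the coordinates in D_i and replace the rest by mu.
    d = x.size
    D = rng.choice(d, size=k, replace=False)
    y = np.full(d, mu)
    y[D] = d / k * x[D] - (d - k) / k * mu
    return y

d, k = 8, 2
x = rng.standard_normal(d)
mu = x.mean()

samples = np.stack([encode_fixed_support(x, k, mu) for _ in range(100000)])

assert np.allclose(samples.mean(axis=0), x, atol=0.05)   # unbiasedness
# MSE for n = 1: sum_j ((d - k)/k) * (x_j - mu)^2
theory = (d - k) / k * ((x - mu) ** 2).sum()
empirical = ((samples - x) ** 2).sum(axis=1).mean()
assert abs(empirical - theory) / theory < 0.05
```

Note that the support size is exactly k in every realization, matching the deterministic communication cost discussed in section 4.4.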

As with the previous protocol, we prove its basic properties. The proofs are similar to those of Lemmas 3.1 and 3.2, and we defer them to Appendix A.

Lemma 3.3 (Unbiasedness): The encoder α defined in (4) is unbiased. That is, Eα[α(Xi)] = Xi for all i. As a result, Y is an unbiased estimate of the true average: Eα[Y] = X.

Lemma 3.4 (Mean Squared Error): Let α = α(k) be the encoder defined in (4). Then

MSEα(X1, …, Xn) = (1/n²) ∑_{i=1}^n ∑_{j=1}^d ((d − k)/k)(Xi(j) − μi)².    (5)

4. Communication Protocols

Having defined the encoding protocols α, we need to specify the way the encoded vectors Yi = α(Xi), for i = 1, 2, …, n, are communicated to the server. Given a specific communication protocol β, we write β(Yi) to denote the (expected) number of bits that are communicated by node i to the server. Since Yi = α(Xi) is in general not deterministic, β(Yi) can be a random variable.

Definition 4.1 (Communication Cost): The communication cost of communication protocol β under randomized encoding α is the total expected number of bits transmitted to the server:

Cα,β(X1, …, Xn) = Eα[∑_{i=1}^n β(α(Xi))].    (6)

Given Yi, a good communication protocol is able to encode Yi = α(Xi) using only a few bits. Let r denote the number of bits used to represent a floating point number, and let r̄ be the number of bits representing μi.

In the rest of this section we describe several communication protocols β and calculate their communication cost.

4.1. Naive

Represent Yi = α(Xi) as d floating point numbers. Then for all encoding protocols α and all i we have β(α(Xi)) = dr, whence

Cα,β = Eα[∑_{i=1}^n β(α(Xi))] = ndr.

4.2. Varying-Length

We will use a variable number of bits for every element of the vector Yi. The first bit indicates whether the value equals μi. If it does, the encoding of that element ends; if not, the next r bits represent the value of Yi(j). In addition, we need to communicate μi, which takes r̄ bits2. We thus have

β(α(Xi)) = r̄ + ∑_{j=1}^d (1(Yi(j) = μi) + (r + 1)·1(Yi(j) ≠ μi)),    (7)

where 1e is the indicator function of event e. The expected number of bits communicated is given by

Cα,β = Eα[∑_{i=1}^n β(α(Xi))] = n r̄ + ∑_{i=1}^n ∑_{j=1}^d ((1 − pij) + (r + 1)·pij) = n r̄ + ∑_{i=1}^n ∑_{j=1}^d (1 + r·pij),

where the first equality uses (7).

In the special case when pij = p > 0 for all i, j, we get

Cα,β = n(r̄ + d + pdr).

4.3. Sparse Communication Protocol for Encoder (1)

We can represent Yi as a sparse vector; that is, a list of pairs (j, Yi(j)) for which Yi(j) ≠ μi. The number of bits needed to represent each pair is ⌈log(d)⌉ + r. Any index not found in the list will be interpreted by the server as having value μi. Additionally, we have to communicate the value of μi to the server, which takes r̄ bits. We assume that the value d, the size of the vectors, is known to the server. Hence,

β(α(Xi)) = r̄ + ∑_{j=1}^d 1(Yi(j) ≠ μi) × (⌈log d⌉ + r).

Summing up through i and taking expectations, the communication cost is given by

Cα,β = Eα[∑_{i=1}^n β(α(Xi))] = n r̄ + (⌈log d⌉ + r) ∑_{i=1}^n ∑_{j=1}^d pij.    (8)

In the special case when pij = p > 0 for all i, j, we get

Cα,β = n r̄ + (⌈log d⌉ + r)·ndp.

Remark 3. A practical improvement upon this could be to (without loss of generality) assume that the pairs (j, Yi(j)) are ordered by j, i.e., we have {(j_s, Yi(j_s))}_{s=1}^k for some k and j1 < j2 < ⋯ < jk. Further, let us denote j0 = 0. We can then use a variant of variable-length quantity [17] to represent the set {(j_s − j_{s−1}, Yi(j_s))}_{s=1}^k. With careful design one can hope to reduce the log(d) factor in the average case. Nevertheless, this does not improve the worst-case analysis we focus on in this paper, and hence we do not delve deeper into this. After the first version of this work was posted on arXiv, such an idea was independently proposed and analyzed in Alistarh et al. [14].
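As a sanity check on the cost formula above, the sparse protocol's bit count is easy to compute directly (a sketch under our illustrative assumptions: r = 32 bits per float, r̄ = 32 bits for μi, and ⌈log₂ d⌉ bits per index; the function name is ours):

```python
import math

def sparse_cost_bits(y, mu, d, r=32, r_bar=32):
    # Bits sent by one node under the sparse protocol of section 4.3:
    # r_bar bits for mu, plus (ceil(log2 d) + r) bits per entry differing from mu.
    index_bits = math.ceil(math.log2(d))
    pairs = sum(1 for v in y if v != mu)
    return r_bar + pairs * (index_bits + r)

d = 512
y = [0.0] * d
y[3], y[17], y[200] = 1.5, -2.0, 0.25     # three entries differ from mu = 0

assert sparse_cost_bits(y, 0.0, d) == 32 + 3 * (9 + 32)   # 9 = ceil(log2 512)
```

The cost scales with the number of non-center entries, in line with (8), rather than with d.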

4.4. Sparse Communication Protocol for Encoder (4)

We now describe a sparse communication protocol compatible with fixed length encoder defined in (4). Note that the selection of set Di is independent of the values Xi(j) being compressed. We can utilize this fact, and instead of communicating index-value pairs (j, Yi(j)) as above, we can only communicate the values Yi(j), and the indices they correspond to can be reconstructed from a shared random seed. This lets us avoid the log(d) factor in (8). Apart from protocol (4), this idea is also applicable to protocol (1) with uniform probabilities pij.

In particular, we represent Yi as a vector containing the list of the values for which Yi(j) ≠ μi, ordered by j. Additionally, we communicate the value μi (using r̄ bits) and a random seed (using r̄s bits), which can be used to reconstruct the indices j, corresponding to the communicated values. Note that for any fixed k defining protocol (4), we have |Si| = k. Hence, communication cost is deterministic:

Cα,β = ∑_{i=1}^n β(α(Xi)) = n(r̄ + r̄s) + nkr.    (9)

In the case of the variable-size-support encoding protocol (1) with pij = p > 0 for all i, j, the sparse communication protocol described here yields expected communication cost

Cα,β = Eα[∑_{i=1}^n β(α(Xi))] = n(r̄ + r̄s) + ndpr.    (10)

4.5. Binary

If the elements of Yi take only two different values, Yimin or Yimax, we can use a binary communication protocol. That is, for each node i, we communicate the values of Yimin and Yimax (using 2r bits), followed by a single bit per element of the array indicating whether Yimax or Yimin should be used. The resulting (deterministic) communication cost is

Cα,β = ∑_{i=1}^n β(α(Xi)) = n(2r) + nd.    (11)

4.6. Discussion

In the above, we have presented several communication protocols of different complexity. However, it is not possible to claim that any single one of them is the most efficient. Which communication protocol is best depends on the specifics of the encoding protocol used. Consider the extreme case of encoding protocol (1) with pij = 1 for all i, j. The naive communication protocol is clearly the most efficient, as all other protocols need to send some additional information.

However, in the interesting case of a small communication budget, the sparse communication protocols are the most efficient. Therefore, in the following sections, we focus primarily on optimizing performance using these protocols.

5. Examples

In this section, we highlight several instantiations of our protocols, recovering existing techniques and formulating novel ones. We comment on the resulting trade-offs between communication cost and estimation error.

5.1. Binary Quantization

We start by recovering an existing method, which turns every element of the vectors Xi into a particular binary representation.

Example 4. If we set the parameters of protocol (1) as μi = Xi^min and pij = (Xi(j) − Xi^min)/Δi, where Δi := Xi^max − Xi^min (assume, for simplicity, that Δi ≠ 0), we exactly recover the quantization algorithm proposed in Suresh et al. [13]:

Yi(j) = { Xi^max,   with probability (Xi(j) − Xi^min)/Δi,
        { Xi^min,   with probability (Xi^max − Xi(j))/Δi.    (12)

Using the formula (2) for the encoding protocol α, we get

MSEα = (1/n²) ∑_{i=1}^n ∑_{j=1}^d ((Xi^max − Xi(j))/(Xi(j) − Xi^min))·(Xi(j) − Xi^min)²
     ≤ (d/(2n)) · (1/n) ∑_{i=1}^n ‖Xi‖².

This exactly recovers the MSE bound established in Suresh et al. [13, Theorem 1]. Using the binary communication protocol yields a communication cost of 1 bit per element of Xi, plus two real-valued scalars (11).
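A short simulation (our sketch, not code from [13]; names are hypothetical) confirms that quantizer (12) is unbiased and produces only the two extreme values:

```python
import numpy as np

rng = np.random.default_rng(2)

def binary_quantize(x):
    # Equation (12): each entry becomes x_max with probability (x_j - x_min)/Delta
    # and x_min otherwise, which makes the quantizer unbiased.
    lo, hi = x.min(), x.max()
    delta = hi - lo                     # assume delta != 0, as in Example 4
    p_hi = (x - lo) / delta
    return np.where(rng.random(x.size) < p_hi, hi, lo)

x = rng.standard_normal(16)
samples = np.stack([binary_quantize(x) for _ in range(100000)])

assert set(np.unique(samples)) <= {x.min(), x.max()}     # only two values occur
assert np.allclose(samples.mean(axis=0), x, atol=0.05)   # unbiasedness
```

Because only two values occur, the binary communication protocol of section 4.5 applies, giving one bit per element plus the two scalars.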

Remark 4. If we use the above protocol jointly with a randomized linear encoder and decoder (see Example 3), where the linear transform is the randomized Hadamard transform, we recover the method described in Suresh et al. [13, section 3], which yields the improved MSEα = ((2 log d + 2)/n) · (1/n) ∑_{i=1}^n ‖Xi‖² and can be implemented in O(d log d) time.

5.2. Sparse Communication Protocols

Now we move to comparing the communication costs and estimation error of various instantiations of the encoding protocols, utilizing the deterministic sparse communication protocol and uniform probabilities.

For the remainder of this section, let us only consider instantiations of our protocol where pij = p > 0 for all i, j, and assume that the node centers are set to the vector averages, i.e., μi = (1/d) ∑_{j=1}^d Xi(j). Denote R = (1/n) ∑_{i=1}^n ∑_{j=1}^d (Xi(j) − μi)². For simplicity, we also assume that |S| = nd, where S = {(i, j) : Xi(j) ≠ μi}, which is what we can in general expect without any prior knowledge about the vectors Xi.

The properties of the following examples follow from Equations (2) to (10). When considering the communication costs of the protocols, keep in mind that the trivial benchmark is Cα,β = ndr, which is achieved by simply sending the vectors unmodified. Communication cost of Cα,β = nd corresponds to the interesting special case when we use (on average) one bit per element of each Xi.

Example 5 (Full communication). If we choose p = 1, we get

Cα,β = n(r̄s + r̄) + ndr,   MSEα,γ = 0.

In this case, the encoding protocol is lossless, which ensures MSE = 0. Note that we could get rid of the n(r̄s + r̄) term by using the naive communication protocol instead.

Example 6 (Log MSE). If we choose p = 1/log d, we get

Cα,β = n(r̄s + r̄) + ndr/log d,   MSEα,γ = (log(d) − 1)·(R/n).

This protocol order-wise matches the MSE of the method in Remark 4. However, as long as d > 2^r, this protocol attains this error with smaller communication cost. In particular, this is in expectation less than a single bit per element of Xi. Finally, note that the factor R is always smaller than or equal to the factor (1/n) ∑_{i=1}^n ‖Xi‖² appearing in Remark 4.

Example 7 (1-bit per element communication). If we choose p = 1/r, we get

Cα,β = n(r̄s + r̄) + nd,   MSEα,γ = (r − 1)·(R/n).

This protocol communicates in expectation a single bit per element of Xi (plus an additional r̄s + r̄ bits per node), while attaining an MSE bound of O(r/n). To the best of our knowledge, this is the first method to attain this bound without additional assumptions.

Example 8 (Alternative 1-bit per element communication). If we choose p = (d − r̄s − r̄)/(dr), we get

Cα,β = nd,   MSEα,γ = (dr/(d − r̄s − r̄) − 1)·(R/n).

This alternative protocol attains in expectation exactly a single bit per element of Xi, with a (slightly more complicated) O(r/n) bound on the MSE.

Example 9 (Below 1-bit communication). If we choose p = 1/d, we get

Cα,β = n(r̄s + r̄) + nr,   MSEα,γ = (d − 1)·(R/n).

This protocol attains the MSE of the protocol in Example 4 while at the same time communicating on average significantly less than a single bit per element of Xi.
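The cost/MSE pairs in Examples 5–9 all follow from plugging a uniform p into (2) and (10); a small helper (our illustration, with assumed overheads r̄ = r̄s = 32) reproduces, e.g., Example 7:

```python
def cost_and_mse(p, n, d, r, R, r_bar=32, r_bar_s=32):
    # Expected communication cost (10) and MSE (2) for uniform p_ij = p
    # with node centers mu_i fixed to the vector averages.
    cost = n * (r_bar_s + r_bar) + n * d * p * r
    mse = (1 / p - 1) * R / n
    return cost, mse

n, d, r, R = 16, 512, 16, 1.0
cost, mse = cost_and_mse(1 / r, n, d, r, R)   # Example 7: p = 1/r

assert cost == n * (32 + 32) + n * d          # one bit per element, on average
assert mse == (r - 1) * R / n                 # the O(r/n) bound
```

Substituting the other choices of p from Examples 5, 6, 8, and 9 recovers the corresponding rows of Table 1.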

We summarize these examples in Table 1.

Using the deterministic sparse protocol, there is an obvious lower bound on the communication cost, namely n(r̄s + r̄). We can bypass this threshold by using the sparse protocol of section 4.3 with a data-independent choice of μi, such as μi = 0, setting r̄ = 0. By setting p = ϵ/(d(⌈log d⌉ + r)), we get an arbitrarily small expected communication cost of Cα,β = nϵ, at the cost of an exploding estimation error, MSEα,γ = O(1/(ϵn)).

Note that all of the above examples have random communication costs. What we present is the expected communication cost of the protocols. All the above examples can be modified to use the encoding protocol with fixed-size support defined in (4) with the parameter k set to the value of pd for corresponding p used above, to get the same results. The only practical difference is that the communication cost will be deterministic for each node, which can be useful for certain applications.

6. Optimal Parameters for Encoder α(pij, μi)

Here we consider (α, β, γ), where α = α(pij, μi) is the encoder defined in (1), β is the associated sparse communication protocol, and γ is the averaging decoder. Recall from Lemma 3.2 and (8) that the mean squared error and communication cost are given by:

MSEα,γ = (1/n²) ∑_{i,j} (1/pij − 1)(Xi(j) − μi)²,      Cα,β = n r̄ + (⌈log d⌉ + r) ∑_{i=1}^n ∑_{j=1}^d pij.    (13)

Having these closed-form formulae as functions of the parameters {pij, μi}, we can now ask questions such as:

1. Given a communication budget, which encoding protocol has the smallest mean squared error?

2. Given a bound on the mean squared error, which encoder suffers the minimal communication cost?

Let us now address the first question; the second question can be handled in a similar fashion. In particular, consider the optimization problem

minimize    ∑_{i,j} (1/pij − 1)(Xi(j) − μi)²
subject to  μi ∈ ℝ, i = 1, 2, …, n
            ∑_{i,j} pij ≤ B    (14)
            0 < pij ≤ 1, i = 1, 2, …, n; j = 1, 2, …, d,    (15)

where B > 0 represents a bound on the part of the total communication cost in (13) which depends on the choice of the probabilities pij.

Note that while the constraints in (14) are convex (they are linear), the objective is not jointly convex in {pij, μi}. However, the objective is convex in {pij} and convex in {μi}. This suggests a simple alternating minimization heuristic for solving the above problem:

1. Fix the probabilities and optimize over the node centers,

2. Fix the node centers and optimize over probabilities.

These two steps are repeated until a suitable convergence criterion is reached. Note that the first step has a closed form solution. Indeed, the problem decomposes across the node centers to n univariate unconstrained convex quadratic minimization problems, and the solution is given by

μi = (∑_j wij Xi(j)) / (∑_j wij),   wij := 1/pij − 1.    (16)

The second step does not have a closed form solution in general; we provide an analysis of this step in section 6.1.
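Step 1 of the alternating scheme is just the weighted average (16); a minimal sketch (our code, with hypothetical names) illustrates it for a single node:

```python
import numpy as np

def optimal_center(x, p):
    # Equation (16): for fixed probabilities p_j, the MSE-optimal node center
    # is the weighted average of the entries with weights w_j = 1/p_j - 1.
    w = 1 / p - 1
    return (w * x).sum() / w.sum()

x = np.array([1.0, 2.0, 3.0, 4.0])
p = np.array([0.5, 0.5, 0.5, 0.5])
mu_star = optimal_center(x, p)
assert np.isclose(mu_star, x.mean())   # uniform p -> plain average

# sanity check: mu_star beats nearby perturbations on the per-node objective
obj = lambda mu: ((1 / p - 1) * (x - mu) ** 2).sum()
assert obj(mu_star) <= min(obj(mu_star + 0.1), obj(mu_star - 0.1))
```

With non-uniform probabilities, entries kept with low probability (large wij) pull the center toward themselves, which is what reduces their contribution to (2).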

Remark 5. Note that the upper bound ∑_{i,j} (Xi(j) − μi)²/pij on the objective is jointly convex in {pij, μi}. We may therefore instead optimize this upper bound by a suitable convex optimization algorithm.

Remark 6. An alternative and more practical model than (14) is to choose per-node budgets B1, …, Bn and require ∑_j pij ≤ Bi for all i. The problem becomes separable across the nodes, and can therefore be solved by each node independently. If we set B = ∑_i Bi, the optimal solution obtained this way will lead to an MSE which is lower bounded by the MSE obtained through (14).

6.1. Optimal Probabilities for Fixed Node Centers

Let the node centers μi be fixed. Problem (14) (or, equivalently, step 2 of the alternating minimization method described above) then takes the form

minimize    ∑_{i,j} (Xi(j) − μi)²/pij
subject to  ∑_{i,j} pij ≤ B
            0 < pij ≤ 1, i = 1, 2, …, n, j = 1, 2, …, d.    (17)

Let S = {(i, j) : Xi(j) ≠ μi}. Notice that as long as B ≥ |S|, the optimal solution is to set pij = 1 for all (i, j) ∈ S and pij = 0 for all (i, j) ∉ S.3 In such a case, we have MSEα,γ = 0. Hence, we can without loss of generality assume that B ≤ |S|.

While we are not able to derive a closed-form solution to this problem, we can formulate upper and lower bounds on the optimal estimation error, given a bound on the communication cost formulated via B.

Theorem 6.1 (MSE-Optimal Protocols subject to a Communication Budget): Consider problem (17) and fix any B ≤ |S|. Using the sparse communication protocol β, the optimal encoding protocol α has communication complexity

Cα,β=nr¯+(log d+r)B,    (18)

and the mean squared error satisfies the bounds

(1/B − 1)·(R/n) ≤ MSEα,γ ≤ (|S|/B − 1)·(R/n),    (19)

where R = (1/n) ∑_{i=1}^n ∑_{j=1}^d (Xi(j) − μi)² = (1/n) ∑_{i=1}^n ‖Xi − μi·1‖². Let aij = |Xi(j) − μi| and W = ∑_{i,j} aij. If, moreover, B ≤ ∑_{(i,j)∈S} aij / max_{(i,j)∈S} aij (which is true, for instance, in the ultra-low communication regime with B ≤ 1), then

MSEα,γ = W²/(n²B) − R/n.    (20)

Proof. Setting pij = B/|S| for all (i, j) ∈ S leads to a feasible solution of (17). In view of (13), one then has

MSEα,γ = (1/n²)(|S|/B − 1) ∑_{(i,j)∈S} (Xi(j) − μi)² = (|S|/B − 1)·(R/n),

where R = (1/n) ∑_{i=1}^n ∑_{j=1}^d (Xi(j) − μi)² = (1/n) ∑_{i=1}^n ‖Xi − μi·1‖². If we relax the problem by removing the constraints pij ≤ 1, the optimal solution satisfies aij/pij = θ > 0 for all (i, j) ∈ S. At optimality the bound involving B must be tight, which leads to ∑_{(i,j)∈S} aij/θ = B, whence θ = (1/B) ∑_{(i,j)∈S} aij. So, pij = aij·B / ∑_{(i,j)∈S} aij. The optimal MSE therefore satisfies the lower bound

MSEα,γ ≥ (1/n²) ∑_{(i,j)∈S} (1/pij − 1)(Xi(j) − μi)² = W²/(n²B) − R/n,

where W := ∑_{(i,j)∈S} aij ≥ (∑_{(i,j)∈S} aij²)^{1/2} = (nR)^{1/2}. Therefore, MSEα,γ ≥ (1/B − 1)·(R/n). If B ≤ ∑_{(i,j)∈S} aij / max_{(i,j)∈S} aij, then pij ≤ 1 for all (i, j) ∈ S, and hence we have optimality. (Also note that, by the Cauchy–Schwarz inequality, W² ≤ nR|S|.)       
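In the regime where the relaxed solution is feasible, the optimal probabilities from the proof (pij proportional to aij) can be computed directly; the sketch below (our illustration, with hypothetical names) verifies that the resulting MSE matches (20):

```python
import numpy as np

rng = np.random.default_rng(3)

def optimal_probabilities(X, mu, B):
    # Relaxed solution from the proof of Theorem 6.1: p_ij = B * a_ij / sum(a),
    # where a_ij = |X_i(j) - mu_i|; feasible whenever B <= sum(a) / max(a).
    a = np.abs(X - mu[:, None])
    return B * a / a.sum()

n, d, B = 4, 8, 0.5                      # ultra-low communication regime (B <= 1)
X = rng.standard_normal((n, d))
mu = X.mean(axis=1)

p = optimal_probabilities(X, mu, B)
assert np.all(p <= 1) and np.isclose(p.sum(), B)

a = np.abs(X - mu[:, None])
W, R = a.sum(), (a ** 2).sum() / n
mse = ((1 / p - 1) * a ** 2).sum() / n ** 2             # formula (2)
assert np.isclose(mse, W ** 2 / (n ** 2 * B) - R / n)   # equation (20)
```

Entries far from the node center receive proportionally larger probabilities, which is exactly how the optimal protocol spends the communication budget.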

6.2. Trade-Off Curves

To illustrate the trade-offs between communication cost and estimation error (MSE) achievable by the protocols discussed in this section, we present simple numerical examples in Figure 1, on three synthetic data sets with n = 16 and d = 512. We choose an array of values for B, directly bounding the communication cost via (18), and evaluate the MSE (2) for three encoding protocols (we use the sparse communication protocol and averaging decoder). All these protocols have the same communication cost, and only differ in the selection of the parameters pij and μi. In particular, we consider

(i) uniform probabilities $p_{ij} = p > 0$ with average node centers $\mu_i = \frac{1}{d}\sum_{j=1}^d X_i(j)$ (blue dashed line),

(ii) optimal probabilities $p_{ij}$ with average node centers $\mu_i = \frac{1}{d}\sum_{j=1}^d X_i(j)$ (green dotted line), and

(iii) optimal probabilities with optimal node centers, obtained via the alternating minimization approach described above (red solid line).

In order to put a scale on the horizontal axis, we assumed that r = 16. Note that, in practice, one would choose r to be as small as possible without adversely affecting the application utilizing our distributed mean estimation method. The three plots represent Xi with entries drawn in an i.i.d. fashion from Gaussian (N(0,1)), Laplace (L(0,1)), and chi-squared (χ2(2)) distributions, respectively. As we can see, in the case of non-symmetric distributions, it is not necessarily optimal to set the node centers to averages.

As expected, for fixed node centers, optimizing over the probabilities improves performance across the entire trade-off curve; that is, the curve shifts downwards. In the first two plots, based on data from symmetric distributions (Gaussian and Laplace), the average node centers are nearly optimal, which explains why the red solid and green dotted lines coalesce. This can also be established formally. In the third plot, based on the non-symmetric chi-squared data, optimizing over the node centers leads to further improvement, which becomes more pronounced as the communication budget increases. It is possible to generate data where the difference between any pair of the three trade-off curves becomes arbitrarily large.
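The downward shift obtained by optimizing the probabilities can be verified numerically. The following sketch (our own illustration, with synthetic values, not the actual Figure 1 data) compares the uniform choice $p_{ij} = B/(nd)$ against the closed form (20) on chi-squared data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 16, 512
X = rng.chisquare(2, size=(n, d))         # non-symmetric data, as in the third plot
mu = X.mean(axis=1)                       # average node centers
a = np.abs(X - mu[:, None])
W, R = a.sum(), (a**2).sum() / n

for B in (0.25, 0.5, 1.0):                # ultra-low communication regime, B <= 1
    mse_uniform = (n * d / B - 1) * R / n       # p_ij = B/(nd) for all (i, j)
    mse_optimal = W**2 / (n**2 * B) - R / n     # closed form (20)
    assert mse_optimal <= mse_uniform           # Cauchy-Schwarz: W^2 <= nd * nR
    print(f"B={B}: uniform {mse_uniform:.1f}, optimal {mse_optimal:.1f}")
```

The inequality in the loop is exactly the "curve shifts downwards" statement: it follows from $W^2 \le nd \cdot nR$, with equality only when all $a_{ij}$ are equal.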

Finally, the black cross represents the performance of the quantization protocol from Example 4. This approach appears as a single point in the trade-off space because it has no parameters to fine-tune.

7. Further Considerations

In this section we outline further ideas worth consideration. However, we leave a detailed analysis to future work.

7.1. Beyond Binary Encoders

We can generalize the binary encoding protocol (1) to a k-ary protocol. To illustrate the concept without unnecessary notation overload, we present only the ternary (i.e., k = 3) case.

Let the collection of parameters $\{p_{ij}, p_{ij}', \bar{X}_i, \bar{X}_i'\}$ define an encoding protocol $\alpha$ as follows:

$$Y_i(j) = \begin{cases} \bar{X}_i & \text{with probability } p_{ij},\\[1mm] \bar{X}_i' & \text{with probability } p_{ij}',\\[1mm] \dfrac{1}{1-p_{ij}-p_{ij}'}\left(X_i(j) - p_{ij}\bar{X}_i - p_{ij}'\bar{X}_i'\right) & \text{with probability } 1-p_{ij}-p_{ij}'. \end{cases} \qquad (21)$$

It is straightforward to generalize Lemmas 3.1 and 3.2 to this case. We omit the proofs for brevity.

Lemma 7.1 (Unbiasedness): The encoder α defined in (21) is unbiased. That is, Eα[α(Xi)] = Xi for all i. As a result, Y is an unbiased estimate of the true average: Eα[Y] = X.

Lemma 7.2 (Mean Squared Error): Let $\alpha = \alpha(p_{ij}, p_{ij}', \bar{X}_i, \bar{X}_i')$ be the protocol defined in (21). Then

$$\mathrm{MSE}_\alpha(X_1,\dots,X_n) = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^d \left( p_{ij}\big(X_i(j)-\bar{X}_i\big)^2 + p_{ij}'\big(X_i(j)-\bar{X}_i'\big)^2 + \frac{\big(p_{ij}(X_i(j)-\bar{X}_i) + p_{ij}'(X_i(j)-\bar{X}_i')\big)^2}{1-p_{ij}-p_{ij}'} \right).$$

We expect the k-ary protocol to lead to better (lower) MSE bounds, but at the expense of an increase in communication cost. Whether or not the trade-off offered by k > 2 is better than that for the k = 2 case investigated in this paper is an interesting question to consider.
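Lemmas 7.1 and 7.2 are straightforward to check by simulation. The sketch below (our own illustration; the helper name `ternary_encode` is hypothetical) applies the ternary encoder (21) coordinate-wise with constant parameters and compares empirical moments against the stated formulas:

```python
import numpy as np

def ternary_encode(x, c1, c2, p1, p2, rng):
    # Ternary protocol (21): emit center c1 with prob. p1, center c2 with
    # prob. p2, and otherwise the value that makes the encoder unbiased.
    u = rng.random(x.shape)
    rest = (x - p1 * c1 - p2 * c2) / (1.0 - p1 - p2)
    return np.where(u < p1, c1, np.where(u < p1 + p2, c2, rest))

rng = np.random.default_rng(2)
x = rng.normal(size=8)
c1, c2, p1, p2 = -1.0, 1.0, 0.3, 0.2

samples = np.stack([ternary_encode(x, c1, c2, p1, p2, rng)
                    for _ in range(50_000)])

# Lemma 7.1 (unbiasedness): the sample mean approaches x.
assert np.allclose(samples.mean(axis=0), x, atol=0.05)

# Lemma 7.2: the empirical per-coordinate MSE matches the formula.
pred = (p1 * (x - c1) ** 2 + p2 * (x - c2) ** 2
        + (p1 * (x - c1) + p2 * (x - c2)) ** 2 / (1 - p1 - p2))
assert np.allclose(((samples - x) ** 2).mean(axis=0), pred, rtol=0.1)
```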

7.2. Preprocessing via Random Transformations

Following the idea proposed in Suresh et al. [13], one can explore an encoding protocol $\alpha_Q$ which arises as the composition of a random mapping $Q$ applied to $X_i$ for all $i$, followed by the protocol $\alpha$ described in section 3. Letting $Z_i = QX_i$ and $Z = \frac{1}{n}\sum_i Z_i$, we thus have

$$Y_i = \alpha(Z_i), \qquad i = 1,2,\dots,n.$$

With this protocol we associate the decoder $\gamma(Y_1,\dots,Y_n) = \frac{1}{n}\sum_{i=1}^n Q^{-1} Y_i$. Note that

$$\begin{aligned} \mathrm{MSE}_{\alpha,\gamma} &= \mathbf{E}\big[\|\gamma(Y_1,\dots,Y_n) - X\|^2\big] \\ &= \mathbf{E}\big[\|Q^{-1}\gamma(Y_1,\dots,Y_n) - Q^{-1}Z\|^2\big] \\ &= \mathbf{E}\big[\|\gamma(\alpha(Z_1),\dots,\alpha(Z_n)) - Z\|^2\big] \\ &= \mathbf{E}\big[\mathbf{E}\big[\|\gamma(\alpha(Z_1),\dots,\alpha(Z_n)) - Z\|^2 \;\big|\; Q\big]\big], \end{aligned}$$

where, with a slight abuse of notation, $\gamma$ in the last three expressions denotes the plain averaging decoder, and the third equality uses the fact that the rotation $Q$ preserves Euclidean norms.

This approach is motivated by the following observation: a random rotation can be identified by a single random seed, which is easy to communicate to the server without the need to send all the floating point entries defining Q. A random rotation pre-processing step therefore implies only a minor communication overhead. It is important to stress, however, that the use of Q, and of Q⁻¹ in particular, can incur a significant computational overhead. The randomized Hadamard transform used in Suresh et al. [13] requires O(d log d) operations to apply, but computing the inverse of a general d × d matrix can cost O(d³) operations. Still, if the preprocessing step helps to dramatically reduce the MSE, we get an improvement. Note that the inner expectation above is the formula for the MSE of our basic encoding-decoding protocol, applied to the data Zi = QXi instead of {Xi}. The outer expectation is over Q. Hence, we would like to find a mapping Q which tends to transform the data {Xi} into new data {Zi} with better MSE, in expectation.
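A minimal sketch of the composed protocol $\alpha_Q$ follows (our own illustration: `binary_encode` is one unbiased binary encoder consistent with the $(1-p)/p$ MSE factor of Lemma 3.2, and a random orthonormal $Q$ stands in for the structured transforms discussed above):

```python
import numpy as np

def binary_encode(z, center, p, rng):
    # Unbiased binary encoder: with probability p transmit the rescaled
    # value, otherwise transmit the node center.
    keep = rng.random(z.shape) < p
    return np.where(keep, (z - (1 - p) * center) / p, center)

rng = np.random.default_rng(5)
n, d, p = 4, 64, 0.25
X = rng.normal(size=(n, d))
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # stand-in random orthonormal map
Qinv = np.linalg.inv(Q)                        # the potentially O(d^3) step
Z = X @ Q.T                                    # Z_i = Q X_i

estimates = []
for _ in range(2000):
    Y = np.stack([binary_encode(z, z.mean(), p, rng) for z in Z])
    estimates.append(Qinv @ Y.mean(axis=0))    # gamma: average, then apply Q^{-1}

# The composition remains unbiased: averaging many runs recovers the true mean.
assert np.allclose(np.mean(estimates, axis=0), X.mean(axis=0), atol=0.1)
```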

From now on, for simplicity assume the node centers are set to the average, i.e., $\bar{Z}_i = \frac{1}{d}\sum_{j=1}^d Z_i(j)$. For any vector $x \in \mathbb{R}^d$, define

$$\sigma(x) \overset{\text{def}}{=} \sum_{j=1}^d \big(x(j) - \bar{x}\big)^2 = \|x - \bar{x}1\|^2,$$

where $\bar{x} = \frac{1}{d}\sum_j x(j)$ and $1$ is the vector of all ones. Further, for simplicity assume that $p_{ij} = p$ for all $i, j$. Then using Lemma 3.2, we get

$$\mathrm{MSE} = \frac{1-p}{p n^2}\sum_{i=1}^n \mathbf{E}_Q\big[\|Z_i - \bar{Z}_i 1\|^2\big] = \frac{1-p}{p n^2}\sum_{i=1}^n \mathbf{E}_Q\big[\sigma(QX_i)\big].$$

It is interesting to investigate whether choosing Q as a random mapping, rather than the identity (which is the implicit choice made in the previous sections), leads to an improvement in MSE, i.e., whether we can in some well-defined sense obtain an inequality of the type

$$\sum_i \mathbf{E}_Q\big[\sigma(QX_i)\big] \le \sum_i \sigma(X_i).$$

If Q were a tight frame satisfying the uncertainty principle, this could perhaps be realized by computing the Kashin representation of the vectors to be quantized [18]. However, as pointed out above, depending on the tight frame, this might come at a significant additional computational cost, and it is not obvious how much the variance can be reduced.
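The need for structure can be probed empirically. For a fully random (Haar) rotation, a short computation of our own gives $\mathbf{E}_Q[\sigma(Qx)] = \|x\|^2(1 - 1/d)$, so an unstructured rotation yields essentially no reduction on generic data; a sketch (using the fact that $Qx$ is then uniform on the sphere of radius $\|x\|$):

```python
import numpy as np

def sigma(x):
    # sigma(x) = sum_j (x(j) - xbar)^2 = ||x - xbar * 1||^2
    return ((x - x.mean()) ** 2).sum()

rng = np.random.default_rng(3)
d, trials = 128, 500
x = rng.normal(size=d)
norm_x = np.sqrt(x @ x)

vals = []
for _ in range(trials):
    g = rng.normal(size=d)
    y = norm_x * g / np.sqrt(g @ g)   # distributed as Qx for a Haar-random rotation Q
    vals.append(sigma(y))

# Compare the empirical mean with ||x||^2 (1 - 1/d) and with sigma(x) itself.
print(np.mean(vals), norm_x**2 * (1 - 1 / d), sigma(x))
```

For mean-zero Gaussian data all three quantities nearly coincide, consistent with the caution above: any gain must come from a carefully chosen structured mapping, not from randomness alone.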

A rotation pre-processing step does help the quantization protocol proposed in Suresh et al. [13], which arises as a special case of our more general protocol. This is because the quantization protocol is suboptimal within our family of encoders: indeed, as we have shown, with a different choice of the parameters we can obtain results which improve, in theory, on the rotation + quantization approach. This suggests that by combining an appropriately chosen rotation pre-processing step with our optimal encoder, it may be possible to achieve further improvements in MSE for any fixed communication budget. Finding suitable random mappings Q requires a careful study, which we leave to future research.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

JK acknowledges support from Google via a Google European Doctoral Fellowship. Work done while at University of Edinburgh, currently at Google. PR acknowledges support from Amazon, and the EPSRC Grant EP/K02325X/1, Accelerated Coordinate Descent Methods for Big Data Optimization and EPSRC Fellowship EP/N005538/1, Randomized Algorithms for Extreme Convex Optimization.

Footnotes

1. ^See Remark 4.

2. ^The distinction here is because μi can be chosen to be data independent, such as 0, so we don't have to communicate anything (i.e., r̄=0).

3. ^We interpret 0/0 as 0 and do not worry about infeasibility. These issues can be properly formalized by allowing pij to be zero in the encoding protocol and in (17). However, handling this singular situation requires a notational overload which we are not willing to pay.

References

1. The MPI Forum. MPI: A Message Passing Interface Standard. Version 3.1 (2015). Available online at: http://www.mpi-forum.org/

2. Zhang Y, Wainwright MJ, Duchi JC. Communication-efficient algorithms for statistical optimization. In: Advances in Neural Information Processing Systems. Lake Tahoe (2012). p. 1502–10.

3. Zhang Y, Duchi J, Jordan MI, Wainwright MJ. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In: Advances in Neural Information Processing Systems, Vol. 26. Lake Tahoe (2013). p. 2328–36.

4. Garg A, Ma T, Nguyen HL. On communication cost of distributed statistical estimation and dimensionality. In: Advances in Neural Information Processing Systems, Vol. 27. Montreal, QC (2014). p. 2726–34.

5. Braverman M, Garg A, Ma T, Nguyen HL, Woodruff DP. Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In: Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing. Cambridge, MA (2016). p. 1011–20.

6. Richtárik P, Takáč M. Distributed coordinate descent method for learning with big data. J Mach Learn Res. (2016) 17:1–25. doi: 10.1007/s10107-015-0901-6

7. Ma C, Smith V, Jaggi M, Jordan MI, Richtárik P, Takáč M. Adding vs. averaging in distributed primal-dual optimization. In: Proceedings of The 32nd International Conference on Machine Learning. Montreal, QC (2015). p. 1973–82.

8. Ma C, Konečný J, Jaggi M, Smith V, Jordan MI, Richtárik P, et al. Distributed optimization with arbitrary local solvers. Optim Methods Softw. (2017) 32:813–48. doi: 10.1080/10556788.2016.1278445

9. Reddi SJ, Konečný J, Richtárik P, Póczós B, Smola A. AIDE: fast and communication efficient distributed optimization. arXiv [preprint] (2016). arXiv:1608.06879.

10. Konečný J, McMahan HB, Ramage D, Richtárik P. Federated optimization: distributed machine learning for on-device intelligence. arXiv [preprint] (2016). arXiv:1610.02527.

11. McMahan B, Moore E, Ramage D, Hampson S, Arcas BA. Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics. Fort Lauderdale, FL (2017). p. 1273–82.

12. Konečný J, McMahan HB, Yu FX, Richtárik P, Suresh AT, Bacon D. Federated learning: strategies for improving communication efficiency. arXiv [preprint] (2016). arXiv:1610.05492.

13. Suresh AT, Felix XY, Kumar S, McMahan HB. Distributed mean estimation with limited communication. In: International Conference on Machine Learning. Sydney, NSW (2017). p. 3329–37.

14. Alistarh D, Grubic D, Li J, Tomioka R, Vojnovic M. QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding. In: Advances in Neural Information Processing Systems, Vol. 30 (2017). Available online at: http://papers.nips.cc/paper/6768-qsgd-communication-efficient-sgd-via-gradient-quantization-and-encoding.pdf

15. Wen W, Xu C, Yan F, Wu C, Wang Y, Chen Y, et al. TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning. In: Advances in Neural Information Processing Systems, Vol. 30 (2017). Available online at: http://papers.nips.cc/paper/6749-terngrad-ternary-gradients-to-reduce-communication-in-distributed-deep-learning.pdf

16. Yu FXX, Suresh AT, Choromanski KM, Holtmann-Rice DN, Kumar S. Orthogonal random features. In: Advances in Neural Information Processing Systems. Barcelona (2016) p. 1975–83.

17. Wikipedia. Variable-Length Quantity[Online] (2016). Available online at: https://en.wikipedia.org/wiki/Variable-length_quantity (Accessed November 9, 2016).

18. Lyubarskii Y, Vershynin R. Uncertainty principles and vector quantization. IEEE Trans Inform Theor. (2010) 56:3491–501. doi: 10.1109/TIT.2010.2048458

Appendix

A. Additional Proofs

In this section we provide proofs of Lemmas 3.3 and 3.4, describing properties of the encoding protocol α defined in (4). For completeness, we also repeat the statements.

Lemma A.1 (Unbiasedness): The encoder α defined in (4) is unbiased. That is, Eα[α(Xi)] = Xi for all i. As a result, Y is an unbiased estimate of the true average: Eα[Y] = X.

Proof. Since $Y(j) = \frac{1}{n}\sum_{i=1}^n Y_i(j)$ and $X(j) = \frac{1}{n}\sum_{i=1}^n X_i(j)$, it suffices to show that $\mathbf{E}_\alpha[Y_i(j)] = X_i(j)$:

$$\begin{aligned} \mathbf{E}_\alpha[Y_i(j)] &= \frac{1}{|\sigma_k(d)|}\sum_{\sigma\in\sigma_k(d)}\left[\mathbf{1}_{(j\in\sigma)}\left(\frac{d}{k}X_i(j) - \frac{d-k}{k}\mu_i\right) + \mathbf{1}_{(j\notin\sigma)}\mu_i\right] \\ &= \binom{d}{k}^{-1}\left[\binom{d-1}{k-1}\left(\frac{d}{k}X_i(j) - \frac{d-k}{k}\mu_i\right) + \binom{d-1}{k}\mu_i\right] \\ &= \binom{d}{k}^{-1}\left[\binom{d-1}{k-1}\frac{d}{k}X_i(j) + \left(\binom{d-1}{k} - \binom{d-1}{k-1}\frac{d-k}{k}\right)\mu_i\right] \\ &= X_i(j), \end{aligned}$$

and the claim is proved.       

Lemma A.2 (Mean Squared Error): Let α = α(k) be the encoder defined in (4). Then

$$\mathrm{MSE}_\alpha(X_1,\dots,X_n) = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^d \frac{d-k}{k}\big(X_i(j)-\mu_i\big)^2.$$

Proof. Using Lemma 2.3, we have

$$\begin{aligned} \mathrm{MSE}_\alpha(X_1,\dots,X_n) &= \frac{1}{n^2}\sum_{i=1}^n \mathbf{E}_\alpha\big[\|Y_i - X_i\|^2\big] \\ &= \frac{1}{n^2}\sum_{i=1}^n \mathbf{E}_\alpha\left[\sum_{j=1}^d \big(Y_i(j) - X_i(j)\big)^2\right] \\ &= \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^d \mathbf{E}_\alpha\big[\big(Y_i(j) - X_i(j)\big)^2\big]. \end{aligned} \qquad (A1)$$

Further,

$$\begin{aligned} \mathbf{E}_\alpha\big[(Y_i(j) - X_i(j))^2\big] &= \binom{d}{k}^{-1}\sum_{\sigma\in\sigma_k(d)}\left[\mathbf{1}_{(j\in\sigma)}\left(\frac{d}{k}X_i(j) - \frac{d-k}{k}\mu_i - X_i(j)\right)^2 + \mathbf{1}_{(j\notin\sigma)}\big(\mu_i - X_i(j)\big)^2\right] \\ &= \binom{d}{k}^{-1}\left[\binom{d-1}{k-1}\left(\frac{d-k}{k}\right)^2\big(X_i(j)-\mu_i\big)^2 + \binom{d-1}{k}\big(\mu_i - X_i(j)\big)^2\right] \\ &= \frac{d-k}{k}\big(X_i(j)-\mu_i\big)^2. \end{aligned}$$

It suffices to substitute the above into (A1).       
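Both lemmas can also be confirmed by simulation. Below is a NumPy sketch of the subset encoder (our own illustration; the helper name `subset_encode` is hypothetical), which reveals a uniformly random k-subset of coordinates, rescaled for unbiasedness, and sends the center elsewhere:

```python
import numpy as np

def subset_encode(x, mu, k, rng):
    # Encoder (4): reveal a uniformly random k-subset of the d coordinates,
    # rescaled so the encoder is unbiased; transmit the center mu elsewhere.
    d = x.size
    y = np.full(d, float(mu))
    idx = rng.choice(d, size=k, replace=False)
    y[idx] = (d / k) * x[idx] - ((d - k) / k) * mu
    return y

rng = np.random.default_rng(4)
d, k, mu = 32, 8, 0.5
x = rng.normal(size=d)

samples = np.stack([subset_encode(x, mu, k, rng) for _ in range(50_000)])

# Lemma A.1 (unbiasedness): the sample mean approaches x.
assert np.allclose(samples.mean(axis=0), x, atol=0.15)

# Lemma A.2: the empirical per-coordinate MSE matches (d-k)/k * (x - mu)^2.
emp_mse = ((samples - x) ** 2).mean(axis=0)
assert np.allclose(emp_mse, (d - k) / k * (x - mu) ** 2, rtol=0.1)
```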

Keywords: communication efficiency, distributed mean estimation, accuracy-communication tradeoff, gradient compression, quantization

Citation: Konečný J and Richtárik P (2018) Randomized Distributed Mean Estimation: Accuracy vs. Communication. Front. Appl. Math. Stat. 4:62. doi: 10.3389/fams.2018.00062

Received: 11 October 2018; Accepted: 28 November 2018;
Published: 18 December 2018.

Edited by:

Yiming Ying, University at Albany, United States

Reviewed by:

Shiyin Qin, Beihang University, China
Shao-Bo Lin, Wenzhou University, China

Copyright © 2018 Konečný and Richtárik. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jakub Konečný, konkey@google.com