Randomized Distributed Mean Estimation: Accuracy vs Communication

We consider the problem of estimating the arithmetic average of a finite collection of real vectors stored in a distributed fashion across several compute nodes subject to a communication budget constraint. Our analysis does not rely on any statistical assumptions about the source of the vectors. This problem arises as a subproblem in many applications, including reduce-all operations within algorithms for distributed and federated optimization and learning. We propose a flexible family of randomized algorithms exploring the trade-off between expected communication cost and estimation error. Our family contains the full-communication and zero-error method on one extreme, and an $\epsilon$-bit communication and ${\cal O}\left(1/(\epsilon n)\right)$ error method on the opposite extreme. In the special case where we communicate, in expectation, a single bit per coordinate of each vector, we improve upon existing results by obtaining $\mathcal{O}(r/n)$ error, where $r$ is the number of bits used to represent a floating point value.


Introduction
We address the problem of estimating the arithmetic mean of $n$ vectors, $X_1, \ldots, X_n \in \mathbb{R}^d$, stored in a distributed fashion across $n$ compute nodes, subject to a constraint on the communication cost.
In particular, we consider a star network topology with a single server at the centre and $n$ nodes connected to it. All nodes send an encoded (possibly via a lossy randomized transformation) version of their vector to the server, after which the server performs a decoding operation to estimate the true mean. The purpose of the encoding operation is to compress the vector so as to save on communication cost, which is typically the bottleneck in practical applications.
To better illustrate the setup, consider the naive approach in which all nodes send the vectors without performing any encoding operation, followed by the application of a simple averaging decoder by the server. This results in zero estimation error at the expense of maximum communication cost of ndr bits, where r is the number of bits needed to communicate a single floating point entry/coordinate of X i .
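As a point of reference, the naive scheme is easy to state in code. The following Python sketch (the function names are ours) computes the exact average and the corresponding communication cost in bits:

```python
import numpy as np

def naive_mean(vectors):
    """Every node sends its raw vector; the server simply averages.

    Zero estimation error, but maximal communication cost."""
    return np.stack(vectors).mean(axis=0)

def naive_cost_bits(n, d, r=32):
    """Total bits sent: n nodes, d coordinates each, r bits per coordinate."""
    return n * d * r
```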

Background and Contributions
The distributed mean estimation problem was recently studied in a statistical framework where it is assumed that the vectors $X_i$ are independent and identically distributed samples from some underlying distribution. In such a setup, the goal is to estimate the true mean of the underlying distribution [14,13,2,1]. These works formulate lower and upper bounds on the communication cost needed to achieve the minimax optimal estimation error.
In contrast, we do not make any statistical assumptions on the source of the vectors, and study the trade-off between expected communication cost and the mean squared error of the estimate. Arguably, this setup is a more robust and accurate model of the distributed mean estimation problems arising as subproblems in applications such as reduce-all operations within algorithms for distributed and federated optimization [9,6,5,8,3]. In these applications, the averaging operations need to be done repeatedly throughout the iterations of a master learning/optimization algorithm, and the vectors $\{X_i\}$ correspond to updates to a global model/variable. The vectors evolve throughout the iterative process in a complicated pattern, typically approaching zero as the master algorithm converges to optimality. Hence, their statistical properties change over time, which means that fixed statistical assumptions will not be satisfied in practice.
For instance, when training a deep neural network model in a distributed environment, the vector $X_i$ corresponds to a stochastic gradient based on a minibatch of data stored on node $i$. In this setup we do not have any useful prior statistical knowledge about the high-dimensional vectors to be aggregated. It has recently been observed that when communication cost is high, which is typically the case for commodity clusters, and even more so in the federated optimization framework, it can be very useful to sacrifice estimation accuracy in favor of reduced communication [7,4].
In this paper we propose a parametric family of randomized methods for estimating the mean $X$, with parameters being a set of probabilities $p_{ij}$ for $i = 1, \ldots, n$ and $j = 1, 2, \ldots, d$, and node centers $\mu_i \in \mathbb{R}^d$ for $i = 1, 2, \ldots, n$. The exact meaning of these parameters is explained in Section 3. By varying the probabilities, at one extreme we recover the exact method described above, enjoying zero estimation error at the expense of full communication cost. At the opposite extreme are methods with arbitrarily small expected communication cost, achieved at the expense of an exploding estimation error. Practical methods lie on the continuum between these two extremes, depending on the specific requirements of the application at hand. Suresh et al. [10] propose a method combining a pre-processing step via a random structured rotation, followed by randomized binary quantization. Their quantization protocol arises as a suboptimal special case of our parametric family of methods.
To illustrate our results, consider the special case in which we choose to communicate a single bit per element of $X_i$ only. We then obtain an $\mathcal{O}\left(\frac{r}{n}R\right)$ bound on the mean squared error, where $r$ is the number of bits used to represent a floating point value, and $R = \frac{1}{n}\sum_{i=1}^n \|X_i - \mu_i \mathbf{1}\|^2$, with $\mu_i \in \mathbb{R}$ being the average of the elements of $X_i$, and $\mathbf{1}$ the all-ones vector in $\mathbb{R}^d$ (see Example 7 in Section 5). Note that this bound improves upon the performance of the method of [10] in two aspects. First, the bound is independent of $d$, improving on their logarithmic dependence. Second, due to a preprocessing rotation step, their method requires $\mathcal{O}(d \log d)$ time to be implemented on each node, while our method is linear in $d$. This and other special cases are summarized in Table 1 in Section 5.
While the above already improves upon the state of the art, the improved results are in fact obtained for a suboptimal choice of the parameters of our method (constant probabilities p ij , and node centers fixed to the mean µ i ). One can decrease the MSE further by optimizing over the probabilities and/or node centers (see Section 6). However, apart from a very low communication cost regime in which we have a closed form expression for the optimal probabilities, the problem needs to be solved numerically, and hence we do not have expressions for how much improvement is possible. We illustrate the effect of fixed and optimal probabilities on the trade-off between communication cost and MSE experimentally on a few selected datasets in Section 6 (see Figure 1).

Outline
In Section 2 we formalize the concepts of encoding and decoding protocols. In Section 3 we describe a parametric family of randomized (and unbiased) encoding protocols and give a simple formula for the mean squared error. Subsequently, in Section 4 we formalize the notion of communication cost, and describe several communication protocols which are optimal under different circumstances. We give simple instantiations of our protocol in Section 5, illustrating the trade-off between communication cost and accuracy. In Section 6 we address the question of the optimal choice of the parameters of our protocol. Finally, in Section 7 we comment on possible extensions we leave to future work.

Three Protocols
In this work we consider (randomized) encoding protocols $\alpha$, communication protocols $\beta$, and decoding protocols $\gamma$, using which the averaging is performed inexactly as follows. Node $i$ computes a (possibly stochastic) estimate of $X_i$ using the encoding protocol, which we denote $Y_i = \alpha(X_i) \in \mathbb{R}^d$, and sends it to the server using communication protocol $\beta$. By $\beta(Y_i)$ we denote the number of bits that need to be transferred under $\beta$. The server then estimates $X$ by applying the decoding protocol $\gamma$ to the received estimates: $Y = \gamma(Y_1, \ldots, Y_n)$. The objective of this work is to study the trade-off between the (expected) number of bits that need to be communicated and the accuracy of $Y$ as an estimate of $X$.
In this work we focus on encoders which are unbiased, in the following sense.
Definition 2.1 (Unbiased and Independent Encoder). We say that encoder $\alpha$ is unbiased if $\mathbb{E}_\alpha[\alpha(X_i)] = X_i$ for all $i = 1, 2, \ldots, n$. We say that it is independent if $\alpha(X_i)$ is independent of $\alpha(X_j)$ for all $i \neq j$.
Example 1 (Identity Encoder). A trivial example of an encoding protocol is the identity function: $\alpha(X_i) = X_i$. It is both unbiased and independent. However, this encoder does not lead to any savings in communication.
We now formalize the notion of accuracy of estimating $X$ via $Y$. Since $Y$ can be random, the notion of accuracy will naturally be probabilistic.

Definition 2.2 (Estimation Error / Mean Squared Error). The mean squared error of protocol $(\alpha, \gamma)$ is the quantity
$$MSE_{\alpha,\gamma} = \mathbb{E}\left[\|Y - X\|^2\right] = \mathbb{E}\left[\left\|\gamma(\alpha(X_1), \ldots, \alpha(X_n)) - X\right\|^2\right].$$

To illustrate the above concept, we now give a few examples.

Example 2 (Averaging Decoder). If $\gamma$ is the averaging function, i.e., $\gamma(Y_1, \ldots, Y_n) = \frac{1}{n}\sum_{i=1}^n Y_i$, then, combined with the identity encoder, we have $Y = X$ and hence zero MSE.

The next example generalizes the identity encoder and averaging decoder.

Example 3 (Linear Encoder and Inverse Linear Decoder). Let $A : \mathbb{R}^d \to \mathbb{R}^d$ be linear and invertible. We can set $\alpha(X_i) = AX_i$ and $\gamma(Y_1, \ldots, Y_n) = A^{-1}\left(\frac{1}{n}\sum_{i=1}^n Y_i\right)$. If $A$ is random, then $\alpha$ and $\gamma$ are random (e.g., a structured random rotation, see [12]). Note that $Y = A^{-1}\left(\frac{1}{n}\sum_{i=1}^n AX_i\right) = X$, and hence the MSE of $(\alpha, \gamma)$ is zero.
We shall now prove a simple result for unbiased and independent encoders used in subsequent sections.

Lemma 2.3 (Unbiased and Independent Encoder + Averaging Decoder). If the encoder $\alpha$ is unbiased and independent, and $\gamma$ is the averaging decoder, then
$$MSE_{\alpha,\gamma} = \frac{1}{n^2}\,\mathbb{E}\left[\Big\|\sum_{i=1}^n \left(\alpha(X_i) - X_i\right)\Big\|^2\right] = \frac{1}{n^2}\sum_{i=1}^n \mathbb{E}\left[\|\alpha(X_i) - X_i\|^2\right],$$
where the cross terms vanish: by independence their expectations factorize, and by unbiasedness each factor has zero mean.
One may wish to define the encoder as a composition of two or more separate encoders: $\alpha(X_i) = \alpha_2(\alpha_1(X_i))$. See [10] for an example where $\alpha_1$ is a random rotation and $\alpha_2$ is binary quantization.

A Family of Randomized Encoding Protocols
Let $X_1, \ldots, X_n \in \mathbb{R}^d$ be given. We shall write $X_i = (X_i(1), \ldots, X_i(d))$ to denote the entries of vector $X_i$. In addition, with each $i$ we also associate a parameter $\mu_i \in \mathbb{R}$. We refer to $\mu_i$ as the center of the data at node $i$, or simply as the node center. For now, we assume these parameters are fixed; we shall later comment on how to choose them optimally.
We define the support of $\alpha$ on node $i$ to be the set $S_i = \{j : Y_i(j) \neq \mu_i\}$. We now define two parametric families of randomized encoding protocols: the first results in $S_i$ of random size, the second in $S_i$ of a fixed size.

Encoding Protocol with Variable-size Support
With each pair $(i, j)$ we associate a parameter $0 < p_{ij} \leq 1$, representing a probability. The collection of parameters $\{p_{ij}, \mu_i\}$ defines an encoding protocol $\alpha$ as follows:
$$Y_i(j) = \begin{cases} \dfrac{X_i(j) - \mu_i}{p_{ij}} + \mu_i & \text{with probability } p_{ij}, \\ \mu_i & \text{with probability } 1 - p_{ij}. \end{cases} \quad (1)$$
Remark 1. Enforcing the probabilities to be positive, as opposed to nonnegative, vastly simplifies the notation in what follows. However, it is more natural to allow $p_{ij}$ to be zero, in which case we have $Y_i(j) = \mu_i$ with probability 1. This raises issues such as a potential lack of unbiasedness, which can be resolved, but only at the expense of a larger-than-reasonable notational overload.
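A minimal Python sketch of this encoder (our own illustration; it assumes the form just described, in which coordinate $j$ is kept and rescaled by $1/p_{ij}$ around the node center with probability $p_{ij}$, and replaced by $\mu_i$ otherwise):

```python
import numpy as np

def encode_variable(x, mu, p, rng):
    """Variable-size-support encoder: keep coordinate j with probability p[j].

    Kept coordinates are rescaled around the node center mu so that the
    output is an unbiased estimate of x."""
    keep = rng.random(x.shape) < p
    return np.where(keep, (x - mu) / p + mu, mu)
```

Averaging many independent encodings recovers $x$, illustrating unbiasedness.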
In the rest of this section, let $\gamma$ be the averaging decoder (Example 2). Since $\gamma$ is fixed and deterministic, we shall for simplicity write $MSE_\alpha$ instead of $MSE_{\alpha,\gamma}$. We now prove two lemmas describing properties of the encoding protocol $\alpha$: Lemma 3.1 states that the protocol yields an unbiased estimate of the average $X$, and Lemma 3.2 gives the expected mean squared error of the estimate.

Lemma 3.1 (Unbiasedness). The encoder $\alpha$ defined in (1) is unbiased; consequently, $\mathbb{E}[Y] = X$.

Proof. For any $i, j$ we have $\mathbb{E}[Y_i(j)] = p_{ij}\left(\frac{X_i(j) - \mu_i}{p_{ij}} + \mu_i\right) + (1 - p_{ij})\mu_i = X_i(j)$, and the claim is proved.

Lemma 3.2 (Mean Squared Error). Let $\alpha$ be the encoder defined in (1). Then
$$MSE_\alpha = \frac{1}{n^2}\sum_{i,j}\left(\frac{1}{p_{ij}} - 1\right)\left(X_i(j) - \mu_i\right)^2. \quad (2)$$

Proof. Using Lemma 2.3, we have
$$MSE_\alpha = \frac{1}{n^2}\sum_{i=1}^n \mathbb{E}\left[\|\alpha(X_i) - X_i\|^2\right] = \frac{1}{n^2}\sum_{i=1}^n \sum_{j=1}^d \mathbb{E}\left[\left(Y_i(j) - X_i(j)\right)^2\right]. \quad (3)$$
For any $i, j$ we further have
$$\mathbb{E}\left[\left(Y_i(j) - X_i(j)\right)^2\right] = p_{ij}\left(\frac{1}{p_{ij}} - 1\right)^2\left(X_i(j) - \mu_i\right)^2 + (1 - p_{ij})\left(X_i(j) - \mu_i\right)^2 = \left(\frac{1}{p_{ij}} - 1\right)\left(X_i(j) - \mu_i\right)^2.$$
It suffices to substitute the above into (3).
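The closed-form error formula can be checked numerically. The following Monte Carlo sketch (ours; it again assumes the rescaling form of encoder (1) described above) compares the empirical error of the averaged estimate with the closed-form expression:

```python
import numpy as np

def predicted_mse(X, mu, p):
    """Lemma 3.2: MSE = (1/n^2) * sum_ij (1/p_ij - 1) * (X_i(j) - mu_i)^2."""
    n = X.shape[0]
    return np.sum((1.0 / p - 1.0) * (X - mu[:, None]) ** 2) / n**2

def empirical_mse(X, mu, p, trials, rng):
    """Monte Carlo estimate of E||Y - X_bar||^2 under the averaging decoder."""
    xbar = X.mean(axis=0)
    total = 0.0
    for _ in range(trials):
        keep = rng.random(X.shape) < p
        Y = np.where(keep, (X - mu[:, None]) / p + mu[:, None], mu[:, None])
        total += np.sum((Y.mean(axis=0) - xbar) ** 2)
    return total / trials
```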

Encoding Protocol with Fixed-size Support
Here we propose an alternative encoding protocol, one with a deterministic support size. As we shall see later, this results in a deterministic communication cost. Let $\sigma_k(d)$ denote the set of all subsets of $\{1, 2, \ldots, d\}$ containing $k$ elements. The protocol $\alpha$, with a single integer parameter $k$, works as follows: first, each node $i$ samples $D_i \in \sigma_k(d)$ uniformly at random, and then sets
$$Y_i(j) = \begin{cases} \dfrac{d}{k}\left(X_i(j) - \mu_i\right) + \mu_i & \text{if } j \in D_i, \\ \mu_i & \text{otherwise.} \end{cases} \quad (4)$$
Note that by design, the size of the support of $Y_i$ is always $k$, i.e., $|S_i| = k$. Naturally, we can expect this protocol to perform practically the same as protocol (1) with $p_{ij} = k/d$ for all $i, j$; Lemma 3.4 indeed suggests this is the case. While this protocol admits a more efficient communication protocol (as we shall see in Section 4), protocol (1) enjoys a larger parameter space, ultimately leading to better MSE. We comment on this trade-off in subsequent sections.
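A sketch of the fixed-size-support encoder (ours), under the assumption that the $k$ kept coordinates are rescaled by $d/k$ around the node center, which is the scaling that makes the estimate unbiased:

```python
import numpy as np

def encode_fixed(x, mu, k, rng):
    """Fixed-size-support encoder: keep exactly k coordinates, chosen
    uniformly at random, rescaled by d/k around the node center mu."""
    d = x.size
    D = rng.choice(d, size=k, replace=False)
    y = np.full(d, float(mu))
    y[D] = (x[D] - mu) * (d / k) + mu
    return y
```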
As for the previous protocol, we prove basic properties of this one as well. The proofs are similar to those of Lemmas 3.1 and 3.2, and we defer them to Appendix A.

Communication Protocols
Having defined the encoding protocols $\alpha$, we need to specify the way the encoded vectors $Y_i = \alpha(X_i)$, for $i = 1, 2, \ldots, n$, are communicated to the server. Given a specific communication protocol $\beta$, we write $\beta(Y_i)$ to denote the (expected) number of bits that are communicated by node $i$ to the server. Note that $\beta(Y_i)$ can be a random variable.

Given $Y_i$, a good communication protocol is able to encode $Y_i = \alpha(X_i)$ using only a few bits. Let $r$ denote the number of bits used to represent a floating point number, and let $\tilde{r}$ be the number of bits used to represent $\mu_i$.
In the rest of this section we describe several communication protocols β and calculate their communication cost.

Naive
Represent $Y_i = \alpha(X_i)$ as $d$ floating point numbers. Then for all encoding protocols $\alpha$ and all $i$ we have $\beta(\alpha(X_i)) = dr$, whence $C_{\alpha,\beta} = \sum_{i=1}^n \beta(\alpha(X_i)) = ndr$.

Varying-length
We use a single variable-length field for every element of the vector $Y_i$. The first bit decides whether the value equals $\mu_i$ or not: if it does, the field ends; if not, the next $r$ bits represent the value of $Y_i(j)$. In addition, we need to communicate $\mu_i$ itself, which takes $\tilde{r}$ bits. We thus have
$$\beta(\alpha(X_i)) = \tilde{r} + \sum_{j=1}^d \left(1 + r \cdot 1_{(Y_i(j) \neq \mu_i)}\right),$$
where $1_e$ is the indicator function of event $e$. The expected number of bits communicated is given by
$$C_{\alpha,\beta} = \mathbb{E}\left[\sum_{i=1}^n \beta(\alpha(X_i))\right] = n\tilde{r} + nd + r\sum_{i,j} p_{ij}.$$
In the special case when $p_{ij} = p > 0$ for all $i, j$, we get $C_{\alpha,\beta} = n(\tilde{r} + d + pdr)$.
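The expected per-node cost under this protocol can be computed with a small helper (the function name and defaults are ours), assuming $r$-bit values and $\tilde{r}$-bit node centers:

```python
def varlen_expected_bits(d, p, r=32, r_mu=32):
    """Expected bits one node sends under the varying-length protocol:
    r_mu bits for the node center, 1 flag bit per coordinate, and r value
    bits for each coordinate transmitted (probability p each)."""
    return r_mu + d * (1 + p * r)
```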

Sparse Communication Protocol for Encoder (1)
We can represent $Y_i$ as a sparse vector: a list of pairs $(j, Y_i(j))$ for which $Y_i(j) \neq \mu_i$. The number of bits needed to represent each pair is $\lceil \log d \rceil + r$. Any index not found in the list will be interpreted by the server as having value $\mu_i$. Additionally, we have to communicate the value of $\mu_i$ to the server, which takes $\tilde{r}$ bits. We assume that $d$, the size of the vectors, is known to the server. Hence, $\beta(\alpha(X_i)) = \tilde{r} + |S_i|\left(\lceil \log d \rceil + r\right)$. Summing over $i$ and taking expectations, the communication cost is given by
$$C_{\alpha,\beta} = n\tilde{r} + \left(\lceil \log d \rceil + r\right)\sum_{i,j} p_{ij}. \quad (8)$$
In the special case when $p_{ij} = p > 0$ for all $i, j$, we get $C_{\alpha,\beta} = n\tilde{r} + \left(\lceil \log d \rceil + r\right)ndp$.
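An analogous helper (names and defaults ours) for the sparse protocol's expected per-node cost:

```python
import math

def sparse_expected_bits(d, p, r=32, r_mu=32):
    """Expected bits one node sends under the sparse protocol: r_mu bits for
    the node center, plus (ceil(log2 d) + r) bits per transmitted pair,
    of which there are d*p in expectation."""
    return r_mu + (math.ceil(math.log2(d)) + r) * d * p
```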
Remark 2. A practical improvement upon this could be to (without loss of generality) assume that the pairs $(j, Y_i(j))$ are ordered by $j$, i.e., that we have $\{(j_s, Y_i(j_s))\}_{s=1}^k$ for some $k$ and $j_1 < j_2 < \cdots < j_k$. Further, denote $j_0 = 0$. We can then use a variant of variable-length quantity encoding [11] to represent the set $\{(j_s - j_{s-1}, Y_i(j_s))\}_{s=1}^k$. With careful design one can hope to reduce the $\lceil \log d \rceil$ factor in the average case. Nevertheless, this does not improve the worst-case analysis we focus on in this paper, and hence we do not delve deeper into this.

Sparse Communication Protocol for Encoder (4)
We now describe a sparse communication protocol compatible with any encoder whose support can be reconstructed from a random seed. This includes the fixed-size protocol defined in (4), but also protocol (1) with uniform probabilities $p_{ij}$. Compressing the subset selection into a random seed lets us avoid the $\lceil \log d \rceil$ factor in (8).

In particular, we can represent $Y_i$ as a sparse vector containing the list of the values for which $Y_i(j) \neq \mu_i$, ordered by $j$. Additionally, we need to communicate the value $\mu_i$ (using $\tilde{r}$ bits) and a random seed (using $\tilde{r}_s$ bits), from which the server can reconstruct the indices $j$ corresponding to the communicated values. Note that for any fixed $k$ defining protocol (4), we have $|S_i| = k$. Hence, the communication cost is deterministic:
$$C_{\alpha,\beta} = n(\tilde{r} + \tilde{r}_s) + nkr.$$
In the case of the variable-size-support encoding protocol (1) with $p_{ij} = p > 0$ for all $i, j$, the sparse communication protocol described here yields an expected communication cost of
$$C_{\alpha,\beta} = n(\tilde{r} + \tilde{r}_s) + ndpr. \quad (10)$$
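The seed-based protocol can be sketched as an encode/decode pair; this is our own illustration, using NumPy's seeded generator as a stand-in for the shared randomness:

```python
import numpy as np

def encode_with_seed(x, mu, k, seed):
    """Fixed-size encoder (4): the index set is derived from a seed, so only
    (seed, mu, k values) need to be transmitted -- no explicit indices."""
    rng = np.random.default_rng(seed)
    D = np.sort(rng.choice(x.size, size=k, replace=False))
    return seed, mu, (x[D] - mu) * (x.size / k) + mu

def decode_with_seed(d, k, seed, mu, values):
    """Server side: regenerate the same index set from the seed and
    scatter the received values; all other coordinates default to mu."""
    rng = np.random.default_rng(seed)
    D = np.sort(rng.choice(d, size=k, replace=False))
    y = np.full(d, float(mu))
    y[D] = values
    return y
```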

Discussion
In the above, we have presented several communication protocols of varying complexity. However, it is not possible to claim that any single one of them is the most efficient: which communication protocol is best depends on the specifics of the encoding protocol used. Consider the extreme case of encoding protocol (1) with $p_{ij} = 1$ for all $i, j$. Here the naive communication protocol is clearly the most efficient, as all other protocols need to send additional information. However, in the interesting case of a small communication budget, the sparse communication protocols are the most efficient. Therefore, in the following sections, we focus primarily on optimizing performance using these protocols.

Examples
In this section, we highlight several instantiations of our protocols, recovering existing techniques and formulating novel ones. We comment on the resulting trade-offs between communication cost and estimation error.

Binary Quantization
We start by recovering an existing method, which turns every element of the vectors X i into a particular binary representation.
Example 4. If we set the parameters of protocol (1) as $\mu_i = X_i^{\min} := \min_j X_i(j)$ and $p_{ij} = \frac{X_i(j) - X_i^{\min}}{\Delta_i}$, where $\Delta_i = X_i^{\max} - X_i^{\min}$ (assume, for simplicity, that $\Delta_i \neq 0$), we exactly recover the quantization algorithm proposed in [10]: each element of $Y_i$ is equal to either $X_i^{\max}$ or $X_i^{\min}$. Using formula (2) for the encoding protocol $\alpha$, we get
$$MSE_\alpha = \frac{1}{n^2}\sum_{i,j}\left(X_i(j) - X_i^{\min}\right)\left(X_i^{\max} - X_i(j)\right). \quad (11)$$
This exactly recovers the MSE bound established in [10, Theorem 1]. Using the binary communication protocol yields a communication cost of 1 bit per element of $X_i$, plus two real-valued scalars per node.
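Under this parameter choice, protocol (1) reduces to rounding each entry to one of the two extreme values; a sketch (ours), assuming $\Delta_i \neq 0$:

```python
import numpy as np

def binary_quantize(x, rng):
    """Binary quantization as a special case of protocol (1):
    mu_i = min(x), p_j = (x_j - min) / (max - min), so each entry
    becomes either min(x) or max(x), unbiasedly (assumes max > min)."""
    lo, hi = x.min(), x.max()
    p = (x - lo) / (hi - lo)
    return np.where(rng.random(x.size) < p, hi, lo)
```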

Sparse Communication Protocols
Now we move to comparing the communication costs and estimation error of various instantiations of the encoding protocols, utilizing the deterministic sparse communication protocol and uniform probabilities.
For the remainder of this section, let us only consider instantiations of our protocol where $p_{ij} = p > 0$ for all $i, j$, and assume that the node centers are set to the vector averages, i.e., $\mu_i = \frac{1}{d}\sum_{j=1}^d X_i(j)$. For simplicity, we also assume that $|S| = nd$, which is what we can in general expect without any prior knowledge about the vectors $X_i$.
The properties of the following examples follow from Equations (2) and (10). When considering the communication costs of the protocols, keep in mind that the trivial benchmark is $C_{\alpha,\beta} = ndr$, which is achieved by simply sending the vectors unmodified. A communication cost of $C_{\alpha,\beta} = nd$ corresponds to the interesting special case in which we use (on average) one bit per element of each $X_i$.
Example 5 (Full communication). If we choose $p = 1$, the encoding protocol is lossless, which ensures $MSE_\alpha = 0$, at a communication cost of $C_{\alpha,\beta} = n(\tilde{r}_s + \tilde{r}) + ndr$. Note that in this case, we could get rid of the $n(\tilde{r}_s + \tilde{r})$ term by using the naive communication protocol.
This protocol order-wise matches the MSE of the method in Remark 3. However, as long as $d > 2^r$, this protocol attains this error with a smaller communication cost; in particular, in expectation less than a single bit per element of $X_i$. Finally, note that the factor $R$ is always smaller than or equal to the factor $\frac{1}{n}\sum_{i=1}^n \|X_i\|^2$ appearing in Remark 3.
Example 7 (1-bit per element communication). If we choose $p = 1/r$, we get $C_{\alpha,\beta} = n(\tilde{r}_s + \tilde{r}) + nd$ and $MSE_\alpha = \frac{r-1}{n}R$. This protocol communicates in expectation a single bit per element of $X_i$ (plus an additional $\tilde{r}_s + \tilde{r}$ bits per client), while attaining an $\mathcal{O}(r/n)$ bound on the MSE. To the best of our knowledge, this is the first method to attain this bound without additional assumptions.

Example 8 (Exactly 1-bit per element communication). If instead we choose $p = \frac{d - \tilde{r}_s - \tilde{r}}{dr}$ (assuming $d > \tilde{r}_s + \tilde{r}$), we get $C_{\alpha,\beta} = nd$ exactly. This alternative protocol attains in expectation exactly a single bit per element of $X_i$, with a (slightly more complicated) $\mathcal{O}(r/n)$ bound on the MSE.
Example 9 (Below 1-bit communication). If we choose $p = 1/d$, we get $C_{\alpha,\beta} = n(\tilde{r}_s + \tilde{r}) + nr$ and $MSE_\alpha = \frac{d-1}{n}R$. This protocol attains the MSE of the protocol in Example 4, while at the same time communicating on average significantly less than a single bit per element of $X_i$.
We summarize these examples in Table 1.
Using the deterministic sparse protocol, there is an obvious lower bound on the communication cost: $n(\tilde{r}_s + \tilde{r})$. We can bypass this threshold by using the sparse protocol with a data-independent choice of $\mu_i$, such as $\mu_i = 0$, setting $\tilde{r} = 0$. By setting $p = \epsilon / \left(nd\left(\lceil \log d \rceil + r\right)\right)$, we get an arbitrarily small expected communication cost of $C_{\alpha,\beta} = \epsilon$, at the price of an exploding estimation error, $MSE_{\alpha,\gamma} = \mathcal{O}\left(1/(\epsilon n)\right)$.
Note that all of the above examples have random communication costs; what we present is the expected communication cost of each protocol. All the above examples can be modified to use the encoding protocol with fixed-size support defined in (4), with the parameter $k$ set to $pd$ for the corresponding $p$ used above, obtaining the same results. The only practical difference is that the communication cost becomes deterministic for each node, which can be useful in certain applications.

Optimal Encoders
Here we consider $(\alpha, \beta, \gamma)$, where $\alpha = \alpha(p_{ij}, \mu_i)$ is the encoder defined in (1), $\beta$ is the associated sparse communication protocol, and $\gamma$ is the averaging decoder. Recall from Lemma 3.2 and (8) that the mean squared error and communication cost are given by:
$$MSE_{\alpha,\gamma} = \frac{1}{n^2}\sum_{i,j}\left(\frac{1}{p_{ij}} - 1\right)\left(X_i(j) - \mu_i\right)^2, \qquad C_{\alpha,\beta} = n\tilde{r} + \left(\lceil \log d \rceil + r\right)\sum_{i,j} p_{ij}. \quad (13)$$
Having these closed-form formulae as functions of the parameters $\{p_{ij}, \mu_i\}$, we can now ask questions such as: 1. Given a communication budget, which encoding protocol has the smallest mean squared error?
2. Given a bound on the mean squared error, which encoder suffers the minimal communication cost?
Let us now address the first question; the second can be handled in a similar fashion. In particular, consider the optimization problem
$$\min_{\{p_{ij}, \mu_i\}} \; \sum_{i,j}\left(\frac{1}{p_{ij}} - 1\right)\left(X_i(j) - \mu_i\right)^2 \quad \text{subject to} \quad \sum_{i,j} p_{ij} \leq B, \quad 0 < p_{ij} \leq 1, \; i = 1, \ldots, n; \; j = 1, \ldots, d, \quad (14)$$
where $B > 0$ represents a bound on the part of the total communication cost in (13) which depends on the choice of the probabilities $p_{ij}$. Note that while the constraints in (14) are convex (they are linear), the objective is not jointly convex in $\{p_{ij}, \mu_i\}$. However, the objective is convex in $\{p_{ij}\}$ and convex in $\{\mu_i\}$. This suggests a simple alternating minimization heuristic for solving the problem: 1. Fix the probabilities and optimize over the node centers. 2. Fix the node centers and optimize over the probabilities.
These two steps are repeated until a suitable convergence criterion is reached. Note that the first step has a closed-form solution. Indeed, the problem decomposes across the node centers into $n$ univariate unconstrained convex quadratic minimization problems, and the solution is given by
$$\mu_i = \frac{\sum_j \left(1/p_{ij} - 1\right) X_i(j)}{\sum_j \left(1/p_{ij} - 1\right)}.$$
The second step does not have a closed-form solution in general; we provide an analysis of this step in Section 6.1.
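The closed-form center update is a weighted average; a sketch (ours), assuming the weights $w_{ij} = 1/p_{ij} - 1$ just derived:

```python
import numpy as np

def optimal_centers(X, p):
    """Step 1 of the alternating scheme: for fixed probabilities, the best
    node center is a weighted average of the node's entries with weights
    w_ij = 1/p_ij - 1 (assumes p_ij < 1 for at least one j per node)."""
    w = 1.0 / p - 1.0
    return (w * X).sum(axis=1) / w.sum(axis=1)
```

With uniform probabilities the weights are constant, so the optimal center reduces to the plain average used in the previous section.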
Remark 4. Note that the upper bound i,j (X i (j) − µ i ) 2 /p ij on the objective is jointly convex in {p ij , µ i }. We may therefore instead optimize this upper bound by a suitable convex optimization algorithm.
Remark 5. An alternative and more practical model than (14) is to choose per-node budgets $B_1, \ldots, B_n$ and require $\sum_j p_{ij} \leq B_i$ for all $i$. The problem becomes separable across the nodes, and can therefore be solved by each node independently. If we set $B = \sum_i B_i$, the optimal solution obtained this way leads to an MSE which is bounded below by the MSE obtained through (14).

Optimal Probabilities for Fixed Node Centers
Let the node centers $\mu_i$ be fixed, and write $a_{ij} = (X_i(j) - \mu_i)^2$. Problem (14) (or, equivalently, step 2 of the alternating minimization method described above) then takes the form
$$\min_{\{p_{ij}\}} \; \sum_{(i,j) \in S} \frac{a_{ij}}{p_{ij}} \quad \text{subject to} \quad \sum_{(i,j) \in S} p_{ij} \leq B, \quad 0 < p_{ij} \leq 1, \quad (17)$$
where $S = \{(i,j) : X_i(j) \neq \mu_i\}$; note that the objective differs from that of (14) only by the additive constant $-\sum_{(i,j)\in S} a_{ij}$. Notice that as long as $B \geq |S|$, the optimal solution is to set $p_{ij} = 1$ for all $(i, j) \in S$ and $p_{ij} = 0$ for all $(i, j) \notin S$. In such a case, we have $MSE_{\alpha,\gamma} = 0$. Hence, we can without loss of generality assume that $B \leq |S|$.
While we are not able to derive a closed-form solution to this problem in general, we can formulate upper and lower bounds on the optimal estimation error, given a bound on the communication cost formulated via $B$.

Theorem 6.1 (MSE-Optimal Protocols subject to a Communication Budget). Consider problem (17) and fix any $B \leq |S|$. Using the sparse communication protocol $\beta$, the optimal encoding protocol $\alpha$ has communication complexity
$$C_{\alpha,\beta} = n\tilde{r} + \left(\lceil \log d \rceil + r\right)B, \quad (18)$$
and the mean squared error satisfies the bounds
$$\frac{1}{n^2}\left(\frac{W^2}{B} - nR\right) \leq MSE_{\alpha,\gamma} \leq \frac{R}{n}\left(\frac{|S|}{B} - 1\right), \qquad \text{where } W = \sum_{(i,j) \in S} \sqrt{a_{ij}}, \quad R = \frac{1}{n}\sum_{(i,j) \in S} a_{ij}.$$
If, moreover, $B \leq \sum_{(i,j) \in S} \sqrt{a_{ij}} / \max_{(i,j) \in S} \sqrt{a_{ij}}$ (which is true, for instance, in the ultra-low communication regime with $B \leq 1$), then
$$MSE_{\alpha,\gamma} = \frac{1}{n^2}\left(\frac{W^2}{B} - nR\right).$$

Proof. Setting $p_{ij} = B/|S|$ for all $(i,j) \in S$ leads to a feasible solution of (17). In view of (13), one then has $MSE_{\alpha,\gamma} = \frac{1}{n^2}\left(\frac{|S|}{B} - 1\right)\sum_{(i,j)\in S} a_{ij} = \frac{R}{n}\left(\frac{|S|}{B} - 1\right)$, which gives the upper bound. If we relax the problem by removing the constraints $p_{ij} \leq 1$, the optimal solution satisfies $\sqrt{a_{ij}}/p_{ij} = \theta > 0$ for all $(i,j) \in S$. At optimality the budget constraint must be tight, which leads to $\sum_{(i,j)\in S} \sqrt{a_{ij}}/\theta = B$, whence $\theta = \frac{1}{B}\sum_{(i,j)\in S}\sqrt{a_{ij}}$. So, $p_{ij} = \sqrt{a_{ij}}\, B / \sum_{(i,j)\in S}\sqrt{a_{ij}}$, and the optimal MSE satisfies the lower bound
$$MSE_{\alpha,\gamma} \geq \frac{1}{n^2}\left(\sum_{(i,j)\in S} \frac{a_{ij}}{p_{ij}} - \sum_{(i,j)\in S} a_{ij}\right) = \frac{1}{n^2}\left(\frac{W^2}{B} - nR\right).$$
Finally, if $B \leq \sum_{(i,j)\in S} \sqrt{a_{ij}} / \max_{(i,j)\in S} \sqrt{a_{ij}}$, then $p_{ij} \leq 1$ for all $(i, j) \in S$, so the relaxed solution is feasible, and hence we have optimality. (Also note that, by the Cauchy-Schwarz inequality, $W^2 \leq nR|S|$.)
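In the low-budget regime where the constraints $p_{ij} \leq 1$ are inactive, the relaxed optimum (probabilities proportional to $\sqrt{a_{ij}}$) can be computed directly; a sketch under that assumption (names ours):

```python
import numpy as np

def optimal_probs_low_budget(a, B):
    """Closed-form optimum of the relaxed problem: p_ij proportional to
    sqrt(a_ij), scaled so that the budget constraint is tight.
    Valid whenever B <= sum(sqrt(a)) / max(sqrt(a))."""
    s = np.sqrt(a)
    return B * s / s.sum()

def relaxed_objective(a, p):
    """The communication-weighted error term sum_ij a_ij / p_ij."""
    return (a / p).sum()
```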

Trade-off Curves
To illustrate the trade-offs between communication cost and estimation error (MSE) achievable by the protocols discussed in this section, we present simple numerical examples in Figure 1, on three synthetic data sets with $n = 16$ and $d = 512$. We choose an array of values for $B$, directly bounding the communication cost via (18), and evaluate the MSE (2) for three encoding protocols (we use the sparse communication protocol and the averaging decoder). All these protocols have the same communication cost and differ only in the selection of the parameters $p_{ij}$ and $\mu_i$. In particular, we consider (i) uniform probabilities $p_{ij} = p > 0$ with average node centers $\mu_i = \frac{1}{d}\sum_{j=1}^d X_i(j)$ (blue dashed line), (ii) optimal probabilities $p_{ij}$ with average node centers $\mu_i = \frac{1}{d}\sum_{j=1}^d X_i(j)$ (green dotted line), and (iii) optimal probabilities with optimal node centers, obtained via the alternating minimization approach described above (red solid line).
In order to put a scale on the horizontal axis, we assumed that $r = 16$. Note that, in practice, one would choose $r$ to be as small as possible without adversely affecting the application utilizing our distributed mean estimation method. The three plots represent $X_i$ with entries drawn in an i.i.d. fashion from Gaussian ($N(0,1)$), Laplace ($L(0,1)$) and chi-squared ($\chi^2(2)$) distributions, respectively. As we can see, in the case of non-symmetric distributions, it is not necessarily optimal to set the node centers to averages.

Figure 1: Trade-off curves between communication cost and estimation error (MSE) for four protocols. The plots correspond to vectors $X_i$ drawn in an i.i.d. fashion from Gaussian, Laplace and $\chi^2$ distributions, from left to right. The black cross marks the performance of binary quantization (Example 4).
As expected, for fixed node centers, optimizing over the probabilities results in improved performance across the entire trade-off curve; that is, the curve shifts downwards. In the first two plots, based on data from symmetric distributions (Gaussian and Laplace), the average node centers are nearly optimal, which explains why the red solid and green dotted lines coincide; this can also be established formally. In the third plot, based on the non-symmetric chi-squared data, optimizing over the node centers leads to a further improvement, which gets more pronounced with an increased communication budget. It is possible to generate data where the difference between any pair of the three trade-off curves becomes arbitrarily large.
Finally, the black cross represents the performance of the quantization protocol from Example 4. This approach appears as a single point in the trade-off space due to the lack of any tunable parameters.

Further Considerations
In this section we outline further ideas worth consideration. However, we leave a detailed analysis to future work.

Beyond Binary Encoders
We can generalize the binary encoding protocol (1) to a $k$-ary protocol. To illustrate the concept without unnecessary notational overload, we present only the ternary (i.e., $k = 3$) case.
Let the collection of parameters $\{p_{ij}, p'_{ij}, \tilde{X}_i, \tilde{X}'_i\}$ define an encoding protocol $\alpha$ analogous to (1), in which $Y_i(j)$ is set to $\tilde{X}_i(j)$ with probability $p_{ij}$, to $\tilde{X}'_i(j)$ with probability $p'_{ij}$, and to the node center otherwise, with the values scaled so that the estimate remains unbiased. It is straightforward to generalize Lemmas 3.1 and 3.2 to this case; we omit the proofs for brevity.

One may also preprocess the vectors by an invertible linear transformation $Q$ before encoding (here $\bar{x} = \frac{1}{d}\sum_j x(j)$ denotes the average of the entries of a vector $x$, and $\mathbf{1}$ the vector of all ones); for simplicity, assume that $p_{ij} = p$ for all $i, j$, in which case Lemma 3.2 yields the corresponding MSE. It is interesting to investigate whether choosing $Q$ as a random rotation, rather than the identity (which is the implicit choice in the previous sections), leads to an improvement, i.e., whether we can, in some well-defined sense, obtain an inequality comparing the MSE with and without the rotation. This is the case for the quantization protocol proposed in [10], which arises as a special case of our more general protocol. This is because the quantization protocol is suboptimal within our family of encoders: indeed, as we have shown, with a different choice of the parameters we can obtain results which improve, in theory, on the rotation + quantization approach. This suggests that by combining an appropriately chosen rotation preprocessing step with our optimal encoder, it may be possible to achieve further improvements in MSE for any fixed communication budget. Finding suitable random rotations $Q$ requires a careful study, which we leave to future research.