Semi-Stochastic Gradient Descent Methods

In this paper we study the problem of minimizing the average of a large number ($n$) of smooth convex loss functions. We propose a new method, S2GD (Semi-Stochastic Gradient Descent), which runs for one or several epochs in each of which a single full gradient and a random number of stochastic gradients is computed, following a geometric law. The total work needed for the method to output an $\varepsilon$-accurate solution in expectation, measured in the number of passes over data, or equivalently, in units equivalent to the computation of a single gradient of the loss, is $O((\kappa/n)\log(1/\varepsilon))$, where $\kappa$ is the condition number. This is achieved by running the method for $O(\log(1/\varepsilon))$ epochs, with a single full gradient evaluation and $O(\kappa)$ stochastic gradient evaluations in each. The SVRG method of Johnson and Zhang arises as a special case. If our method is limited to a single epoch only, it needs to evaluate at most $O((\kappa/\varepsilon)\log(1/\varepsilon))$ stochastic gradients. In contrast, SVRG requires $O(\kappa/\varepsilon^2)$ stochastic gradients. To illustrate our theoretical results: S2GD needs only the workload equivalent to about 2.1 full gradient evaluations to find a $10^{-6}$-accurate solution for a problem with $n=10^9$ and $\kappa=10^3$.


Introduction
Many problems in data science (e.g., machine learning, optimization and statistics) can be cast as loss minimization problems of the form
$$\min_{x \in \mathbb{R}^d} f(x), \qquad \text{where} \qquad f(x) = \frac{1}{n}\sum_{i=1}^n f_i(x). \qquad (1)$$
Here $d$ typically denotes the number of features / coordinates, $n$ the number of examples, and $f_i(x)$ is the loss incurred on example $i$. That is, we are seeking to find a predictor $x \in \mathbb{R}^d$ minimizing the average loss $f(x)$. In big data applications, $n$ is typically very large; in particular, $n \gg d$.

Motivation
Let us now briefly review two basic approaches to solving problem (1).
1. Gradient Descent. Given $x_k \in \mathbb{R}^d$, the gradient descent (GD) method sets
$$x_{k+1} = x_k - h \nabla f(x_k),$$
where $h$ is a stepsize parameter and $\nabla f(x_k)$ is the gradient of $f$ at $x_k$. We will refer to $\nabla f(x)$ by the name full gradient. In order to compute $\nabla f(x_k)$, we need to compute the gradients of all $n$ functions. Since $n$ is big, it is prohibitive to do this at every iteration.
2. Stochastic Gradient Descent (SGD). Unlike gradient descent, stochastic gradient descent [7,17] instead picks an index $i$ uniformly at random and updates
$$x_{k+1} = x_k - h \nabla f_i(x_k).$$
Note that this strategy drastically reduces the amount of work that needs to be done in each iteration (by a factor of $n$). Since $\mathbf{E}[\nabla f_i(x)] = \nabla f(x)$, we have an unbiased estimator of the full gradient. Hence, the gradients of the component functions $f_1, \dots, f_n$ will be referred to as stochastic gradients. A practical issue with SGD is that consecutive stochastic gradients may vary a lot and may even point in opposite directions. This slows down the performance of SGD. On balance, however, SGD is preferable to GD in applications where low accuracy solutions are sufficient. In such cases usually only a small number of passes through the data (i.e., work equivalent to a small number of full gradient evaluations) is needed to find an acceptable $x$. For this reason, SGD is extremely popular in fields such as machine learning.
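To make the contrast concrete, here is a minimal sketch (Python/NumPy) of one GD step versus one SGD step for a finite-sum objective; the quadratic losses and the stepsize are illustrative assumptions, not choices made in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_fi(x, i):
    # gradient of the i-th loss f_i(x) = (a_i^T x - b_i)^2 / 2
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # full gradient: average of n component gradients (O(n) work)
    return A.T @ (A @ x - b) / n

h = 0.01
x = np.zeros(d)

# One GD step: touches all n examples.
x_gd = x - h * full_grad(x)

# One SGD step: touches a single random example (n times cheaper).
i = rng.integers(n)
x_sgd = x - h * grad_fi(x, i)

# The stochastic gradient is an unbiased estimator of the full gradient:
est = np.mean([grad_fi(x, j) for j in range(n)], axis=0)
print(np.allclose(est, full_grad(x)))  # averaging over all i recovers the full gradient
```

The unbiasedness check at the end is exactly the identity that makes SGD a legitimate (if noisy) descent direction on average.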
In order to improve upon GD, one needs to reduce the cost of computing a gradient. In order to improve upon SGD, one has to reduce the variance of the stochastic gradients. In this paper we propose and analyze a Semi-Stochastic Gradient Descent (S2GD) method. Our method combines GD and SGD steps and reaps the benefits of both algorithms: it inherits the stability and speed of GD and at the same time retains the work-efficiency of SGD.
The method in [9] is known as Random Coordinate Descent for Composite functions (RCDC), and can be either applied directly to (1)-in which case a single iteration requires O(n) work for a dense problem, and O(d log(1/ε)) iterations in total-or to a dual version of (1), which requires O(d) work per iteration and O((n + κ) log(1/ε)) iterations in total. Application of a coordinate descent method to a dual formulation of (1) is generally referred to as Stochastic Dual Coordinate Ascent (SDCA) [2]. The algorithm in [14] exhibits this duality, and the method in [15] extends the primal-dual framework to the parallel / mini-batch setting. Parallel and distributed stochastic coordinate descent methods were studied in [10,1,11].
Stochastic Average Gradient (SAG) [12] is one of the first SGD-type methods, other than coordinate descent methods, which were shown to exhibit linear convergence. The method of Johnson and Zhang [3], called Stochastic Variance Reduced Gradient (SVRG), arises as a special case in our setting for a suboptimal choice of a single parameter of our method. The Epoch Mixed Gradient Descent (EMGD) method [16] is similar in spirit to SVRG, but achieves a quadratic dependence on the condition number instead of a linear dependence, as is the case with SAG, SVRG and with our method.
For classical work on semi-stochastic gradient descent methods we refer the reader to the papers of Murti and Fuchs [5,6].¹

Outline
We start in Section 2 by describing two algorithms: S2GD, which we analyze, and S2GD+, which we do not analyze, but which exhibits superior performance in practice. We then move to summarizing some of the main contributions of this paper in Section 3. Section 4 is devoted to establishing expectation and high probability complexity results for S2GD in the case of a strongly convex loss. The results are generic in that the parameters of the method are set arbitrarily. Hence, in Section 5 we study the problem of choosing the parameters optimally, with the goal of minimizing the total workload (# of processed examples) sufficient to produce a result of sufficient accuracy. In Section 6 we establish high probability complexity bounds for S2GD applied to a non-strongly convex loss function. Finally, in Section 7 we perform very encouraging numerical experiments on real and artificial problem instances. A brief conclusion can be found in Section 8.

Semi-Stochastic Gradient Descent
In this section we describe two novel algorithms: S2GD and S2GD+. We analyze the former only. The latter, however, has superior convergence properties in our experiments.
We assume throughout the paper that the functions $f_i$ are convex and $L$-smooth.

Assumption 1. The functions $f_1, \dots, f_n$ have Lipschitz continuous gradients with constant $L > 0$ (in other words, they are $L$-smooth). That is, for all $x, z \in \mathbb{R}^d$ and all $i = 1, 2, \dots, n$,
$$\|\nabla f_i(x) - \nabla f_i(z)\| \le L\|x - z\|.$$
(This implies that the gradient of $f$ is Lipschitz with constant $L$, and hence $f$ satisfies the same inequality.) In one part of the paper (Section 4) we also make the following additional assumption:

Assumption 2. The average loss $f$ is $\mu$-strongly convex for some $\mu > 0$; that is, for all $x, z \in \mathbb{R}^d$,
$$f(z) \ge f(x) + \nabla f(x)^\top (z - x) + \frac{\mu}{2}\|z - x\|^2.$$

¹We thank Zaid Harchaoui who pointed us to these papers a few days before we posted our work to arXiv.

S2GD
The method has an outer loop, indexed by an epoch counter $j$, and an inner loop, indexed by $t$. In each epoch $j$, the method first computes $g_j = \nabla f(x_j)$, the full gradient of $f$ at $x_j$. Subsequently, the method performs a random number $t_j \in \{1, \dots, m\}$ of inner steps, with $t_j$ following the geometric law
$$\mathbf{P}(t_j = t) = \frac{(1 - \nu h)^{m-t}}{\beta}, \qquad \beta = \sum_{t=1}^{m} (1 - \nu h)^{m-t},$$
with only two stochastic gradients computed in each step. In step $t$, an index $i$ is picked uniformly at random, the stochastic gradient $\nabla f_i(x_j)$ is subtracted from $g_j$, and $\nabla f_i(y_{j,t-1})$ is added to $g_j$, yielding the update
$$y_{j,t} = y_{j,t-1} - h\left(g_j + \nabla f_i(y_{j,t-1}) - \nabla f_i(x_j)\right),$$
which ensures that one has
$$\mathbf{E}\left[g_j + \nabla f_i(y_{j,t-1}) - \nabla f_i(x_j)\right] = \nabla f(y_{j,t-1}),$$
where the expectation is with respect to the random variable $i$. Hence, the algorithm is stochastic gradient descent, albeit executed in a nonstandard way (compared to the traditional implementation described in the introduction).
Note that, for all $j$, the expected number of iterations of the inner loop is
$$\mathbf{E}(t_j) = \xi(m, h) := \frac{1}{\beta}\sum_{t=1}^{m} t\,(1 - \nu h)^{m-t}, \qquad \beta = \sum_{t=1}^{m} (1 - \nu h)^{m-t}. \qquad (5)$$
Also note that $\xi \in [\tfrac{m+1}{2}, m)$, with the lower bound attained for $\nu = 0$, and the upper bound approached as $\nu h \to 1$.
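To make the loop structure concrete, here is a minimal runnable sketch of the S2GD outer/inner loop (Python/NumPy) on a hypothetical regularized least-squares instance; the data, stepsize and parameter values are illustrative choices, not the tuned ones analyzed later:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 5
A = rng.standard_normal((n, d))
b = A @ np.ones(d)          # planted solution
lam = 0.1                   # L2 regularizer; a lower bound on mu

def grad_fi(x, i):
    # stochastic gradient of f_i(x) = (a_i^T x - b_i)^2 / 2 + (lam/2)||x||^2
    return A[i] * (A[i] @ x - b[i]) + lam * x

def full_grad(x):
    return A.T @ (A @ x - b) / n + lam * x

def s2gd(x, epochs, m, h, nu):
    for _ in range(epochs):
        g = full_grad(x)                          # one full gradient per epoch
        # draw the inner-loop length t_j from the geometric law
        w = (1.0 - nu * h) ** (m - np.arange(1, m + 1))
        t_j = rng.choice(np.arange(1, m + 1), p=w / w.sum())
        y = x.copy()
        for _ in range(t_j):
            i = rng.integers(n)
            # two stochastic gradients per inner step
            y = y - h * (g + grad_fi(y, i) - grad_fi(x, i))
        x = y
    return x

x = s2gd(np.zeros(d), epochs=50, m=2 * n, h=0.02, nu=lam)
print(np.linalg.norm(full_grad(x)))  # close to zero: the iterates converge
```

Note how each inner step combines the stale full gradient $g_j$ with a fresh pair of stochastic gradients, so the update direction is unbiased while its variance shrinks as the iterates approach the solution.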

S2GD+
We also implement Algorithm 2, which we call S2GD+. In our experiments, the performance of this method is superior to all methods we tested, including S2GD. However, we do not analyze the complexity of this method and leave this as an open problem.

Algorithm 2 S2GD+
parameters: α ≥ 1 (e.g., α = 1)
1. Run SGD for a single pass over the data (i.e., n iterations); output x
2. Starting from x_0 = x, run a version of S2GD in which t_j = αn for all j

In brief, S2GD+ starts by running SGD for 1 epoch (1 pass over the data) and then switches to a variant of S2GD in which the number of inner iterations, t_j, is not random, but fixed to be n or a small multiple of n.
The motivation for this method is the following. It is common knowledge that SGD is able to progress much more in one pass over the data than GD (where this would correspond to a single gradient step). However, the very first step of S2GD is the computation of the full gradient of f. Hence, by starting with a single pass over the data using SGD and then switching to S2GD, we obtain a method that is superior in practice.
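A matching sketch of S2GD+ under the same illustrative assumptions (Python/NumPy, hypothetical quadratic losses); note the plain SGD warm-start pass followed by S2GD with a deterministic inner-loop length t_j = αn:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 5
A = rng.standard_normal((n, d))
b = A @ np.ones(d)
lam = 0.1

def grad_fi(x, i):
    return A[i] * (A[i] @ x - b[i]) + lam * x

def full_grad(x):
    return A.T @ (A @ x - b) / n + lam * x

def s2gd_plus(x, epochs, h, alpha=1):
    # Step 1: one pass of plain SGD over the data (n iterations).
    for _ in range(n):
        x = x - h * grad_fi(x, rng.integers(n))
    # Step 2: S2GD with the inner-loop length fixed to t_j = alpha * n.
    for _ in range(epochs):
        g = full_grad(x)
        y = x.copy()
        for _ in range(int(alpha * n)):
            i = rng.integers(n)
            y = y - h * (g + grad_fi(y, i) - grad_fi(x, i))
        x = y
    return x

x = s2gd_plus(np.zeros(d), epochs=30, h=0.02)
print(np.linalg.norm(full_grad(x)))
```

The warm start matters because the SGD pass costs the same as the single full-gradient computation that would otherwise open the first S2GD epoch, yet typically makes far more progress from a cold start.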

Summary of Results
In this section we summarize some of the main results and contributions of this work.

1.
Complexity for strongly convex f. If $f$ is strongly convex, S2GD needs
$$\mathcal{O}\left((n + \kappa)\log(1/\varepsilon)\right) \qquad (6)$$
units of work (measured as the total number of evaluations of the stochastic gradient, accounting for the full gradient evaluations as well) to output an $\varepsilon$-approximate solution (in expectation or in high probability), where $\kappa = L/\mu$ is the condition number. This is achieved by running S2GD with stepsize $h = \mathcal{O}(1/L)$, $j = \mathcal{O}(\log(1/\varepsilon))$ epochs (this is also equal to the number of full gradient evaluations) and $m = \mathcal{O}(\kappa)$ (this is also roughly equal to the number of stochastic gradient evaluations in a single epoch). The complexity results are stated in detail in Sections 4 and 5 (see Theorems 4, 5 and 6; see also (26) and (27)).

2.
Comparison with existing results. This complexity result (6) matches the best-known results obtained for strongly convex losses in recent work such as [12], [3] and [16]. Our treatment is most closely related to [3], and contains their method (SVRG) as a special case. However, our complexity results have better constants, which has a discernible effect in practice. In Table 1 we compare our results in the strongly convex case with existing results for other algorithms.
We should note that the rate of convergence of Nesterov's algorithm [8] is a deterministic result. The EMGD and S2GD results hold with high probability; the remaining results hold in expectation. Complexity results for stochastic coordinate descent methods are also typically analyzed in the high probability regime [9].

Table 1: Comparison of performance of selected methods suitable for solving (1). The complexity/work is measured in the number of stochastic gradient evaluations needed to find an ε-solution.

3.
Complexity for convex f . If f is not strongly convex, then we propose that S2GD be applied to a perturbed version of the problem, with strong convexity constant µ = O(L/ε). An ε-accurate solution of the original problem is recovered with arbitrarily high probability (see Theorem 8 in Section 6). The total work in this case is We believe this is the first time a reduced-variance version of SGD, other than a coordinate decent method, was analyzed for non-strongly convex loss functions with complexity better than O(1/ε 2 ). Rates of the O(1/ε) variety have already been proved for algorithms such as SAG [13] and MixedGrad [4].
4. Optimal parameters. We derive formulas for optimal parameters of the method which (approximately) minimize the total workload, measured in the number of stochastic gradients computed (counting a single full gradient evaluation as n evaluations of the stochastic gradient). In particular, we show that the method should be run for O(log(1/ε)) epochs, with stepsize h = O(1/L) and m = O(κ). No such results were derived for SVRG in [3].
5. One epoch. In the case when S2GD is run for 1 epoch only, effectively limiting the number of full gradient evaluations to 1, we show that S2GD with ν = µ needs only
$$\mathcal{O}\left(n + (\kappa/\varepsilon)\log(1/\varepsilon)\right)$$
units of work (see Table 2). This compares favorably with the optimal complexity in the ν = 0 case (which reduces to SVRG), where the work needed is $\mathcal{O}(n + \kappa/\varepsilon^2)$.

6. Special cases. GD and SVRG arise as special cases of S2GD, for m = 1 and ν = 0, respectively.

Table 2: Summary of the main complexity results for S2GD and its special cases (columns: Method, Parameters, Complexity).

7. Low memory requirements. Note that SDCA and SAG, unlike SVRG and S2GD, need to store all gradients $\nabla f_i$ (or dual variables) throughout the iterative process. While this may not be a problem for a modestly sized optimization task, this requirement makes such methods less suitable for problems with very large n.
8. Experiments with S2GD+. We do not analyze the S2GD+ method. In our experiments, however, it performs vastly superior to all other methods we tested, including GD, SGD, SAG and S2GD. S2GD alone is better than both GD and SGD if a highly accurate solution is required. The performance of S2GD and SAG is roughly comparable; in our experiments S2GD turned out to have an edge.

Complexity Analysis: Strongly Convex Loss
For the purpose of the analysis, let
$$\mathcal{F}_{j,t} := \sigma\left(x_j;\, i_{j,1}, \dots, i_{j,t}\right) \qquad (7)$$
be the σ-algebra generated by the relevant history of S2GD, namely the iterate $x_j$ and the random indices $i_{j,1}, \dots, i_{j,t}$ chosen in the first $t$ inner steps of epoch $j$. We first isolate an auxiliary result.
Lemma 3. Consider the S2GD algorithm. For any fixed epoch number $j$, the following identity holds:
$$\mathbf{E}\left[f(x_{j+1})\right] = \frac{1}{\beta}\sum_{t=1}^{m} (1 - \nu h)^{m-t}\,\mathbf{E}\left[f(y_{j,t})\right], \qquad \beta = \sum_{t=1}^{m} (1 - \nu h)^{m-t}.$$
Proof. By the tower property of expectations and the definition of $x_{j+1}$ in the algorithm ($x_{j+1} = y_{j,t_j}$, where $t_j$ follows the geometric law), we obtain
$$\mathbf{E}\left[f(x_{j+1})\right] = \mathbf{E}\left[\mathbf{E}\left[f(y_{j,t_j}) \mid t_j\right]\right] = \sum_{t=1}^{m} \mathbf{P}(t_j = t)\,\mathbf{E}\left[f(y_{j,t})\right] = \frac{1}{\beta}\sum_{t=1}^{m} (1 - \nu h)^{m-t}\,\mathbf{E}\left[f(y_{j,t})\right].$$
We now state and prove the main result of this section.
Theorem 4. Let Assumptions 1 and 2 be satisfied. Consider the S2GD algorithm applied to solving problem (1). Choose $0 \le \nu \le \mu$, $0 < h < \frac{1}{2L}$, and let $m$ be sufficiently large so that
$$c := \frac{(1 - \nu h)^m}{\beta \mu h (1 - 2Lh)} + \frac{2(L - \mu)h}{1 - 2Lh} < 1, \qquad \beta = \sum_{t=1}^{m} (1 - \nu h)^{m-t}.$$
Then we have the following convergence in expectation:
$$\mathbf{E}\left[f(x_j) - f(x_*)\right] \le c^j\left(f(x_0) - f(x_*)\right). \qquad (10)$$
Before we proceed to proving the theorem, note that in the special case with $\nu = 0$, we recover the result of Johnson and Zhang [3] (with a minor improvement in the second term of $c$, where $L$ is replaced by $L - \mu$), namely
$$c = \frac{1}{\mu h (1 - 2Lh) m} + \frac{2(L - \mu)h}{1 - 2Lh}. \qquad (11)$$
If we set $\nu = \mu$, then $c$ can be written in the form
$$c = \frac{(1 - \mu h)^m}{\left(1 - (1 - \mu h)^m\right)(1 - 2Lh)} + \frac{2(L - \mu)h}{1 - 2Lh}. \qquad (12)$$
Clearly, the latter $c$ is a major improvement on the former one. We shall elaborate on this further later.
Proof. It is well known [8, Theorem 2.1.5] that, since the functions $f_i$ are $L$-smooth, they necessarily satisfy the following inequality:
$$\|\nabla f_i(x) - \nabla f_i(x_*)\|^2 \le 2L\left(f_i(x) - f_i(x_*) - \nabla f_i(x_*)^\top(x - x_*)\right).$$
By averaging these inequalities over $i = 1, \dots, n$, and using $\nabla f(x_*) = 0$, we get
$$\frac{1}{n}\sum_{i=1}^{n}\|\nabla f_i(x) - \nabla f_i(x_*)\|^2 \le 2L\left(f(x) - f(x_*)\right). \qquad (15)$$
Let $G_{j,t} := g_j + \nabla f_i(y_{j,t-1}) - \nabla f_i(x_j)$ be the direction of update at the $j$-th iteration of the outer loop and the $t$-th iteration of the inner loop. Taking expectation with respect to $i$, conditioned on the σ-algebra $\mathcal{F}_{j,t-1}$ (7), we obtain
$$\mathbf{E}\left[\|G_{j,t}\|^2\right] \le 4L\left(f(y_{j,t-1}) - f(x_*)\right) + 4(L - \mu)\left(f(x_j) - f(x_*)\right). \qquad (16)$$
Above we have used the bound $\|a + b\|^2 \le 2\|a\|^2 + 2\|b\|^2$ and the fact that $\mathbf{E}[G_{j,t}] = \nabla f(y_{j,t-1})$. We now study the expected distance to the optimal solution (a standard approach in the analysis of gradient methods). Expanding $\|y_{j,t} - x_*\|^2 = \|y_{j,t-1} - x_*\|^2 - 2h\,G_{j,t}^\top(y_{j,t-1} - x_*) + h^2\|G_{j,t}\|^2$, rearranging the terms, taking expectation over the σ-algebra $\mathcal{F}_{j,t-1}$, and using (16) together with strong convexity, we get the following inequality:
$$\mathbf{E}\left[\|y_{j,t} - x_*\|^2\right] \le (1 - \nu h)\|y_{j,t-1} - x_*\|^2 - 2h(1 - 2Lh)\left(f(y_{j,t-1}) - f(x_*)\right) + 4(L - \mu)h^2\left(f(x_j) - f(x_*)\right). \qquad (17)$$
Finally, we can analyze what happens after one iteration of the outer loop of S2GD, i.e., between two computations of the full gradient. By summing up the inequalities (17) for $t = 1, \dots, m$, with inequality $t$ multiplied by $(1 - \nu h)^{m-t}$, comparing the resulting left- and right-hand sides, and applying Lemma 3, we finally conclude that
$$\mathbf{E}\left[f(x_{j+1}) - f(x_*)\right] \le c\left(f(x_j) - f(x_*)\right).$$
Since we have established linear convergence of expected values, a high probability result can be obtained in a straightforward way using the Markov inequality.
Theorem 5. Consider the setting of Theorem 4. Then, for any $0 < \rho < 1$, $0 < \varepsilon < 1$ and
$$j \ge \frac{\log(1/(\varepsilon\rho))}{\log(1/c)},$$
we have
$$\mathbf{P}\left(\frac{f(x_j) - f(x_*)}{f(x_0) - f(x_*)} \le \varepsilon\right) \ge 1 - \rho.$$
Proof. This follows directly from the Markov inequality and Theorem 4:
$$\mathbf{P}\left(f(x_j) - f(x_*) > \varepsilon\left(f(x_0) - f(x_*)\right)\right) \le \frac{\mathbf{E}\left[f(x_j) - f(x_*)\right]}{\varepsilon\left(f(x_0) - f(x_*)\right)} \le \frac{c^j}{\varepsilon} \le \rho.$$
This result will also be useful when treating the non-strongly convex case.

Optimal Choice of Parameters
The goal of this section is to provide insight into the choice of parameters of S2GD; that is, the number of epochs (equivalently, full gradient evaluations) j, the maximal number of steps in each epoch m, and the stepsize h. The remaining parameters (L, µ, n) are inherent in the problem and we will hence treat them in this section as given.
In particular, ideally we wish to find parameters $j$, $m$ and $h$ solving the following optimization problem:
$$\min_{j,m,h}\; \tilde{W}(j,m,h) := j\left(n + 2\xi(m,h)\right) \qquad (20)$$
subject to
$$\mathbf{E}\left[f(x_j) - f(x_*)\right] \le \varepsilon\left(f(x_0) - f(x_*)\right). \qquad (21)$$
Note that $\tilde{W}(j,m,h)$ is the expected work, measured by the number of stochastic gradient evaluations, performed by S2GD when running for $j$ epochs. Indeed, the evaluation of $g_j$ is equivalent to $n$ stochastic gradient evaluations, and each epoch further computes on average $2\xi(m,h)$ stochastic gradients (see (5)). Since $\frac{m+1}{2} \le \xi(m,h) < m$, we can simplify and solve the problem with $\xi$ set to the conservative upper estimate $\xi = m$.
In view of (10), accuracy constraint (21) is satisfied if $c$ (which depends on $h$ and $m$) and $j$ satisfy
$$c^j \le \varepsilon. \qquad (22)$$
We therefore instead consider the parameter fine-tuning problem
$$\min_{j,m,h}\; W(j,m,h) := j(n + 2m) \quad \text{subject to} \quad c^j \le \varepsilon. \qquad (23)$$
In the following we (approximately) solve this problem in two steps. First, we fix $j$ and find (nearly) optimal $h = h(j)$ and $m = m(j)$. The problem reduces to minimizing $m$ subject to $c \le \varepsilon^{1/j}$ by fine-tuning $h$. While in the $\nu = 0$ case it is possible to obtain a closed-form solution, this is not possible for $\nu > 0$. However, it is still possible to obtain a formula for $h(j)$ leading to $m(j)$ which depends on $\varepsilon$ in the correct way. We then plug the formula for $m(j)$ obtained this way back into (23), and study the quantity $W(j, m(j), h(j)) = j(n + 2m(j))$ as a function of $j$.
Theorem 6 (Choice of parameters). Fix the number of epochs $j \ge 1$, error tolerance $0 < \varepsilon < 1$, and let $\Delta = \varepsilon^{1/j}$. If we run S2GD with the stepsize $h = h(j)$ defined in (24) and with $m \ge m(j)$ as defined in (25), then constraint (22) is satisfied. In particular, if we choose $j^* = \lceil\log(1/\varepsilon)\rceil$, then $\frac{1}{\Delta} \le \exp(1)$, and hence $m(j^*) = \mathcal{O}(\kappa)$, leading to the workload $\mathcal{O}\left((n + \kappa)\log(1/\varepsilon)\right)$.

Proof. We only need to show that $c \le \Delta$, where $c$ is given by (12) for $\nu = \mu$ and by (11) for $\nu = 0$. The stepsize $h$ is chosen so that the second term of $c$, which we denote $c_2$, equals $\frac{\Delta}{2}$, and hence it only remains to verify that $c - c_2 \le \frac{\Delta}{2}$. In the $\nu = 0$ case, $m(j)$ is chosen so that $c - c_2 = \frac{\Delta}{2}$ holds exactly. In the $\nu = \mu$ case, we only need to observe that $c$ decreases as $m$ increases, and apply a standard logarithmic inequality to the value of $m$ for which $c - c_2 = \frac{\Delta}{2}$.

We now comment on the above result:

1. Workload. Notice that for the choice of parameters $j^*$, $h = h(j^*)$, $m = m(j^*)$ and any $\nu \in [0, \mu]$, the method needs $\mathcal{O}(n\log(1/\varepsilon))$ units of work for the full gradient computations (note this is independent of $\kappa$), and $\mathcal{O}(\kappa\log(1/\varepsilon))$ computations of the stochastic gradient. This result, and special cases thereof, are summarized in Table 2.

2. Simpler formulas for m. If $\kappa \ge 2$, we can instead of (25) use slightly worse but simpler expressions for $m(j)$, obtained from (25) by using the bounds $1 \le \kappa - 1$, $\kappa - 1 \le \kappa$ and $\Delta < 1$ in appropriate places; we refer to the resulting formulas as (27).

3. Optimal stepsize in the ν = 0 case. Theorem 6 does not claim to have solved problem (23); the problem in general does not have a closed-form solution. However, in the $\nu = 0$ case a closed-form formula can easily be obtained: for fixed $j$, (23) is equivalent to finding $h$ that minimizes $m$ subject to the constraint $c \le \Delta$. In view of (11), this is equivalent to searching for $h > 0$ maximizing the quadratic $h \mapsto h\left(\Delta - 2(\Delta L + L - \mu)h\right)$, which leads to (28).
Note that both the stepsize $h(j)$ and the resulting $m(j)$ are slightly larger in Theorem 6 than in (28). This is because in the theorem the stepsize was, for simplicity, chosen to satisfy $c_2 = \frac{\Delta}{2}$, and hence is (slightly) suboptimal. Nevertheless, the dependence of $m(j)$ on $\Delta$ is of the correct (optimal) order in both cases.

4. Stepsize choice. In cases when one does not have a good estimate of the strong convexity constant $\mu$ to determine the stepsize via (24), one may choose a suboptimal stepsize that does not depend on $\mu$ and derive similar results to those above. For instance, one may choose $h = \frac{\Delta}{6L}$.
In Table 3 we provide a comparison of the work needed for small values of $j$, and different values of $\kappa$ and $\varepsilon$. Note, for instance, that for a problem with $n = 10^9$ and $\kappa = 10^3$, S2GD outputs a highly accurate solution ($\varepsilon = 10^{-6}$) in the amount of work equivalent to 2.12 evaluations of the full gradient of $f$!
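The work-minimization problem (23) can also be checked numerically. The sketch below (Python; a crude grid search, assuming the ν = µ decrease factor c of the form given in (12)) reproduces, for n = 10⁹, κ = 10³ and ε = 10⁻⁶, a total workload of roughly 2.1 passes over the data, attained at j = 2 epochs:

```python
import math

def min_inner_loop(L, mu, h, Delta):
    # Smallest m such that the nu = mu decrease factor
    #   c = (1-mu*h)^m / ((1-(1-mu*h)^m)(1-2*L*h)) + 2*(L-mu)*h / (1-2*L*h)
    # satisfies c <= Delta; returns None if no m can work for this stepsize h.
    c2 = 2 * (L - mu) * h / (1 - 2 * L * h)
    if c2 >= Delta:
        return None
    r = (Delta - c2) * (1 - 2 * L * h)
    # the first term of c is <= Delta - c2 iff (1-mu*h)^m <= r / (1 + r)
    return math.ceil(math.log(r / (1 + r)) / math.log(1 - mu * h))

def tune(n, L, mu, eps, max_epochs=10):
    best = None
    for j in range(1, max_epochs + 1):
        Delta = eps ** (1.0 / j)
        for k in range(1, 2000):           # crude grid over stepsizes in (0, 1/(2L))
            h = k / (4000.0 * L)
            m = min_inner_loop(L, mu, h, Delta)
            if m is None:
                break                      # larger h only increases the second term
            work = j * (n + 2 * m)         # j full gradients + ~2m stochastic gradients
            if best is None or work < best[0]:
                best = (work, j, h, m)
    return best

work, j, h, m = tune(n=10**9, L=1.0, mu=1e-3, eps=1e-6)   # kappa = 10^3
print(j, work / 10**9)  # optimal epoch count and total work in passes over data
```

The grid and the search range are illustrative; a finer grid moves the workload slightly below 2.12 passes but does not change the optimal number of epochs.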

Complexity Analysis: Convex Loss
If $f$ is convex but not strongly convex, we define
$$\tilde{f}_i(x) := f_i(x) + \frac{\mu}{2}\|x - x_0\|^2,$$
for small enough $\mu > 0$ (we shall see below how the choice of $\mu$ affects the results), and consider the perturbed problem
$$\min_{x \in \mathbb{R}^d} \tilde{f}(x), \qquad \text{where} \qquad \tilde{f}(x) = \frac{1}{n}\sum_{i=1}^n \tilde{f}_i(x). \qquad (29)$$
Note that $\tilde{f}$ is $\mu$-strongly convex and $(L + \mu)$-smooth. In particular, the theory developed in the previous section applies. We propose that S2GD be instead applied to the perturbed problem, and show that an approximate solution of (29) is also an approximate solution of (1) (we will assume that this problem has a minimizer $x_*$). Let $\tilde{x}_*$ be the (necessarily unique) solution of the perturbed problem (29). The following result describes an important connection between the original problem and the perturbed problem.

Lemma 7. For any $x \in \mathbb{R}^d$,
$$f(x) - f(x_*) \le \left(\tilde{f}(x) - \tilde{f}(\tilde{x}_*)\right) + \frac{\mu}{2}\|x_* - x_0\|^2.$$

Table 3: Comparison of work sufficient to solve a problem with $n = 10^9$, and various values of $\kappa$ and $\varepsilon$. The work was computed using formula (23), with $m(j)$ as in (27). The notation $W_\nu(j)$ indicates which parameter $\nu$ was used.
Proof. The statement is almost identical to Lemma 9 in [9]; its proof follows the same steps with only minor adjustments.
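The perturbation is easy to realize in code as a wrapper around the original gradient oracle. A minimal sketch (Python/NumPy; the oracle name `grad_fi` and the quartic toy loss are hypothetical stand-ins, not objects from the paper):

```python
import numpy as np

def make_perturbed_grad(grad_fi, x0, mu):
    # Gradient oracle of f~_i(x) = f_i(x) + (mu/2)||x - x0||^2,
    # which is mu-strongly convex (and (L + mu)-smooth if f_i is L-smooth).
    def grad_fi_tilde(x, i):
        return grad_fi(x, i) + mu * (x - x0)
    return grad_fi_tilde

# toy check with f_i(x) = (x - c_i)^4 / 4: convex, but not strongly convex
c = np.array([0.0, 1.0])
grad = lambda x, i: (x - c[i]) ** 3
g_tilde = make_perturbed_grad(grad, x0=0.0, mu=0.5)
print(g_tilde(2.0, 1))  # (2 - 1)^3 + 0.5 * (2 - 0) = 2.0
```

Running S2GD with this wrapped oracle is exactly "applying S2GD to the perturbed problem (29)": no other change to the algorithm is needed.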
We are now ready to establish a complexity result for non-strongly convex losses.
Theorem 8. Let Assumption 1 be satisfied. Choose $\mu > 0$, $0 \le \nu \le \mu$, stepsize $0 < h < \frac{1}{2(L+\mu)}$, and let $m$ be sufficiently large so that the decrease factor $\tilde{c}$, defined as $c$ in Theorem 4 but with $L$ replaced by $L + \mu$, satisfies $\tilde{c} < 1$. Pick $x_0 \in \mathbb{R}^d$ and let $\tilde{x}_0 = x_0, \tilde{x}_1, \dots, \tilde{x}_j$ be the sequence of iterates produced by S2GD as applied to problem (29). Then, for any $0 < \rho < 1$, $0 < \varepsilon < 1$ and $j$ satisfying the condition of Theorem 5 (with $c$ replaced by $\tilde{c}$), the final iterate $\tilde{x}_j$ is an $\varepsilon$-accurate solution of (1) with probability at least $1 - \rho$.

Numerical Experiments

In Figure 1 we present a comparison of our theory and the practical performance of S2GD for two choices of the parameter ν: ν = µ and ν = 0. Recall that the latter choice is essentially the SVRG method of Johnson and Zhang [3].
The figure demonstrates linear convergence of S2GD in practice, with the convergence rate being significantly better than the already strong theoretical result. We observe, somewhat surprisingly, convergence to machine precision in work equivalent to evaluating the full gradient only about 25 times. This is a property standard SGD does not possess (as we show later), and for problems of a certain size it is impossible with gradient descent, since one cannot compute enough gradients to reach such a precision. Our method is also an improvement over [3], both in theory and practice.
Note that we have implemented the methods following the theory, without any engineering tricks. Hence, further improvement is possible.
We chose the parameters $h$ and $m$ of the method numerically in order to minimize the total work $W$. The values are $m = 11{,}686$ and $h = 1/(14.1L)$ for $\nu = \lambda$, resulting in $c = 0.2548$; and $m = 17{,}062$ and $h = 1/(13.9L)$ in the case $\nu = 0$, resulting in $c = 0.4716$. Note that the theoretical decrease factor $c$ is much better for $\nu = \mu$ than in the SVRG case ($\nu = 0$). Indeed, the predicted drop in the residual is doubled, while at the same time only about two thirds of the stochastic gradient computations required by SVRG are needed.
We include these in the same plot since we obtained the values as an average of several hundred runs, and the average number of stochastic updates is almost the same in both cases. Note that due to this fact, the practical performance is essentially the same, perhaps with bigger variance for ν = 0. However, the important difference is in the theoretical bound, matching the theoretical insight from Table 3: for κ ≪ n, the number of full gradient evaluations needed in the case ν = 0 is about twice as big as in the case ν = µ.

Figure 2 presents a comparison of the theoretical rate and practical performance on a larger problem with artificial data, with a condition number we can control (and choose to be poor). In particular, we consider L2-regularized least squares with

L2-regularized least squares
$$f_i(x) = \frac{1}{2}\left(a_i^\top x - b_i\right)^2 + \frac{\lambda}{2}\|x\|^2,$$
for some $a_i \in \mathbb{R}^d$, $b_i \in \mathbb{R}$, where $\lambda > 0$ is the regularization parameter. We consider an instance with $n = 100{,}000$, $d = 1{,}000$ and $\kappa = 10{,}000$. We run the algorithm with both parameters $\nu = 0$ and $\nu = \lambda$, our best estimate of $\mu$. Again, we chose the parameters $m$ and $h$ as a (numerical) solution of the work-minimization problem (20), obtaining $m = 261{,}063$ and $h = 1/(11.4L)$ for $\nu = \lambda$, and $m = 426{,}660$ and $h = 1/(12.7L)$ for $\nu = 0$. The practical performance is obtained after a single run of the S2GD algorithm. The experiments demonstrate a relationship between theory and practice similar to that in the previous section: Figure 2 confirms the consistency of the results obtained in Figure 1.
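An instance with a controlled condition number can be generated by fixing the singular values of the data matrix. A small-scale sketch (Python/NumPy; the sizes, the construction via prescribed singular values, and the choice of λ are illustrative assumptions, smaller than the instance described above):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 1000, 50
kappa_target = 1e4

# Random orthonormal factors combined with prescribed singular values.
U = np.linalg.qr(rng.standard_normal((n, d)))[0]
V = np.linalg.qr(rng.standard_normal((d, d)))[0]
s = np.geomspace(1.0, 1.0 / np.sqrt(kappa_target), d)
A = (U * s) @ V.T                     # n x d matrix with singular values s
lam = s[-1] ** 2 / n                  # regularizer near the smallest curvature

# Hessian of f(x) = (1/2n) sum_i (a_i^T x - b_i)^2 + (lam/2)||x||^2
H = A.T @ A / n + lam * np.eye(d)
eig = np.linalg.eigvalsh(H)
print(eig.max() / eig.min())          # condition number of f, here about 5000
```

Scaling λ up or down relative to the smallest squared singular value moves the condition number, which is how an artificially poor κ can be dialed in.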

SGD, S2GD, S2GD+ and SAG
Here we compare the practical performance of SGD, S2GD, S2GD+ and SAG on a larger, sparse (90% zeros) problem with $n = 10^6$, $\kappa = 10^5$ and $d = 200$; see Figure 3. The size limitations are due to us running the experiments in MATLAB on a notebook. As before, we set the parameters $m$ and $h$ numerically in an optimal way. Note that SAG and S2GD have similar performance, with S2GD slightly outperforming SAG. However, S2GD+ is vastly superior to all.
We can see that S2GD and SAG, run with parameters as analyzed in theory, exhibit essentially the same performance. Both methods could benefit from engineering and implementation tricks, which we do not explore in this work.
The performance of SGD with a relatively big stepsize is better in the first few iterations (one pass over the data), but quickly slows down and oscillates at low precision; this is due to the inherent variance of the stochastic gradient. If we aimed for a higher precision, we would need to decrease the stepsize and thus make SGD slower than S2GD or SAG. The advantage of S2GD and SAG is linear convergence at all distances from the solution. S2GD is slower than SGD in the beginning, because it has to compute the full gradient before making a single step.
Allowing deviations from theory, performance significantly superior to all other methods is offered by Algorithm 2 (S2GD+), where we first run SGD for a single pass through the data, to find a good solution quickly, and then continue with S2GD with constant size of the inner loop m = n. Note that we evaluate function value in S2GD only after each epoch, as illustrated with markers in the figures.

Conclusion
We have developed a new semi-stochastic gradient descent method (S2GD) and analyzed its complexity for smooth convex and strongly convex loss functions. Our method needs only $O((\kappa/n)\log(1/\varepsilon))$ work, measured in units equivalent to the evaluation of the full gradient of the loss function, where $\kappa = L/\mu$ if the loss is $L$-smooth and $\mu$-strongly convex, and $\kappa \le 2L/\varepsilon$ if the loss is merely $L$-smooth.
To the best of our knowledge, stochastic gradient descent methods have not been previously shown to exhibit comparable convergence guarantees for non-strongly convex functions. Our results in the strongly convex case match or improve on a few very recent results, while at the same time generalizing and simplifying the analysis.
Additionally, we proposed S2GD+, a method which equips S2GD with an SGD pre-processing step, and which in our experiments exhibits performance superior to all methods we tested. We leave the analysis of this method as an open problem.