Algorithms & Computationally Intensive Inference seminars

The seminars will take place on Fridays at 1 pm (UK time) in room MB0.08.

2023-2024 Organisers: Shenggang Hu and Andi Wang.

If you would like to speak, or you want to be included in any emails, please contact one of the organisers.

Term 1:
Date Speaker Title
06/10 Artavazd Maranjyan GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity

We study a class of distributed optimization algorithms that aim to alleviate high communication costs by allowing the clients to perform multiple local gradient-type training steps prior to communication. While methods of this type have been studied for about a decade, the empirically observed acceleration properties of local training eluded all attempts at theoretical understanding. In a recent breakthrough, Mishchenko et al. (ICML 2022) proved that local training, when properly executed, leads to provable communication acceleration, and that this holds in the strongly convex regime without relying on any data similarity assumptions. However, their method ProxSkip requires all clients to take the same number of local training steps in each communication round. Inspired by a common-sense intuition, we start our investigation by conjecturing that clients with "less important" data should be able to get away with fewer local training steps without this impacting the overall communication complexity of the method. It turns out that this intuition is correct: we managed to redesign the original ProxSkip method to achieve this. In particular, we prove that our modified method, for which we coin the name GradSkip, converges linearly under the same assumptions and has the same accelerated communication complexity, while the number of local gradient steps can be reduced relative to a local condition number. We further generalize our method by extending the randomness of probabilistic alternations to arbitrary unbiased compression operators and by considering a generic proximable regularizer. This generalization, which we call GradSkip+, recovers several related methods in the literature as special cases. Finally, we present an empirical study on carefully designed toy problems that confirms our theoretical claims.
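The core idea above (clients run local gradient steps between communication rounds, with better-conditioned clients doing less local work) can be illustrated with a minimal sketch. This is a generic local-training/averaging scheme on a toy quadratic problem, not the exact ProxSkip or GradSkip updates from the paper; the per-client objectives, the step size, and the heuristic of tying local step counts to local condition numbers are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: client i holds a strongly convex quadratic
# f_i(x) = 0.5 * (x - b_i)^T A_i (x - b_i); the global objective is their average.
d, n_clients = 5, 4
A = [np.diag(rng.uniform(1.0, 10.0, d)) for _ in range(n_clients)]
b = [rng.normal(loc=3.0, scale=0.3, size=d) for _ in range(n_clients)]

def global_loss(x):
    return sum(0.5 * (x - bi) @ Ai @ (x - bi) for Ai, bi in zip(A, b)) / n_clients

# Heuristic (not the paper's rule): well-conditioned clients take fewer
# local steps, mimicking the "less important data needs less local work" idea.
local_steps = [max(1, int(round(np.linalg.cond(Ai)))) for Ai in A]
lr = 0.02

x = np.zeros(d)
losses = [global_loss(x)]
for _ in range(200):
    client_iterates = []
    for Ai, bi, k in zip(A, b, local_steps):
        xi = x.copy()
        for _ in range(k):
            xi -= lr * (Ai @ (xi - bi))  # local gradient step on f_i
        client_iterates.append(xi)
    x = np.mean(client_iterates, axis=0)  # communication round: average iterates
    losses.append(global_loss(x))
```

Communication happens only once per outer round, while each client may take many cheap local steps in between; the loss trace `losses` shows the global objective decreasing toward (a neighborhood of) the minimum of the averaged objective.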