Oct. 14, 2022, 1:13 a.m. | Alexander Tyurin, Peter Richtárik

cs.LG updates on arXiv.org

We present a new method that includes three key components of distributed optimization and federated learning: variance reduction of stochastic gradients, compressed communication, and partial participation. We prove that the new method has optimal oracle complexity and state-of-the-art communication complexity in the partial participation setting. Moreover, we observe that "1 + 1 + 1 is not 3": by mixing variance reduction of stochastic gradients with compressed communication and partial participation, we do not obtain a fully synergetic effect. We explain …

arxiv communication computation distributed
