March 1, 2024, 5:44 a.m. | Feihu Huang, Xinrui Wang, Junyi Li, Songcan Chen

cs.LG updates on arXiv.org

arXiv:2211.07303v4 Announce Type: replace
Abstract: Federated learning is a popular distributed and privacy-preserving learning paradigm in machine learning. Recently, some federated learning algorithms have been proposed to solve distributed minimax problems. However, these federated minimax algorithms still suffer from high gradient or communication complexity. Meanwhile, few algorithms focus on using adaptive learning rates to accelerate them. To fill this gap, in this paper we study a class of nonconvex minimax optimization, and propose an efficient adaptive federated minimax …
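To make the setting concrete, below is a minimal Python sketch of the general idea the abstract describes: clients run local gradient descent-ascent steps on a shared minimax objective with an adaptive (here AdaGrad-style) learning rate, and a server periodically averages their models. This is an illustrative toy, not the paper's actual algorithm; the client objectives, `local_steps`, and the adaptive rule are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy client objectives f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*c_i*y^2
# (a_i, c_i > 0 gives a simple convex-concave saddle at (0, 0); the paper
# targets the harder nonconvex setting, which this sketch does not capture).
clients = [(rng.uniform(0.5, 1.5), rng.uniform(-1, 1), rng.uniform(0.5, 1.5))
           for _ in range(8)]

def grads(x, y, a, b, c):
    """Return (df/dx, df/dy) for one client's objective."""
    return a * x + b * y, b * x - c * y

x, y = 2.0, -2.0            # shared server model (min over x, max over y)
vx, vy = 1e-8, 1e-8         # accumulated squared gradients (adaptive state)
base_lr, local_steps = 0.3, 5

for rnd in range(50):       # communication rounds
    xs, ys, sx, sy = [], [], 0.0, 0.0
    for a, b, c in clients:
        xi, yi, vxi, vyi = x, y, vx, vy
        for _ in range(local_steps):        # local descent-ascent steps
            gx, gy = grads(xi, yi, a, b, c)
            vxi += gx * gx                  # AdaGrad-style accumulators
            vyi += gy * gy
            xi -= base_lr * gx / np.sqrt(vxi)   # descend on x
            yi += base_lr * gy / np.sqrt(vyi)   # ascend on y
        xs.append(xi); ys.append(yi)
        sx += vxi; sy += vyi
    # server averages both the models and the adaptive state
    x, y = np.mean(xs), np.mean(ys)
    vx, vy = sx / len(clients), sy / len(clients)

print(f"approx saddle point: x={x:.4f}, y={y:.4f}")  # should approach (0, 0)
```

The communication saving that federated minimax methods aim for shows up in the structure above: gradients are exchanged only once per round (after `local_steps` local updates), while the adaptive accumulators keep the step size well-scaled without per-round tuning.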

