April 4, 2024, 4:42 a.m. | Guangyu Sun, Umar Khalid, Matias Mendieta, Taojiannan Yang, Chen Chen

cs.LG updates on arXiv.org

arXiv:2210.01708v3 Announce Type: replace
Abstract: Federated learning (FL) has emerged as a promising paradigm for collaboratively training models without centralized access to the raw data on local devices. In the typical FL paradigm (e.g., FedAvg), model weights are exchanged between the server and the participating clients in each round. Recently, small pre-trained models have been shown to be effective for federated learning optimization and for improving convergence. However, recent state-of-the-art pre-trained models are getting more capable but …
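To make the FedAvg exchange described above concrete, here is a minimal sketch of one communication round in PyTorch: the server broadcasts the current global weights, each client trains locally, and the server averages the returned weights element-wise. All names (fedavg_round, local_train_fn, client_loaders) are illustrative assumptions, not the paper's implementation, and the unweighted mean stands in for the usual average weighted by client dataset size.

```python
import copy
import torch

def fedavg_round(global_model, client_loaders, local_train_fn):
    """One hypothetical FedAvg round: broadcast, local training, averaging."""
    client_states = []
    for loader in client_loaders:
        # Each participating client starts from the current global weights.
        local_model = copy.deepcopy(global_model)
        local_train_fn(local_model, loader)  # client-side local SGD (assumed)
        client_states.append(local_model.state_dict())

    # Server side: element-wise average of the returned client weights.
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        stacked = torch.stack([s[key].float() for s in client_states])
        avg_state[key] = stacked.mean(dim=0)
    global_model.load_state_dict(avg_state)
    return global_model
```

Note that because the full model state is shipped both ways every round, the communication cost scales with model size, which is why the abstract's observation that state-of-the-art pre-trained models keep growing raises a communication concern.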

