Web: http://arxiv.org/abs/2201.11865

Jan. 31, 2022, 2:11 a.m. | Jianyu Wang, Hang Qi, Ankit Singh Rawat, Sashank Reddi, Sagar Waghmare, Felix X. Yu, Gauri Joshi

cs.LG updates on arXiv.org

In classical federated learning, clients contribute to the overall
training by computing updates to the underlying model on their private
data and communicating those updates to a coordinating server. However,
updating and communicating the entire model becomes prohibitively
expensive when resource-constrained clients collectively aim to train a
large machine learning model. Split learning provides a natural solution
in such a setting: only a small part of the model is stored and trained
on each client, while the remaining large part of the …
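
The abstract describes the split-learning setup only at a high level, so here is a minimal sketch of one training step under that setup. It assumes PyTorch (the paper does not prescribe a framework), and the model names, layer sizes, and in-process "communication" (tensor copies standing in for the client–server round trip) are all illustrative, not the paper's method.

```python
# Minimal split-learning sketch: the client keeps only a small "bottom"
# partition of the model; the large "top" partition lives on the server.
import torch
import torch.nn as nn

# Small client-side partition: cheap to store and train on-device.
client_model = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
# Large server-side partition: holds the bulk of the parameters.
server_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training step on one client's private batch.
x = torch.randn(8, 32)          # private client data (never leaves the client)
y = torch.randint(0, 10, (8,))  # labels (assumed server-visible in this variant)

# Client: forward through the small partition, then "send" the cut-layer
# activations to the server (a detached copy stands in for the network hop).
activations = client_model(x)
sent = activations.detach().requires_grad_(True)

# Server: finish the forward pass, compute the loss, and backprop
# through the large partition only.
loss = loss_fn(server_model(sent), y)
server_opt.zero_grad()
loss.backward()
server_opt.step()

# Server "returns" the gradient at the cut layer; the client resumes
# backpropagation from it through its own small partition.
client_opt.zero_grad()
activations.backward(sent.grad)
client_opt.step()
```

The `detach().requires_grad_(True)` pair marks the cut layer: the server's backward pass stops at the received activations, and the client continues it from the returned gradient, so neither side ever holds or updates the other's partition.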

arxiv, federated learning, learning
