Oct. 6, 2022, 1:13 a.m. | Yaodong Yu, Alexander Wei, Sai Praneeth Karimireddy, Yi Ma, Michael I. Jordan

stat.ML updates on arXiv.org

State-of-the-art federated learning methods can perform far worse than their
centralized counterparts when clients have dissimilar data distributions. For
neural networks, even when centralized SGD easily finds a solution that is
simultaneously performant for all clients, current federated optimization
methods fail to converge to a comparable solution. We show that this
performance disparity can largely be attributed to optimization challenges
presented by nonconvexity. Specifically, we find that the early layers of the
network do learn useful features, but the final …
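The gap the abstract describes can be seen in a toy setting. Below is a minimal, hypothetical FedAvg-style sketch (not the paper's method): two clients with dissimilar linear-regression data each run local SGD from the current server model, and the server averages the resulting weights each round. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client(w_true, n=100):
    """Synthetic client data: linear targets under client-specific weights."""
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

# Dissimilar client distributions: different ground-truth weights per client.
clients = [make_client(np.array([1.0, 0.0])),
           make_client(np.array([0.0, 1.0]))]

def local_sgd(w, X, y, lr=0.01, steps=50):
    """A few steps of single-sample SGD on squared loss, from the server model."""
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = (X[i] @ w - y[i]) * X[i]
        w = w - lr * grad
    return w

w = np.zeros(2)
for _ in range(20):
    # Each client starts from the shared server model and trains locally.
    local_models = [local_sgd(w.copy(), X, y) for X, y in clients]
    # Server step: average client models (the FedAvg aggregation rule).
    w = np.mean(local_models, axis=0)

print(w)
```

With this symmetric setup the averaged model drifts toward a compromise between the two clients' local optima, which illustrates why heterogeneous data makes the federated trajectory differ from what centralized SGD on the pooled data would find.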
