April 4, 2024, 4:42 a.m. | Nika Haghtalab, Michael I. Jordan, Eric Zhao

cs.LG updates on arXiv.org

arXiv:2210.12529v3 Announce Type: replace
Abstract: Social and real-world considerations such as robustness, fairness, social welfare and multi-agent tradeoffs have given rise to multi-distribution learning paradigms, such as collaborative learning, group distributionally robust optimization, and fair federated learning. In each of these settings, a learner seeks to uniformly minimize its expected loss over $n$ predefined data distributions, while using as few samples as possible. In this paper, we establish the optimal sample complexity of these learning paradigms and give algorithms that …
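The objective described above — uniformly minimizing expected loss over $n$ predefined distributions — is a minimax game between a learner and an adversary that reweights the distributions. As a rough illustration only (the paper's actual algorithms and sample-complexity analysis are not reproduced here), the sketch below pairs gradient descent on a weighted squared loss with a multiplicative-weights adversary that upweights whichever distribution currently suffers the highest loss, in the spirit of group distributionally robust optimization. All names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def multi_distribution_learning(datasets, n_steps=500, lr=0.1, eta=0.05):
    """Illustrative minimax sketch (not the paper's algorithm).

    datasets: list of (X, y) pairs, one per distribution.
    The learner does gradient descent on the weight-averaged squared
    loss; an adversary raises the mixture weight of high-loss
    distributions via exponentiated-gradient (multiplicative) updates.
    """
    n = len(datasets)
    d = datasets[0][0].shape[1]
    w = np.zeros(d)            # linear model parameters
    p = np.ones(n) / n         # adversary's weights over distributions

    for _ in range(n_steps):
        losses = np.empty(n)
        grad = np.zeros(d)
        for i, (X, y) in enumerate(datasets):
            r = X @ w - y
            losses[i] = np.mean(r ** 2)
            grad += p[i] * 2.0 * (X.T @ r) / len(y)  # weighted loss gradient
        w -= lr * grad                 # learner: descend the weighted loss
        p *= np.exp(eta * losses)      # adversary: upweight high-loss groups
        p /= p.sum()
    return w, losses
```

On a toy problem where two distributions share the same underlying predictor but have different covariate distributions, both per-distribution losses shrink, so the worst-case (uniform) loss does too.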

