March 5, 2024, 2:44 p.m. | Amr Abourayya, Jens Kleesiek, Kanishka Rao, Erman Ayday, Bharat Rao, Geoff Webb, Michael Kamp

cs.LG updates on arXiv.org

arXiv:2310.05696v2 Announce Type: replace
Abstract: In many applications, sensitive data is inherently distributed and may not be pooled due to privacy concerns. Federated learning allows us to collaboratively train a model without pooling the data by iteratively aggregating the parameters of local models. The shared model parameters can, however, leak information about the sensitive data. We propose to use a federated co-training approach where clients share hard labels on a public unlabeled dataset instead of model parameters. …

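The truncated abstract outlines the protocol only at a high level. Below is a minimal sketch of one federated co-training round under stated assumptions: majority voting as the consensus rule, scikit-learn decision trees as local models, and synthetic toy data. None of these specifics appear in the abstract; only the core idea, that clients share hard labels on a public unlabeled dataset rather than parameters, is taken from it.

```python
# Sketch of a federated co-training round (assumptions noted above).
# Each client trains locally, predicts hard labels on a shared public
# unlabeled dataset, and only those labels leave the client. The server
# forms a consensus pseudo-labeling (majority vote here, an assumption)
# that clients fold into their training data in the next round.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_round(model, X_local, y_local, X_public, pseudo_labels):
    # Train on private data, plus the consensus pseudo-labels if available.
    if pseudo_labels is None:
        X_train, y_train = X_local, y_local
    else:
        X_train = np.vstack([X_local, X_public])
        y_train = np.concatenate([y_local, pseudo_labels])
    model.fit(X_train, y_train)
    # Share only hard labels on the public set, never model parameters.
    return model.predict(X_public)

def majority_vote(all_labels):
    # Consensus label per public example: most frequent label across clients.
    votes = np.stack(all_labels)  # shape: (n_clients, n_public)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

# Toy run with synthetic data (hypothetical setup, for illustration only).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
X_public = rng.normal(size=(100, 4))
models = [DecisionTreeClassifier(max_depth=3) for _ in clients]

pseudo = None
for _ in range(5):  # communication rounds
    shared = [local_round(m, X, y, X_public, pseudo)
              for m, (X, y) in zip(models, clients)]
    pseudo = majority_vote(shared)
```

Because only discrete labels on public data are exchanged, the communication channel reveals far less about each client's private examples than parameter or gradient sharing would, which is the privacy argument the abstract gestures at.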