Privacy-preserving Data Filtering in Federated Learning Using Influence Approximation. (arXiv:2205.11518v1 [cs.CR])
May 25, 2022, 1:10 a.m. | Ljubomir Rokvic, Panayiotis Danassis, Boi Faltings
cs.LG updates on arXiv.org arxiv.org
Federated Learning is by nature susceptible to low-quality, corrupted, or
even malicious data that can severely degrade the quality of the learned model.
Traditional data-valuation techniques cannot be applied because the data is
never revealed. We present a novel technique for filtering and scoring data
based on a practical influence approximation that can be implemented in a
privacy-preserving manner. Each agent uses its own data to evaluate the
influence of another agent's batch, and reports to the center …
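The core idea — scoring a peer's batch by how much it helps on one's own data — can be sketched with a simple influence proxy. This is an illustrative sketch, not the paper's method: it approximates a batch's influence as the change in the evaluator's loss after one gradient step on that batch, using a linear model and mean squared error as stand-ins. The function and variable names (`influence_of_batch`, `X_eval`, etc.) are hypothetical.

```python
import numpy as np

def loss(w, X, y):
    # Mean squared error of a linear model y ≈ X @ w.
    return np.mean((X @ w - y) ** 2)

def influence_of_batch(w, X_batch, y_batch, X_eval, y_eval, lr=0.1):
    """Proxy influence of a peer's batch: the drop in the evaluator's
    own loss after one gradient step on that batch.
    Positive => the batch helps; negative => it hurts."""
    grad = 2 * X_batch.T @ (X_batch @ w - y_batch) / len(y_batch)
    w_new = w - lr * grad
    return loss(w, X_eval, y_eval) - loss(w_new, X_eval, y_eval)

# Demo: the evaluator's data follows y = 2x. One peer batch matches that
# relationship; the other is label-flipped (a stand-in for corrupted data).
rng = np.random.default_rng(0)
X_eval = rng.normal(size=(50, 1)); y_eval = 2 * X_eval[:, 0]
X_clean = rng.normal(size=(20, 1)); y_clean = 2 * X_clean[:, 0]
X_bad = rng.normal(size=(20, 1)); y_bad = -2 * X_bad[:, 0]

w = np.zeros(1)
inf_clean = influence_of_batch(w, X_clean, y_clean, X_eval, y_eval)
inf_bad = influence_of_batch(w, X_bad, y_bad, X_eval, y_eval)
# A filtering rule could then keep only batches with positive influence.
```

In this toy setup the clean batch yields a positive influence score and the corrupted batch a negative one, so thresholding at zero filters the bad batch. The paper's contribution is making such influence scores computable without any agent revealing its raw data, which this local sketch does not address.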