all AI news
Can collaborative learning be private, robust and scalable? (arXiv:2205.02652v1 [cs.LG])
Web: http://arxiv.org/abs/2205.02652
May 6, 2022, 1:11 a.m. | Dmitrii Usynin, Helena Klause, Daniel Rueckert, Georgios Kaissis
cs.LG updates on arXiv.org arxiv.org
We investigate the effectiveness of combining differential privacy, model
compression and adversarial training to improve the robustness of models
against adversarial samples in train- and inference-time attacks. We explore
the applications of these techniques as well as their combinations to determine
which method performs best without a significant utility trade-off. Our
investigation provides a practical overview of various methods that allow one
to achieve competitive model performance, a significant reduction in model
size, and improved empirical adversarial robustness …
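The combination the abstract describes can be illustrated with a minimal sketch: a DP-SGD training step (per-example gradient clipping plus calibrated Gaussian noise) applied to inputs perturbed by FGSM adversarial training. This is not the paper's code; the model (logistic regression), the hyperparameters (clip norm, noise multiplier, FGSM epsilon), and the toy data are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: one possible combination of DP-SGD and adversarial
# training on a logistic-regression model, in plain NumPy. All
# hyperparameters below are assumptions, not values from the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_w(w, x, y):
    # Gradient of binary cross-entropy w.r.t. the weights, one example.
    return (sigmoid(x @ w) - y) * x

def fgsm(w, x, y, eps=0.1):
    # Fast Gradient Sign Method: perturb the *input* to increase the loss.
    g_x = (sigmoid(x @ w) - y) * w          # d(loss)/dx
    return x + eps * np.sign(g_x)

def dp_sgd_step(w, X, Y, lr=0.1, clip=1.0, noise_mult=1.0):
    # Core of DP-SGD: clip each per-example gradient, sum, add noise.
    grads = []
    for x, y in zip(X, Y):
        x_adv = fgsm(w, x, y)               # adversarial training: fit on perturbed input
        g = grad_w(w, x_adv, y)
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # bound sensitivity
        grads.append(g)
    g_sum = np.sum(grads, axis=0)
    g_sum += rng.normal(0.0, noise_mult * clip, size=w.shape)  # calibrated noise
    return w - lr * g_sum / len(X)

# Toy data: two well-separated 2-D clusters.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
Y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)
for _ in range(200):
    w = dp_sgd_step(w, X, Y)

acc = np.mean((sigmoid(X @ w) > 0.5) == Y)
```

In practice one would track the cumulative privacy budget with a moments accountant and combine this with pruning or quantization for the compression axis; the sketch only shows how the clipping, noise, and input-perturbation steps compose in a single update.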
More from arxiv.org / cs.LG updates on arXiv.org
Latest AI/ML/Big Data Jobs
Director, Applied Mathematics & Computational Research Division
@ Lawrence Berkeley National Lab | Berkeley, CA
Business Data Analyst
@ MainStreet Family Care | Birmingham, AL
Assistant/Associate Professor of the Practice in Business Analytics
@ Georgetown University McDonough School of Business | Washington, DC
Senior Data Science Writer
@ NannyML | Remote
Director of AI/ML Engineering
@ Armis Industries | Remote (US only), St. Louis, California
Digital Analytics Manager
@ Patagonia | Ventura, California