Web: http://arxiv.org/abs/2205.02652

May 6, 2022, 1:11 a.m. | Dmitrii Usynin, Helena Klause, Daniel Rueckert, Georgios Kaissis

cs.LG updates on arXiv.org arxiv.org

We investigate the effectiveness of combining differential privacy, model
compression and adversarial training to improve the robustness of models
against adversarial samples in both train- and inference-time attacks. We
explore these techniques individually and in combination to determine which
method performs best without a significant utility trade-off. Our
investigation provides a practical overview of methods that achieve
competitive model performance, a significant reduction in model size and
improved empirical adversarial robustness …
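The three techniques mentioned in the abstract can be sketched together in a single training loop. The following is a minimal, illustrative NumPy example, not the paper's actual method: it trains a logistic-regression model with FGSM-style adversarial examples, DP-SGD-style per-example gradient clipping plus Gaussian noise, and post-hoc magnitude pruning as a stand-in for model compression. All data, hyperparameters, and the 50% pruning ratio are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic binary-classification data (stand-in for a real dataset).
n, d = 200, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_grads(w, X, y):
    # Logistic-loss gradient for each example: (sigmoid(x.w) - y) * x
    p = sigmoid(X @ w)
    return (p - y)[:, None] * X

# Illustrative hyperparameters, not tuned.
epsilon_adv = 0.1   # FGSM perturbation budget
clip_norm = 1.0     # DP-SGD per-example clipping bound
noise_mult = 1.0    # DP Gaussian-noise multiplier
lr = 0.5
steps = 100

w = np.zeros(d)
for _ in range(steps):
    # 1) Adversarial training: FGSM perturbation against the current model.
    g_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]   # d(loss)/d(x)
    X_adv = X + epsilon_adv * np.sign(g_x)

    # 2) DP-SGD: clip each per-example gradient, then add Gaussian noise.
    g = per_example_grads(w, X_adv, y)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip_norm)
    noise = rng.normal(scale=noise_mult * clip_norm, size=d)
    w -= lr * (g.sum(axis=0) + noise) / n

# 3) Compression: magnitude-prune the smallest 50% of weights.
threshold = np.quantile(np.abs(w), 0.5)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)
```

In practice each component interacts with the others (e.g. DP noise can hurt the quality of the adversarial examples generated against the noisy model), which is exactly the kind of trade-off the paper's experiments examine.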
