Feb. 23, 2024, 5:43 a.m. | Giovanni Cherubin, Boris Köpf, Andrew Paverd, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin

cs.LG updates on arXiv.org arxiv.org

arXiv:2402.14397v1 Announce Type: cross
Abstract: Machine learning models trained with differentially-private (DP) algorithms such as DP-SGD enjoy resilience against a wide range of privacy attacks. Although it is possible to derive bounds for some attacks based solely on an $(\varepsilon,\delta)$-DP guarantee, meaningful bounds require a small enough privacy budget (i.e., injecting a large amount of noise), which results in a large loss in utility. This paper presents a new approach to evaluate the privacy of machine learning models against specific …
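The abstract's point that meaningful attack bounds require a small privacy budget can be illustrated numerically. The sketch below (not from the paper; a standard, loose consequence of the $(\varepsilon,\delta)$-DP definition) bounds the membership-inference advantage (true-positive rate minus false-positive rate) by $e^\varepsilon - 1 + 2\delta$; the function name and clamping to 1 are my own choices:

```python
import math

def membership_advantage_bound(epsilon: float, delta: float) -> float:
    """Upper bound on membership-inference advantage (TPR - FPR)
    implied by an (epsilon, delta)-DP guarantee.

    Uses the simple bound: advantage <= e^epsilon - 1 + 2*delta,
    clamped to 1 since advantage cannot exceed 1."""
    return min(1.0, math.exp(epsilon) - 1.0 + 2.0 * delta)

# A small budget gives a meaningful bound...
print(membership_advantage_bound(0.1, 1e-5))  # ~0.105

# ...but a moderate budget already makes the bound vacuous,
# since e^1 - 1 > 1.
print(membership_advantage_bound(1.0, 1e-5))  # 1.0
```

This matches the abstract's observation: only a small $\varepsilon$ (i.e., heavy noise, hence a large utility loss) yields non-trivial generic bounds, motivating attack-specific evaluation instead.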
