Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks. (arXiv:2205.06127v1 [cs.LG])
Web: http://arxiv.org/abs/2205.06127
May 13, 2022, 1:11 a.m. | Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell
Source: cs.LG updates on arXiv.org
A fundamental problem in adversarial machine learning is to quantify how much
training data is needed in the presence of evasion attacks. In this paper we
address this issue within the framework of PAC learning, focusing on the class
of decision lists. Given that distributional assumptions are essential in the
adversarial setting, we work with probability distributions on the input data
that satisfy a Lipschitz condition: nearby points have similar probability. Our
key results illustrate that the adversary's budget (that …
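To make the abstract's setting concrete, below is a minimal sketch (not from the paper) of a decision list over boolean features together with a brute-force check of robustness against an evasion attack that may flip up to a fixed budget of input bits. The class and function names (DecisionList, robust_correct) and the Hamming-distance perturbation model are illustrative assumptions, not the paper's construction.

```python
from itertools import combinations
from typing import List, Tuple

class DecisionList:
    """A decision list: an ordered sequence of (feature index, required value, label)
    rules followed by a default label. A rule fires when x[index] == value."""

    def __init__(self, rules: List[Tuple[int, int, int]], default: int):
        self.rules = rules
        self.default = default

    def predict(self, x: List[int]) -> int:
        for i, v, label in self.rules:
            if x[i] == v:
                return label
        return self.default

def robust_correct(dl: DecisionList, x: List[int], y: int, budget: int) -> bool:
    """Does dl output y on every x' within Hamming distance `budget` of x?
    (Brute force, exponential in the budget; for illustration only.)"""
    n = len(x)
    for k in range(budget + 1):
        for idxs in combinations(range(n), k):
            x_adv = list(x)
            for i in idxs:
                x_adv[i] = 1 - x_adv[i]  # flip the chosen bits
            if dl.predict(x_adv) != y:
                return False
    return True

# Example list: "if x[0] == 1 then 1, elif x[2] == 0 then 0, else 0"
dl = DecisionList(rules=[(0, 1, 1), (2, 0, 0)], default=0)
x, y = [1, 0, 1], 1
print(dl.predict(x))                # 1: correct on the clean point
print(robust_correct(dl, x, y, 1))  # False: flipping x[0] changes the prediction
```

The sketch only illustrates why sample complexity must be quantified against the adversary's budget: a hypothesis that is accurate on clean points can still err on adversarially perturbed neighbors, so robust learning needs guarantees over perturbation balls rather than single points.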