Web: http://arxiv.org/abs/2205.06127

May 13, 2022, 1:10 a.m. | Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell

stat.ML updates on arXiv.org

A fundamental problem in adversarial machine learning is to quantify how much
training data is needed in the presence of evasion attacks. In this paper we
address this issue within the framework of PAC learning, focusing on the class
of decision lists. Given that distributional assumptions are essential in the
adversarial setting, we work with probability distributions on the input data
that satisfy a Lipschitz condition: nearby points have similar probability. Our
key results illustrate that the adversary's budget (that …
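
To make the setting concrete, here is a minimal Python sketch, assuming the boolean-hypercube setting with bit-flip (Hamming-ball) perturbations that this line of work on robust PAC learning typically uses; the helper names decision_list_predict and robust_at, the rule representation, and the brute-force enumeration are illustrative assumptions, not the paper's construction.

from itertools import combinations

def decision_list_predict(rules, default, x):
    # A decision list: the first rule whose literal (x[i] == v) fires decides the label.
    for (i, v), label in rules:
        if x[i] == v:
            return label
    return default

def robust_at(rules, default, x, y, budget):
    # Exact-in-the-ball robustness at x: every point within Hamming distance
    # `budget` of x must still receive label y. Brute force over all flip sets,
    # so only feasible for small budgets and dimensions.
    for r in range(budget + 1):
        for flips in combinations(range(len(x)), r):
            z = list(x)
            for i in flips:
                z[i] ^= 1  # flip the chosen bit
            if decision_list_predict(rules, default, z) != y:
                return False
    return True

# Illustrative decision list: "if x0 == 1 then 1, elif x2 == 0 then 0, else 1"
dl = [((0, 1), 1), ((2, 0), 0)]
x = (1, 0, 1)
print(decision_list_predict(dl, 1, x))    # predicted label at x: 1
print(robust_at(dl, 1, x, 1, budget=1))   # robust to any single bit flip? True here

The adversary's budget in the abstract corresponds to the Hamming radius `budget` in this per-point check; the Lipschitz condition on the input distribution (nearby points have similar probability) is a property of the data distribution and enters the sample complexity analysis rather than this check.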

arxiv attacks complexity decision learning lists
