Web: http://arxiv.org/abs/2102.07711

May 5, 2022, 1:11 a.m. | Anshuka Rangi, Long Tran-Thanh, Haifeng Xu, Massimo Franceschetti

stat.ML updates on arXiv.org

We study bandit algorithms under data poisoning attacks in a bounded-reward
setting. We consider a strong attacker model in which the attacker can observe
both the learner's selected actions and their corresponding rewards, and can
contaminate the rewards with additive noise. We show that any bandit algorithm
with regret $O(\log T)$ can be forced to suffer regret $\Omega(T)$ by an
attacker whose expected total contamination is only $O(\log T)$. This amount of
contamination is also necessary, as we prove that there exists …
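To make the attacker model concrete, here is a minimal sketch of a reward-poisoning attack against a standard UCB1 learner. The instance (three arms with Gaussian rewards clipped to [0, 1]), the choice of target arm, the 0.1 margin, and the attack rule (push every non-target arm's poisoned empirical mean just below the target arm's mean) are illustrative assumptions in the spirit of the attack model described in the abstract, not the authors' construction; the sketch only shows why an $O(\log T)$ contamination budget can be enough to force linear regret.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3                                    # number of arms (hypothetical instance)
true_means = np.array([0.9, 0.5, 0.4])   # arm 0 is optimal
target = 1                               # suboptimal arm the attacker wants pulled
T = 20_000

counts = np.zeros(K)     # learner's pull counts
sums = np.zeros(K)       # learner's observed (possibly poisoned) reward sums
attack_cost = 0.0        # total |additive contamination| spent by the attacker

def ucb_index(t):
    # UCB1 index with the usual exploration bonus; unpulled arms get priority.
    means = sums / np.maximum(counts, 1)
    bonus = np.sqrt(2 * np.log(max(t, 1)) / np.maximum(counts, 1))
    idx = np.where(counts == 0, np.inf, means + bonus)
    return int(np.argmax(idx))

for t in range(1, T + 1):
    arm = ucb_index(t)
    reward = np.clip(rng.normal(true_means[arm], 0.1), 0.0, 1.0)  # bounded reward

    # Attacker step: it observes (arm, reward). If the learner pulled a
    # non-target arm, add negative noise so that arm's poisoned empirical mean
    # stays a small margin below the target arm's mean. Under UCB1 such arms
    # are then pulled only O(log T) times, so the total correction spent here
    # is O(log T) in expectation, matching the budget in the abstract.
    noise = 0.0
    if arm != target:
        margin = 0.1
        target_mean = sums[target] / counts[target] if counts[target] > 0 else 0.0
        desired = target_mean - margin
        needed = (sums[arm] + reward) - desired * (counts[arm] + 1)
        noise = -max(needed, 0.0)
    attack_cost += abs(noise)

    counts[arm] += 1
    sums[arm] += reward + noise   # the learner only ever sees the poisoned reward

print("pulls per arm:", counts)
print("fraction of pulls on target arm:", counts[target] / T)
print("total contamination spent:", attack_cost)
```

In this toy run the target arm ends up receiving almost all of the pulls, so the learner incurs regret linear in $T$, while the attacker's total contamination grows only with the handful of non-target pulls, which is the qualitative $\Omega(T)$-regret versus $O(\log T)$-contamination tradeoff stated in the abstract.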

