Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems. (arXiv:2205.02604v1 [cs.CV])
Web: http://arxiv.org/abs/2205.02604
May 6, 2022, 1:10 a.m. | Gaurav Kumar Nayak, Ruchit Rawal, Rohit Lal, Himanshu Patil, Anirban Chakraborty
cs.CV updates on arXiv.org (arxiv.org)
An adversarial attack perturbs an image with imperceptible noise, causing
incorrect model predictions. Recently, a few works have shown an inherent bias
associated with such attacks (robustness bias), in which certain subgroups of a
dataset (e.g., based on class, gender, etc.) are less robust than others. This
bias not only persists even after adversarial training but often results in
severe performance discrepancies across these subgroups. Existing works
characterize a subgroup's robustness bias by checking only an individual
sample's proximity to the decision boundary. …
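As a rough illustration of the proximity measure the abstract refers to, the sketch below computes each sample's exact distance to the decision boundary of a *linear* binary classifier and averages it per subgroup. This is a simplified stand-in, not the paper's method: for deep networks such distances can only be estimated, and the function names (`boundary_distance`, `subgroup_robustness`) are hypothetical.

```python
import numpy as np

def boundary_distance(x, w, b):
    """Exact distance from sample x to the decision boundary of a linear
    classifier f(x) = w.x + b; for deep models this is only an estimate."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

def subgroup_robustness(samples, groups, w, b):
    """Mean boundary distance per subgroup: a smaller mean suggests the
    subgroup is more adversarially vulnerable (the 'robustness bias')."""
    result = {}
    for g in set(groups):
        dists = [boundary_distance(x, w, b)
                 for x, gg in zip(samples, groups) if gg == g]
        result[g] = float(np.mean(dists))
    return result

# Toy example: subgroup "b" sits closer to the boundary than subgroup "a",
# so it would be flagged as the more vulnerable subgroup.
w, b = np.array([1.0, 0.0]), 0.0
samples = [np.array([3.0, 0.0]), np.array([0.5, 1.0])]
groups = ["a", "b"]
print(subgroup_robustness(samples, groups, w, b))  # → {'a': 3.0, 'b': 0.5}
```

The paper's contribution is a more holistic measure than this per-sample distance alone; the sketch only shows the baseline notion it builds on.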