Evading Black-box Classifiers Without Breaking Eggs
Feb. 15, 2024, 5:43 a.m. | Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr
cs.LG updates on arXiv.org
Abstract: Decision-based evasion attacks repeatedly query a black-box classifier to generate adversarial examples. Prior work measures the cost of such attacks by the total number of queries made to the classifier. We argue this metric is flawed. Most security-critical machine learning systems aim to weed out "bad" data (e.g., malware, harmful content, etc.). Queries to such systems carry a fundamentally asymmetric cost: queries detected as "bad" come at a higher cost because they trigger additional security …
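The asymmetric cost model the abstract argues for can be illustrated with a minimal sketch. The cost weights and query log below are hypothetical, chosen only to show why two attacks with identical query counts can differ sharply in cost:

```python
def attack_cost(query_labels, bad_cost=10.0, good_cost=1.0):
    """Total cost of a decision-based attack when queries flagged
    as "bad" (e.g., detected malware) are more expensive for the
    attacker than queries that pass as "good".

    The 10:1 cost ratio is an illustrative assumption, not a
    value taken from the paper."""
    return sum(bad_cost if label == "bad" else good_cost
               for label in query_labels)

# Two attacks issue the same total number of queries (5),
# so the traditional metric rates them equally ...
mostly_good = ["good", "good", "good", "good", "bad"]
mostly_bad  = ["bad", "bad", "bad", "bad", "good"]

# ... but under asymmetric pricing their costs diverge:
print(attack_cost(mostly_good))  # 14.0
print(attack_cost(mostly_bad))   # 41.0
```

Under a plain query count both attacks cost 5; the asymmetric metric instead penalizes the attack whose queries are repeatedly flagged as "bad".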