Feb. 15, 2024, 5:43 a.m. | Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr

cs.LG updates on arXiv.org

arXiv:2306.02895v2 Announce Type: replace-cross
Abstract: Decision-based evasion attacks repeatedly query a black-box classifier to generate adversarial examples. Prior work measures the cost of such attacks by the total number of queries made to the classifier. We argue this metric is flawed. Most security-critical machine learning systems aim to weed out "bad" data (e.g., malware, harmful content, etc.). Queries to such systems carry a fundamentally asymmetric cost: queries detected as "bad" come at a higher cost because they trigger additional security …
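The abstract's asymmetric-cost argument can be sketched in code. The wrapper below is a hypothetical illustration (the class, cost values, and toy oracle are not from the paper): it counts raw queries the way prior work does, but also accumulates a cost that weights "bad"-flagged queries more heavily, showing how two attacks with identical query counts can differ sharply in true cost.

```python
class AsymmetricCostCounter:
    """Hypothetical wrapper around a black-box classifier.

    Queries the classifier flags as "bad" are assumed to cost more
    (e.g., they may trigger extra security measures), so raw query
    counts alone understate the attacker's true cost.
    """

    def __init__(self, classify, bad_cost=10.0, good_cost=1.0):
        self.classify = classify      # black box: x -> "bad" or "good"
        self.bad_cost = bad_cost      # assumed penalty for flagged queries
        self.good_cost = good_cost
        self.total_queries = 0        # the metric prior work reports
        self.total_cost = 0.0         # the asymmetric metric argued for

    def query(self, x):
        self.total_queries += 1
        label = self.classify(x)
        self.total_cost += self.bad_cost if label == "bad" else self.good_cost
        return label


# Toy oracle: flags inputs with magnitude above 1.0 as "bad".
oracle = AsymmetricCostCounter(lambda x: "bad" if abs(x) > 1.0 else "good")

for x in (0.5, 2.0, 0.3):   # one flagged query among three
    oracle.query(x)

print(oracle.total_queries)  # 3
print(oracle.total_cost)     # 1 + 10 + 1 = 12.0
```

Under this accounting, an attack that probes the decision boundary mostly from the "good" side is far cheaper than one making the same number of queries from the "bad" side.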

