Aug. 5, 2022, 1:12 a.m. | Ali Borji

cs.CV updates on arXiv.org

Almost all adversarial attacks are formulated to add an imperceptible
perturbation to an image in order to fool a model. Here, we consider the
opposite: adversarial examples that can fool a human but not a model. A
perturbation large enough to be perceptible is added to an image such that the
model maintains its original decision, whereas a human will most likely make a
mistake if forced to decide (or will opt not to decide at all). Existing targeted
attacks …
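As a rough illustration of the idea described above (not the paper's actual attack), the following sketch greedily grows a clearly visible perturbation while accepting only steps that leave the model's original prediction unchanged. The classifier "model", the input batch "x", and all parameter values are illustrative assumptions, written against PyTorch.

    # Minimal sketch, assuming a PyTorch image classifier `model` and an
    # input batch `x` in [0, 1]. Names and step sizes are illustrative.
    import torch

    def grow_visible_perturbation(model, x, step=0.05, iters=100):
        """Greedily add visible noise, keeping only the steps that leave
        the model's original prediction unchanged."""
        model.eval()
        with torch.no_grad():
            orig_pred = model(x).argmax(dim=1)      # decision to preserve
            delta = torch.zeros_like(x)
            for _ in range(iters):
                # Propose a larger, perceptible change to the image.
                proposal = delta + step * torch.randn_like(x)
                new_pred = model((x + proposal).clamp(0, 1)).argmax(dim=1)
                # Accept the step only if the model's decision is unchanged.
                if torch.equal(new_pred, orig_pred):
                    delta = proposal
            return (x + delta).clamp(0, 1)

The result is an image that looks heavily corrupted to a human observer yet, by construction, still receives the model's original label.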

