Aug. 26, 2022, 1:11 a.m. | Ali Borji

cs.LG updates on arXiv.org

Almost all adversarial attacks are formulated to add an imperceptible
perturbation to an image in order to fool a model. Here, we consider the
opposite: adversarial examples that can fool a human but not a model. A
large, clearly perceptible perturbation is added to an image such that the
model maintains its original decision, whereas a human will most likely make a
mistake if forced to decide (or will opt not to decide at all). Existing targeted
attacks …
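
To make the setup concrete, below is a minimal sketch of one way such a perturbation might be searched for, assuming a PyTorch image classifier. This is not the paper's method: the combined objective (keep the model's original prediction while growing the perturbation's magnitude), the optimizer settings, and the `visible_but_model_preserving` helper are all illustrative assumptions.

```python
# Hypothetical sketch: grow a human-visible perturbation while keeping the
# model's original decision fixed. Illustrative only, not the paper's attack.
import torch
import torch.nn.functional as F
import torchvision.models as models


def visible_but_model_preserving(image, model, steps=200, lr=0.01, grow_weight=1.0):
    """Optimize a perturbation that is large (perceptible) but penalized for
    changing the model's originally predicted class."""
    model.eval()
    with torch.no_grad():
        original_label = model(image).argmax(dim=1)

    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)
        logits = model(adv)
        # Term 1: keep the model's decision (low cross-entropy w.r.t. its original label).
        keep_loss = F.cross_entropy(logits, original_label)
        # Term 2: encourage a large, human-visible perturbation.
        grow_loss = -grow_weight * delta.abs().mean()
        loss = keep_loss + grow_loss

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (image + delta).detach().clamp(0.0, 1.0)


if __name__ == "__main__":
    # Requires torchvision >= 0.13 for the weights API.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    x = torch.rand(1, 3, 224, 224)  # placeholder input; a real image would be used in practice
    adv_x = visible_but_model_preserving(x, model)
    same = model(adv_x).argmax(dim=1) == model(x).argmax(dim=1)
    print("model decision preserved:", bool(same))
```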

arxiv cv example kind
