Aug. 26, 2022, 1:14 a.m. | Ali Borji

cs.CV updates on arXiv.org

Almost all adversarial attacks are formulated to add an imperceptible
perturbation to an image in order to fool a model. Here, we consider the
opposite: adversarial examples that can fool a human but not a model. A
sufficiently large, perceptible perturbation is added to an image such that the
model maintains its original decision, whereas a human will most likely make a
mistake if forced to decide (or will opt not to decide at all). Existing targeted
attacks …
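
Below is a minimal sketch of the idea described in the abstract, not the paper's actual method: a perturbation is grown to be clearly visible while a penalty keeps the model's original decision unchanged. The function name, hyperparameters, and the `model`/`image` inputs are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def visible_but_model_preserving(model, image, steps=200, lr=0.01, lam=10.0):
    """Sketch only. image: (1, C, H, W) tensor in [0, 1]; model: frozen classifier."""
    model.eval()
    with torch.no_grad():
        orig_label = model(image).argmax(dim=1)  # decision the model should keep

    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)
        logits = model(adv)
        # Encourage a large, perceptible change in pixel space ...
        perceptibility = delta.pow(2).mean()
        # ... while penalising any drift away from the original decision.
        keep_decision = F.cross_entropy(logits, orig_label)
        loss = -perceptibility + lam * keep_decision
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (image + delta).clamp(0.0, 1.0).detach()
```

The weight `lam` (an assumed hyperparameter) trades off how distorted the image looks to a human against how strongly the model is held to its original label.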
