April 25, 2024, 7:45 p.m. | Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee

cs.CV updates on arXiv.org

arXiv:2404.15881v1 Announce Type: new
Abstract: Latency attacks against object detection are a variant of adversarial attacks that aim to inflate inference time by generating additional ghost objects in a target image. However, generating ghost objects in the black-box scenario remains a challenge, since information about these unqualified objects is opaque to the attacker. In this study, we demonstrate the feasibility of generating ghost objects in adversarial examples by extending the concept of "steal now, decrypt later" attacks. These adversarial examples, once produced, …
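To make the attack mechanics concrete: non-maximum suppression (NMS) runs in time that grows with the number of candidate boxes, so an adversary can inflate latency by perturbing the image until many spurious "ghost" candidates exceed the confidence threshold. The sketch below is a generic white-box, PGD-style illustration of that idea only, not the paper's black-box "steal now, decrypt later" method; the `model` interface, threshold, and step sizes are all assumptions made for illustration.

```python
import torch

def latency_attack(model, image, eps=8 / 255, alpha=2 / 255, steps=10,
                   conf_thresh=0.25):
    """PGD-style sketch of a latency attack (illustrative assumptions only).

    Assumes `model(x)` returns a 1-D tensor of pre-NMS objectness scores.
    Pushing many candidates above `conf_thresh` inflates the number of
    "ghost" boxes entering NMS, and hence post-processing latency.
    """
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        scores = model(x_adv)  # pre-NMS objectness scores, shape (N,)
        # Hinge loss: penalize every candidate still below the threshold,
        # so minimizing it drives more ghost boxes into NMS.
        loss = torch.clamp(conf_thresh - scores, min=0).sum()
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()          # descent step
            x_adv = image + (x_adv - image).clamp(-eps, eps)   # L_inf ball
            x_adv = x_adv.clamp(0, 1).detach()                 # valid pixels
    return x_adv
```

In the black-box setting the paper studies, such pre-NMS scores and gradients are not directly observable, which is precisely the challenge the "steal now, decrypt later" extension targets.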

