March 21, 2024, 4:45 a.m. | Roie Kazoom, Raz Birman, Ofer Hadar

cs.CV updates on arXiv.org

arXiv:2403.12988v1 Announce Type: new
Abstract: Adversarial patch attacks, crafted to compromise the integrity of Deep Neural Networks (DNNs), significantly impact Artificial Intelligence (AI) systems designed for object detection and classification tasks. The primary purpose of this work is to defend models against real-world physical attacks that target object detection and classification. We analyze attack techniques and propose a robust defense approach. We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture, and position. …

