March 26, 2024, 4:48 a.m. | Haoheng Lan, Jindong Gu, Philip Torr, Hengshuang Zhao

cs.CV updates on arXiv.org

arXiv:2303.12054v3 Announce Type: replace
Abstract: When a small number of poisoned samples are injected into the training dataset of a deep neural network, the network can be induced to exhibit malicious behavior during inference, which poses potential threats to real-world applications. While backdoor attacks have been intensively studied in classification, backdoor attacks on semantic segmentation have been largely overlooked. Unlike classification, semantic segmentation aims to classify every pixel within a given image. In this work, we explore backdoor attacks on segmentation …
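To make the poisoning setup described in the abstract concrete, below is a minimal, illustrative sketch of how a segmentation training set could be poisoned with a simple visible patch trigger. The trigger design, patch size, poison rate, and function names are assumptions for illustration only and are not the attack proposed in the paper.

```python
# Minimal sketch of training-data poisoning for semantic segmentation.
# ASSUMPTION: a plain white corner patch as the trigger and a fixed
# attacker-chosen target class; the actual paper may use a different trigger.
import numpy as np


def poison_sample(image, mask, target_class=0, patch_size=16, patch_value=255):
    """Stamp a square trigger in the top-left corner of `image` and relabel
    the corresponding pixels of `mask` to `target_class`."""
    poisoned_image = image.copy()
    poisoned_mask = mask.copy()
    poisoned_image[:patch_size, :patch_size] = patch_value   # visible trigger patch
    poisoned_mask[:patch_size, :patch_size] = target_class   # attacker-chosen pixel labels
    return poisoned_image, poisoned_mask


def poison_dataset(images, masks, poison_rate=0.05, seed=0, **kwargs):
    """Poison a small random fraction of (image, mask) pairs."""
    rng = np.random.default_rng(seed)
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, masks = images.copy(), masks.copy()
    for i in idx:
        images[i], masks[i] = poison_sample(images[i], masks[i], **kwargs)
    return images, masks, idx


if __name__ == "__main__":
    # Toy data: 100 grayscale 64x64 images with per-pixel labels in {0, 1, 2}.
    imgs = np.random.randint(0, 256, size=(100, 64, 64), dtype=np.uint8)
    msks = np.random.randint(0, 3, size=(100, 64, 64), dtype=np.uint8)
    p_imgs, p_msks, poisoned_idx = poison_dataset(imgs, msks)
    print(f"Poisoned {len(poisoned_idx)} of {len(imgs)} training samples")
```

A model trained on such a mixed dataset can learn to predict the target class for trigger-covered pixels while behaving normally on clean inputs, which is what makes per-pixel backdoors in segmentation harder to spot than image-level ones in classification.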
