Influencer Backdoor Attack on Semantic Segmentation
March 26, 2024, 4:48 a.m. | Haoheng Lan, Jindong Gu, Philip Torr, Hengshuang Zhao
cs.CV updates on arXiv.org arxiv.org
Abstract: When a small number of poisoned samples are injected into the training dataset of a deep neural network, the network can be induced to exhibit malicious behavior during inference, posing potential threats to real-world applications. While backdoor attacks have been intensively studied in classification, backdoor attacks on semantic segmentation have been largely overlooked. Unlike classification, semantic segmentation aims to classify every pixel within a given image. In this work, we explore backdoor attacks on segmentation …
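To make the poisoning setup concrete, here is a minimal illustrative sketch of generic backdoor poisoning for segmentation: a small trigger patch is stamped onto a training image and the pixel labels under the patch are flipped to an attacker-chosen class. This is a simplified, hypothetical example for intuition only, not the Influencer attack described in the paper; the function name and parameters are assumptions.

```python
import numpy as np

def poison_sample(image, mask, trigger, target_class, x=0, y=0):
    """Illustrative sketch (not the paper's method): stamp a small
    trigger patch onto the image and relabel the pixels under the
    patch to the attacker's target class.

    image:  (H, W, C) uint8 array
    mask:   (H, W) integer array of per-pixel class labels
    trigger: (h, w, C) uint8 patch used as the backdoor trigger
    """
    poisoned_img = image.copy()
    poisoned_mask = mask.copy()
    h, w = trigger.shape[:2]
    poisoned_img[y:y + h, x:x + w] = trigger        # inject trigger patch
    poisoned_mask[y:y + h, x:x + w] = target_class  # flip pixel labels
    return poisoned_img, poisoned_mask
```

A network trained on a mix of clean and such poisoned samples can learn to associate the trigger pattern with the target class, so at inference time the trigger induces mislabeled pixels while clean images are segmented normally.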