March 19, 2024, 4:51 a.m. | Yuwei Sun, Hideya Ochiai, Jun Sakuma

cs.CV updates on arXiv.org arxiv.org

arXiv:2304.00436v2 Announce Type: replace
Abstract: Trojan attacks embed perturbations in input data leading to malicious behavior in neural network models. A combination of various Trojans in different modalities enables an adversary to mount a sophisticated attack on multimodal learning such as Visual Question Answering (VQA). However, multimodal Trojans in conventional methods are susceptible to parameter adjustment during processes such as fine-tuning. To this end, we propose an instance-level multimodal Trojan attack on VQA that efficiently adapts to fine-tuned models through …

abstract adversarial adversarial learning arxiv attacks behavior combination cs.ai cs.cv data embed however instance multimodal multimodal learning network neural network neuron question question answering space type via visual vqa

Founding AI Engineer, Agents

@ Occam AI | New York

AI Engineer Intern, Agents

@ Occam AI | US

AI Research Scientist

@ Vara | Berlin, Germany and Remote

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Sr. Software Development Manager, AWS Neuron Machine Learning Distributed Training

@ Amazon.com | Cupertino, California, USA