Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking
March 25, 2024, 4:44 a.m. | Qianyu Guo, Jiaming Fu, Yawen Lu, Dongming Gan
cs.CV updates on arXiv.org arxiv.org
Abstract: In Virtual Reality (VR), adversarial attacks remain a significant security threat. Most deep learning-based methods for physical and digital adversarial attacks focus on improving attack performance by crafting adversarial examples with large, printable distortions that human observers can easily identify. However, these methods rarely constrain the naturalness or visual comfort of the generated attack image, resulting in noticeable and unnatural attacks. To address this challenge, we propose a …
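The paper's own diffusion-based pipeline is not shown in this listing, but the conspicuous baseline it contrasts with can be illustrated: classic digital attacks such as FGSM keep a perturbation within a small L∞ budget epsilon, which is exactly the visibility knob the abstract is concerned with. A minimal NumPy sketch, assuming a toy setting where the loss gradient with respect to the image is already available (here a stand-in array, not a real model):

```python
import numpy as np

def fgsm_attack(x, grad, epsilon=0.03):
    """One FGSM step: move each pixel by +/- epsilon along the sign of the
    loss gradient, then clip back to the valid [0, 1] pixel range. A small
    epsilon keeps the perturbation visually subtle; a large one produces the
    conspicuous distortions the abstract criticizes."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy example: stand-in 8x8 "image" and a stand-in gradient array.
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=(8, 8))
grad = rng.normal(size=(8, 8))
x_adv = fgsm_attack(x, grad, epsilon=0.03)

# The perturbation never exceeds the epsilon budget.
print(np.max(np.abs(x_adv - x)))
```

This is only the standard norm-bounded formulation; the paper's contribution, per the abstract, is instead to enforce naturalness via generative priors (Stable Diffusion) rather than a raw pixel budget.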