Revisiting Adversarial Training under Long-Tailed Distributions
March 18, 2024, 4:45 a.m. | Xinli Yue, Ningping Mou, Qian Wang, Lingchen Zhao
cs.CV updates on arXiv.org
Abstract: Deep neural networks are vulnerable to adversarial attacks, often leading to erroneous outputs. Adversarial training has been recognized as one of the most effective methods to counter such attacks. However, existing adversarial training techniques have predominantly been tested on balanced datasets, whereas real-world data often exhibit a long-tailed distribution, casting doubt on the efficacy of these methods in practical scenarios.
In this paper, we delve into adversarial training under long-tailed distributions. Through an analysis of …
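To make the adversarial-training idea concrete, here is a minimal, self-contained sketch of FGSM-style adversarial training on a toy logistic-regression model. This is an illustration of the general technique the abstract refers to, not the paper's method; all variable names, the toy dataset, and the hyperparameters (`eps`, `lr`) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of FGSM-style adversarial training (illustrative only):
# at each step, perturb the inputs in the direction that increases the
# loss, then update the model on those perturbed inputs.

rng = np.random.default_rng(0)

# Toy binary dataset: class 0 clustered near -1, class 1 near +1.
X = np.concatenate([rng.normal(-1, 0.3, (50, 2)),
                    rng.normal(1, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
eps, lr = 0.1, 0.5  # perturbation budget and learning rate (assumed values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Gradient of the logistic loss w.r.t. the inputs gives the attack direction.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    # FGSM: shift each input by eps in the sign of its input gradient.
    X_adv = X + eps * np.sign(grad_x)
    # Train on the adversarial examples instead of the clean ones.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

The long-tailed setting the paper studies would change the data above: instead of 50/50 examples per class, a tail class might contribute only a handful, which is exactly the imbalance that casts doubt on results obtained on balanced benchmarks.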