March 18, 2024, 4:45 a.m. | Xinli Yue, Ningping Mou, Qian Wang, Lingchen Zhao

arXiv:2403.10073v1 Announce Type: new
Abstract: Deep neural networks are vulnerable to adversarial attacks, often leading to erroneous outputs. Adversarial training has been recognized as one of the most effective methods to counter such attacks. However, existing adversarial training techniques have predominantly been tested on balanced datasets, whereas real-world data often exhibit a long-tailed distribution, casting doubt on the efficacy of these methods in practical scenarios.
In this paper, we delve into adversarial training under long-tailed distributions. Through an analysis of …
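
The truncated abstract does not show the paper's long-tailed method, but the defense it builds on, adversarial training, is concrete enough to illustrate. Below is a minimal PyTorch sketch of standard PGD-based adversarial training: the inner loop searches for a worst-case perturbation inside an L-infinity ball, and the outer step trains on those perturbed examples instead of clean ones. The model, optimizer, and hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent: find a perturbation within an
    L-infinity ball of radius eps that maximizes the loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()               # ascend the loss
            delta.clamp_(-eps, eps)                    # project back into the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep pixels in [0, 1]
    return (x + delta).detach()

def adversarial_training_step(model, optimizer, x, y):
    """One step of standard adversarial training: generate
    adversarial examples on the fly, then train on them."""
    model.eval()                 # avoid updating batch-norm stats during the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with a toy model and random data (assumed shapes):
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.rand(8, 3, 32, 32)          # images scaled to [0, 1]
y = torch.randint(0, 10, (8,))
print(adversarial_training_step(model, optimizer, x, y))
```

On a long-tailed dataset, the labels `y` would be heavily skewed toward a few head classes, which is exactly the regime whose effect on this procedure the paper investigates.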
