Aug. 23, 2022, 1:11 a.m. | Xuwang Yin, Soheil Kolouri, Gustavo K. Rohde

cs.LG updates on arXiv.org arxiv.org

The vulnerabilities of deep neural networks to adversarial examples have
become a significant concern for deploying these models in sensitive domains.
Devising a definitive defense against such attacks has proven challenging,
and methods that rely on detecting adversarial samples are only valid when
the attacker is oblivious to the detection mechanism. In this paper we propose
a principled adversarial example detection method that can withstand
norm-constrained white-box attacks. Inspired by one-versus-the-rest
classification, in a K-class classification problem, …
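The one-versus-the-rest idea the abstract alludes to can be sketched as follows. This is a minimal illustration of OvR decomposition with a reject option, assuming per-class confidence scores; the scoring function and threshold are hypothetical stand-ins, not the paper's actual detection method, which is truncated in this excerpt.

```python
# Hedged sketch: one-versus-the-rest (OvR) classification with rejection.
# A K-class problem is decomposed into K binary "class k vs. rest" scorers;
# an input that no scorer accepts can be flagged (e.g. as a potential
# adversarial example).

def ovr_predict(scores, threshold):
    """scores: list of K per-class confidence scores for one input.
    Returns the arg-max class if its score clears `threshold`,
    otherwise -1 to signal rejection."""
    best = max(range(len(scores)), key=lambda k: scores[k])
    return best if scores[best] >= threshold else -1

# Example with K = 3 hypothetical scores:
print(ovr_predict([0.1, 0.9, 0.2], threshold=0.5))   # accepted as class 1
print(ovr_predict([0.2, 0.3, 0.25], threshold=0.5))  # rejected: -1
```

The reject option is what connects OvR decomposition to detection: inputs falling outside every class's acceptance region are treated as suspicious rather than forced into a class.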

