April 16, 2024, 4:44 a.m. | Yu-Yu Wu, Hung-Jui Wang, Shang-Tse Chen

cs.LG updates on arXiv.org

arXiv:2305.12118v2 Announce Type: replace
Abstract: In standard adversarial training, models are optimized to fit one-hot labels within allowable adversarial perturbation budgets. However, ignoring the underlying distribution shift introduced by these perturbations causes robust overfitting. To address this issue and enhance adversarial robustness, we analyze the characteristics of robust models and observe that they tend to produce smoother, well-calibrated outputs. Based on this observation, we propose a simple yet effective method, Annealing Self-Distillation Rectification (ADR), which …
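The abstract is cut off before ADR's details, but the idea it describes, replacing hard one-hot targets in adversarial training with smoother, better-calibrated labels obtained through self-distillation, can be sketched in a few lines. The sketch below is a guess based only on the method's name and the observation above: the EMA teacher, the temperature `tau`, and the interpolation weight `lam` (which the "annealing" in the name suggests would be scheduled over training rather than fixed) are all illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch: adversarial training with self-distilled label rectification.
# Assumptions (not from the paper): EMA teacher, fixed tau/lam placeholders
# in place of whatever annealing schedule ADR actually uses.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD within an L-infinity perturbation budget eps."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)

def rectified_labels(teacher_logits, y, num_classes, tau, lam):
    """Soften one-hot targets with a temperature-scaled teacher distribution.
    tau and lam are placeholders; ADR presumably anneals them over training."""
    one_hot = F.one_hot(y, num_classes).float()
    soft = F.softmax(teacher_logits / tau, dim=1)
    return (1 - lam) * one_hot + lam * soft

def train_epoch(model, teacher, loader, opt, num_classes,
                tau=2.0, lam=0.5, ema_decay=0.999):
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            targets = rectified_labels(teacher(x), y, num_classes, tau, lam)
        log_probs = F.log_softmax(model(x_adv), dim=1)
        loss = -(targets * log_probs).sum(dim=1).mean()  # soft-label CE
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # EMA teacher update (an assumption)
            for p_t, p_s in zip(teacher.parameters(), model.parameters()):
                p_t.mul_(ema_decay).add_(p_s, alpha=1 - ema_decay)
```

Here `teacher` would start as a frozen copy of the model (e.g. `copy.deepcopy(model)` with gradients disabled) and track it by EMA; the paper may use a different teacher or schedule.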
