April 16, 2024, 4:44 a.m. | Yu-Yu Wu, Hung-Jui Wang, Shang-Tse Chen

cs.LG updates on arXiv.org

arXiv:2305.12118v2 Announce Type: replace
Abstract: In standard adversarial training, models are optimized to fit one-hot labels within allowable adversarial perturbation budgets. However, ignoring the underlying distribution shift brought about by perturbations causes robust overfitting. To address this issue and enhance adversarial robustness, we analyze the characteristics of robust models and find that they tend to produce smoother and better-calibrated outputs. Based on this observation, we propose a simple yet effective method, Annealing Self-Distillation Rectification (ADR), which …
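
The abstract names ADR but is truncated before the details, so the following is only a minimal sketch of what label rectification via an annealed self-distillation teacher could look like. The EMA teacher, the linear temperature schedule, and the interpolation weight `lam` are assumptions made for illustration, not the paper's confirmed formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of ADR-style label rectification. The EMA teacher,
# temperature schedule, and interpolation weight are assumptions for
# illustration only.

def anneal_temperature(step, total_steps, t_start=2.0, t_end=1.0):
    """Linearly anneal the softmax temperature over training (assumed schedule)."""
    frac = min(step / max(total_steps, 1), 1.0)
    return t_start + frac * (t_end - t_start)

@torch.no_grad()
def rectify_labels(teacher_logits, onehot, temperature, lam=0.7):
    """Blend hard one-hot labels with the teacher's softened, smoother output."""
    soft = F.softmax(teacher_logits / temperature, dim=1)
    return lam * onehot + (1.0 - lam) * soft

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """Keep the teacher as an exponential moving average of the student."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)

def soft_cross_entropy(student_logits, targets):
    """Cross-entropy against soft (rectified) targets instead of one-hot labels."""
    return -(targets * F.log_softmax(student_logits, dim=1)).sum(dim=1).mean()
```

In an adversarial training loop, `student_logits` would come from the adversarial example, and the rectified targets would replace the one-hot labels in the cross-entropy loss, giving the student the smoother, better-calibrated supervision the abstract associates with robust models.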

