Feb. 6, 2024, 5:48 a.m. | Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang

cs.LG updates on arXiv.org

We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation. We address the conjecture that larger models do not make for better teachers by showing strong gains in out-of-distribution robustness when distilling from pretrained foundation models. Following this finding, we propose Discrete Adversarial Distillation (DAD), which leverages a robust teacher to generate adversarial examples and a VQGAN to discretize them, creating more informative samples than standard …
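To make the described pipeline concrete, below is a minimal, hypothetical sketch of one Discrete Adversarial Distillation training step: adversarial examples are generated against the robust pretrained teacher, discretized by passing them through a VQGAN's codebook, and then used to distill the teacher into the student. The names `teacher`, `student`, `vqgan`, and `vqgan.reconstruct` are illustrative assumptions, not the authors' API, and hyperparameters are placeholders.

```python
# Hypothetical sketch of one DAD training step (assumed interfaces, not the paper's code).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=5):
    """Generate adversarial examples against the (frozen) robust teacher with PGD."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def dad_step(student, teacher, vqgan, x, y, T=4.0, alpha_kd=0.9):
    """One DAD step: attack the teacher, discretize with a VQGAN, then distill."""
    x_adv = pgd_attack(teacher, x, y)            # adversarial examples from the robust teacher
    with torch.no_grad():
        x_disc = vqgan.reconstruct(x_adv)        # assumed helper: encode/decode through the discrete codebook
        t_logits = teacher(x_disc)               # teacher targets on the discretized samples
    s_logits = student(x_disc)
    ce = F.cross_entropy(s_logits, y)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * T * T
    return (1 - alpha_kd) * ce + alpha_kd * kd   # combined hard-label and distillation loss
```

The discretization step is what distinguishes this from standard adversarial training: projecting the perturbed images through the VQGAN codebook keeps the augmented samples on a more natural image manifold while preserving the informative signal from the teacher's adversarial directions.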
