Distilling Adversarial Robustness Using Heterogeneous Teachers
Feb. 27, 2024, 5:46 a.m. | Jieren Deng, Aaron Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar
cs.CV updates on arXiv.org arxiv.org
Abstract: Achieving resiliency against adversarial attacks is necessary prior to deploying neural network classifiers in domains where misclassification incurs substantial costs, e.g., self-driving cars or medical imaging. Recent work has demonstrated that robustness can be transferred from an adversarially trained teacher to a student model using knowledge distillation. However, current methods perform distillation using a single adversarial and vanilla teacher and consider homogeneous architectures (i.e., residual networks) that are susceptible to misclassifying examples from similar adversarial …
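The transfer the abstract describes builds on the standard knowledge-distillation loss: the student is trained to match the teacher's temperature-softened output distribution. Below is a minimal NumPy sketch of that generic (Hinton-style) loss, not the authors' specific multi-teacher method; in robust distillation the teacher logits would be computed on adversarially perturbed inputs.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients keep a comparable magnitude across T.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

When the student's logits match the teacher's, the loss is zero; any mismatch yields a positive penalty, which is what drives the robustness transfer.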