Towards Adversarially Robust Dataset Distillation by Curvature Regularization
March 18, 2024, 4:41 a.m. | Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Dataset distillation (DD) compresses datasets to a fraction of their original size while preserving rich distributional information, so that models trained on the distilled datasets achieve comparable accuracy at a significantly reduced computational cost. Recent research in this area has focused on improving the accuracy of models trained on distilled datasets. In this paper, we aim to explore a new perspective of DD. We study how to embed adversarial robustness …
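The abstract is truncated, so the authors' exact objective is not shown here. As a rough, hypothetical illustration of the title's idea, curvature regularization is often approximated by penalizing how much the loss gradient changes along a random input direction (a finite-difference estimate of local loss-surface curvature); the sketch below assumes this common formulation, not the paper's specific method:

```python
import numpy as np

def curvature_penalty(grad_fn, x, h=1e-2, rng=None):
    """Finite-difference curvature regularizer (a common approximation):
    ||grad L(x + h z) - grad L(x)||^2 / h^2 for a random unit direction z,
    which estimates ||H z||^2 for the input Hessian H of the loss."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(x.shape)
    z /= np.linalg.norm(z)                    # unit-norm probe direction
    diff = grad_fn(x + h * z) - grad_fn(x)    # ~ h * H z near x
    return float(np.sum(diff ** 2)) / h ** 2  # ~ ||H z||^2

# Toy check: for the quadratic loss L(x) = 0.5 x^T A x the gradient is A x,
# so the penalty equals ||A z||^2 for the sampled direction z.
A = np.diag([1.0, 4.0])
grad = lambda x: A @ x
x0 = np.array([1.0, 1.0])
penalty = curvature_penalty(grad, x0, rng=0)
```

Adding such a term to the distillation loss would bias the distilled data toward flatter loss regions, which is the usual rationale for curvature-based robustness.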