Dataset Distillation using Parameter Pruning. (arXiv:2209.14609v2 [cs.CV] UPDATED)
Oct. 6, 2022, 1:16 a.m. | Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
cs.CV updates on arXiv.org
In many fields, acquiring advanced models relies on large datasets, which makes
storing the datasets and training the models expensive. As a solution, dataset
distillation can synthesize a small dataset such that models trained on it
achieve performance on par with models trained on the original large dataset.
The recently proposed dataset distillation method based on matching network
parameters has proved effective for several datasets. However, a few parameters
in the distillation process are difficult to match, which harms the
distillation performance. …
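To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of parameter matching with pruning: parameters whose student-teacher gap is largest are treated as difficult to match and excluded from the matching loss. The network, the pruning criterion, and all names (make_net, difficult_param_mask, pruned_matching_loss, prune_ratio) are illustrative assumptions rather than the authors' implementation, and the differentiable inner loop that trains the student on the synthetic images is omitted.

```python
import torch
import torch.nn as nn

# Toy backbone standing in for the distillation network (hypothetical choice).
def make_net(in_dim=784, num_classes=10):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))

def flat_params(model):
    # Flatten all parameters into one vector so they can be compared elementwise.
    return torch.cat([p.reshape(-1) for p in model.parameters()])

def difficult_param_mask(student, teacher, prune_ratio=0.1):
    # Flag the prune_ratio fraction of parameters with the largest mismatch as
    # "difficult to match"; this selection criterion is illustrative only.
    gap = (flat_params(student) - flat_params(teacher)).abs()
    k = max(1, int(prune_ratio * gap.numel()))
    threshold = gap.topk(k).values.min()
    return gap < threshold  # True for parameters that are kept

def pruned_matching_loss(student, teacher, keep_mask):
    # Parameter-matching objective restricted to the kept parameters. In a full
    # distillation pipeline, gradients would flow back to the synthetic images
    # through a differentiable inner training loop of the student (omitted here).
    diff = flat_params(student) - flat_params(teacher).detach()
    return (diff[keep_mask] ** 2).mean()

if __name__ == "__main__":
    student, teacher = make_net(), make_net()
    keep = difficult_param_mask(student, teacher, prune_ratio=0.1)
    print("kept parameters:", int(keep.sum()), "of", keep.numel())
    print("pruned matching loss:", pruned_matching_loss(student, teacher, keep).item())
```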