Oct. 6, 2022, 1:16 a.m. | Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama

cs.CV updates on arXiv.org

In many fields, acquiring advanced models relies on large datasets, which makes
storing datasets and training models expensive. As a solution, dataset distillation
can synthesize a small dataset such that models trained on it achieve performance
on par with models trained on the original large dataset. The recently proposed
dataset distillation method based on matching network parameters has proven
effective for several datasets. However, a few parameters in the distillation
process are difficult to match, which harms distillation performance. …
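
To make the parameter-matching idea concrete, here is a minimal PyTorch sketch; it is not the paper's code. `expert_start` and `expert_target` are assumed to be parameter dicts saved from a network trained on the real data, and the architecture (`net`) is assumed to have no buffers such as BatchNorm statistics for simplicity. The sketch unrolls a few SGD steps on the synthetic images and measures how far the resulting parameters end up from the expert's target parameters.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def parameter_matching_loss(net, expert_start, expert_target,
                            syn_images, syn_labels,
                            inner_steps=10, inner_lr=0.01):
    """Differentiable parameter-matching loss for a set of synthetic images.

    expert_start / expert_target: parameter dicts (name -> tensor) taken from a
    network trained on the real data at two points along its trajectory.
    """
    # Start the student from the expert's initial parameters, made differentiable
    # so gradients can flow back to the synthetic images through the updates.
    params = {k: v.clone().detach().requires_grad_(True)
              for k, v in expert_start.items()}

    # Unroll a few SGD steps on the synthetic data, keeping the graph so the
    # final parameters remain a differentiable function of syn_images.
    for _ in range(inner_steps):
        logits = functional_call(net, params, (syn_images,))
        loss = F.cross_entropy(logits, syn_labels)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: p - inner_lr * g
                  for (k, p), g in zip(params.items(), grads)}

    # Distance between the student's final parameters and the expert's target
    # parameters, normalized by how far the expert itself moved.
    num = sum(((params[k] - expert_target[k]) ** 2).sum() for k in params)
    den = sum(((expert_start[k] - expert_target[k]) ** 2).sum() for k in expert_start)
    return num / (den + 1e-12)
```

In an outer loop, this loss would be backpropagated into `syn_images` (e.g., `loss.backward()` followed by a step of an optimizer over the synthetic images), gradually shaping the distilled set. The truncated abstract and the "pruning" tag suggest the authors deal with parameters that are difficult to match; pruning such parameters before computing the final distance is one place where that idea could slot in, though the exact method is not shown here.
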

Tags: arxiv, dataset distillation, pruning
