Dataset Pruning: Reducing Training Data by Examining Generalization Influence. (arXiv:2205.09329v1 [cs.LG])
May 20, 2022, 1:11 a.m. | Shuo Yang, Zeke Xie, Hanyu Peng, Min Xu, Mingming Sun, Ping Li
cs.LG updates on arXiv.org
The great success of deep learning relies heavily on ever-larger training
datasets, which come at the price of huge computational and infrastructural
costs. This raises crucial questions: does all of the training data contribute
to the model's performance? How much does each individual training sample, or a
sub-training-set, affect the model's generalization, and how can we construct
the smallest subset of the entire training data to serve as a proxy training
set without significantly sacrificing the model's performance? To answer these
questions, we propose dataset …
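
The abstract is cut off before the method details, so the following is only a rough sketch of the general idea of influence-based dataset pruning, not the authors' algorithm. It scores each training sample by a common proxy for its influence, the norm of its per-sample loss gradient at a trained model, and keeps the highest-scoring fraction as the pruned proxy training set. The toy data, the gradient-norm score, and the 40% keep fraction are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of influence-based dataset pruning (a generic heuristic,
# NOT the specific algorithm proposed in the paper, which is truncated above).

rng = np.random.default_rng(0)

# Toy binary-classification data as a hypothetical stand-in for real training data.
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a simple logistic-regression model with full-batch gradient descent.
w = np.zeros(d)
for _ in range(200):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / n

# Influence proxy: the per-sample loss gradient is (p_i - y_i) * x_i,
# so its norm factors as |p_i - y_i| * ||x_i||.
p = sigmoid(X @ w)
scores = np.abs(p - y) * np.linalg.norm(X, axis=1)

# Keep the top 40% most influential samples as the pruned proxy training set
# (the keep fraction is an arbitrary illustrative choice).
keep_fraction = 0.4
k = int(keep_fraction * n)
keep_idx = np.argsort(scores)[-k:]
X_pruned, y_pruned = X[keep_idx], y[keep_idx]
print(f"kept {len(keep_idx)} of {n} samples")
```

In practice, one would retrain on (X_pruned, y_pruned) and compare test accuracy against training on the full set; the paper's contribution, per its title, is to choose the subset by examining generalization influence rather than a simple heuristic like the one above.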