May 20, 2022, 1:11 a.m. | Shuo Yang, Zeke Xie, Hanyu Peng, Min Xu, Mingming Sun, Ping Li

cs.LG updates on arXiv.org arxiv.org

The great success of deep learning relies heavily on increasingly large
training data, which comes at the price of huge computational and infrastructural
costs. This poses crucial questions: do all training data contribute to the
model's performance? How much does each individual training sample, or a
sub-training-set, affect the model's generalization, and how can we construct the
smallest subset of the entire training data as a proxy training set without
significantly sacrificing the model's performance? To answer these questions, we propose
dataset …
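
The abstract points toward scoring individual training samples by their effect on generalization and keeping only the most useful subset. Below is a minimal sketch of that general idea, not the paper's actual method: it ranks samples with a simple gradient-norm proxy for influence and retrains on the top-scoring fraction. The dataset, model, scoring rule, and keep_ratio are all illustrative assumptions.

# Minimal sketch of influence-style dataset pruning (illustrative only,
# not the authors' algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a reference model on the full training set.
full_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Per-sample influence proxy: the norm of the logistic-loss gradient for
# sample i is |p_i - y_i| * ||x_i||; larger values suggest more influence.
p = full_model.predict_proba(X_tr)[:, 1]
scores = np.abs(p - y_tr) * np.linalg.norm(X_tr, axis=1)

# Keep the top 40% highest-scoring samples as the pruned proxy training set
# (keep_ratio is an assumed hyperparameter).
keep_ratio = 0.4
keep_idx = np.argsort(scores)[-int(keep_ratio * len(scores)):]
pruned_model = LogisticRegression(max_iter=1000).fit(X_tr[keep_idx], y_tr[keep_idx])

print("full data accuracy:  ", full_model.score(X_te, y_te))
print("pruned data accuracy:", pruned_model.score(X_te, y_te))

Comparing the two test accuracies gives a rough sense of how much performance the pruned proxy set sacrifices relative to training on all the data.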

arxiv data dataset influence pruning training training data
