Web: http://arxiv.org/abs/2206.07137

June 17, 2022, 1:12 a.m. | Sören Mindermann, Jan Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Höltgen, Aidan N. Gomez, Adrien Morisot,

cs.LG updates on arXiv.org arxiv.org

Training on web-scale data can take months. But most computation and time are
wasted on redundant and noisy points that are already learnt or not learnable.
To accelerate training, we introduce Reducible Holdout Loss Selection
(RHO-LOSS), a simple but principled technique which selects approximately those
points for training that most reduce the model's generalization loss. As a
result, RHO-LOSS mitigates the weaknesses of existing data selection methods:
techniques from the optimization literature typically select 'hard' (e.g. high
loss) points, but …
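Based on the description in the abstract, the core idea is to score each candidate point by its "reducible" loss: the loss under the current model minus an estimate of the irreducible (holdout) loss, obtained from a separate model trained on held-out data, and then train only on the highest-scoring points. The sketch below illustrates this selection step under those assumptions; the function and variable names (select_batch, irreducible_loss_model, etc.) are illustrative and not taken from the paper's released code.

```python
# Minimal sketch of RHO-LOSS-style batch selection (illustrative, not the
# authors' implementation).
import torch
import torch.nn.functional as F

def select_batch(model, irreducible_loss_model, x, y, k):
    """Pick the k points in a candidate batch with the highest reducible loss.

    reducible loss = loss under the current model
                   - loss under a small model trained on a holdout set,
                     used here as an approximation of the irreducible loss.
    """
    with torch.no_grad():
        train_loss = F.cross_entropy(model(x), y, reduction="none")
        irreducible = F.cross_entropy(irreducible_loss_model(x), y, reduction="none")
    reducible = train_loss - irreducible
    top_idx = torch.topk(reducible, k).indices
    return x[top_idx], y[top_idx]

# Assumed usage in a training loop: score a large candidate batch, then take a
# gradient step only on the selected subset.
#   x_sel, y_sel = select_batch(model, irreducible_loss_model, x_big, y_big, k=32)
#   loss = F.cross_entropy(model(x_sel), y_sel)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```

The point of subtracting the holdout-model loss is that points which are noisy or unlearnable stay high-loss under both models and therefore score low, unlike with plain high-loss selection.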

