Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning (Replicability Study). (arXiv:2205.00664v2 [cs.LG] UPDATED)
May 25, 2022, 1:11 a.m. | Michael Weiss, Paolo Tonella
cs.LG updates on arXiv.org
Test Input Prioritizers (TIP) for Deep Neural Networks (DNN) are an important
technique to handle the typically very large test datasets efficiently, saving
computation and labeling costs. This is particularly true for large-scale,
deployed systems, where inputs observed in production are recorded to serve as
potential test or training data for the next versions of the system. Feng et
al. propose DeepGini, a very fast and simple TIP, and show that it outperforms
more elaborate techniques such as neuron- and …
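As background to the abstract: DeepGini ranks test inputs by the Gini impurity of the DNN's softmax output, 1 − Σᵢ pᵢ², so inputs on which the model is least certain are labeled and tested first. The sketch below is a minimal illustration of that scoring idea, not the authors' implementation; the function names are my own.

```python
import numpy as np

def deepgini_scores(softmax_outputs):
    """Gini impurity of each prediction: 1 - sum(p_i^2).

    A near-uniform softmax (uncertain model) yields a score close to
    1 - 1/num_classes; a confident one-hot prediction yields ~0.
    """
    probs = np.asarray(softmax_outputs, dtype=float)
    return 1.0 - np.sum(probs ** 2, axis=1)

def prioritize(softmax_outputs):
    """Indices of inputs sorted by descending impurity (most uncertain first)."""
    return np.argsort(-deepgini_scores(softmax_outputs))

preds = [
    [0.98, 0.01, 0.01],  # near-certain prediction, low impurity
    [0.34, 0.33, 0.33],  # near-uniform prediction, high impurity
]
order = prioritize(preds)  # the uncertain input (index 1) comes first
```

Because the score is a single vectorized pass over the softmax outputs the model already produces, prioritization costs almost nothing compared with neuron-coverage-based techniques, which is the efficiency argument the abstract alludes to.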