On the Fragility of Active Learners
March 26, 2024, 4:41 a.m. | Abhishek Ghose, Emma Nguyen
cs.LG updates on arXiv.org
Abstract: Active learning (AL) techniques aim to make the most of a labeling budget by iteratively selecting the instances most likely to improve prediction accuracy. However, their benefit over random sampling has not been consistent across setups, e.g., different datasets and classifiers. In this empirical study, we examine how a combination of factors might obscure any gains from an AL technique.
Focusing on text classification, we rigorously evaluate AL techniques over around 1,000 experiments that …
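The pool-based AL loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's setup: the synthetic dataset, logistic-regression classifier, and uncertainty-sampling strategy are all assumptions chosen for brevity.

```python
# Minimal sketch of pool-based active learning (uncertainty sampling)
# versus a random-sampling baseline. Illustrative only: the dataset,
# classifier, and query strategy are not those used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def run(strategy, X_pool, y_pool, X_test, y_test,
        seed=0, init=20, rounds=10, batch=10):
    """Return the test-accuracy curve over `rounds` labeling iterations."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=init, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]
    accs = []
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_pool[labeled], y_pool[labeled])
        accs.append(clf.score(X_test, y_test))
        if strategy == "uncertainty":
            # Query the instances whose top-class probability is lowest,
            # i.e. those the current model is least certain about.
            top_prob = clf.predict_proba(X_pool[unlabeled]).max(axis=1)
            picks = np.argsort(top_prob)[:batch]
        else:
            # Random-sampling baseline: query a uniform random batch.
            picks = rng.choice(len(unlabeled), size=batch, replace=False)
        chosen = [unlabeled[i] for i in picks]
        labeled.extend(chosen)
        unlabeled = [i for i in unlabeled if i not in set(chosen)]
    return accs

# Illustrative synthetic data in place of the paper's text datasets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)
al_curve = run("uncertainty", X_pool, y_pool, X_test, y_test)
rand_curve = run("random", X_pool, y_pool, X_test, y_test)
```

Whether `al_curve` actually dominates `rand_curve` depends on the dataset, classifier, and seed, which is precisely the inconsistency the study investigates.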