Feb. 22, 2024, 5:41 a.m. | Nathan Beck, Adithya Iyer, Rishabh Iyer

cs.LG updates on arXiv.org

arXiv:2402.13468v1 Announce Type: new
Abstract: As supervised fine-tuning of pre-trained models within NLP applications increases in popularity, larger corpora of annotated data are required, especially with increasing parameter counts in large language models. Active learning, which attempts to mine and annotate unlabeled instances to improve model performance maximally fast, is a common choice for reducing the annotation cost; however, most methods typically ignore class imbalance and either assume access to initial annotated data or require multiple rounds of active learning …
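
For context, the sketch below illustrates the conventional setup the abstract contrasts against: a pool-based, uncertainty-driven active learning round that assumes an initial labeled seed set and is repeated over multiple rounds. It is a minimal illustration only; the helper names, toy data, and least-confidence criterion are assumptions for this example, not the method proposed in the paper.

```python
# Minimal sketch of one round of standard pool-based active learning
# for text classification (assumes an initial labeled seed set).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def select_batch(model, X_unlabeled, budget):
    """Pick the `budget` most uncertain unlabeled points (least-confidence)."""
    probs = model.predict_proba(X_unlabeled)
    uncertainty = 1.0 - probs.max(axis=1)      # low top-class probability => uncertain
    return np.argsort(-uncertainty)[:budget]   # indices of the most uncertain items

# Toy data: a small labeled seed set plus an unlabeled pool (illustrative only).
seed_texts, seed_labels = ["good movie", "terrible plot"], [1, 0]
pool_texts = ["great acting", "boring and slow", "masterpiece", "awful pacing"]

vec = TfidfVectorizer().fit(seed_texts + pool_texts)
X_seed, X_pool = vec.transform(seed_texts), vec.transform(pool_texts)

# Train on the seed set, then query the pool; in practice the selected items
# would be annotated, added to the labeled set, and the loop repeated.
model = LogisticRegression().fit(X_seed, seed_labels)
picked = select_batch(model, X_pool, budget=2)
print("Annotate next:", [pool_texts[i] for i in picked])
```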

