April 30, 2024, 4:42 a.m. | Yang Ba, Michelle V. Mancenido, Erin K. Chiou, Rong Pan

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.17582v1 Announce Type: cross
Abstract: As crowdsourcing emerges as an efficient and cost-effective method for obtaining labels for machine learning datasets, it is important to assess the quality of crowd-provided data, so as to improve analysis performance and reduce biases in subsequent machine learning tasks. Given the lack of ground truth in most crowdsourcing settings, we define data quality in terms of annotators' consistency and credibility. Unlike simple scenarios, where the Kappa coefficient and the intraclass correlation coefficient typically apply, …
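
The paper's own measurement model is not shown here, but for context, below is a minimal Python sketch of the two simple-scenario agreement metrics the abstract names: Cohen's kappa (via scikit-learn) and a one-way intraclass correlation, ICC(1). The annotator labels and variable names are illustrative assumptions, not data from the paper.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary labels from two crowd annotators on ten items
# (illustrative data, not from the paper).
annotator_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
annotator_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

# Cohen's kappa: raw percent agreement corrected for chance agreement.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.3f}")

# ICC(1), one-way random effects: the share of rating variance
# attributable to items rather than to rater disagreement.
ratings = np.stack([annotator_a, annotator_b], axis=1).astype(float)
n, k = ratings.shape
grand_mean = ratings.mean()
row_means = ratings.mean(axis=1)
msb = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)          # between-item mean square
msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-item mean square
icc1 = (msb - msw) / (msb + (k - 1) * msw)
print(f"ICC(1): {icc1:.3f}")

Per the abstract, these coefficients suit only simple scenarios; the paper targets crowdsourcing settings where they do not apply.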
