May 8, 2023, 12:44 a.m. | Bingsheng Yao, Prithviraj Sen, Lucian Popa, James Hendler, Dakuo Wang

cs.CL updates on arXiv.org

Human-annotated labels and explanations are critical for training explainable
NLP models. However, unlike human-annotated labels, whose quality is easier to
calibrate (e.g., with a majority vote), human-crafted free-form explanations
can be quite subjective, as several recent works have discussed. Before such
explanations are blindly used as ground truth to train ML models, a vital
question needs to be asked: how do we evaluate the quality of a
human-annotated explanation? In this paper, we build on the view that the
quality of a human-annotated explanation …
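As an aside on the calibration point the abstract raises: below is a minimal sketch of majority-vote aggregation over crowd labels, the kind of simple quality calibration available for labels but not for free-form explanations. The function name and example data are illustrative, not from the paper.

```python
from collections import Counter

def majority_vote(labels: list[str]) -> str:
    """Return the most frequent label among annotators.

    Ties go to the label seen first (CPython 3.7+ Counter ordering).
    """
    return Counter(labels).most_common(1)[0][0]

# Three annotators label the same example; the majority label wins.
annotations = ["positive", "positive", "negative"]
print(majority_vote(annotations))  # -> positive
```

No comparably simple vote exists for free-form explanations, which is precisely the evaluation gap the paper targets.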

