May 6, 2022, 1:11 a.m. | Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, Yejin Choi

cs.CL updates on arXiv.org

Large language models are increasingly capable of generating fluent-appearing
text with relatively little task-specific supervision. But can these models
accurately explain classification decisions? We consider the task of generating
free-text explanations using human-written examples in a few-shot manner. We
find that (1) authoring higher quality prompts results in higher quality
generations; and (2) surprisingly, in a head-to-head comparison, crowdworkers
often prefer explanations generated by GPT-3 to crowdsourced explanations in
existing datasets. Our human studies also show, however, that while models …

ai collaboration arxiv free human text
