Web: http://arxiv.org/abs/2112.08674

May 6, 2022, 1:11 a.m. | Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, Yejin Choi

cs.CL updates on arXiv.org

Large language models are increasingly capable of generating fluent-appearing
text with relatively little task-specific supervision. But can these models
accurately explain classification decisions? We consider the task of generating
free-text explanations using human-written examples in a few-shot manner. We
find that (1) authoring higher quality prompts results in higher quality
generations; and (2) surprisingly, in a head-to-head comparison, crowdworkers
often prefer explanations generated by GPT-3 to crowdsourced explanations in
existing datasets. Our human studies also show, however, that while models …
