Web: http://arxiv.org/abs/2205.03401

May 9, 2022, 1:11 a.m. | Xi Ye, Greg Durrett

cs.CL updates on arXiv.org

How can prompting a large language model like GPT-3 with explanations improve
in-context learning? We focus specifically on two NLP tasks that involve
reasoning over text, namely question answering and natural language inference.
Including explanations in the prompt and having the model generate them does
not consistently improve performance in the settings we study, contrary to
recent results on symbolic reasoning tasks (Nye et al., 2021; Wei et al.,
2022). Despite careful prompting, explanations generated by GPT-3 may not even …
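The prompting setup the abstract describes can be sketched roughly as follows: each few-shot exemplar pairs an input with a free-text explanation and an answer, and the prompt ends so that the model generates an explanation before its prediction for the test input. The Python below is an illustrative reconstruction under that assumption; the exemplars, prompt template, and field names are hypothetical stand-ins, not the paper's actual prompts.

# Illustrative sketch of few-shot prompting with explanations
# ("explain-then-predict"), the setup the abstract describes. The
# exemplars and template below are hypothetical, not the paper's prompts.

EXEMPLARS = [
    {
        "question": "Anne is taller than Bob. Bob is taller than Carl. "
                    "Is Anne taller than Carl?",
        "explanation": "Anne is taller than Bob, and Bob is taller than "
                       "Carl, so by transitivity Anne is taller than Carl.",
        "answer": "yes",
    },
    {
        "question": "All of Dana's pets are cats. Rex is Dana's pet. "
                    "Is Rex a dog?",
        "explanation": "Rex is one of Dana's pets, and all of Dana's pets "
                       "are cats, so Rex is a cat, not a dog.",
        "answer": "no",
    },
]

def build_prompt(exemplars, test_question):
    """Format each exemplar as question / explanation / answer, then append
    the test question so the model explains before it answers."""
    blocks = []
    for ex in exemplars:
        blocks.append(
            f"Q: {ex['question']}\n"
            f"Explanation: {ex['explanation']}\n"
            f"A: {ex['answer']}"
        )
    # Ending the prompt at "Explanation:" cues the model to produce an
    # explanation first, followed by its answer.
    blocks.append(f"Q: {test_question}\nExplanation:")
    return "\n\n".join(blocks)

if __name__ == "__main__":
    prompt = build_prompt(
        EXEMPLARS, "Eve is older than Finn. Is Finn older than Eve?"
    )
    print(prompt)  # This string would be sent to GPT-3 as the prompt.

The paper's finding is that whether this explanation scaffolding helps depends on the task and prompt, so any such template should be validated empirically rather than assumed to improve accuracy.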

