Web: http://arxiv.org/abs/2201.11903

Jan. 31, 2022, 2:10 a.m. | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, Denny Zhou

cs.CL updates on arXiv.org

Although scaling up language model size has reliably improved performance on
a range of NLP tasks, even the largest models currently struggle with certain
reasoning tasks such as math word problems, symbolic manipulation, and
commonsense reasoning. This paper explores the ability of language models to
generate a coherent chain of thought -- a series of short sentences that mimic
the reasoning process a person might have when responding to a question.
Experiments show that inducing a chain of thought via …
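The technique described is a prompting format rather than a model change: each few-shot exemplar pairs a question with a worked-out reasoning chain before the final answer, so the model is induced to produce a similar chain for a new question. Below is a minimal sketch in Python of that prompt construction; the exemplar wording is illustrative, and the commented-out `query_model` call is a hypothetical stand-in for whichever language model API is used.

```python
# Minimal sketch of chain-of-thought prompting: few-shot exemplars include
# intermediate reasoning steps before the final answer, so the model imitates
# that structure when answering a new question.

# One hand-written exemplar (illustrative; in practice a small set is used).
EXEMPLARS = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 more cans of tennis "
                    "balls. Each can has 3 tennis balls. How many tennis balls "
                    "does he have now?",
        "chain_of_thought": "Roger started with 5 balls. 2 cans of 3 tennis "
                            "balls each is 6 tennis balls. 5 + 6 = 11.",
        "answer": "11",
    },
]


def build_cot_prompt(new_question: str) -> str:
    """Assemble a few-shot prompt whose exemplars show reasoning steps."""
    parts = []
    for ex in EXEMPLARS:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['chain_of_thought']} The answer is {ex['answer']}.\n"
        )
    # The model is expected to continue with its own chain of thought.
    parts.append(f"Q: {new_question}\nA:")
    return "\n".join(parts)


if __name__ == "__main__":
    prompt = build_cot_prompt(
        "The cafeteria had 23 apples. They used 20 to make lunch and bought "
        "6 more. How many apples do they have?"
    )
    print(prompt)
    # completion = query_model(prompt)  # hypothetical LLM call; API not specified here
```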

Tags: arxiv, language, language models, large language models, models, reasoning
