Large Language Models are Zero-Shot Reasoners. (arXiv:2205.11916v1 [cs.CL])
May 25, 2022, 1:12 a.m. | Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
cs.CL updates on arXiv.org
Pretrained large language models (LLMs) are widely used in many sub-fields of
natural language processing (NLP) and are generally known as excellent few-shot
learners when given task-specific exemplars. Notably, chain-of-thought (CoT)
prompting, a recent technique for eliciting complex multi-step reasoning
through step-by-step answer examples, achieved state-of-the-art
performance in arithmetic and symbolic reasoning, difficult system-2 tasks
that do not follow the standard scaling laws for LLMs. While these successes
are often attributed to LLMs' ability for few-shot learning, we show …
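The contrast the abstract draws can be sketched as prompt construction: few-shot CoT prepends worked, step-by-step answer exemplars before the target question, whereas a zero-shot variant supplies no exemplars at all and only appends a generic reasoning trigger. A minimal illustrative sketch (not code from the paper; the exemplar text and function names are hypothetical):

```python
# Few-shot CoT: one worked exemplar whose answer spells out the reasoning
# steps, shown to the model before the actual question.
FEW_SHOT_COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls "
    "each. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def few_shot_cot_prompt(question: str) -> str:
    """Prepend a step-by-step answer exemplar before the target question."""
    return FEW_SHOT_COT_EXEMPLAR + f"Q: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """No exemplars: append only a generic reasoning trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

# Usage: both prompts would be sent to an LLM; only the few-shot variant
# requires hand-crafted, task-specific exemplars.
q = "A juggler has 16 balls and half of them are golf balls. How many golf balls are there?"
print(few_shot_cot_prompt(q))
print(zero_shot_cot_prompt(q))
```

The zero-shot variant needs no per-task prompt engineering, which is the property the paper's title highlights.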
Jobs in AI, ML, Big Data
Senior Marketing Data Analyst
@ Amazon.com | Amsterdam, North Holland, NLD
Senior Data Analyst
@ MoneyLion | Kuala Lumpur, Kuala Lumpur, Malaysia
Data Management Specialist - Office of the CDO - Chase - Associate
@ JPMorgan Chase & Co. | London, London, United Kingdom
BI Data Analyst
@ Nedbank | Johannesburg, ZA
Head of Data Science and Artificial Intelligence (m/f/d)
@ Project A Ventures | Munich, Germany
Senior Data Scientist - GenAI
@ Roche | Hyderabad