May 23, 2022, 1:12 a.m. | Eric Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman

cs.CL updates on arXiv.org

Generating step-by-step "chain-of-thought" rationales improves language model
performance on complex reasoning tasks like mathematics or commonsense
question-answering. However, inducing language model rationale generation
currently requires either constructing massive rationale datasets or
sacrificing accuracy by using only few-shot inference. We propose a technique
to iteratively leverage a small number of rationale examples and a large
dataset without rationales, to bootstrap the ability to perform successively
more complex reasoning. This technique, the "Self-Taught Reasoner" (STaR),
relies on a simple loop: generate rationales …
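
For concreteness, here is a minimal Python sketch of the bootstrapping loop the abstract describes, including the retry-with-hint ("rationalization") step from the paper. All names here (`star_loop`, `generate`, `finetune`) are hypothetical stand-ins, not the authors' implementation:

```python
from typing import Callable, Iterable, List, Tuple

Example = Tuple[str, str]  # (question, gold_answer)

def star_loop(
    base_model,
    dataset: Iterable[Example],
    generate: Callable,   # generate(model, question, hint=None) -> (rationale, answer); hypothetical
    finetune: Callable,   # finetune(base_model, triples) -> new model; hypothetical
    n_iterations: int = 5,
):
    """Bootstrap rationale generation from a small seed set of rationale examples."""
    model = base_model
    for _ in range(n_iterations):
        kept: List[Tuple[str, str, str]] = []
        for question, gold in dataset:
            # Generate a rationale and answer, few-shot prompted with rationale examples.
            rationale, answer = generate(model, question)
            if answer != gold:
                # "Rationalization": retry with the correct answer given as a hint,
                # keeping the rationale only if it actually reaches that answer.
                rationale, answer = generate(model, question, hint=gold)
            if answer == gold:
                kept.append((question, rationale, answer))
        # Fine-tune on all rationales that ultimately yielded correct answers,
        # then repeat with the improved model.
        model = finetune(base_model, kept)
    return model
```

One detail worth noting: in the paper, each fine-tuning round starts from the original pre-trained model rather than the latest checkpoint, which the sketch mirrors by passing `base_model` to `finetune`.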

arxiv bootstrapping reasoning
