Web: http://arxiv.org/abs/2201.11766

Jan. 31, 2022, 2:11 a.m. | Matthew Setzler, Scott Howland, Lauren Phillips

cs.LG updates on arXiv.org

Compositional generalization is a troubling blind spot for neural language
models. Recent efforts have presented techniques for improving a model's
ability to encode novel combinations of known inputs, but less work has focused
on generating novel combinations of known outputs. Here we focus on this latter
"decode-side" form of generalization in the context of gSCAN, a synthetic
benchmark for compositional generalization in grounded language understanding.
We present Recursive Decoding (RD), a novel procedure for training and using
seq2seq models, targeted …
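Because the abstract is cut off above, the sketch below shows only the standard autoregressive (greedy) decode loop that "decode-side" generalization concerns; it is not the paper's Recursive Decoding procedure. The stand-in decoder step `toy_next_token_scores` and the gSCAN-style action vocabulary are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a standard seq2seq greedy decode loop, assuming a
# gSCAN-like setup where the decoder emits a sequence of action tokens.
# `toy_next_token_scores` is a hypothetical stand-in for a trained
# decoder step; a real model would condition on learned representations.

EOS, MAX_LEN = "<eos>", 10
VOCAB = ["walk", "jump", "turn_left", "turn_right", EOS]

def toy_next_token_scores(encoded_input, prefix):
    # Hypothetical decoder step: score every vocabulary item given the
    # encoded input and the tokens generated so far. Here we simply
    # cycle through the vocabulary to keep the sketch self-contained.
    scores = {tok: 0.0 for tok in VOCAB}
    scores[VOCAB[len(prefix) % len(VOCAB)]] = 1.0
    return scores

def greedy_decode(encoded_input):
    # One left-to-right pass: repeatedly append the highest-scoring
    # token until EOS or the length limit. Decode-side compositional
    # generalization asks this loop to produce token combinations
    # never seen together during training.
    prefix = []
    while len(prefix) < MAX_LEN:
        scores = toy_next_token_scores(encoded_input, prefix)
        tok = max(scores, key=scores.get)
        if tok == EOS:
            break
        prefix.append(tok)
    return prefix

print(greedy_decode(encoded_input="walk twice then jump"))
# -> ['walk', 'jump', 'turn_left', 'turn_right']
```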

Tags: arxiv, cognition, language, recursive
