Web: http://arxiv.org/abs/2206.07694

June 16, 2022, 1:12 a.m. | Kushal Arora, Kurt Shuster, Sainbayar Sukhbaatar, Jason Weston

cs.CL updates on arXiv.org

Current language models achieve low perplexity, but their generations still suffer from toxic responses, repetitiveness, and contradictions. The standard language modeling setup fails to address these issues. In this paper, we introduce a new architecture, DIRECTOR, that consists of a unified generator-classifier with both a language modeling and a classification head for each output token. Training is conducted jointly using both standard language modeling data and data labeled with desirable and undesirable sequences. Experiments in several settings show …

Tags: arxiv, generator, language modeling
