June 16, 2022, 1:12 a.m. | Kushal Arora, Kurt Shuster, Sainbayar Sukhbaatar, Jason Weston

cs.CL updates on arXiv.org arxiv.org

Current language models achieve low perplexity, but their resulting
generations still suffer from toxic responses, repetitiveness, and
contradictions. The standard language modeling setup fails to address these
issues. In this paper, we introduce a new architecture, Director, that
consists of a unified generator-classifier with both a language modeling and a
classification head for each output token. Training is conducted jointly using
both standard language modeling data and data labeled with desirable and
undesirable sequences. Experiments in several settings show …
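The abstract describes a model where each output token gets both a language-modeling score and a classification score, and the two are combined at decoding time. A minimal sketch of that combination step, using a toy vocabulary and illustrative numbers (the mixing rule, weight `gamma`, and all values here are assumptions for demonstration, not taken from the paper):

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def combined_next_token(lm_logits, clf_logprob_desirable, gamma=1.0):
    """Pick the next token by mixing the LM head's log-probability with
    the classifier head's log-probability that the token is desirable,
    weighted by gamma. Returns the index of the chosen token."""
    log_lm = [math.log(p) for p in softmax(lm_logits)]
    mixed = [lp + gamma * cp for lp, cp in zip(log_lm, clf_logprob_desirable)]
    return max(range(len(mixed)), key=lambda i: mixed[i])

# Toy vocabulary of three candidate tokens; the LM alone prefers token 1,
# but the classifier marks it as undesirable (low desirability probability).
lm_logits = [1.0, 2.0, 0.5]
clf_logprob = [math.log(0.9), math.log(0.01), math.log(0.8)]

# With gamma=0 the classifier is ignored and the LM's favorite wins.
print(combined_next_token(lm_logits, clf_logprob, gamma=0.0))  # → 1
# With gamma=1 the classifier vetoes the undesirable token.
print(combined_next_token(lm_logits, clf_logprob, gamma=1.0))  # → 0
```

The design point illustrated: because both heads score every candidate token, steering away from undesirable sequences costs only an extra per-token reweighting at decode time, rather than a separate reranking pass.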

Tags: arxiv, classifiers, director, generator, language modeling
