June 29, 2022, 1:12 a.m. | Weizhou Shen, Yeyun Gong, Yelong Shen, Song Wang, Xiaojun Quan, Nan Duan, Weizhu Chen

cs.CL updates on arXiv.org (arxiv.org)

Due to exposure bias, most existing natural language generation (NLG) models
trained by maximizing the likelihood objective often produce poor text at
inference time. In this paper, to tackle this problem, we revisit the
generate-then-rank framework and propose a joint generator-ranker (JGR)
training algorithm for text generation tasks. In JGR, the generator model is
trained by maximizing two objectives: the likelihood of the training corpus and
the expected reward given by the ranker model. Meanwhile, the ranker model
takes …
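The generator objective the abstract describes, the corpus likelihood plus the expected reward assigned by the ranker, can be sketched as an MLE term combined with a policy-gradient (REINFORCE-style) term. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the toy `generator` and `ranker` modules, the tensor shapes, the sampling scheme, and the mean-baseline variance reduction are all hypothetical choices for demonstration.

```python
# Minimal sketch of a joint generator-ranker objective: MLE loss on the
# reference token plus the expected ranker reward on sampled candidates.
# All modules and shapes here are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab_size, hidden = 100, 32

# Toy "generator": maps a hidden state to next-token logits.
generator = torch.nn.Linear(hidden, vocab_size)
# Toy "ranker": assigns a scalar reward to a candidate representation.
ranker = torch.nn.Linear(hidden, 1)

def generator_loss(state, target_token, num_samples=4):
    """Likelihood term + expected-reward term (REINFORCE estimator)."""
    logits = generator(state)                        # (batch, vocab)
    log_probs = F.log_softmax(logits, dim=-1)

    # (i) Likelihood of the training corpus: cross-entropy on references.
    mle_loss = F.nll_loss(log_probs, target_token)

    # (ii) Expected reward: sample candidates, score them with the frozen
    # ranker, and weight their log-probs by a baselined reward.
    samples = torch.multinomial(log_probs.exp(), num_samples,
                                replacement=True)    # (batch, num_samples)
    sample_log_probs = log_probs.gather(1, samples)
    with torch.no_grad():
        # Hypothetical reward: a real system would re-encode each sampled
        # sequence before scoring it with the ranker.
        rewards = ranker(state).expand_as(sample_log_probs)
        rewards = rewards - rewards.mean()           # baseline, lower variance
    reward_loss = -(rewards * sample_log_probs).mean()

    return mle_loss + reward_loss

state = torch.randn(8, hidden)               # pretend encoder states
target = torch.randint(0, vocab_size, (8,))  # reference next tokens
loss = generator_loss(state, target)
loss.backward()
print(f"joint generator loss: {loss.item():.4f}")
```

In a full JGR-style setup the ranker would also be trained, alternating with the generator; this sketch freezes it and shows only the generator's combined objective.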

Tags: arxiv, natural language generation, generator, learning
