May 26, 2022, 1:12 a.m. | Sachin Kumar, Biswajit Paria, Yulia Tsvetkov

cs.CL updates on arXiv.org

Large pre-trained language models are well established for their ability to
generate text that is seemingly indistinguishable from human-written text. In
this work, we study the problem of constrained sampling from such language
models: generating text that satisfies user-defined constraints. Typical
decoding strategies, which generate samples left-to-right, are not always
conducive to imposing such constraints globally. Instead, we propose MuCoLa --
a sampling procedure that combines the log-likelihood of the language model
with arbitrary differentiable constraints into a single energy function; and …
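The combination of an LM log-likelihood with differentiable constraint terms lends itself to gradient-based sampling. Below is a minimal sketch of that idea, assuming a differentiable scorer lm_log_prob (the model's log-likelihood over a sequence of soft token embeddings) and a differentiable constraint function; both names are hypothetical, and the unadjusted Langevin-style update is an illustration consistent with the "dynamics" and "embedding" tags, not the paper's reference implementation.

import torch

def energy(x, lm_log_prob, constraint, weight=1.0):
    # Single energy: negative LM log-likelihood plus a weighted
    # constraint penalty (lower energy = more fluent and more
    # constraint-satisfying). Both callables are assumptions here.
    return -lm_log_prob(x) + weight * constraint(x)

def langevin_sample(init_embeddings, lm_log_prob, constraint,
                    steps=500, step_size=0.1, noise_scale=0.01):
    # Iteratively update a whole sequence of soft token embeddings,
    # so constraints act on the sample globally rather than
    # token-by-token during left-to-right decoding.
    x = init_embeddings.detach().clone().requires_grad_(True)
    for _ in range(steps):
        e = energy(x, lm_log_prob, constraint)
        (grad,) = torch.autograd.grad(e, x)
        with torch.no_grad():
            x -= step_size * grad                   # descend the energy
            x += noise_scale * torch.randn_like(x)  # Langevin noise term
    return x.detach()

# Example usage with a random initialization (shapes are illustrative):
# sample = langevin_sample(torch.randn(20, 768), lm_log_prob, constraint)

The sketch stops at the continuous sample; in practice the resulting embeddings would presumably need to be mapped back to discrete tokens to yield text.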

Tags: arxiv, dynamics, embedding, language, language models, sampling
