Web: http://arxiv.org/abs/2205.06036

May 13, 2022, 1:11 a.m. | Shangda Wu, Maosong Sun

cs.CL updates on arXiv.org arxiv.org

The dominant approaches for controlling language models are fine-tuning large language models or prompt engineering. However, these methods often require condition-specific data or considerable hand-crafting. We propose a simple new guided decoding method, Gamma Sampling, which requires neither complex engineering nor any extra data. Gamma Sampling introduces attribute-related information (provided by humans or by language models themselves) into the sampling process to guide language models to generate texts with desired attributes. Experiments on controlling topics and sentiments of …
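To make the idea concrete, here is a minimal sketch of attribute-guided sampling in the spirit the abstract describes: at each decoding step, the combined probability mass of attribute-related tokens is boosted before drawing the next token. The reweighting rule used here (raising the attribute mass `p` to `p**gamma` and rescaling both groups) and all token names are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import math
import random

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def gamma_reweight(probs, attribute_tokens, gamma=0.3):
    """Boost the combined mass of attribute-related tokens.

    Illustrative rule (an assumption, not the paper's verified formula):
    replace the total attribute mass p with p**gamma (gamma < 1 increases
    it), then rescale both groups so the distribution still sums to 1.
    """
    p_attr = sum(p for t, p in probs.items() if t in attribute_tokens)
    if not 0.0 < p_attr < 1.0:
        return dict(probs)
    boosted = p_attr ** gamma
    return {
        t: (p * boosted / p_attr) if t in attribute_tokens
        else (p * (1.0 - boosted) / (1.0 - p_attr))
        for t, p in probs.items()
    }

def sample(probs, rng):
    """Draw one token from the (reweighted) distribution."""
    r, acc = rng.random(), 0.0
    for t, p in probs.items():
        acc += p
        if r <= acc:
            return t
    return t  # guard against floating-point round-off

# Example: steer decoding toward (hypothetical) sports-topic tokens.
probs = softmax({"football": 0.0, "match": 0.0, "economy": 2.0, "the": 2.0})
steered = gamma_reweight(probs, {"football", "match"}, gamma=0.3)
token = sample(steered, random.Random(0))
```

Because the rule only rescales two groups of tokens, it plugs into any autoregressive decoder's sampling step without retraining or extra data, which is the property the abstract emphasizes.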
