Gamma Sampling: Fine-grained Controlling Language Models without Training. (arXiv:2205.06036v4 [cs.CL] UPDATED)
cs.CL updates on arXiv.org
The dominant approaches to controlling language models focus on high-level
attributes (e.g., topic and sentiment). However, these methods often require
condition-specific data or are computationally expensive.
We propose a simple new guided decoding method, Gamma Sampling, which
requires no training data to achieve fine-grained controllable text generation
while maintaining fast generation speed. Gamma Sampling introduces
attribute-related information (provided by humans or language models
themselves) into the sampling process to guide language models to generate
texts …
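The abstract does not give the exact reweighting formula, but the core idea of injecting attribute information into the sampling step can be sketched as follows. This is an illustrative guess, not the paper's method: `gamma_boost`, the choice of raising the attribute tokens' total mass to the power `gamma`, and all parameter names are assumptions made for the sketch.

```python
import numpy as np

def gamma_boost(probs, attribute_ids, gamma):
    """Hypothetical sketch: reweight a next-token distribution so that the
    total probability mass of attribute-related tokens is raised to
    mass**gamma (for mass in (0, 1), a gamma < 1 increases it), then
    renormalize the remaining tokens to keep a valid distribution."""
    probs = np.asarray(probs, dtype=float)
    attr = np.zeros(len(probs), dtype=bool)
    attr[list(attribute_ids)] = True
    mass = probs[attr].sum()          # current attribute mass, assumed in (0, 1)
    target = mass ** gamma            # boosted attribute mass
    boosted = probs.copy()
    boosted[attr] *= target / mass            # scale attribute tokens up
    boosted[~attr] *= (1 - target) / (1 - mass)  # scale the rest down
    return boosted

def gamma_sample(probs, attribute_ids, gamma, rng=None):
    """Draw one token id from the attribute-boosted distribution."""
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(probs), p=gamma_boost(probs, attribute_ids, gamma)))
```

Because the transform touches only the sampling distribution, it needs no gradient updates or condition-specific data, which is consistent with the abstract's claim of training-free control at fast generation speed.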