April 9, 2024, 4:43 a.m. | Rohan Deepak Ajwani, Zining Zhu, Jonathan Rose, Frank Rudzicz

cs.LG updates on arXiv.org

arXiv:2404.05143v1 Announce Type: cross
Abstract: Transformer-based Large Language Models (LLMs) have shown exceptional language generation capabilities in response to text-based prompts. However, controlling the direction of generation via textual prompts has been challenging, especially with smaller models. In this work, we explore the use of Prompt Tuning to achieve controlled language generation. Generated text is steered using prompt embeddings, which are trained using a small language model, used as a discriminator. Moreover, we demonstrate that these prompt embeddings can be …
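The truncated abstract does not spell out the training objective, but the core mechanism, trainable soft-prompt embeddings prepended to a frozen model's input, can be sketched. Below is a minimal PyTorch sketch assuming a HuggingFace GPT-2 backbone; the attribute-labeled example text and the plain language-modeling loss are hypothetical stand-ins for the paper's discriminator-based signal, which the abstract does not detail.

```python
# Minimal prompt-tuning sketch: trainable soft-prompt embeddings prepended
# to a frozen GPT-2. The loss below is an illustrative stand-in for the
# paper's discriminator-guided objective, not the authors' actual method.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
model.requires_grad_(False)  # backbone stays frozen; only the prompt trains

n_prompt_tokens = 20
# Trainable prompt embeddings, initialized from random vocabulary embeddings.
init_ids = torch.randint(0, tokenizer.vocab_size, (n_prompt_tokens,))
prompt_embeds = nn.Parameter(
    model.transformer.wte(init_ids.to(device)).detach().clone()
)
optimizer = torch.optim.Adam([prompt_embeds], lr=1e-3)

def forward_with_prompt(input_ids):
    """Prepend the soft prompt to the token embeddings and run the frozen LM."""
    tok_embeds = model.transformer.wte(input_ids)             # (B, T, D)
    soft = prompt_embeds.unsqueeze(0).expand(tok_embeds.size(0), -1, -1)
    inputs_embeds = torch.cat([soft, tok_embeds], dim=1)      # (B, P+T, D)
    return model(inputs_embeds=inputs_embeds).logits

# One hypothetical training step on text exhibiting the target attribute.
text = "This movie was an absolute delight to watch."
ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
logits = forward_with_prompt(ids)
# Drop the prompt positions; each remaining logit predicts the next token.
shift_logits = logits[:, n_prompt_tokens:-1, :]
shift_labels = ids[:, 1:]
loss = nn.functional.cross_entropy(
    shift_logits.reshape(-1, shift_logits.size(-1)), shift_labels.reshape(-1)
)
loss.backward()
optimizer.step()
```

Only the P prompt vectors receive gradients while the backbone never updates, which is what makes this family of methods cheap enough to steer smaller models per attribute.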

