April 9, 2024, 4:43 a.m. | Rohan Deepak Ajwani, Zining Zhu, Jonathan Rose, Frank Rudzicz

cs.LG updates on arXiv.org

arXiv:2404.05143v1 Announce Type: cross
Abstract: Transformer-based Large Language Models (LLMs) have shown exceptional language generation capabilities in response to text-based prompts. However, controlling the direction of generation via textual prompts has been challenging, especially with smaller models. In this work, we explore the use of Prompt Tuning to achieve controlled language generation. Generated text is steered using prompt embeddings, which are trained with a small language model used as a discriminator. Moreover, we demonstrate that these prompt embeddings can be …
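The abstract describes learning soft prompt embeddings, keeping the LLM frozen, and guiding the embeddings with a small discriminator model. Below is a minimal, self-contained sketch of that general idea in PyTorch. The modules (TinyLM, TinyDiscriminator), vocabulary size, prompt length, and loss are toy stand-ins chosen here for illustration; they are assumptions and not the models or training objective used in the paper.

```python
import torch
import torch.nn as nn

VOCAB, EMBED, PROMPT_LEN = 100, 32, 5

class TinyLM(nn.Module):
    """Stand-in for a frozen causal LM that accepts input embeddings."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.rnn = nn.GRU(EMBED, EMBED, batch_first=True)
        self.head = nn.Linear(EMBED, VOCAB)

    def forward(self, inputs_embeds):
        hidden, _ = self.rnn(inputs_embeds)
        return self.head(hidden)            # (batch, seq, vocab) logits

class TinyDiscriminator(nn.Module):
    """Stand-in for the small model that scores the desired attribute."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(VOCAB, 2)     # two attribute classes

    def forward(self, logits):
        # Score the attribute from the expected (softmax-averaged) token distribution.
        return self.proj(logits.softmax(-1).mean(dim=1))

lm, disc = TinyLM(), TinyDiscriminator()
for p in list(lm.parameters()) + list(disc.parameters()):
    p.requires_grad_(False)                 # both models stay frozen

# The soft prompt is the only trainable component.
soft_prompt = nn.Parameter(torch.randn(PROMPT_LEN, EMBED) * 0.02)
optim = torch.optim.Adam([soft_prompt], lr=1e-3)

tokens = torch.randint(0, VOCAB, (4, 12))        # toy batch of text prompts
target = torch.zeros(4, dtype=torch.long)        # desired attribute class

for step in range(3):
    inputs = lm.embed(tokens)                               # (4, 12, EMBED)
    inputs = torch.cat([soft_prompt.expand(4, -1, -1), inputs], dim=1)
    logits = lm(inputs_embeds=inputs)
    loss = nn.functional.cross_entropy(disc(logits), target)
    optim.zero_grad()
    loss.backward()                                         # gradients reach only the soft prompt
    optim.step()
    print(f"step {step}: discriminator loss {loss.item():.4f}")
```

The point of the sketch is the gradient path: the discriminator's loss backpropagates through the frozen generator into the prepended prompt embeddings, which is what lets a small classifier steer generation without fine-tuning the LLM itself.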

