Strings from the Library of Babel: Random Sampling as a Strong Baseline for Prompt Optimisation
April 18, 2024, 4:47 a.m. | Yao Lu, Jiayi Wang, Raphael Tang, Sebastian Riedel, Pontus Stenetorp
cs.CL updates on arXiv.org
Abstract: Recent prompt optimisation approaches use the generative nature of language models to produce prompts -- even rivaling the performance of human-curated prompts. In this paper, we demonstrate that randomly sampling tokens from the model vocabulary as "separators" can be as effective as language models for prompt-style text classification. Our experiments show that random separators are competitive baselines, having less than a 1% difference compared to previous self-optimisation methods and showing a 12% average relative improvement …
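The core idea — drawing random vocabulary tokens to serve as prompt "separators" between the input text and the label slot — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy `VOCAB` list stands in for a real model's token vocabulary, and the template format is a hypothetical example of a prompt-style classification layout.

```python
import random

# Hypothetical toy vocabulary standing in for a language model's
# token vocabulary (the paper samples from the actual model vocab).
VOCAB = ["->", "::", "therefore", "answer", "##", "so", "=>", "label", "is", "maps"]

def sample_separators(vocab, k, seed=0):
    # Randomly draw k distinct candidate separator tokens.
    rng = random.Random(seed)
    return rng.sample(vocab, k)

def make_prompt(text, separator):
    # Prompt-style classification template: input text, then the
    # sampled separator, then the (empty) label slot for the model.
    return f"{text} {separator}"

separators = sample_separators(VOCAB, 3)
prompts = [make_prompt("This movie was great.", s) for s in separators]
for p in prompts:
    print(p)
```

In the paper's setting, each sampled separator defines one candidate prompt; candidates are then scored on a validation set and the best is kept, with no language-model-driven prompt generation involved.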