April 18, 2024, 4:47 a.m. | Yao Lu, Jiayi Wang, Raphael Tang, Sebastian Riedel, Pontus Stenetorp

cs.CL updates on arXiv.org

arXiv:2311.09569v2 Announce Type: replace
Abstract: Recent prompt optimisation approaches use the generative nature of language models to produce prompts, even rivalling the performance of human-curated prompts. In this paper, we demonstrate that randomly sampling tokens from the model vocabulary as "separators" can be as effective as language models for prompt-style text classification. Our experiments show that random separators are competitive baselines, differing by less than 1% from previous self-optimisation methods and showing a 12% average relative improvement …
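To make the idea concrete, here is a minimal sketch of a random-separator classifier. It is not the authors' exact protocol: the GPT-2 backbone, the prompt template (input text, then the sampled separator, then a candidate label), and scoring labels by the summed log-probability of their tokens are assumptions made purely for illustration.

```python
import random

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def random_separator(num_tokens=3, seed=0):
    """Sample token ids uniformly from the vocabulary and decode them into a separator string."""
    rng = random.Random(seed)
    ids = [rng.randrange(tokenizer.vocab_size) for _ in range(num_tokens)]
    return tokenizer.decode(ids)


def label_logprob(prefix, label):
    """Summed log-probability of the label's tokens, conditioned on the prompt prefix."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    label_ids = tokenizer(" " + label, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, label_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Each label token is predicted from the position just before it.
    logprobs = torch.log_softmax(logits[0, prefix_ids.size(1) - 1:-1], dim=-1)
    return logprobs.gather(1, label_ids[0].unsqueeze(1)).sum().item()


def classify(text, labels, separator):
    """Pick the label the model finds most likely after `text + separator`."""
    prefix = f"{text}{separator}"
    return max(labels, key=lambda lab: label_logprob(prefix, lab))


separator = random_separator(num_tokens=3, seed=42)
print("separator:", repr(separator))
print(classify("The movie was a delight to watch.", ["positive", "negative"], separator))
```

In this framing, prompt optimisation would amount to searching over separator strings; the abstract's claim is that a single uniform draw from the vocabulary is already a competitive baseline.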

