Instances Need More Care: Rewriting Prompts for Instances with LLMs in the Loop Yields Better Zero-Shot Performance
March 12, 2024, 4:52 a.m. | Saurabh Srivastava, Chengyue Huang, Weiguo Fan, Ziyu Yao
cs.CL updates on arXiv.org
Abstract: Large language models (LLMs) have revolutionized zero-shot task performance, mitigating the need for task-specific annotations while enhancing task generalizability. Despite these advancements, current methods that rely on trigger phrases such as "Let's think step by step" remain limited. This study introduces PRomPTed, an approach that optimizes zero-shot prompts for individual task instances in an innovative "LLMs in the loop" manner. Our comprehensive evaluation across 13 datasets and 10 task types based on GPT-4 reveals that …
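The abstract's "LLMs in the loop" idea — having one LLM rewrite the zero-shot prompt for each individual input before a second LLM solves it — can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: `call_llm` is a stand-in for any chat-completion API, and the prompt wording is invented for the example.

```python
# Hypothetical sketch of instance-level prompt rewriting with an LLM
# in the loop. `call_llm` is a placeholder for a real model call.

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM API call; echoes so the sketch runs offline."""
    return f"[LLM response to: {prompt}]"

def rewrite_prompt(task_instruction: str, instance: str) -> str:
    """Ask a 'meta' LLM to tailor the generic zero-shot prompt
    to this specific input instance."""
    meta_prompt = (
        "Rewrite the following zero-shot prompt so it best fits "
        "the specific input below.\n"
        f"Prompt: {task_instruction}\n"
        f"Input: {instance}\n"
        "Rewritten prompt:"
    )
    return call_llm(meta_prompt)

def solve(task_instruction: str, instance: str) -> str:
    """Two-stage pipeline: rewrite the prompt per instance, then
    use the tailored prompt to solve that instance."""
    tailored = rewrite_prompt(task_instruction, instance)
    return call_llm(f"{tailored}\n{instance}")

answer = solve("Let's think step by step.", "What is 17 * 24?")
```

The key contrast with standard zero-shot prompting is that the trigger phrase is not fixed: each instance gets its own rewritten prompt before the solver model ever sees it.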