June 11, 2024, 4:41 a.m. | Jason Cai, Hang Su, Monica Sunkara, Igor Shalyminov, Saab Mansour

cs.CL updates on arXiv.org

arXiv:2406.05588v1 Announce Type: new
Abstract: Large Language Models (LLMs) are powerful models for generation tasks, but they may not produce good-quality outputs on their first attempt. Apart from model fine-tuning, existing approaches to improving prediction accuracy and quality typically rely on LLM self-improvement or self-reflection, which incorporates feedback from the models themselves. Despite their effectiveness, these methods are hindered by high computational cost and a lack of scalability. In this work, we propose CERET, a method for refining text generations by …
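To make the cost argument concrete, here is a minimal sketch of the generic self-refinement loop the abstract critiques, not of CERET itself (the abstract is truncated before its details). The `generate` and `self_refine` names are hypothetical placeholders, assuming some chat-completion API underneath; note that each refinement round adds two extra model calls, which is the computational overhead the authors point out.

```python
def generate(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's completion API."""
    raise NotImplementedError

def self_refine(task: str, rounds: int = 3) -> str:
    """Generate, then repeatedly critique and revise.

    Each round costs one feedback call plus one revision call,
    so total LLM calls grow as 1 + 2 * rounds.
    """
    draft = generate(f"Task: {task}\nAnswer:")
    for _ in range(rounds):
        # Ask the model to critique its own draft (self-reflection).
        feedback = generate(
            f"Task: {task}\nDraft answer:\n{draft}\n"
            "Point out factual or quality problems with the draft."
        )
        # Ask the model to revise the draft using its own feedback.
        draft = generate(
            f"Task: {task}\nDraft answer:\n{draft}\n"
            f"Feedback:\n{feedback}\n"
            "Rewrite the answer, fixing the issues raised."
        )
    return draft
```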
