April 9, 2024, 4:50 a.m. | Yukti Makhija, Priyanka Agrawal, Rishi Saket, Aravindan Raghuveer

cs.CL updates on arXiv.org

arXiv:2404.04817v1 Announce Type: new
Abstract: Large language models (LLMs) are increasingly being tuned to power complex generation tasks such as writing, fact-seeking, querying, and reasoning. Traditionally, human or model feedback for evaluating and further tuning LLM performance has been provided at the response level, enabling faster and more cost-effective assessments. However, recent works (Amplayo et al. [2022], Wu et al. [2023]) indicate that sentence-level labels may provide more accurate and interpretable feedback for LLM optimization. In this work, we introduce …
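The abstract is truncated, but the tension it sets up, supervision given per response versus scores wanted per sentence, is a learning-from-aggregate-labels problem. Below is a minimal, generic sketch of one way such a gap can be bridged (this is not the paper's method, which is cut off above): train a sentence-level scorer whose per-sentence scores are aggregated into a response-level score, so that the response-level label alone drives the gradient. All data, variable names, and the mean-aggregation choice are illustrative assumptions.

```python
# Illustrative sketch only: sentence-level scoring learned from
# response-level (aggregate) labels via mean aggregation. Not FRACTAL.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: each "response" is a bag of sentence feature vectors;
# the binary label applies to the whole response, not to sentences.
DIM = 8
bags = [rng.normal(size=(rng.integers(2, 6), DIM)) for _ in range(200)]
w_true = rng.normal(size=DIM)
labels = np.array([float(sigmoid(x @ w_true).mean() > 0.5) for x in bags])

# Logistic scorer over sentences; response score = mean of sentence scores.
w = np.zeros(DIM)
lr = 0.5
for _ in range(300):
    grad = np.zeros(DIM)
    for x, y in zip(bags, labels):
        s = sigmoid(x @ w)               # per-sentence scores
        p = s.mean()                     # aggregate response-level score
        dp = (p - y) / max(p * (1 - p), 1e-6)   # dBCE/dp, clamped
        grad += dp * (s * (1 - s) @ x) / len(s)  # chain rule through the mean
    w -= lr * grad / len(bags)

# After training, sigmoid(x @ w) yields sentence-level scores even though
# supervision was only ever available at the response level.
```

The aggregation function is the key design choice in setups like this: mean pooling spreads credit evenly across sentences, while max or min pooling attributes the response label to the most (or least) acceptable sentence.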

