How Reliable Are Automatic Evaluation Methods for Instruction-Tuned LLMs?
Feb. 19, 2024, 5:47 a.m. | Ehsan Doostmohammadi, Oskar Holmström, Marco Kuhlmann
cs.CL updates on arXiv.org
Abstract: Work on instruction-tuned Large Language Models (LLMs) has used automatic methods based on text overlap and LLM judgments as cost-effective alternatives to human evaluation. In this paper, we study the reliability of such methods across a broad range of tasks and in a cross-lingual setting. In contrast to previous findings, we observe considerable variability in correlations between automatic methods and human evaluators when scores are differentiated by task type. Specifically, the widely-used ROUGE-L metric strongly …
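To make the setup the abstract describes concrete, below is a minimal sketch (not the authors' actual code) of the kind of analysis involved: scoring model outputs with ROUGE-L (a longest-common-subsequence F-measure) and correlating those automatic scores with human ratings separately per task type. All records, task names, and scores here are hypothetical, and Spearman correlation is an assumed choice of correlation measure.

# Hedged sketch: per-task correlation of ROUGE-L with human ratings.
# All data below is made up for illustration; the paper's actual tasks,
# metrics, and correlation measure may differ.
from collections import defaultdict
from scipy.stats import spearmanr

def lcs_length(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence of two token lists."""
    dp = [0] * (len(b) + 1)
    for token_a in a:
        prev = 0
        for j, token_b in enumerate(b, start=1):
            cur = dp[j]
            dp[j] = prev + 1 if token_a == token_b else max(dp[j], dp[j - 1])
            prev = cur
    return dp[len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1 between a candidate and a single reference string."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# Hypothetical records: (task_type, model_output, reference, human_score)
records = [
    ("summarization", "the cat sat on the mat", "a cat sat on a mat", 4),
    ("summarization", "dogs bark loudly", "a cat sat on a mat", 1),
    ("open_qa", "paris is the capital of france", "paris", 5),
    ("open_qa", "london", "paris", 1),
]

by_task = defaultdict(lambda: ([], []))
for task, output, reference, human in records:
    auto_scores, human_scores = by_task[task]
    auto_scores.append(rouge_l_f1(output, reference))
    human_scores.append(human)

# Per-task correlation: differentiating by task type like this is where
# the abstract reports considerable variability.
for task, (auto_scores, human_scores) in by_task.items():
    rho, _ = spearmanr(auto_scores, human_scores)
    print(f"{task}: Spearman rho = {rho:.2f}")

On a real evaluation set, aggregating the correlation over all tasks at once can mask exactly the per-task disagreement this grouping exposes.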