Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators
March 26, 2024, 4:44 a.m. | Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulic, Anna Korhonen, Nigel Collier
cs.LG updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) have demonstrated promising capabilities as automatic evaluators in assessing the quality of generated natural language. However, LLMs still exhibit biases in evaluation and often struggle to generate coherent evaluations that align with human assessments. In this work, we first conduct a systematic study of the misalignment between LLM evaluators and human judgement, revealing that existing calibration methods aimed at mitigating biases are insufficient for effectively aligning LLM evaluators. Inspired by the …
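The abstract is cut off before describing the proposed approach, so the following is only a minimal sketch of the general idea of pairwise preference evaluation with an LLM judge, not the paper's specific algorithm. The `query_llm` function is a hypothetical stand-in for any chat-completion API; the prompt wording and the tie-handling rule are illustrative assumptions.

```python
# Sketch: pairwise preference judging with an LLM, querying both
# orderings to counteract position bias (a known evaluator bias).

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's API."""
    raise NotImplementedError

PROMPT = (
    "Which response better answers the instruction?\n"
    "Instruction: {instruction}\n"
    "Response A: {a}\n"
    "Response B: {b}\n"
    "Answer with a single letter, A or B."
)

def pairwise_preference(instruction: str, resp_a: str, resp_b: str) -> str:
    """Return 'A', 'B', or 'tie'.

    The judge is queried twice with the candidates in swapped order;
    only a verdict that survives the swap is accepted.
    """
    first = query_llm(
        PROMPT.format(instruction=instruction, a=resp_a, b=resp_b)
    ).strip()
    second = query_llm(
        PROMPT.format(instruction=instruction, a=resp_b, b=resp_a)
    ).strip()
    # The second query presents the candidates swapped, so invert its verdict.
    second_unswapped = "B" if second.startswith("A") else "A"
    if first.startswith(second_unswapped):
        return second_unswapped
    return "tie"  # judge disagreed with itself across orderings
```

Aggregating such pairwise verdicts over a set of candidates yields a ranking that can be compared against human preference judgements, which is the alignment question the paper studies.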