Large Language Models are Inconsistent and Biased Evaluators
May 6, 2024, 4:47 a.m. | Rickard Stureborg, Dimitris Alikaniotis, Yoshi Suhara
cs.CL updates on arXiv.org (arxiv.org)
Abstract: The zero-shot capability of Large Language Models (LLMs) has enabled highly flexible, reference-free metrics for various tasks, making LLM evaluators common tools in NLP. However, the robustness of these LLM evaluators remains relatively understudied; existing work has mainly pursued optimal performance in terms of correlating LLM scores with human expert scores. In this paper, we conduct a series of analyses using the SummEval dataset and confirm that LLMs are biased evaluators, as they: (1) exhibit familiarity …
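The reference-free setup the abstract describes can be pictured as a zero-shot prompt that asks the model to grade one quality aspect of a summary. The sketch below is a minimal, assumed illustration of that pattern, not the paper's exact protocol: the prompt template, the 1–5 rubric, and the `score_summary` helper are all hypothetical, and `llm` stands in for any callable that maps a prompt string to a model reply.

```python
import re

def build_prompt(source: str, summary: str, aspect: str = "coherence") -> str:
    """Compose a zero-shot, reference-free evaluation prompt (illustrative template)."""
    return (
        f"Rate the {aspect} of the summary below on a scale of 1 to 5.\n"
        f"Reply with a single integer.\n\n"
        f"Source:\n{source}\n\n"
        f"Summary:\n{summary}\n\n"
        f"Score:"
    )

def score_summary(llm, source: str, summary: str, aspect: str = "coherence") -> int:
    """Query the LLM and parse the first digit 1-5 from its reply."""
    reply = llm(build_prompt(source, summary, aspect))
    match = re.search(r"[1-5]", reply)
    if match is None:
        raise ValueError(f"unparseable LLM reply: {reply!r}")
    return int(match.group())

# Usage with a stand-in model; a real evaluator would call an actual LLM API.
fake_llm = lambda prompt: "4"
score = score_summary(fake_llm, "The cat sat on the mat.", "A cat sat down.")
print(score)  # → 4
```

Biases of the kind the paper reports (e.g., familiarity effects) would surface in such a setup as systematic score shifts when only surface properties of the summary change, which is why robustness analyses re-score the same content under controlled perturbations.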