Feb. 19, 2024, 5:47 a.m. | Guiming Hardy Chen, Shunian Chen, Ziche Liu, Feng Jiang, Benyou Wang

cs.CL updates on arXiv.org

arXiv:2402.10669v1 Announce Type: new
Abstract: Adopting humans and large language models (LLMs) as judges (a.k.a. human- and LLM-as-a-judge) to evaluate the performance of existing LLMs has recently gained attention. Nonetheless, this approach simultaneously introduces potential biases from the human and LLM judges, calling into question the reliability of the evaluation results. In this paper, we propose a novel framework for investigating five types of bias in LLM and human judges. We curate a dataset with 142 samples referring to the revised Bloom's Taxonomy …
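The truncated abstract does not list the five bias types, so as illustration only, here is a minimal, self-contained Python sketch of a probe for one well-documented LLM-as-a-judge bias: position bias, where a judge favors whichever answer appears first. Each pair is judged twice with the answer order swapped, and verdicts that flip with position count as inconsistent. The `Judge` callable and `toy_judge` below are hypothetical stand-ins, not the paper's actual framework.

# Minimal sketch (not the paper's method): probe position bias in an
# LLM-as-a-judge setup by swapping answer order and checking whether
# the verdict follows the slot rather than the answer. All names here
# are illustrative assumptions; `judge` would wrap a real LLM call.
from typing import Callable, List, Tuple

# (question, answer_in_slot_1, answer_in_slot_2) -> "A" (slot 1) or "B" (slot 2)
Judge = Callable[[str, str, str], str]

def position_bias_rate(judge: Judge, cases: List[Tuple[str, str, str]]) -> float:
    """Fraction of cases where the verdict changes when answers swap positions."""
    inconsistent = 0
    for question, ans_a, ans_b in cases:
        first = judge(question, ans_a, ans_b)    # ans_a presented in slot 1
        swapped = judge(question, ans_b, ans_a)  # ans_a presented in slot 2
        # A position-consistent judge picks the same underlying answer both
        # times: if it said "A" before the swap, it should say "B" after.
        if (first == "A") != (swapped == "B"):
            inconsistent += 1
    return inconsistent / len(cases)

def toy_judge(question: str, answer_a: str, answer_b: str) -> str:
    """Toy judge that always favors slot 1 -- maximal position bias."""
    return "A"

if __name__ == "__main__":
    cases = [
        ("What is 2+2?", "4", "5"),
        ("Capital of France?", "Paris", "Lyon"),
    ]
    # The toy judge flips its verdict on every swap, so the rate is 1.00.
    print(f"position-bias rate: {position_bias_rate(toy_judge, cases):.2f}")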

