April 2, 2024, 7:52 p.m. | ChaeHun Park, Minseok Choi, Dohyun Lee, Jaegul Choo

cs.CL updates on arXiv.org arxiv.org

arXiv:2404.01015v1 Announce Type: new
Abstract: Building a reliable and automated evaluation metric is a necessary but challenging problem for open-domain dialogue systems. Recent studies proposed evaluation metrics that assess generated responses by considering their relevance to previous dialogue histories. Although effective, these metrics evaluate individual responses directly rather than considering their relative quality compared to other responses. To handle this, we propose PairEval, a novel dialogue evaluation metric for assessing responses by comparing their quality against responses in different conversations. …

