April 22, 2024, 4:47 a.m. | Clemencia Siro, Mohammad Aliannejadi, Maarten de Rijke

cs.CL updates on arXiv.org

arXiv:2404.12994v1 Announce Type: cross
Abstract: In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback. In a conversational setting such signals are usually unavailable due to the nature of the interactions; instead, the evaluation often relies on crowdsourced evaluation labels. The role of user feedback in annotators' assessment of turns in a conversation has been little studied. We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or …
