May 26, 2022, 1:12 a.m. | Jiao Sun, Yu Hou, Jiin Kim, Nanyun Peng

cs.CL updates on arXiv.org arxiv.org

Task-oriented dialogue systems aim to answer users' questions and provide
immediate help. Therefore, how humans perceive their helpfulness is important.
However, neither the human-perceived helpfulness of task-oriented dialogue
systems nor its fairness implications have been studied yet. In this paper, we
define a dialogue response as helpful if it is relevant, coherent, useful, and
informative with respect to a query, and we study computational measurements of
helpfulness. Then, we propose utilizing the helpfulness level of different
groups to gauge the fairness …
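The group-level fairness gauge the abstract proposes could be sketched as follows. This is a minimal illustration, not the paper's exact metric: it assumes helpfulness scores per response have already been computed for each demographic group, and it measures (un)fairness as the largest gap between any two groups' mean helpfulness.

```python
# Hypothetical sketch: gauge fairness as the largest gap in mean
# helpfulness between groups (0 means parity under this measure).

def helpfulness_gap(scores_by_group):
    """scores_by_group maps group name -> list of per-response
    helpfulness scores (e.g. in [0, 1]). Returns the difference
    between the best- and worst-served groups' mean helpfulness."""
    means = {g: sum(s) / len(s) for g, s in scores_by_group.items()}
    return max(means.values()) - min(means.values())

# Toy scores for two (hypothetical) groups of users:
scores = {
    "group_a": [0.9, 0.8, 0.85],
    "group_b": [0.6, 0.7, 0.65],
}
print(helpfulness_gap(scores))  # prints the mean-helpfulness gap
```

A smaller gap would indicate that the system serves the groups more equitably under this particular (illustrative) measure.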

