Aug. 10, 2023, 4:48 a.m. | Xianfeng Zeng, Yijin Liu, Fandong Meng, Jie Zhou

cs.CL updates on arXiv.org

N-gram matching-based evaluation metrics, such as BLEU and chrF, are widely
utilized across a range of natural language generation (NLG) tasks. However,
recent studies have revealed a weak correlation between these matching-based
metrics and human evaluations, especially when compared with neural-based
metrics like BLEURT. In this paper, we conjecture that the performance
bottleneck in matching-based metrics may be caused by the limited diversity of
references. To address this issue, we propose to utilize \textit{multiple
references} to enhance the consistency between …
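The intuition behind multiple references can be illustrated with the clipped n-gram counting that BLEU-style metrics use: each hypothesis n-gram is credited at most as many times as it appears in the best-matching reference, so extra references can only raise the match rate. The sketch below is a minimal, self-contained illustration of that mechanism, not the paper's method or any library's implementation:

```python
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def clipped_precision(hypothesis, references, n=1):
    """Modified n-gram precision with multiple references: each hypothesis
    n-gram counts at most as many times as its maximum count in any
    single reference (the clipping used by BLEU-style metrics)."""
    hyp_counts = ngrams(hypothesis, n)
    if not hyp_counts:
        return 0.0
    # Clip each n-gram by its maximum count across all references.
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref, n).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in hyp_counts.items())
    return clipped / sum(hyp_counts.values())

# Toy example: a second reference covers "sat", which the first misses.
hyp = "the cat sat on the mat".split()
single = ["the cat is on the mat".split()]
multi = single + ["a cat sat on a mat".split()]

p_single = clipped_precision(hyp, single, n=1)  # "sat" unmatched
p_multi = clipped_precision(hyp, multi, n=1)    # every unigram matched
```

With the single reference, the unigram "sat" goes unrewarded (precision 5/6); adding a second, lexically different reference lifts the precision to 1.0, which is the diversity effect the abstract conjectures matching-based metrics are missing.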

