April 24, 2024, 4:48 a.m. | Zhaokun Jiang, Ziyin Zhang

cs.CL updates on arXiv.org

arXiv:2401.05176v2 Announce Type: replace
Abstract: Large language models have demonstrated parallel and even superior translation performance compared to neural machine translation (NMT) systems. However, existing comparative studies between them rely mainly on automated metrics, raising questions about the feasibility of these metrics and their alignment with human judgment. The present study investigates the convergences and divergences between automated metrics and human evaluation in assessing the quality of machine translation from ChatGPT and three NMT systems. To perform automatic assessment, four …
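The truncated abstract does not name the four automated metrics used in the study. As a minimal sketch of what corpus-level automatic MT assessment looks like in practice, the snippet below scores system outputs against references with sacrebleu; BLEU and chrF are assumed stand-ins for illustration only, not necessarily the metrics the paper employs.

```python
# Illustrative only: automatic MT evaluation with sacrebleu (BLEU, chrF).
# The specific metrics and data here are assumptions, not from the paper.
import sacrebleu

# Hypothetical system outputs (e.g., from ChatGPT or an NMT system)
# and their corresponding reference translations, one sentence per line.
hypotheses = [
    "The cat sat on the mat.",
    "Machine translation quality is improving rapidly.",
]
references = [
    [  # one reference set; additional sets could be appended
        "The cat is sitting on the mat.",
        "The quality of machine translation is improving quickly.",
    ]
]

# Corpus-level scores: higher is better for both metrics.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)

print(f"BLEU: {bleu.score:.2f}")
print(f"chrF: {chrf.score:.2f}")
```

Scores like these are exactly the kind of automated judgments whose alignment with human evaluation the study examines.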

