Sept. 16, 2022, 1:16 a.m. | Jonas Belouadi, Steffen Eger

cs.CL updates on arXiv.org arxiv.org

The vast majority of evaluation metrics for machine translation are
supervised, i.e., (i) assume the existence of reference translations, (ii) are
trained on human scores, or (iii) leverage parallel data. This hinders their
applicability to cases where such supervision signals are not available. In
this work, we develop fully unsupervised evaluation metrics. To do so, we
leverage similarities and synergies between evaluation metric induction,
parallel corpus mining, and MT systems. In particular, we use an unsupervised
evaluation metric to mine …
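The idea of a fully unsupervised, reference-free metric can be sketched as scoring a hypothesis directly against the source in a shared cross-lingual embedding space, then reusing that score to mine pseudo-parallel pairs. The sketch below is only an illustration of that loop, not the paper's method: the `EMBED` vectors, `unsupervised_score`, and `mine_pairs` are all made-up toy components standing in for a real multilingual encoder and mining procedure.

```python
import math

# Toy cross-lingual "embeddings". In practice these would come from a
# multilingual encoder; the vectors below are invented for illustration.
EMBED = {
    "Der Hund bellt.": [0.9, 0.1, 0.0],     # source (German)
    "The dog barks.":  [0.88, 0.12, 0.02],  # adequate hypothesis
    "I like trains.":  [0.1, 0.2, 0.9],     # unrelated hypothesis
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def unsupervised_score(src, hyp):
    """Reference-free score: similarity of source and hypothesis embeddings
    in the shared space; no reference translation or human scores needed."""
    return cosine(EMBED[src], EMBED[hyp])

def mine_pairs(src_sents, tgt_sents, threshold=0.9):
    """Parallel-corpus mining with the same metric: keep sentence pairs the
    metric scores above a threshold as pseudo-parallel training data."""
    return [(s, t) for s in src_sents for t in tgt_sents
            if unsupervised_score(s, t) >= threshold]
```

For example, `mine_pairs(["Der Hund bellt."], ["The dog barks.", "I like trains."])` keeps only the adequate pairing, showing how one similarity function can serve both as an evaluation metric and as a mining criterion.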

