Over-Generation Cannot Be Rewarded: Length-Adaptive Average Lagging for Simultaneous Speech Translation. (arXiv:2206.05807v2 [cs.CL] UPDATED)
June 17, 2022, 1:12 a.m. | Sara Papi, Marco Gaido, Matteo Negri, Marco Turchi
cs.CL updates on arXiv.org arxiv.org
Simultaneous speech translation (SimulST) systems aim to generate their
output with the lowest possible latency, which is normally computed in terms of
Average Lagging (AL). In this paper we highlight that, despite its widespread
adoption, AL underestimates the latency of systems that generate longer
predictions than the corresponding references. We also show that this
problem has practical relevance, as recent SimulST systems indeed tend to
over-generate. As a solution, we propose LAAL (Length-Adaptive
Average Lagging), a modified …
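The difference between the two metrics can be sketched as follows. Standard AL normalizes the per-token delays by the reference length, so a system that emits many extra tokens spreads its lagging over more terms and scores artificially low; LAAL instead normalizes by the longer of the reference and the hypothesis. This is a minimal illustrative sketch assuming token-level delays (the function name and arguments are not from the paper):

```python
def average_lagging(delays, src_len, ref_len, hyp_len=None):
    """Compute Average Lagging; if hyp_len is given, use the
    length-adaptive (LAAL) normalization max(ref_len, hyp_len),
    so over-generation cannot lower the score.

    delays[i] = number of source tokens read before emitting
    target token i+1 (a simplified, token-level view).
    """
    # LAAL replaces the reference length with the longer of the two
    tgt_len = ref_len if hyp_len is None else max(ref_len, hyp_len)
    gamma = tgt_len / src_len  # expected tokens emitted per source token
    total, tau = 0.0, 0
    for i, d in enumerate(delays):
        # lagging of token i relative to an ideal policy of rate gamma
        total += d - i / gamma
        tau += 1
        # stop at the first token emitted after the full source is read
        if d >= src_len:
            break
    return total / tau
```

For example, with delays `[1, 2, 3]` over a 3-token source and a 3-token reference, AL is 1.0; if the system over-generated a 6-token hypothesis, plain AL would be unchanged, while the LAAL variant rises to 1.5, reflecting the extra lag hidden by the longer output.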