March 1, 2024, 6:45 a.m. | /u/Capital_Reply_7838

Machine Learning | www.reddit.com

[https://arxiv.org/abs/2402.19267](https://arxiv.org/abs/2402.19267)


Abstract: Employing extensive datasets enables the training of multilingual machine translation models; however, these models often fail to accurately translate sentences within specialized domains. Although obtaining and translating domain-specific data incurs high costs, it is indispensable for high-quality translations. Hence, finding the most 'effective' data in an unsupervised setting becomes a practical strategy for reducing labeling costs. Recent research indicates that this effective data could be found by selecting 'properly difficult data' based on its volume. This means …
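As a rough illustration of what unsupervised, difficulty-based data selection can look like, here is a minimal sketch. It is not the paper's method: the difficulty proxy (length-normalized negative log-likelihood under a general-domain unigram model), the percentile band, and all names (`select_properly_difficult`, `band`, `volume`) are assumptions made for this example only.

```python
# Hypothetical sketch of unsupervised "difficulty-based" data selection.
# Assumptions (not from the paper): difficulty is approximated by the
# mean negative log unigram probability of a sentence under a
# general-domain frequency model, and "properly difficult" means a
# middle percentile band, capped at the desired data volume.

from collections import Counter
import math


def build_unigram_model(general_corpus):
    """Estimate unigram log-probabilities from a general-domain corpus."""
    counts = Counter(tok for sent in general_corpus for tok in sent.split())
    total = sum(counts.values())
    vocab = len(counts)

    def logprob(tok):
        # Add-one smoothing: unseen (domain-specific) tokens get a finite,
        # low probability and therefore a high difficulty score.
        return math.log((counts[tok] + 1) / (total + vocab))

    return logprob


def difficulty(sentence, logprob):
    """Length-normalized negative log-likelihood as a difficulty proxy."""
    toks = sentence.split()
    return -sum(logprob(t) for t in toks) / max(len(toks), 1)


def select_properly_difficult(candidates, general_corpus, volume, band=(0.5, 0.9)):
    """Keep up to `volume` sentences from a middle difficulty band:
    not trivial (already covered by the generic model) and not so hard
    that they are likely noise."""
    logprob = build_unigram_model(general_corpus)
    scored = sorted(candidates, key=lambda s: difficulty(s, logprob))
    lo = int(band[0] * len(scored))
    hi = int(band[1] * len(scored))
    return scored[lo:hi][:volume]


if __name__ == "__main__":
    general = ["the cat sat on the mat", "she reads a book every day"]
    domain_pool = [
        "the patient was given 5 mg of apixaban twice daily",
        "the cat sat on the mat",
        "qwfj zxqv plm kktr wwzz",  # likely noise, very high difficulty
        "renal clearance was reduced in elderly subjects",
    ]
    print(select_properly_difficult(domain_pool, general, volume=2))
```

In practice the difficulty score would come from the translation model itself (e.g., per-token loss) rather than a unigram model, but the selection logic above conveys the general idea of keeping data that is neither too easy nor likely noise, sized to a labeling budget.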

