Sept. 8, 2022, 1:14 a.m. | Bryan Li, Ajay Patel, Chris Callison-Burch, Mohammad Sadegh Rasooli

cs.CL updates on arXiv.org

We propose a two-stage training approach for developing a single NMT model that
translates unseen languages both to and from English. In the first stage, we
initialize an encoder-decoder model with pretrained XLM-R and RoBERTa weights,
then perform multilingual fine-tuning on parallel data in 25 languages to
English. We find that this model can generalize to zero-shot translation of
unseen languages. In the second stage, we leverage this generalization ability
to generate synthetic parallel data from monolingual datasets, then train with …
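A minimal sketch of the stage-one warm start, assuming the Hugging Face `transformers` library; the checkpoint names are the public base models, not necessarily the exact ones the paper uses:

```python
from transformers import EncoderDecoderModel

# Warm-start a seq2seq model: XLM-R as the multilingual encoder and
# RoBERTa as the English decoder. Cross-attention layers are added to
# the decoder and randomly initialized; multilingual fine-tuning on
# X->English parallel data would follow.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "xlm-roberta-base",  # encoder: multilingual masked LM
    "roberta-base",      # decoder: English masked LM, adapted for causal decoding
)
model.save_pretrained("./stage1-many-to-en")  # hypothetical output path
```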
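For stage two, a hedged sketch of generating synthetic parallel data by back-translation: the stage-one model translates monolingual text in an unseen language into English zero-shot, and each output is paired with its source sentence. The checkpoint path, batching, and generation settings here are assumptions for illustration:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

enc_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")  # encoder-side tokenizer
dec_tok = AutoTokenizer.from_pretrained("roberta-base")      # decoder-side tokenizer
model = EncoderDecoderModel.from_pretrained("./stage1-many-to-en")  # hypothetical path

def back_translate(sentences):
    """Translate monolingual sentences of an unseen language into English,
    returning (english, original) pairs usable as synthetic parallel data."""
    inputs = enc_tok(sentences, return_tensors="pt", padding=True, truncation=True)
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        num_beams=5,
        decoder_start_token_id=dec_tok.cls_token_id,
        pad_token_id=dec_tok.pad_token_id,
    )
    english = dec_tok.batch_decode(outputs, skip_special_tokens=True)
    return list(zip(english, sentences))
```

Pairs produced this way would then serve as training data for the reverse (English-to-unseen) direction in subsequent training.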

Tags: arxiv, translation, unsupervised
