Sept. 13, 2022, 1:16 a.m. | Bryan Li, Mohammad Sadegh Rasooli, Ajay Patel, Chris Callison-Burch

cs.CL updates on arXiv.org

We propose a two-stage training approach for developing a single NMT model to translate unseen languages both to and from English. In the first stage, we initialize an encoder-decoder model with pretrained XLM-R and RoBERTa weights, then perform multilingual fine-tuning on parallel data in 25 languages to English. We find that this model can generalize to zero-shot translation for unseen languages. In the second stage, we leverage this generalization ability to generate synthetic parallel data from monolingual datasets, then train with …
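To make the two-stage recipe concrete, here is a minimal sketch using Hugging Face transformers. The EncoderDecoderModel pairing, the xlm-roberta-base/roberta-base checkpoints, the toy sentences, and the single gradient step are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of the two-stage recipe. Checkpoints, data, and hyperparameters
# are assumptions for illustration, not the paper's exact configuration.
import torch
from transformers import AutoTokenizer, EncoderDecoderModel

# Stage 1: tie a pretrained multilingual XLM-R encoder to a pretrained
# RoBERTa decoder (the cross-attention weights are newly initialized).
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "xlm-roberta-base",  # multilingual encoder
    "roberta-base",      # English decoder
)
src_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
tgt_tok = AutoTokenizer.from_pretrained("roberta-base")
model.config.decoder_start_token_id = tgt_tok.cls_token_id
model.config.pad_token_id = tgt_tok.pad_token_id

# Multilingual fine-tuning on X->English parallel pairs; a real run would
# loop over parallel data in 25 languages, one toy German pair shown here.
src = src_tok("Das ist ein Test.", return_tensors="pt")
labels = tgt_tok("This is a test.", return_tensors="pt").input_ids
loss = model(input_ids=src.input_ids,
             attention_mask=src.attention_mask,
             labels=labels).loss
loss.backward()

# Stage 2: apply the stage-1 model zero-shot to monolingual text in an
# unseen language to synthesize English targets, yielding synthetic
# parallel pairs for further training.
model.eval()
with torch.no_grad():
    mono = src_tok("Dit is een zin in een ongeziene taal.",
                   return_tensors="pt")
    synth_ids = model.generate(mono.input_ids, max_new_tokens=40)
print(tgt_tok.batch_decode(synth_ids, skip_special_tokens=True))
```

The stage-2 synthesis is a back-translation-style use of the zero-shot model: monolingual text in an unseen language is paired with its generated English translation, and the resulting synthetic corpus feeds further training.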

Tags: arxiv, translation, unsupervised
