Web: http://arxiv.org/abs/2209.03316

Sept. 16, 2022, 1:16 a.m. | Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao

cs.CL updates on arXiv.org

Pre-Training (PT) of text representations has been successfully applied to
low-resource Neural Machine Translation (NMT). However, it usually fails to
achieve notable gains (and is sometimes even harmful) on resource-rich NMT
compared with its Random-Initialization (RI) counterpart. We take the first
step toward investigating the complementarity between PT and RI in
resource-rich scenarios via two probing analyses, and find that: 1) PT improves
not the accuracy but the generalization, by achieving flatter loss landscapes
than RI; 2) PT improves …

Tags: arxiv, machine translation, pre-training, random, training, translation
