Web: http://arxiv.org/abs/2209.11035

Sept. 23, 2022, 1:16 a.m. | Hugo Abonizio, Leandro Rodrigues de Souza, Roberto Lotufo, Rodrigo Nogueira

cs.CL updates on arXiv.org

The zero-shot cross-lingual ability of models pretrained on multilingual and
even monolingual corpora has spurred many hypotheses to explain this intriguing
empirical result. However, due to the costs of pretraining, most research uses
public models whose pretraining methodology, such as the choice of
tokenization, corpus size, and computational budget, might differ drastically.
When researchers pretrain their own models, they often do so under a
constrained budget, and the resulting models might underperform significantly
compared to state-of-the-art (SOTA) models. These experimental differences …

Tags: arxiv, language, language models
