Feb. 22, 2024, 5:43 a.m. | Evangelia Gogoulou, Timothée Lesort, Magnus Boman, Joakim Nivre

cs.LG updates on arXiv.org

arXiv:2311.01200v2 Announce Type: replace-cross
Abstract: The recent increase in data and model scale for language model pre-training has led to huge training costs. In scenarios where new data becomes available over time, updating a model instead of fully retraining it would therefore provide significant gains. We study the pros and cons of updating a language model when the new data comes from new languages -- the case of continual learning under language shift. Starting from a monolingual English language model, we …
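The setup the abstract describes, continually pre-training an existing monolingual model on new-language data instead of retraining from scratch, can be sketched with standard tooling. Below is a minimal, illustrative sketch using the Hugging Face transformers Trainer; the checkpoint name (gpt2), the corpus file (danish_corpus.txt), and all hyperparameters are assumptions for illustration, not details from the paper.

```python
# Minimal sketch: continual pre-training of a monolingual English causal LM
# on a corpus in a new language, updating the model rather than retraining.
# Checkpoint, data file, and hyperparameters are hypothetical placeholders.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "gpt2"  # stand-in for the monolingual English checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus in the new language; swap in real training data.
dataset = load_dataset("text", data_files={"train": "danish_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ckpt-continual",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=tokenized,
    # mlm=False selects the causal (next-token) language modeling objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # continue pre-training from the existing weights
```

The point of the sketch is that continual pre-training reuses the existing weights as initialization, so the cost of the update scales with the new data only, not with the full training history; the trade-off the paper studies is how much performance on the original language is retained after such an update.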
