Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora. (arXiv:2110.08534v2 [cs.CL] UPDATED)
May 16, 2022, 1:11 a.m. | Xisen Jin, Dejiao Zhang, Henghui Zhu, Wei Xiao, Shang-Wen Li, Xiaokai Wei, Andrew Arnold, Xiang Ren
cs.CL updates on arXiv.org
Pretrained language models (PTLMs) are typically learned over a large, static
corpus and further fine-tuned for various downstream tasks. However, when
deployed in the real world, a PTLM-based model must deal with data
distributions that deviate from what the PTLM was initially trained on. In this
paper, we study a lifelong language model pretraining challenge where a PTLM is
continually updated so as to adapt to emerging data. Over a domain-incremental
research paper stream and a chronologically-ordered tweet stream, we …
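To make the setup concrete, below is a minimal sketch (not the authors' code) of lifelong pretraining: a pretrained masked language model is sequentially updated on a stream of emerging domain corpora, analogous to the paper's domain-incremental research-paper stream and chronological tweet stream. The model name, corpus names, and placeholder texts are assumptions for illustration only.

```python
# Minimal sketch of continual (lifelong) pretraining over an emerging corpus stream.
# Assumptions: roberta-base as the PTLM, toy placeholder corpora, plain MLM updates.
import torch
from torch.utils.data import DataLoader
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling)

model_name = "roberta-base"  # assumed base PTLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Corpora arriving over time (hypothetical placeholders for, e.g.,
# domain-incremental research papers followed by chronological tweets).
corpus_stream = {
    "bio_papers":  ["placeholder biomedical paper text ..."],
    "cs_papers":   ["placeholder computer science paper text ..."],
    "tweets_2020": ["placeholder tweet text ..."],
}

for domain, texts in corpus_stream.items():
    # Tokenize the newly arrived corpus; each text becomes one training example.
    encodings = [tokenizer(t, truncation=True, max_length=128) for t in texts]
    loader = DataLoader(encodings, batch_size=8, shuffle=True, collate_fn=collator)

    model.train()
    for batch in loader:  # one pass of masked-LM updates on the new data
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print(f"updated PTLM on domain: {domain}, last loss = {loss.item():.3f}")
```

In this naive sequential setup each corpus is visited once and never revisited, which is exactly the regime where continual-pretraining methods must balance adapting to the new distribution against forgetting earlier ones.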