March 14, 2024, 4:42 a.m. | Adam Ibrahim, Benjamin Thérien, Kshitij Gupta, Mats L. Richter, Quentin Anthony, Timothée Lesort, Eugene Belilovsky, Irina Rish

cs.LG updates on arXiv.org

arXiv:2403.08763v1 Announce Type: new
Abstract: Large language models (LLMs) are routinely pre-trained on billions of tokens, only to start the process over again once new data becomes available. A much more efficient solution is to continually pre-train these models, saving significant compute compared to re-training. However, the distribution shift induced by new data typically results in degraded performance on previous data or poor adaptation to the new data. In this work, we show that a simple and scalable combination of …
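
Since the abstract stops mid-sentence, the paper's exact recipe is not reproduced here. As a rough illustration of the continual pre-training setting it describes (training further on newly arrived data while mixing in replayed old data to limit degradation on previous data), below is a minimal PyTorch sketch; the toy model, the replay fraction, and the fixed learning rate are illustrative assumptions, not the authors' method.

import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, DIM, SEQ = 100, 64, 16

def make_batch(n=8, shift=0):
    # Toy token batches; `shift` mimics the distribution shift of newly arrived data.
    return (torch.randint(0, VOCAB // 2, (n, SEQ)) + shift).clamp(max=VOCAB - 1)

model = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

def lm_loss(batch):
    # Next-token prediction loss on a toy "language model".
    logits = model(batch[:, :-1])
    return nn.functional.cross_entropy(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))

# Phase 1: pre-train on the "old" distribution, keeping a small replay buffer.
replay_buffer = []
for _ in range(50):
    batch = make_batch(shift=0)
    replay_buffer.append(batch)
    opt.zero_grad(); lm_loss(batch).backward(); opt.step()

# Phase 2: continually pre-train on "new" (shifted) data instead of re-training
# from scratch, mixing in replayed old batches to limit forgetting.
REPLAY_FRACTION = 0.25  # assumed mixing ratio, purely illustrative
for _ in range(50):
    if torch.rand(()).item() < REPLAY_FRACTION:
        batch = replay_buffer[torch.randint(len(replay_buffer), ()).item()]
    else:
        batch = make_batch(shift=VOCAB // 2)
    opt.zero_grad(); lm_loss(batch).backward(); opt.step()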

