April 15, 2024, 4:42 a.m. | Sunny Sanyal, Sujay Sanghavi, Alexandros G. Dimakis

cs.LG updates on arXiv.org

arXiv:2404.08634v1 Announce Type: cross
Abstract: We study the effectiveness of a simple approach to develop a small base language model (LM) starting from an existing large base LM: first inherit a few transformer blocks from the larger LM, and then train this smaller model on a very small subset (0.1%) of the raw pretraining data of the larger model. We call our simple recipe Inheritune and first demonstrate it for building a small base LM with 1.5B parameters using 1B …
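The abstract only sketches the recipe at a high level. Below is a minimal illustration of what the block-inheritance step could look like for a GPT-2-style model with Hugging Face `transformers`; the model name `gpt2-large`, the choice of `n_blocks=6`, and the helper `inherit_submodel` are illustrative assumptions, not the paper's actual setup.

```python
# A minimal sketch of Inheritune-style block inheritance, assuming a
# GPT-2-style architecture; names and sizes here are illustrative.
import copy
from transformers import GPT2LMHeadModel

def inherit_submodel(large_model: GPT2LMHeadModel, n_blocks: int) -> GPT2LMHeadModel:
    """Build a smaller LM that keeps the first n_blocks transformer
    blocks (plus embeddings, final layer norm, and LM head) of the
    larger model."""
    small_config = copy.deepcopy(large_model.config)
    small_config.n_layer = n_blocks
    small_model = GPT2LMHeadModel(small_config)

    # Copy token/position embeddings, final layer norm, and LM head.
    small_model.transformer.wte.load_state_dict(large_model.transformer.wte.state_dict())
    small_model.transformer.wpe.load_state_dict(large_model.transformer.wpe.state_dict())
    small_model.transformer.ln_f.load_state_dict(large_model.transformer.ln_f.state_dict())
    small_model.lm_head.load_state_dict(large_model.lm_head.state_dict())

    # Inherit the first n_blocks transformer blocks verbatim.
    for i in range(n_blocks):
        small_model.transformer.h[i].load_state_dict(
            large_model.transformer.h[i].state_dict()
        )
    return small_model

large = GPT2LMHeadModel.from_pretrained("gpt2-large")  # placeholder large LM
small = inherit_submodel(large, n_blocks=6)
# `small` would then be further trained on a ~0.1% subset of the
# larger model's pretraining data, per the recipe described above.
```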
