April 12, 2024, 4:43 a.m. | Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, Danqi Chen

cs.LG updates on arXiv.org

arXiv:2310.06694v2 Announce Type: replace-cross
Abstract: The popularity of LLaMA (Touvron et al., 2023a;b) and other recently emerged moderate-sized large language models (LLMs) highlights the potential of building smaller yet powerful LLMs. Regardless, the cost of training such models from scratch on trillions of tokens remains high. In this work, we study structured pruning as an effective means to develop smaller LLMs from pre-trained, larger models. Our approach employs two key techniques: (1) targeted structured pruning, which prunes a larger model …

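The abstract is cut off in the feed, so as a rough illustration of what structured pruning means in this setting, the sketch below removes whole intermediate neurons from a toy feed-forward block so that both weight matrices shrink consistently to a target width. The FFN class, the prune_ffn helper, and the magnitude-based scoring are illustrative assumptions only, not the paper's targeted structured pruning, which works end-to-end on a pre-trained LLM rather than ranking weights by norm.

# Minimal, hypothetical sketch of structured pruning on a transformer-style
# FFN block: whole intermediate neurons are removed so the smaller model stays
# dense. This is NOT the paper's method; it only shows the general idea of
# shrinking a component to a target shape.
import torch
import torch.nn as nn

class FFN(nn.Module):
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)    # W1: d_model -> d_ff
        self.down = nn.Linear(d_ff, d_model)  # W2: d_ff -> d_model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(torch.relu(self.up(x)))

def prune_ffn(ffn: FFN, target_d_ff: int) -> FFN:
    """Keep the target_d_ff intermediate neurons with the largest weight norms."""
    # Score neuron i by the norm of its incoming row in W1 times the norm of
    # its outgoing column in W2 (a simple magnitude proxy).
    scores = ffn.up.weight.norm(dim=1) * ffn.down.weight.norm(dim=0)
    keep = torch.topk(scores, target_d_ff).indices.sort().values

    pruned = FFN(ffn.up.in_features, target_d_ff)
    with torch.no_grad():
        pruned.up.weight.copy_(ffn.up.weight[keep])
        pruned.up.bias.copy_(ffn.up.bias[keep])
        pruned.down.weight.copy_(ffn.down.weight[:, keep])
        pruned.down.bias.copy_(ffn.down.bias)
    return pruned

if __name__ == "__main__":
    ffn = FFN(d_model=64, d_ff=256)
    smaller = prune_ffn(ffn, target_d_ff=128)  # halve the intermediate width
    x = torch.randn(2, 10, 64)
    print(smaller(x).shape)  # torch.Size([2, 10, 64])

Pruning whole neurons (rather than individual weights) keeps the resulting matrices dense, which is what makes the smaller model genuinely cheaper to train further and to run.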
