April 12, 2024, 4:43 a.m. | Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, Danqi Chen

cs.LG updates on arXiv.org

arXiv:2310.06694v2 Announce Type: replace-cross
Abstract: The popularity of LLaMA (Touvron et al., 2023a;b) and other recently emerged moderate-sized large language models (LLMs) highlights the potential of building smaller yet powerful LLMs. Regardless, the cost of training such models from scratch on trillions of tokens remains high. In this work, we study structured pruning as an effective means to develop smaller LLMs from pre-trained, larger models. Our approach employs two key techniques: (1) targeted structured pruning, which prunes a larger model …
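The abstract describes developing smaller LLMs by structurally pruning a larger pre-trained model rather than training from scratch. As a rough illustration of what "structured" pruning means here (removing whole dimensions so the resulting weight matrices stay dense, just smaller), below is a minimal PyTorch sketch on a toy feed-forward block. It is not the paper's targeted pruning or dynamic batch loading method; the layer names, magnitude-based scoring, and sizes are illustrative assumptions only.

```python
# Minimal sketch of structured pruning on a toy transformer-style FFN block.
# NOT the paper's algorithm; it only shows the idea of dropping whole
# intermediate dimensions so the pruned model keeps smaller dense matrices.

import torch
import torch.nn as nn


def prune_ffn_intermediate(up: nn.Linear, down: nn.Linear, keep: int):
    """Keep the `keep` intermediate neurons with the largest weight norm.

    up:   hidden -> intermediate projection
    down: intermediate -> hidden projection
    Whole neurons are removed (structured), not individual weights.
    """
    # Score each intermediate neuron by the L2 norm of its incoming weights
    # (a simple magnitude heuristic, assumed here for illustration).
    scores = up.weight.norm(dim=1)                # shape: (intermediate,)
    keep_idx = scores.topk(keep).indices.sort().values

    new_up = nn.Linear(up.in_features, keep, bias=up.bias is not None)
    new_down = nn.Linear(keep, down.out_features, bias=down.bias is not None)

    with torch.no_grad():
        new_up.weight.copy_(up.weight[keep_idx])
        if up.bias is not None:
            new_up.bias.copy_(up.bias[keep_idx])
        new_down.weight.copy_(down.weight[:, keep_idx])
        if down.bias is not None:
            new_down.bias.copy_(down.bias)
    return new_up, new_down


if __name__ == "__main__":
    hidden, intermediate = 64, 256
    up, down = nn.Linear(hidden, intermediate), nn.Linear(intermediate, hidden)
    new_up, new_down = prune_ffn_intermediate(up, down, keep=128)
    x = torch.randn(2, hidden)
    print(new_down(torch.relu(new_up(x))).shape)  # torch.Size([2, 64])
```

In the paper's setting, the pruned model would then be further trained so the remaining parameters recover performance; the sketch above covers only the dimension-removal step.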

