Feb. 13, 2024, 5:43 a.m. | Jakub Krajewski, Jan Ludziejewski, Kamil Adamczewski, Maciej Pióro, Michał Krutul, Szymon Antoniak, Kam…

cs.LG updates on arXiv.org

Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, incorporating an expanded range of variables. Specifically, we introduce a new hyperparameter, granularity, whose adjustment enables precise control over the size of the experts. Building on this, we establish scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Leveraging these laws, we derive the …

Tags: cs.LG, cs.CL, cs.AI, Mixture of Experts, MoE, scaling laws, fine-grained experts, large language models
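To make the granularity idea concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: a hypothetical FineGrainedMoE layer in which a granularity of G multiplies the number of experts by G and shrinks each expert's hidden width by G, so the total parameter count stays roughly fixed while routing decisions become finer-grained. All names and default sizes (d_model, d_ff, n_experts, top_k) are illustrative assumptions.

# Hypothetical sketch of "granularity" in a fine-grained MoE layer.
# Assumption: granularity G splits each expert into G narrower experts
# and widens the routing accordingly; this is an illustration, not the
# paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineGrainedMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, granularity=4, top_k=2):
        super().__init__()
        self.n_experts = n_experts * granularity      # more, smaller experts
        d_expert = d_ff // granularity                # each expert is G times narrower
        self.top_k = top_k * granularity              # keep activated parameters comparable
        self.router = nn.Linear(d_model, self.n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_expert), nn.ReLU(),
                          nn.Linear(d_expert, d_model))
            for _ in range(self.n_experts)
        )

    def forward(self, x):                             # x: (tokens, d_model)
        scores = self.router(x)                       # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # normalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(self.n_experts):
                mask = idx[:, slot] == e              # tokens routed to expert e in this slot
                if mask.any():
                    w = weights[mask, slot].unsqueeze(-1)
                    out[mask] += w * self.experts[e](x[mask])
        return out

tokens = torch.randn(16, 512)
print(FineGrainedMoE()(tokens).shape)                 # torch.Size([16, 512])

In this sketch, setting granularity=1 recovers a standard MoE layer, while larger values trade fewer, wider experts for many narrower ones at roughly constant parameter and compute budgets, which is the axis the scaling laws above parameterize.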
