April 24, 2024, 4:42 a.m. | Xun Wu, Shaohan Huang, Wenhui Wang, Furu Wei

cs.LG updates on arXiv.org

arXiv:2404.15045v1 Announce Type: cross
Abstract: Sparse Mixtures of Experts (SMoE) scales model capacity without significant increases in training and inference costs, but exhibits the following two issues: (1) Low expert activation, where only a small subset of experts are activated for optimization. (2) Lacking fine-grained analytical capabilities for multiple semantic concepts within individual tokens. We propose Multi-Head Mixture-of-Experts (MH-MoE), which employs a multi-head mechanism to split each token into multiple sub-tokens. These sub-tokens are then assigned to and processed by …
