Jan. 18, 2024, 7:19 p.m. | /u/APaperADay

r/MachineLearning | www.reddit.com

**Paper**: [https://arxiv.org/abs/2310.15961](https://arxiv.org/abs/2310.15961)

**Code**: [https://github.com/llm-random/llm-random](https://github.com/llm-random/llm-random)

**Blog post**: [https://llm-random.github.io/posts/mixture\_of\_tokens/](https://llm-random.github.io/posts/mixture_of_tokens/)

**Abstract**:

>Despite the promise of Mixture of Experts (MoE) models in increasing parameter counts of Transformer models while maintaining training and inference costs, their application carries notable drawbacks. The key strategy of these models is to, for each processed token, activate at most a few experts - subsets of an extensive feed-forward layer. But this approach is not without its challenges. The operation of matching experts and tokens is discrete, which makes MoE …
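
For intuition, below is a minimal, self-contained PyTorch sketch that contrasts the discrete top-1 token-to-expert routing the abstract criticizes with a continuous token-mixing alternative in the spirit of the paper. All names, shapes, and the grouping scheme are illustrative assumptions, not the authors' implementation (see the code link above for that).

```python
# Minimal sketch (not the paper's implementation): contrasts the discrete
# top-1 routing of standard MoE with a fully continuous token mixture.
# Sizes and the grouping scheme below are illustrative assumptions.
import torch
import torch.nn.functional as F

d_model, n_experts, group = 16, 4, 8          # hypothetical dimensions
tokens = torch.randn(group, d_model)          # a group of token embeddings
router = torch.nn.Linear(d_model, n_experts)  # learned routing/mixing scores
experts = torch.nn.ModuleList(
    [torch.nn.Linear(d_model, d_model) for _ in range(n_experts)]
)

# --- Standard MoE: each token is sent to its single best expert.
# The argmax makes the token->expert assignment discrete, which is the
# non-differentiable matching operation the abstract points to.
scores = router(tokens)                          # (group, n_experts)
top1 = scores.argmax(dim=-1)                     # hard assignment per token
moe_out = torch.stack(
    [experts[e.item()](t) for t, e in zip(tokens, top1)]
)

# --- Continuous alternative in the spirit of token mixing: each expert sees
# a soft, weighted mixture of the whole group, so gradients flow everywhere.
weights = F.softmax(scores, dim=0)               # per expert, sums to 1 over tokens
mixtures = weights.T @ tokens                    # (n_experts, d_model)
expert_out = torch.stack(
    [experts[e](mixtures[e]) for e in range(n_experts)]
)
mot_out = weights @ expert_out                   # redistribute outputs to tokens
print(moe_out.shape, mot_out.shape)              # both (group, d_model)
```

The contrast this sketch is meant to show: the first path contains an argmax, so the router receives no gradient through the assignment itself, while the second path is a chain of softmax and matrix products and is differentiable end to end.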

