Jan. 18, 2024, 7:19 p.m. | /u/APaperADay

r/MachineLearning (www.reddit.com)

**Paper**: [https://arxiv.org/abs/2310.15961](https://arxiv.org/abs/2310.15961)

**Code**: [https://github.com/llm-random/llm-random](https://github.com/llm-random/llm-random)

**Blog post**: [https://llm-random.github.io/posts/mixture\_of\_tokens/](https://llm-random.github.io/posts/mixture_of_tokens/)

**Abstract**:

>Despite the promise of Mixture of Experts (MoE) models in increasing parameter counts of Transformer models while maintaining training and inference costs, their application carries notable drawbacks. The key strategy of these models is to, for each processed token, activate at most a few experts - subsets of an extensive feed-forward layer. But this approach is not without its challenges. The operation of matching experts and tokens is discrete, which makes MoE …
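
For context on the "discrete matching" the abstract mentions: in a standard top-k MoE layer, each token's router scores are computed, only the k highest-scoring experts are selected (a hard, non-differentiable choice), and their outputs are combined with the routing weights. Below is a minimal, illustrative PyTorch sketch of that generic top-k routing, not code from the linked llm-random repository or the paper's Mixture of Tokens method; the names `topk_moe_forward`, `router_w`, and the toy expert MLPs are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def topk_moe_forward(x, router_w, experts, k=1):
    """Illustrative top-k MoE routing (not the paper's method).

    x: (num_tokens, d_model) token representations
    router_w: (d_model, num_experts) router projection (hypothetical name)
    experts: list of per-expert feed-forward modules
    """
    logits = x @ router_w                           # (num_tokens, num_experts)
    scores = F.softmax(logits, dim=-1)
    topk_scores, topk_idx = scores.topk(k, dim=-1)  # discrete top-k expert choice
    out = torch.zeros_like(x)
    for slot in range(k):
        idx = topk_idx[:, slot]                     # chosen expert id per token
        weight = topk_scores[:, slot].unsqueeze(-1)
        for e, expert in enumerate(experts):
            mask = idx == e                         # tokens routed to expert e
            if mask.any():
                out[mask] += weight[mask] * expert(x[mask])
    return out

# Toy usage with hypothetical sizes
d_model, num_experts = 16, 4
experts = [nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                         nn.Linear(4 * d_model, d_model))
           for _ in range(num_experts)]
router_w = torch.randn(d_model, num_experts)
tokens = torch.randn(8, d_model)
y = topk_moe_forward(tokens, router_w, experts, k=2)  # (8, 16)
```

The hard `topk` selection is exactly the discrete operation the abstract flags as problematic; the paper's Mixture of Tokens approach replaces this per-token discrete routing with a continuous mixing of tokens, as described in the linked blog post.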
