[R] MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts
Jan. 10, 2024, 4 a.m. | /u/APaperADay
Machine Learning www.reddit.com
**Code**: [https://github.com/llm-random/llm-random](https://github.com/llm-random/llm-random)
**Abstract**:
>State Space Models (SSMs) have become serious contenders in the field of sequential modeling, challenging the dominance of Transformers. At the same time, Mixture of Experts (MoE) has significantly improved Transformer-based LLMs, including recent state-of-the-art open-source models. We propose that to unlock the potential of SSMs for scaling, they should be combined with MoE. We showcase this on Mamba, a recent SSM-based model that achieves remarkable, Transformer-like performance. Our model, **MoE-Mamba**, outperforms both Mamba and …
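Below is a minimal sketch of the idea described in the abstract: interleaving a sequence-mixing (Mamba/SSM) block with a switch-style Mixture-of-Experts feed-forward block, routing each token to a single expert. This is not the authors' implementation; the class names (`SwitchMoE`, `MoEMambaLayer`) are illustrative, and a GRU stands in for the actual selective SSM, which would come from the linked llm-random repository in real experiments.

```python
# Hedged sketch of a MoE-Mamba-style layer: interleave token mixing
# (Mamba/SSM stand-in) with a top-1 (switch) MoE feed-forward block.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwitchMoE(nn.Module):
    """Top-1 (switch) mixture-of-experts feed-forward layer."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens for per-token routing
        tokens = x.reshape(-1, x.shape[-1])
        probs = F.softmax(self.router(tokens), dim=-1)
        gate, expert_idx = probs.max(dim=-1)  # top-1 routing per token
        out = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                out[mask] = gate[mask].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)


class MoEMambaLayer(nn.Module):
    """One interleaved layer: sequence mixing + MoE feed-forward, both residual."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        # Stand-in for the Mamba selective SSM block; swap in the real
        # implementation (e.g. from the llm-random repo) for actual use.
        self.mixer = nn.GRU(d_model, d_model, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.moe = SwitchMoE(d_model, d_ff, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.mixer(self.norm1(x))[0]  # sequence mixing with residual
        x = x + self.moe(self.norm2(x))       # expert feed-forward with residual
        return x


if __name__ == "__main__":
    layer = MoEMambaLayer(d_model=64, d_ff=256, num_experts=8)
    y = layer(torch.randn(2, 16, 64))
    print(y.shape)  # torch.Size([2, 16, 64])
```

In practice, a full MoE-Mamba-style model would stack many such layers and add an auxiliary load-balancing loss so tokens spread across experts; those details are omitted here for brevity.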