Scattered Mixture-of-Experts Implementation
March 14, 2024, 4:41 a.m. | Shawn Tan, Yikang Shen, Rameswar Panda, Aaron Courville
cs.LG updates on arXiv.org
Abstract: We present ScatterMoE, an implementation of Sparse Mixture-of-Experts (SMoE) on GPUs. ScatterMoE builds upon existing implementations, overcoming some of their limitations to improve inference and training speed and reduce memory footprint. It achieves this by avoiding padding and excessive copying of the input.
We introduce ParallelLinear, the main component we use to build our implementation, and the various kernels used to speed up the operation. We benchmark our implementation against Megablocks, and show …
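To make the padding-avoidance idea concrete, below is a minimal sketch of scatter-based SMoE routing in PyTorch: tokens are grouped per expert by sorting their expert assignments, so each expert processes a contiguous, exactly-sized slice instead of a capacity-padded buffer. The helper names (simple_scatter_moe and its arguments) are hypothetical illustrations, not the paper's ParallelLinear API or kernels.

import torch

def simple_scatter_moe(x, router_w, expert_w1, expert_w2, k=2):
    """x: (tokens, d_model). Route each token to its top-k experts,
    group tokens by expert via a sort (no capacity padding), apply the
    expert MLPs, and scatter the weighted outputs back."""
    n_tokens, d_model = x.shape
    n_experts = router_w.shape[1]

    # Top-k routing probabilities per token.
    logits = x @ router_w                      # (tokens, experts)
    probs = logits.softmax(dim=-1)
    topk_p, topk_e = probs.topk(k, dim=-1)     # (tokens, k)

    # Flatten (token, slot) pairs and sort by expert id so each expert's
    # tokens are contiguous -- this replaces padding to a fixed capacity.
    flat_e = topk_e.reshape(-1)                # (tokens*k,)
    flat_t = torch.arange(n_tokens).repeat_interleave(k)
    order = flat_e.argsort()
    gathered = x[flat_t[order]]                # tokens grouped by expert

    # Apply each expert's MLP to its contiguous slice.
    counts = torch.bincount(flat_e, minlength=n_experts)
    out = torch.empty_like(gathered)
    start = 0
    for e in range(n_experts):
        end = start + counts[e].item()
        if end > start:
            h = torch.relu(gathered[start:end] @ expert_w1[e])
            out[start:end] = h @ expert_w2[e]
        start = end

    # Scatter-add the gate-weighted expert outputs back to token positions.
    y = torch.zeros_like(x)
    weights = topk_p.reshape(-1)[order].unsqueeze(-1)
    y.index_add_(0, flat_t[order], out * weights)
    return y

The per-expert Python loop stands in for the fused GPU kernels the paper describes; the point of the sketch is only the grouping-by-sort step, which never allocates more rows than tokens actually routed.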