Routers in Vision Mixture of Experts: An Empirical Study
April 22, 2024, 4:43 a.m. | Tianlin Liu, Mathieu Blondel, Carlos Riquelme, Joan Puigcerver
cs.LG updates on arXiv.org
Abstract: Mixture-of-Experts (MoE) models are a promising way to scale up model capacity without significantly increasing computational cost. A key component of MoEs is the router, which decides which subset of parameters (experts) processes which feature embeddings (tokens). In this paper, we present a comprehensive study of routers in MoEs for computer vision tasks. We introduce a unified MoE formulation that subsumes different MoEs with two parametric routing tensors. This formulation covers both sparse MoE, which …
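The routing mechanism described in the abstract can be made concrete with a small sketch. Below is a generic top-k token-choice router of the kind used in sparse MoE layers; it is illustrative only, not the paper's unified two-tensor formulation, and all names (`topk_token_choice_router`, `router_weights`, the expert callables) are hypothetical.

```python
import numpy as np

def topk_token_choice_router(tokens, router_weights, experts, k=1):
    """Route each token to its top-k experts and mix their outputs.

    tokens:         (num_tokens, d_model) feature embeddings
    router_weights: (d_model, num_experts) learned router parameters
    experts:        list of callables, each mapping (d_model,) -> (d_model,)
    """
    # Router logits and softmax gating probabilities over experts.
    logits = tokens @ router_weights                       # (num_tokens, num_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)

    # Each token is processed only by its k highest-scoring experts,
    # and the expert outputs are combined weighted by the gate values.
    outputs = np.zeros_like(tokens)
    topk = np.argsort(-probs, axis=-1)[:, :k]              # top-k expert indices per token
    for t in range(tokens.shape[0]):
        for e in topk[t]:
            outputs[t] += probs[t, e] * experts[e](tokens[t])
    return outputs

# Minimal usage with random linear experts (illustrative only).
rng = np.random.default_rng(0)
d, n_experts = 8, 4
tokens = rng.normal(size=(16, d))
router_weights = rng.normal(size=(d, n_experts))
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(n_experts)]
out = topk_token_choice_router(tokens, router_weights, experts, k=2)
```

Because each token activates only k of the n_experts parameter blocks, total parameter count can grow with the number of experts while per-token compute stays roughly constant, which is the capacity/cost trade-off the abstract refers to.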