April 4, 2024, 4:46 a.m. | Jialin Wu, Xia Hu, Yaqing Wang, Bo Pang, Radu Soricut

cs.CV updates on arXiv.org

arXiv:2312.00968v2 Announce Type: replace
Abstract: Large multi-modal models (LMMs) exhibit remarkable performance across numerous tasks. However, generalist LMMs often suffer from performance degradation when tuned over a large collection of tasks. Recent research suggests that Mixture of Experts (MoE) architectures are useful for instruction tuning, but for LMMs of parameter size around O(50-100B), the prohibitive cost of replicating and storing the expert models severely limits the number of experts we can use. We propose Omni-SMoLA, an architecture that uses the …
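To make the storage argument concrete, below is a minimal, illustrative sketch of the general idea of softly mixing many low-rank experts on top of a frozen base projection, so that adding experts costs only a small number of extra parameters instead of replicating full expert models. The class and parameter names (SoftLowRankMoE, num_experts, rank) are hypothetical and not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftLowRankMoE(nn.Module):
    """Frozen base linear layer plus a soft mixture of low-rank expert deltas."""

    def __init__(self, d_model: int, num_experts: int = 8, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(d_model, d_model)
        self.base.weight.requires_grad_(False)  # keep the pretrained weight frozen
        self.base.bias.requires_grad_(False)
        # Each expert i contributes a low-rank update B_i @ A_i (rank << d_model),
        # so adding experts is far cheaper than storing full copies of the layer.
        self.A = nn.Parameter(torch.randn(num_experts, d_model, rank) * 0.02)
        self.B = nn.Parameter(torch.zeros(num_experts, rank, d_model))
        self.router = nn.Linear(d_model, num_experts)  # per-token soft mixing weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        weights = F.softmax(self.router(x), dim=-1)           # (b, s, E) soft routing
        low_rank = torch.einsum("bsd,edr->bser", x, self.A)   # project into each expert's rank-r space
        expert_out = torch.einsum("bser,erd->bsed", low_rank, self.B)
        mixed = torch.einsum("bse,bsed->bsd", weights, expert_out)
        return self.base(x) + mixed


# Usage: 8 experts of rank 4 on a 512-dim hidden state.
layer = SoftLowRankMoE(d_model=512, num_experts=8, rank=4)
out = layer(torch.randn(2, 16, 512))
print(out.shape)  # torch.Size([2, 16, 512])

Because each expert is only a rank-r pair of matrices rather than a full replica of the layer, the parameter count grows roughly linearly in num_experts times rank, which is the kind of saving that matters at the 50-100B parameter scale the abstract mentions.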

