LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models
March 25, 2024, 4:45 a.m. | Yuzhang Shang, Mu Cai, Bingxin Xu, Yong Jae Lee, Yan Yan
cs.CV updates on arXiv.org
Abstract: Large Multimodal Models (LMMs) have shown significant reasoning capabilities by connecting a visual encoder to a large language model. LMMs typically use a fixed number of visual tokens, such as the penultimate-layer features of the CLIP visual encoder, as the prefix content. Recent LMMs incorporate more complex visual inputs, such as high-resolution images and videos, which significantly increase the number of visual tokens. However, due to the design of the Transformer architecture, computational costs …
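The truncated abstract stops just as it reaches the key tension: Transformer attention cost grows quadratically with sequence length, so every extra visual token in the prefix makes the language model more expensive. Below is a minimal PyTorch sketch of one plausible adaptive prune-and-merge scheme in that spirit. It is an illustration, not the paper's released method: the use of [CLS] attention as an importance score, the IQR outlier rule for adaptive selection, the similarity-weighted merging, and the helper name `prune_and_merge_tokens` are all assumptions not confirmed by the visible text.

```python
import torch
import torch.nn.functional as F

def prune_and_merge_tokens(visual_tokens: torch.Tensor,
                           cls_attn: torch.Tensor,
                           fallback_ratio: float = 0.1) -> torch.Tensor:
    """Reduce N spatial tokens to M << N kept-and-merged tokens.

    visual_tokens: (N, D) spatial features from the visual encoder.
    cls_attn:      (N,)  attention of the [CLS] token over those features,
                   used here as a stand-in importance score (assumption).
    """
    n = visual_tokens.size(0)
    # Adaptive selection: keep tokens whose importance is an upper
    # outlier (> Q3 + 1.5 * IQR); fall back to top-k if none qualify,
    # so the number of kept tokens adapts to the image content.
    q1, q3 = torch.quantile(cls_attn, torch.tensor([0.25, 0.75]))
    keep = torch.nonzero(cls_attn > q3 + 1.5 * (q3 - q1), as_tuple=True)[0]
    if keep.numel() == 0:
        keep = torch.topk(cls_attn, max(1, int(fallback_ratio * n))).indices
    kept_set = set(keep.tolist())
    pruned = torch.tensor([i for i in range(n) if i not in kept_set],
                          dtype=torch.long)

    merged = visual_tokens[keep].clone()
    if pruned.numel() > 0:
        # Merge each pruned token into its most similar kept token,
        # weighting contributions by importance, so information from
        # discarded tokens is folded in rather than thrown away.
        sim = F.normalize(visual_tokens[pruned], dim=-1) @ \
              F.normalize(merged, dim=-1).T           # (P, M) cosine sims
        nearest = sim.argmax(dim=-1)                  # kept index per pruned token
        for j in range(merged.size(0)):
            members = pruned[nearest == j]
            if members.numel() > 0:
                feats = torch.cat([visual_tokens[keep[j]].unsqueeze(0),
                                   visual_tokens[members]])
                w = torch.cat([cls_attn[keep[j]].unsqueeze(0),
                               cls_attn[members]]).unsqueeze(-1)
                merged[j] = (w * feats).sum(0) / w.sum()
    return merged  # (M, D): a much shorter visual prefix for the LLM

# Usage sketch: 576 tokens (a 24x24 CLIP patch grid) reduced to an
# adaptively chosen handful before being fed to the language model.
tokens = torch.randn(576, 1024)
attn = torch.softmax(torch.randn(576), dim=0)
print(prune_and_merge_tokens(tokens, attn).shape)
```

In an LMM pipeline, the reduced token set would replace the full grid of penultimate-layer CLIP features as the language model's visual prefix, shrinking the quadratic attention cost the abstract alludes to.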
arxiv, cs.ai, cs.cl, cs.cv, large multimodal models, llava, multimodal, multimodal models, token type