Feb. 22, 2024, 5:46 a.m. | Yunxin Li, Xinyu Chen, Baotian Hu, Haoyuan Shi, Min Zhang

cs.CV updates on arXiv.org

arXiv:2402.13561v1 Announce Type: cross
Abstract: Evaluating and rethinking the current landscape of Large Multimodal Models (LMMs), we observe that widely used visual-language projection approaches (e.g., Q-former or MLP) focus on aligning image-text descriptions yet ignore visual knowledge-dimension alignment, i.e., connecting visuals to their relevant knowledge. Visual knowledge plays a significant role in analyzing, inferring, and interpreting information from visuals, and helps improve the accuracy of answers to knowledge-based visual questions. In this paper, we mainly explore improving LMMs with …
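
For context, here is a minimal sketch of the MLP-style visual-language projector the abstract refers to, as popularized by LLaVA-style LMMs: a small feed-forward network maps frozen vision-encoder patch features into the language model's token-embedding space. All dimensions, names, and the PyTorch framing are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Two-layer MLP that projects visual encoder features into the
    language model's embedding space (all dimensions are illustrative)."""

    def __init__(self, vision_dim: int = 1024, hidden_dim: int = 4096,
                 text_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, text_dim),
        )

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, num_patches, vision_dim), e.g. ViT patch features
        # returns:      (batch, num_patches, text_dim) pseudo word embeddings
        return self.proj(visual_feats)

# Usage sketch with dummy features in place of a real vision encoder.
projector = MLPProjector()
patches = torch.randn(2, 256, 1024)   # hypothetical ViT output
tokens = projector(patches)           # shape: (2, 256, 4096)
```

A Q-former, by contrast, uses a small set of learned query tokens that cross-attend to the visual features rather than projecting each patch pointwise; the abstract's point is that both styles align image-text descriptions without explicitly aligning visuals to relevant knowledge.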
