April 16, 2024, 4:44 a.m. | Jungin Park, Jiyoung Lee, Kwanghoon Sohn

cs.LG updates on arXiv.org

arXiv:2404.09632v1 Announce Type: cross
Abstract: This paper introduces VLAP, a novel approach that bridges pretrained vision models and large language models (LLMs) to make frozen LLMs understand the visual world. VLAP transforms the embedding space of pretrained vision models into the LLMs' word embedding space using a single linear layer for efficient and general-purpose visual and language understanding. Specifically, we harness well-established word embeddings to bridge the two modality embedding spaces. The visual and text representations are simultaneously assigned to a …
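The core mechanism described in the abstract is a single trainable linear layer that maps frozen vision-encoder features into the frozen LLM's word embedding space, with visual and text representations assigned to that shared space. The sketch below illustrates that idea in PyTorch; the class name, dimensions, and the soft-assignment step are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisionToWordBridge(nn.Module):
    """Minimal sketch of the bridging idea: one linear layer projects frozen
    vision features into the frozen LLM's word embedding space."""

    def __init__(self, vision_dim: int, llm_embed_dim: int):
        super().__init__()
        # The only trainable component: a single linear projection between
        # the vision embedding space and the LLM word embedding space.
        self.proj = nn.Linear(vision_dim, llm_embed_dim)

    def forward(self, vision_feats: torch.Tensor, word_embeddings: torch.Tensor):
        # vision_feats: (batch, num_patches, vision_dim) from a frozen vision encoder
        # word_embeddings: (vocab_size, llm_embed_dim), the frozen LLM embedding table
        projected = self.proj(vision_feats)            # (batch, num_patches, llm_embed_dim)

        # Soft assignment of each projected visual token to the LLM's word
        # embeddings (an assumed reading of "assigned to" in the truncated abstract).
        logits = projected @ word_embeddings.t()       # (batch, num_patches, vocab_size)
        assignment = F.softmax(logits, dim=-1)
        return projected, assignment
```

Because only the linear projection is trained while both the vision encoder and the LLM stay frozen, the adaptation cost is a single weight matrix, which is what makes the approach efficient and general-purpose as the abstract claims.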

Tags: arXiv, cs.CV, cs.LG, large language models, LLMs, vision models, word embeddings, embedding spaces, linear layer, prediction
