April 22, 2024, 4:42 a.m. | Yuan Zang, Tian Yun, Hao Tan, Trung Bui, Chen Sun

cs.LG updates on arXiv.org

arXiv:2404.12652v1 Announce Type: cross
Abstract: Do vision-language models (VLMs) pre-trained to caption an image of a "durian" learn visual concepts such as "brown" (color) and "spiky" (texture) at the same time? We aim to answer this question as visual concepts learned "for free" would enable wide applications such as neuro-symbolic reasoning or human-interpretable object classification. We assume that the visual concepts, if captured by pre-trained VLMs, can be extracted by their vision-language interface with text-based concept prompts. We observe that …
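The abstract describes probing a pre-trained VLM's vision-language interface with text-based concept prompts. Below is a minimal sketch of that general idea, assuming a CLIP-style model as the interface; the checkpoint, the example prompts, and the image file are illustrative choices, not details from the paper.

from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Load a publicly available CLIP-style vision-language model (illustrative checkpoint).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical image of a durian and a few text-based concept prompts.
image = Image.open("durian.jpg")
concept_prompts = [
    "a photo of something brown",   # color concept
    "a photo of something spiky",   # texture concept
    "a photo of something smooth",
]

# Score each concept prompt against the image via the vision-language interface.
inputs = processor(text=concept_prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)

for prompt, p in zip(concept_prompts, probs.tolist()):
    print(f"{prompt}: {p:.3f}")

A high relative score for a concept prompt would suggest the model associates that visual concept with the image; whether such scores reliably reflect learned concepts is exactly the question the paper investigates.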
