April 22, 2024, 4:42 a.m. | Yuan Zang, Tian Yun, Hao Tan, Trung Bui, Chen Sun

cs.LG updates on arXiv.org

arXiv:2404.12652v1 Announce Type: cross
Abstract: Do vision-language models (VLMs) pre-trained to caption an image of a "durian" learn visual concepts such as "brown" (color) and "spiky" (texture) at the same time? We aim to answer this question as visual concepts learned "for free" would enable wide applications such as neuro-symbolic reasoning or human-interpretable object classification. We assume that the visual concepts, if captured by pre-trained VLMs, can be extracted by their vision-language interface with text-based concept prompts. We observe that …

