March 26, 2024, 4:47 a.m. | Dimity Miller, Niko Sünderhauf, Alex Kenna, Keita Mason

cs.CV updates on arXiv.org

arXiv:2403.16528v1 Announce Type: new
Abstract: Are vision-language models (VLMs) open-set models because they are trained on internet-scale datasets? We answer this question with a clear no - VLMs introduce closed-set assumptions via their finite query set, making them vulnerable to open-set conditions. We systematically evaluate VLMs for open-set recognition and find they frequently misclassify objects not contained in their query set, leading to alarmingly low precision when tuned for high recall and vice versa. We show that naively increasing the …

