April 16, 2024, 4:48 a.m. | Mia Chiquier, Utkarsh Mall, Carl Vondrick

cs.CV updates on arXiv.org

arXiv:2404.09941v1 Announce Type: new
Abstract: Multimodal pre-trained models, such as CLIP, are popular for zero-shot classification due to their open-vocabulary flexibility and high performance. However, vision-language models, which compute similarity scores between images and class labels, are largely black-box: they offer limited interpretability, carry a risk of bias, and cannot discover visual concepts that are not written down. Moreover, in practical settings, the vocabulary of class names and attributes for specialized concepts will not be known, preventing these methods from performing well on …
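For context, the zero-shot pipeline the abstract refers to can be sketched in a few lines: CLIP embeds the image and each candidate class label, and the label with the highest similarity score wins. Below is a minimal sketch assuming the Hugging Face transformers CLIP API; the checkpoint name, image path, and label set are illustrative placeholders, not the paper's setup.

# Minimal sketch of CLIP-style zero-shot classification.
# Assumes the Hugging Face transformers library; "example.jpg"
# and the label list are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")
labels = ["a photo of a sparrow", "a photo of a finch", "a photo of a wren"]

# Embed the image and every candidate label, then score each pair.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-to-text similarity scores
probs = logits.softmax(dim=-1)
print(labels[probs.argmax().item()])

Note that the similarity score is all this pipeline exposes: nothing human-readable explains why one label beat another, which is the interpretability gap the abstract describes.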
