April 16, 2024, 4:48 a.m. | Mia Chiquier, Utkarsh Mall, Carl Vondrick

cs.CV updates on arXiv.org

arXiv:2404.09941v1 Announce Type: new
Abstract: Multimodal pre-trained models, such as CLIP, are popular for zero-shot classification due to their open-vocabulary flexibility and high performance. However, vision-language models, which compute similarity scores between images and class labels, are largely black-box, with limited interpretability, risk for bias, and inability to discover new visual concepts not written down. Moreover, in practical settings, the vocabulary for class names and attributes of specialized concepts will not be known, preventing these methods from performing well on …
