April 22, 2024, 4:42 a.m. | Ali Rasekh, Sepehr Kazemi Ranjbar, Milad Heidari, Wolfgang Nejdl

cs.LG updates on arXiv.org

arXiv:2404.12839v1 Announce Type: cross
Abstract: Large Vision Language Models (VLMs), such as CLIP, have contributed significantly to various computer vision tasks, including object recognition and object detection. Their open-vocabulary capability further enhances their value. However, their black-box nature and lack of explainability in their predictions make them less trustworthy in critical domains. Recently, some work has been done to force VLMs to provide reasonable rationales for object recognition, but this often comes at the expense of classification accuracy. In this paper, …
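As a rough illustration of the open-vocabulary recognition the abstract refers to, below is a minimal zero-shot classification sketch using CLIP through the Hugging Face transformers API. The checkpoint name, image path, and candidate labels are assumptions for illustration only and are not taken from the paper.

# Minimal zero-shot (open-vocabulary) classification sketch with CLIP.
# Assumed: the "openai/clip-vit-base-patch32" checkpoint, a local image
# "photo.jpg", and an arbitrary label set -- none of these come from the paper.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
labels = ["a photo of a cat", "a photo of a dog", "a photo of a bicycle"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into probabilities over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze()
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")

Because the label set is supplied at inference time as free-form text, the same model can recognize categories it was never explicitly trained to classify, which is the open-vocabulary property the abstract highlights.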

