Feb. 23, 2024, 5:46 a.m. | Vasily Kostumov, Bulat Nutfullin, Oleg Pilipenko, Eugene Ilyushin

cs.CV updates on arXiv.org

arXiv:2402.14418v1 Announce Type: new
Abstract: Vision-Language Models like GPT-4, LLaVA, and CogVLM have surged in popularity recently due to their impressive performance in several vision-language tasks. Current evaluation methods, however, overlook an essential component: uncertainty, which is crucial for a comprehensive assessment of VLMs. Addressing this oversight, we present a benchmark incorporating uncertainty quantification into evaluating VLMs.
Our analysis spans 20+ VLMs, focusing on the multiple-choice Visual Question Answering (VQA) task. We examine models on 5 datasets that evaluate various …
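The excerpt does not specify which uncertainty measure the benchmark uses, but for multiple-choice VQA a common starting point is the predictive entropy of the model's distribution over the answer options. The sketch below is purely illustrative, not the paper's method: it computes Shannon entropy over a hypothetical set of option probabilities, where higher entropy indicates a less certain prediction.

```python
import math

def predictive_entropy(option_probs):
    """Shannon entropy (in bits) of a distribution over answer options.

    Higher entropy means the model spreads probability mass across
    options, i.e. it is less certain of its answer.
    """
    return -sum(p * math.log2(p) for p in option_probs if p > 0)

# Hypothetical outputs for a 4-option VQA question:
confident = [0.94, 0.02, 0.02, 0.02]   # mass concentrated on one option
uncertain = [0.25, 0.25, 0.25, 0.25]   # uniform: maximal uncertainty

print(predictive_entropy(confident))   # low, well under 1 bit
print(predictive_entropy(uncertain))   # log2(4) = 2.0 bits, the maximum
```

Under this view, two models with the same VQA accuracy can still be ranked apart: the one that is confidently right and uncertainly wrong is preferable to one that is uncertain everywhere.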

