March 8, 2024, 5:45 a.m. | Yizhe Zhang, He Bai, Ruixiang Zhang, Jiatao Gu, Shuangfei Zhai, Josh Susskind, Navdeep Jaitly

cs.CV updates on arXiv.org

arXiv:2403.04732v1 Announce Type: cross
Abstract: Vision-Language Models (VLMs) such as GPT-4V have recently made impressive strides on diverse vision-language tasks. We dig into vision-based deductive reasoning, a more sophisticated but less explored realm, and find previously unexposed blind spots in current SOTA VLMs. Specifically, we leverage Raven's Progressive Matrices (RPMs) to assess VLMs' abilities to perform multi-hop relational and deductive reasoning relying solely on visual clues. We perform comprehensive evaluations of several popular VLMs employing standard strategies such as …
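As a rough illustration of the kind of evaluation the abstract describes, the sketch below shows an RPM puzzle image to a VLM and asks it to pick the missing panel. It assumes an OpenAI-style chat API with image inputs as a stand-in for "a popular VLM"; the model name, prompt wording, and answer parsing are illustrative assumptions, not the authors' actual protocol.

```python
# Minimal sketch: query a VLM with one Raven's Progressive Matrix image and
# parse its chosen answer. Prompt, model, and parsing are illustrative only.
import base64
import re

from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Base64-encode a local puzzle image for the chat API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def ask_vlm_rpm(image_path: str, num_choices: int = 8, model: str = "gpt-4o") -> int | None:
    """Send one RPM puzzle image to the VLM and return its chosen answer index."""
    image_b64 = encode_image(image_path)
    prompt = (
        "The image shows a 3x3 Raven's Progressive Matrix with the bottom-right "
        f"cell missing, followed by {num_choices} candidate answers labeled 1-{num_choices}. "
        "Reason step by step about the row and column patterns, then answer with "
        "the single number of the candidate that completes the matrix."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    text = response.choices[0].message.content or ""
    # Take the last digit in the valid range mentioned in the reply.
    match = re.search(r"\b([1-8])\b(?!.*\b[1-8]\b)", text, re.DOTALL)
    return int(match.group(1)) if match else None


# Example usage (hypothetical puzzle file and ground-truth label):
# predicted = ask_vlm_rpm("rpm_puzzle_001.png")
# print("correct" if predicted == 4 else f"wrong ({predicted})")
```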

