Towards Grounded Visual Spatial Reasoning in Multi-Modal Vision Language Models
March 7, 2024, 5:43 a.m. | Navid Rajabi, Jana Kosecka
cs.LG updates on arXiv.org arxiv.org
Abstract: Large vision-and-language models (VLMs) trained to match images with text on large-scale datasets of image-text pairs have shown impressive generalization ability on several vision and language tasks. Several recent works, however, have shown that these models lack fine-grained understanding, such as the ability to count and to recognize verbs, attributes, or relationships. The focus of this work is the understanding of spatial relations. This has been tackled previously using image-text matching (e.g., Visual Spatial Reasoning …
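The image-text matching protocol the abstract refers to can be sketched as follows: score an image against two captions that differ only in the spatial relation, and take the higher-scoring caption as the model's prediction. This is a minimal illustration with random vectors standing in for real encoder outputs; the caption texts and embedding dimension are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cosine_similarity(a, b):
    """Standard image-text matching score between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random vectors stand in for a VLM's image and text encoder outputs.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)

# Two candidate captions differing only in the spatial relation; a VLM
# with grounded spatial understanding should score the correct one higher.
captions = {
    "the cat is under the table": rng.normal(size=512),
    "the cat is on the table": rng.normal(size=512),
}

scores = {text: cosine_similarity(image_emb, emb) for text, emb in captions.items()}
prediction = max(scores, key=scores.get)
```

Benchmarks such as Visual Spatial Reasoning evaluate whether the matching score reliably prefers the caption with the correct relation, which is exactly the fine-grained ability the abstract says current VLMs lack.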