Seeing Beyond Classes: Zero-Shot Grounded Situation Recognition via Language Explainer
April 25, 2024, 7:45 p.m. | Jiaming Lei, Lin Li, Chunping Wang, Jun Xiao, Long Chen
cs.CV updates on arXiv.org
Abstract: Benefiting from their strong generalization ability, pre-trained vision-language models (VLMs), e.g., CLIP, have been widely used for zero-shot scene understanding. Unlike simple recognition tasks, grounded situation recognition (GSR) requires the model not only to classify the salient activity (verb) in the image, but also to detect all semantic roles that participate in the action. This complex task usually involves three steps: verb recognition, semantic role grounding, and noun recognition. Directly employing class-based prompts with VLMs and …
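The abstract contrasts its approach with directly using class-based prompts in a VLM. A minimal sketch of that baseline for the first step (zero-shot verb recognition with CLIP) is shown below; the verb list, prompt template, and image path are illustrative assumptions, not the paper's actual vocabulary or proposed language-explainer method.

```python
# Sketch of class-based prompting for zero-shot verb recognition with CLIP.
# Assumptions: the OpenAI `clip` package is installed, and "example.jpg" exists.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

verbs = ["riding", "cooking", "jumping", "reading"]        # assumed verb classes
prompts = [f"a photo of a person {v}" for v in verbs]      # one class-based prompt per verb

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each verb prompt
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

predicted_verb = verbs[probs.argmax().item()]
print(f"Predicted verb: {predicted_verb}")
```

The paper's full pipeline would additionally ground semantic roles and recognize the nouns filling them; this snippet only illustrates the class-prompt baseline the abstract mentions.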