April 25, 2024, 7:45 p.m. | Jiaming Lei, Lin Li, Chunping Wang, Jun Xiao, Long Chen

cs.CV updates on arXiv.org

arXiv:2404.15785v1 Announce Type: new
Abstract: Benefiting from their strong generalization ability, pre-trained vision-language models (VLMs), e.g., CLIP, have been widely utilized for zero-shot scene understanding. Unlike simple recognition tasks, grounded situation recognition (GSR) requires the model not only to classify the salient activity (verb) in the image, but also to detect all semantic roles that participate in the action. This complex task usually involves three steps: verb recognition, semantic role grounding, and noun recognition. Directly employing class-based prompts with VLMs and …
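For readers unfamiliar with the class-based prompting baseline the abstract alludes to, below is a minimal sketch of what it looks like for the verb-recognition step, assuming the open-source openai/CLIP package. The verb vocabulary, prompt template, and image path are hypothetical placeholders for illustration, not the paper's actual setup.

```python
# Minimal sketch: zero-shot verb recognition via class-based prompts with CLIP.
# Assumes the openai/CLIP package; the verb list, prompt template, and image
# path are hypothetical, not taken from the paper.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Class-based prompts: one text prompt per candidate verb class.
verbs = ["cooking", "riding", "jumping", "repairing"]
prompts = clip.tokenize([f"a photo of a person {v}" for v in verbs]).to(device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    # CLIP scores the image against every prompt; softmax over the
    # per-class logits yields zero-shot class probabilities.
    logits_per_image, _ = model(image, prompts)
    probs = logits_per_image.softmax(dim=-1).cpu()

print(verbs[probs.argmax().item()])  # top-scoring verb class
```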

