April 29, 2024, 4:45 a.m. | Haibo Wang, Chenghang Lai, Yixuan Sun, Weifeng Ge

cs.CV updates on arXiv.org

arXiv:2401.10711v3 Announce Type: replace
Abstract: Video Question Answering (VideoQA) aims to answer natural language questions based on the information observed in videos. Despite the recent success of Large Multimodal Models (LMMs) in image-language understanding and reasoning, they handle VideoQA insufficiently, simply taking uniformly sampled frames as visual inputs and thereby ignoring question-relevant visual clues. Moreover, existing VideoQA datasets contain no human annotations for question-critical timestamps. In light of this, we propose a novel weakly supervised framework to …
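To make the contrast in the abstract concrete, here is a minimal sketch (not the paper's method) of uniform frame sampling versus question-conditioned frame selection; the frame and question embeddings and the cosine-similarity scoring are illustrative assumptions.

```python
# Illustrative sketch: uniform sampling vs. question-relevant frame selection.
# The embeddings and similarity scoring below are assumptions for demonstration,
# not the framework proposed in arXiv:2401.10711.
import numpy as np

def uniform_sample(num_frames: int, k: int) -> list[int]:
    """Pick k frame indices evenly spaced across the video."""
    return np.linspace(0, num_frames - 1, k, dtype=int).tolist()

def question_relevant_sample(frame_feats: np.ndarray,
                             question_feat: np.ndarray,
                             k: int) -> list[int]:
    """Pick the k frames whose features are most similar to the question.

    frame_feats: (num_frames, d) per-frame embeddings (assumed precomputed).
    question_feat: (d,) question embedding (assumed precomputed).
    """
    # Cosine similarity between each frame and the question.
    f = frame_feats / (np.linalg.norm(frame_feats, axis=1, keepdims=True) + 1e-8)
    q = question_feat / (np.linalg.norm(question_feat) + 1e-8)
    scores = f @ q
    # Keep the top-k frames, returned in temporal order.
    top = np.argsort(scores)[-k:]
    return sorted(top.tolist())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(120, 256))   # 120 frames, 256-d features
    q = rng.normal(size=256)
    print("uniform:", uniform_sample(120, 8))
    print("question-relevant:", question_relevant_sample(feats, q, 8))
```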

