April 3, 2024, 4:42 a.m. | Chuyi Shang, Amos You, Sanjay Subramanian, Trevor Darrell, Roei Herzig

cs.LG updates on arXiv.org

arXiv:2404.01476v1 Announce Type: cross
Abstract: Recently, Large Multimodal Models (LMMs) have made significant progress in video question-answering using a frame-wise approach by leveraging large-scale, image-based pretraining in a zero-shot manner. While image-based methods for videos have shown impressive performance, a current limitation is that they often overlook how key timestamps are selected and cannot adjust when incorrect timestamps are identified. Moreover, they are unable to extract details relevant to the question, instead providing general descriptions of the frame. To overcome …

