April 10, 2024, 4:45 a.m. | Jiaxin Wu, Chong-Wah Ngo, Wing-Kwong Chan

cs.CV updates on arXiv.org arxiv.org

arXiv:2404.06173v1 Announce Type: new
Abstract: Aligning a user query with video clips in a cross-modal latent space, and aligning both with semantic concepts, are two mainstream approaches for ad-hoc video search (AVS). However, the effectiveness of existing approaches is bottlenecked by the small sizes of available video-text datasets and the low quality of concept banks, which results in failures on unseen queries and an out-of-vocabulary problem. This paper addresses these two problems by constructing a new dataset and developing a multi-word …
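
A minimal sketch of the two retrieval routes the abstract contrasts: scoring a query-clip pair in a shared cross-modal latent space versus scoring it against a concept bank, where an out-of-vocabulary query term simply finds no match. The helper names (latent_score, concept_score, the concept-bank format) are illustrative assumptions, not the paper's actual method or API.

```python
# Illustrative sketch of the two AVS scoring routes (assumed names, not the paper's code).
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity with a small epsilon for numerical safety.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def latent_score(query_emb: np.ndarray, clip_emb: np.ndarray) -> float:
    """Route 1: match query and clip embeddings in a shared cross-modal latent space."""
    return cosine(query_emb, clip_emb)

def concept_score(query_terms: set[str], clip_concepts: dict[str, float]) -> float:
    """Route 2: match query terms against concepts detected in the clip.

    Terms absent from the concept bank contribute nothing, which is the
    out-of-vocabulary failure mode the abstract refers to.
    """
    if not query_terms:
        return 0.0
    hits = [clip_concepts[t] for t in query_terms if t in clip_concepts]
    return sum(hits) / len(query_terms)

# Example usage with toy data:
q, v = np.random.rand(8), np.random.rand(8)
print(latent_score(q, v))
print(concept_score({"dog", "beach"}, {"dog": 0.9, "car": 0.4}))  # "beach" is out of vocabulary
```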

