March 26, 2024, 4:47 a.m. | Yunlong Tang, Daiki Shimada, Jing Bi, Chenliang Xu

cs.CV updates on arXiv.org

arXiv:2403.16276v1 Announce Type: new
Abstract: In everyday communication, humans frequently use speech and gestures to refer to specific areas or objects, a process known as Referential Dialogue (RD). While prior studies have investigated RD using Large Language Models (LLMs) or Large Multimodal Models (LMMs) in static contexts, the exploration of Temporal Referential Dialogue (TRD) in audio-visual media remains limited. Two primary challenges hinder progress in this field: (1) the absence of comprehensive, untrimmed audio-visual video datasets with precise temporal annotations, …
