March 19, 2024, 4:48 a.m. | Rao Fu, Jingyu Liu, Xilun Chen, Yixin Nie, Wenhan Xiong

cs.CV updates on arXiv.org

arXiv:2403.11401v1 Announce Type: new
Abstract: This paper introduces Scene-LLM, a 3D-visual-language model that enhances embodied agents' abilities in interactive 3D indoor environments by integrating the reasoning strengths of Large Language Models (LLMs). Scene-LLM adopts a hybrid 3D visual feature representation, that incorporates dense spatial information and supports scene state updates. The model employs a projection layer to efficiently project these features in the pre-trained textual embedding space, enabling effective interpretation of 3D visual information. Unique to our approach is the …
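The projection-layer idea described in the abstract can be sketched minimally: a learned linear map takes per-point 3D visual features into the dimensionality of the LLM's token-embedding space, so the projected features can be consumed alongside ordinary text embeddings. The dimensions and names below are illustrative assumptions, not Scene-LLM's actual architecture.

```python
import numpy as np

def project_features(scene_feats, W, b):
    """Map 3D visual features (n_points x d_vis) into the text
    embedding space (n_points x d_text) with one linear layer.
    W (d_vis x d_text) and b (d_text,) are learned in a real
    model; here they are random, for illustration only."""
    return scene_feats @ W + b

rng = np.random.default_rng(0)
d_vis, d_text, n_points = 256, 4096, 8   # hypothetical dimensions
feats = rng.normal(size=(n_points, d_vis))
W = rng.normal(size=(d_vis, d_text)) * 0.02
b = np.zeros(d_text)

tokens = project_features(feats, W, b)
print(tokens.shape)   # each 3D feature is now a text-space vector
```

After projection, the scene tokens can simply be concatenated with the prompt's token embeddings before being fed to the frozen LLM, which is the standard way such adapter layers are used.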
