April 26, 2024, 4:45 a.m. | Ye Mao, Junpeng Jing, Krystian Mikolajczyk

cs.CV updates on arXiv.org

arXiv:2404.16538v1 Announce Type: new
Abstract: Recent advances in Vision and Language Models (VLMs) have improved open-world 3D representation, enabling zero-shot recognition of unseen 3D categories. Existing open-world methods pre-train an extra 3D encoder to align features from 3D data (e.g., depth maps or point clouds) with CAD-rendered images and corresponding texts. However, the limited color and texture variation in CAD renderings can compromise the robustness of this alignment. Furthermore, the volume discrepancy between the pre-training datasets of the 3D encoder and the VLM leads …
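The alignment described above is typically a CLIP-style contrastive objective: embeddings from the 3D encoder are pulled toward their paired rendered-image (or text) embeddings and pushed away from non-matching ones. The sketch below illustrates that idea with a symmetric InfoNCE loss in NumPy; all function names, shapes, and the temperature value are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Unit-normalize embeddings so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_alignment_loss(feat_3d, feat_img, temperature=0.07):
    """Symmetric InfoNCE loss between paired 3D and image embeddings.

    feat_3d, feat_img: (batch, dim) arrays; row i of each forms a positive pair.
    (Hypothetical sketch of the alignment objective, not the paper's method.)
    """
    z3d = l2_normalize(feat_3d)
    zim = l2_normalize(feat_img)
    logits = z3d @ zim.T / temperature       # (batch, batch) similarity matrix
    labels = np.arange(len(logits))          # positives sit on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the 3D-to-image and image-to-3D directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
pairs = rng.normal(size=(4, 8))
# Perfectly aligned pairs yield a lower loss than mismatched ones.
aligned = contrastive_alignment_loss(pairs, pairs)
shuffled = contrastive_alignment_loss(pairs, pairs[::-1])
```

Under this objective, the limited color and texture variation of CAD renderings shows up as easy negatives with low diversity, which is one way to read the robustness concern the abstract raises.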

