March 28, 2024, 4:46 a.m. | Monika Wysoczańska, Oriane Siméoni, Michaël Ramamonjisoa, Andrei Bursuc, Tomasz Trzciński, Patrick Pérez

cs.CV updates on arXiv.org

arXiv:2312.12359v2 Announce Type: replace
Abstract: The popular CLIP model displays impressive zero-shot capabilities thanks to its seamless interaction with arbitrary text prompts. However, its lack of spatial awareness makes it unsuitable for dense computer vision tasks, e.g., semantic segmentation, without an additional fine-tuning step that often uses annotations and can potentially suppress its original open-vocabulary properties. Meanwhile, self-supervised representation methods have demonstrated good localization properties without human-made annotations or explicit supervision. In this work, we take the best of both …
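To make the zero-shot behaviour the abstract refers to concrete, below is a minimal sketch of standard image-level zero-shot classification with a pretrained CLIP checkpoint via Hugging Face transformers. The checkpoint name, image path, and prompt list are placeholders for illustration; this is the generic CLIP usage pattern, not the method proposed in the paper.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Load a standard pretrained CLIP checkpoint (name assumed for illustration).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Arbitrary, user-chosen text prompts: no fine-tuning or fixed label set is required.
prompts = ["a photo of a cat", "a photo of a dog", "a photo of a bicycle"]
image = Image.open("example.jpg")  # placeholder image path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# One global image-text similarity score per prompt: an image-level prediction only,
# with no per-pixel (dense) output, which is the limitation the abstract points to.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs[0].tolist())))
```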
