Feb. 2, 2024, 3:42 p.m. | Yiqi Lin, Conghui He, Alex Jinpeng Wang, Bin Wang, Weijia Li, Mike Zheng Shou

cs.CV updates on arXiv.org

Despite CLIP being a foundation model for numerous vision-language applications, CLIP suffers from a severe text-spotting bias. This bias causes CLIP models to `Parrot' the visual text embedded within images while disregarding the authentic visual semantics. We uncover that in LAION-2B, the most popular image-text dataset, the captions also densely parrot (spell out) the text embedded in images. Our analysis shows that around 50% of images are embedded with visual text content, and around 30% of caption words are …
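As a rough illustration of the kind of measurement the abstract describes (not the authors' actual pipeline), the overlap between embedded visual text and caption words could be estimated with an off-the-shelf OCR engine. The sketch below assumes pytesseract and Pillow are available and that (image, caption) pairs are supplied locally; the sample file name is hypothetical.

# Minimal sketch (an assumption, not the paper's method): estimate how much of a
# caption simply "parrots" text that an OCR engine spots inside the image.
import re

import pytesseract
from PIL import Image


def spotted_words(image_path: str) -> set[str]:
    """Lowercased set of words the OCR engine finds embedded in the image."""
    text = pytesseract.image_to_string(Image.open(image_path))
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def parrot_ratio(image_path: str, caption: str) -> float:
    """Fraction of caption words that also appear as visual text in the image."""
    caption_words = re.findall(r"[a-z0-9]+", caption.lower())
    if not caption_words:
        return 0.0
    ocr_words = spotted_words(image_path)
    return sum(w in ocr_words for w in caption_words) / len(caption_words)


if __name__ == "__main__":
    # Hypothetical sample pair; replace with real (image, caption) data.
    pairs = [("poster.jpg", "Big summer sale 50% off everything")]
    for path, cap in pairs:
        print(path, f"parrot ratio: {parrot_ratio(path, cap):.2f}")

Averaging such a ratio over a dataset sample would give a caption-level estimate comparable in spirit to the roughly 30% figure quoted in the abstract, though the paper's exact procedure may differ.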
