Bootstrap Fine-Grained Vision-Language Alignment for Unified Zero-Shot Anomaly Localization
Feb. 28, 2024, 5:47 a.m. | Hanqiu Deng, Zhaoxiang Zhang, Jinan Bao, Xingyu Li
cs.CV updates on arXiv.org arxiv.org
Abstract: Contrastive Language-Image Pre-training (CLIP) models have shown promising performance on zero-shot visual recognition tasks by learning visual representations under natural language supervision. Recent studies attempt to use CLIP for zero-shot anomaly detection by matching images with normal and abnormal state prompts. However, since CLIP focuses on building correspondence between paired text prompts and global image-level representations, the lack of fine-grained patch-level vision-to-text alignment limits its capability for precise visual anomaly localization. …
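The prompt-matching idea the abstract describes can be illustrated in a few lines. The sketch below is not the paper's method; it is a minimal, hypothetical illustration of the CLIP-style baseline the abstract refers to: an image (or patch) embedding is compared against "normal" and "abnormal" text-prompt embeddings by cosine similarity, and the softmax weight on the abnormal prompt serves as the anomaly score. Real CLIP embeddings are 512- or 768-dimensional and come from the model's encoders; the toy vectors and the `temperature` value here are placeholders.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def anomaly_score(image_emb, normal_emb, abnormal_emb, temperature=100.0):
    """CLIP-style zero-shot scoring: softmax over the similarities to the
    'normal' and 'abnormal' prompt embeddings; return P(abnormal).
    `temperature` plays the role of CLIP's learned logit scale."""
    s_normal = temperature * cosine(image_emb, normal_emb)
    s_abnormal = temperature * cosine(image_emb, abnormal_emb)
    m = max(s_normal, s_abnormal)  # subtract max for numerical stability
    e_normal = math.exp(s_normal - m)
    e_abnormal = math.exp(s_abnormal - m)
    return e_abnormal / (e_normal + e_abnormal)

# Toy 2-d embeddings standing in for encoder outputs.
normal_prompt = [1.0, 0.1]    # e.g. "a photo of a flawless object"
abnormal_prompt = [0.0, 1.0]  # e.g. "a photo of a damaged object"

good_image = [1.0, 0.0]
bad_image = [0.0, 1.0]

print(anomaly_score(good_image, normal_prompt, abnormal_prompt))  # near 0
print(anomaly_score(bad_image, normal_prompt, abnormal_prompt))   # near 1
```

Applying the same scoring to each patch embedding instead of the global image embedding yields an anomaly map, which is exactly where the abstract notes vanilla CLIP falls short: its training aligns text only with the global image representation, so patch-level similarities are not directly supervised.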