Feb. 28, 2024, 5:47 a.m. | Hanqiu Deng, Zhaoxiang Zhang, Jinan Bao, Xingyu Li

cs.CV updates on arXiv.org

arXiv:2308.15939v2 Announce Type: replace
Abstract: Contrastive Language-Image Pre-training (CLIP) models have shown promising performance on zero-shot visual recognition tasks by learning visual representations under natural language supervision. Recent studies attempt to use CLIP to tackle zero-shot anomaly detection by matching images with normal and abnormal state prompts. However, since CLIP focuses on building correspondence between paired text prompts and global image-level representations, the lack of fine-grained patch-level vision-to-text alignment limits its capability for precise visual anomaly localization. …
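The prompt-matching idea described in the abstract can be sketched as follows: embed the image and the two state prompts, then take a softmax over the cosine similarities to get an anomaly probability. This is a minimal sketch, not the paper's method; random vectors stand in for real CLIP embeddings (which would come from a model such as `open_clip`), and the prompt wordings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def normalize(v):
    """L2-normalize so the dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)


# Stand-ins for CLIP embeddings (CLIP ViT-B/32 uses 512 dimensions).
embed_dim = 512
image_emb = normalize(rng.standard_normal(embed_dim))
# Text embeddings for two hypothetical state prompts, e.g.
# "a photo of a flawless object" vs. "a photo of a damaged object".
normal_emb = normalize(rng.standard_normal(embed_dim))
abnormal_emb = normalize(rng.standard_normal(embed_dim))


def anomaly_score(img, normal, abnormal, temperature=0.01):
    """Softmax over cosine similarities; returns P(abnormal)."""
    sims = np.array([img @ normal, img @ abnormal]) / temperature
    sims -= sims.max()  # subtract max for numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return probs[1]


score = anomaly_score(image_emb, normal_emb, abnormal_emb)
print(f"anomaly probability: {score:.3f}")
```

As the abstract notes, this image-level matching yields only a global anomaly score; localizing anomalies would require the same comparison at the patch level, which vanilla CLIP is not trained to align.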

