FairCLIP: Social Bias Elimination based on Attribute Prototype Learning and Representation Neutralization
May 31, 2024, 4:47 a.m. | Junyang Wang, Yi Zhang, Jitao Sang
cs.CV updates on arXiv.org
Abstract: Vision-Language Pre-training (VLP) models like CLIP have gained popularity in recent years. However, many works have found that the social biases hidden in CLIP easily manifest in downstream tasks, especially image retrieval, which can have harmful effects on human society. In this work, we propose FairCLIP to eliminate social bias in CLIP-based image retrieval without damaging retrieval performance, achieving compatibility between the debiasing effect and retrieval accuracy. FairCLIP is divided …
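The abstract names two ingredients, attribute prototype learning and representation neutralization, without detailing them. As a rough intuition only (this is a generic sketch of neutralizing a bias direction in embedding space, not FairCLIP's actual algorithm), one common form of representation neutralization estimates a bias direction from two attribute prototype embeddings and projects it out of each image embedding:

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(x * y for x, y in zip(u, v))

def neutralize(embeddings, proto_a, proto_b):
    """Remove the bias direction spanned by two attribute prototypes.

    `proto_a` / `proto_b` stand for learned prototype embeddings of a
    protected attribute's two values (hypothetical names, for illustration).
    Each embedding is projected onto the orthogonal complement of the
    normalized prototype-difference direction.
    """
    # Bias direction: normalized difference of the attribute prototypes.
    d = [a - b for a, b in zip(proto_a, proto_b)]
    norm = dot(d, d) ** 0.5
    d = [x / norm for x in d]
    # Subtract each embedding's component along the bias direction.
    return [[e - dot(emb, d) * di for e, di in zip(emb, d)]
            for emb in embeddings]

# Toy usage: after neutralization, embeddings carry no component along d.
emb = [[1.0, 2.0], [3.0, -1.0]]
pa, pb = [1.0, 0.0], [0.0, 1.0]
neutral = neutralize(emb, pa, pb)
```

The per-embedding projection is what makes the debiasing cheap at retrieval time; whether FairCLIP uses exactly this linear form is not stated in the excerpt above.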