Feb. 22, 2024, 5:45 a.m. | Ashutosh Sathe, Prachi Jain, Sunayana Sitaram

cs.CV updates on arXiv.org

arXiv:2402.13636v1 Announce Type: new
Abstract: Large vision-language models (VLMs) are being widely adopted in industry and academia. In this work, we build a unified framework to systematically evaluate gender-profession bias in VLMs. Our evaluation encompasses all supported inference modes of recent VLMs: image-to-text, text-to-text, text-to-image, and image-to-image. We construct a synthetic, high-quality dataset of text and images that blurs gender distinctions across professional actions to benchmark gender bias. In our benchmarking of recent VLMs, we observe …
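To make the image-to-text cell of such an evaluation concrete, here is a minimal sketch of one way to probe a captioning VLM for gender-profession bias. This is not the authors' released framework: it uses BLIP as a stand-in model, and the image paths, profession labels, and gendered word lists are hypothetical placeholders for the paper's synthetic, gender-ambiguous dataset.

```python
# Sketch: probe the image-to-text mode of a VLM for gender-profession bias.
# BLIP is a stand-in model; file names and word lists are hypothetical.
from collections import Counter

from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

MASCULINE = {"man", "he", "his", "male", "boy"}
FEMININE = {"woman", "she", "her", "female", "girl"}

def gender_of_caption(caption: str) -> str:
    """Classify a caption as masculine, feminine, or neutral by word overlap."""
    words = set(caption.lower().split())
    if words & MASCULINE and not words & FEMININE:
        return "masculine"
    if words & FEMININE and not words & MASCULINE:
        return "feminine"
    return "neutral"

# Hypothetical benchmark: gender-ambiguous images of professional actions,
# keyed by profession (a stand-in for the paper's synthetic dataset).
benchmark = {"nurse": ["nurse_01.png"], "engineer": ["engineer_01.png"]}

counts = {profession: Counter() for profession in benchmark}
for profession, paths in benchmark.items():
    for path in paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=30)
        caption = processor.decode(out[0], skip_special_tokens=True)
        counts[profession][gender_of_caption(caption)] += 1

# Since every input is gender-ambiguous by construction, a skew toward
# masculine or feminine captions for a given profession indicates bias.
for profession, counter in counts.items():
    print(profession, dict(counter))
```

The same counting logic extends to the other inference modes the abstract lists; for text-to-image, for instance, one would generate images from profession prompts and classify the depicted gender instead of the caption.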

