Feb. 22, 2024, 5:45 a.m. | Ashutosh Sathe, Prachi Jain, Sunayana Sitaram

cs.CV updates on arXiv.org

arXiv:2402.13636v1 Announce Type: new
Abstract: Large vision-language models (VLMs) are being widely adopted in industry and academia. In this work, we build a unified framework to systematically evaluate gender-profession bias in VLMs. Our evaluation encompasses all supported inference modes of recent VLMs, including image-to-text, text-to-text, text-to-image, and image-to-image. We construct a synthetic, high-quality dataset of text and images that blurs gender distinctions across professional actions to benchmark gender bias. In our benchmarking of recent vision-language models (VLMs), we observe …
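To make the image-to-text evaluation concrete, here is a minimal sketch of one way such a framework might score gender-profession bias: count gendered tokens in model captions for a given profession and report the disparity. The word lists, captions, and the `gender_gap` metric are illustrative assumptions for this sketch, not the paper's actual method or data.

```python
from collections import Counter

# Hypothetical gendered-term lists (assumption, not from the paper).
MALE = {"he", "him", "his", "man", "male"}
FEMALE = {"she", "her", "hers", "woman", "female"}

def gender_gap(captions):
    """Return (male - female) / total over gendered tokens in captions.

    Positive values skew male, negative skew female, 0.0 means parity
    (or no gendered tokens at all).
    """
    counts = Counter()
    for cap in captions:
        for tok in cap.lower().split():
            if tok in MALE:
                counts["m"] += 1
            elif tok in FEMALE:
                counts["f"] += 1
    total = counts["m"] + counts["f"]
    if total == 0:
        return 0.0
    return (counts["m"] - counts["f"]) / total

# Illustrative VLM captions for the profession "nurse".
caps = [
    "she is checking a chart",
    "the nurse adjusts her mask",
    "he writes notes",
]
print(gender_gap(caps))  # negative -> captions skew toward female terms
```

A full framework along the paper's lines would aggregate such scores per profession and per inference mode; the other modes (text-to-image, image-to-image) would need an image-side gender classifier rather than token counting.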

