Gender and Racial Bias in Visual Question Answering Datasets. (arXiv:2205.08148v2 [cs.CV] UPDATED)
May 19, 2022, 1:10 a.m. | Yusuke Hirota, Yuta Nakashima, Noa Garcia
cs.CV updates on arXiv.org
Vision-and-language tasks have drawn increasing attention as a means
to evaluate human-like reasoning in machine learning models. A popular task in
the field is visual question answering (VQA), which aims to answer questions
about images. However, VQA models have been shown to exploit language bias by
learning the statistical correlations between questions and answers without
looking into the image content: e.g., questions about the color of a banana are
answered with yellow, even if the banana in the image …
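The language-bias shortcut described above can be illustrated with a toy question-only baseline: a model that memorizes the most frequent training answer for each question pattern and never looks at the image. This is a minimal illustrative sketch (the class, the crude question-type heuristic, and the toy data are all hypothetical, not from the paper):

```python
from collections import Counter, defaultdict

def question_type(question):
    # Crude question "type": the first three words, e.g.
    # "what color is" for "What color is the banana?"
    return " ".join(question.lower().rstrip("?").split()[:3])

class QuestionOnlyBaseline:
    """Predicts the most frequent training answer for each question type,
    ignoring the image entirely -- the statistical question-answer
    correlation the abstract describes."""

    def __init__(self):
        self.answers_by_type = defaultdict(Counter)

    def fit(self, questions, answers):
        for q, a in zip(questions, answers):
            self.answers_by_type[question_type(q)][a] += 1

    def predict(self, question, image=None):
        # The image argument is accepted but deliberately unused.
        counts = self.answers_by_type.get(question_type(question))
        return counts.most_common(1)[0][0] if counts else "unknown"

# Toy training data mirroring the banana example from the abstract.
train_q = ["What color is the banana?",
           "What color is the taxi?",
           "What color is the banana on the table?"]
train_a = ["yellow", "yellow", "yellow"]

model = QuestionOnlyBaseline()
model.fit(train_q, train_a)
print(model.predict("What color is the banana?"))  # "yellow", even if the pictured banana is green
```

Baselines like this are commonly used in VQA research to quantify how much of a dataset's accuracy is reachable from the question text alone, without any visual grounding.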