Exploring Gender Bias in Retrieval Models. (arXiv:2208.01755v2 [cs.CL] UPDATED)
Aug. 9, 2022, 1:12 a.m. | Dhanasekar Sundararaman, Vivek Subramanian
cs.CL updates on arXiv.org arxiv.org
Biases in culture, gender, ethnicity, etc. have existed for decades and have
affected many areas of human social interaction. These biases have been shown
to impact machine learning (ML) models, and for natural language processing
(NLP), this can have severe consequences for downstream tasks. Mitigating
gender bias in information retrieval (IR) is important to avoid propagating
stereotypes. In this work, we employ a dataset consisting of two components:
(1) relevance of a document to a query and (2) "gender" of …