Aug. 9, 2022, 1:12 a.m. | Dhanasekar Sundararaman, Vivek Subramanian

cs.CL updates on arXiv.org

Biases related to culture, gender, ethnicity, and other attributes have existed for decades and affect many areas of human social interaction. These biases have been shown to impact machine learning (ML) models, and in natural language processing (NLP) this can have severe consequences for downstream tasks. Mitigating gender bias in information retrieval (IR) is therefore important to avoid propagating stereotypes. In this work, we employ a dataset consisting of two components: (1) relevance of a document to a query and (2) "gender" of …

Tags: arxiv, bias, gender, gender bias, retrieval
