Oct. 27, 2022, 1:16 a.m. | Francisco Vargas, Ryan Cotterell

cs.CL updates on arXiv.org

Bolukbasi et al. (2016) present one of the first gender bias mitigation
techniques for word embeddings. Their method takes pre-trained word embeddings
as input and attempts to isolate a linear subspace that captures most of the
gender bias in the embeddings. As judged by an analogical evaluation task,
their method virtually eliminates gender bias in the embeddings. However, an
implicit and untested assumption of their method is that the bias subspace is
actually linear. In this work, we generalize their …
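
To make the linear-subspace assumption concrete, here is a minimal sketch of the kind of pipeline the abstract describes: identify a bias direction from gendered word pairs and remove each word's projection onto it. This is an illustrative approximation, not the authors' exact implementation; the toy vocabulary, the random stand-in embeddings, and the `defining_pairs` list are assumptions for the example.

```python
# Sketch of a linear bias-subspace removal, in the spirit of
# Bolukbasi et al. (2016); hypothetical toy data throughout.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Hypothetical pre-trained embeddings (random stand-ins for real vectors).
vocab = ["he", "she", "man", "woman", "doctor", "nurse", "engineer"]
emb = {w: rng.normal(size=dim) for w in vocab}

# 1. Build difference vectors from gendered definitional pairs.
defining_pairs = [("he", "she"), ("man", "woman")]
diffs = np.stack([emb[a] - emb[b] for a, b in defining_pairs])

# 2. The leading principal component of these differences spans the
#    (assumed linear) gender bias subspace.
_, _, vt = np.linalg.svd(diffs - diffs.mean(axis=0), full_matrices=False)
bias_basis = vt[:1]                      # top direction, shape (1, dim)

# 3. Neutralize: subtract each word's projection onto the bias subspace.
def neutralize(v, basis):
    proj = basis.T @ (basis @ v)         # component inside the bias subspace
    return v - proj

for w in ["doctor", "nurse", "engineer"]:
    emb[w] = neutralize(emb[w], bias_basis)
    # After neutralization the bias-direction component is ~0.
    print(w, float(bias_basis @ emb[w]))
```

The paper's point of departure is exactly this setup: the subtraction in step 3 only removes bias that lies in a linear subspace, which is the untested assumption the work generalizes.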

