Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation. (arXiv:2009.09435v3 [cs.LG] UPDATED)
Oct. 27, 2022, 1:16 a.m. | Francisco Vargas, Ryan Cotterell
cs.CL updates on arXiv.org arxiv.org
Bolukbasi et al. (2016) present one of the first gender bias mitigation
techniques for word embeddings. Their method takes pre-trained word embeddings
as input and attempts to isolate a linear subspace that captures most of the
gender bias in the embeddings. As judged by an analogical evaluation task,
their method virtually eliminates gender bias in the embeddings. However, an
implicit and untested assumption of their method is that the bias subspace is
actually linear. In this work, we generalize their …