Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation. (arXiv:2009.09435v3 [cs.LG] UPDATED)
Oct. 27, 2022, 1:12 a.m. | Francisco Vargas, Ryan Cotterell
cs.LG updates on arXiv.org
Bolukbasi et al. (2016) present one of the first gender bias mitigation techniques for word embeddings. Their method takes pre-trained word embeddings as input and attempts to isolate a linear subspace that captures most of the gender bias in the embeddings. As judged by an analogical evaluation task, their method virtually eliminates gender bias in the embeddings. However, an implicit and untested assumption of their method is that the bias subspace is actually linear. In this work, we generalize their …