Don't Forget About Pronouns: Removing Gender Bias in Language Models Without Losing Factual Gender Information. (arXiv:2206.10744v1 [cs.CL])
June 23, 2022, 1:12 a.m. | Tomasz Limisiewicz, David Mareček
cs.CL updates on arXiv.org arxiv.org
The representations in large language models contain multiple types of gender information. We focus on two types of such signals in English texts: factual gender information, which is a grammatical or semantic property, and gender bias, which is the correlation between a word and a specific gender. With probing, we can disentangle the model's embeddings and identify the components encoding both types of information. We aim to diminish the stereotypical bias in the representations while preserving the factual gender signal. Our filtering …
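The abstract describes identifying a bias component in embeddings via probing and then filtering it out. A common way to realize this kind of filtering is orthogonal projection: remove the component of each embedding along a probed bias direction. The sketch below illustrates that general idea with toy vectors; it is not the authors' specific method, and `bias_dir` stands in for a direction that would, in practice, come from a trained probe.

```python
# Minimal sketch (not the paper's exact method): removing a probed "bias"
# direction from an embedding by orthogonal projection, leaving all other
# components (e.g. factual gender signal) untouched.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical unit-norm direction found by a linear probe for stereotypical bias.
bias_dir = rng.normal(size=dim)
bias_dir /= np.linalg.norm(bias_dir)

def remove_direction(emb: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project out a single unit-norm direction from an embedding."""
    return emb - np.dot(emb, direction) * direction

emb = rng.normal(size=dim)
filtered = remove_direction(emb, bias_dir)

# After filtering, the embedding has no component along the bias direction.
assert abs(np.dot(filtered, bias_dir)) < 1e-9
```

Components orthogonal to the removed direction are unchanged, which is why this style of filtering can suppress a bias signal while preserving other information encoded elsewhere in the embedding.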