Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models. (arXiv:2206.11484v1 [cs.CL])
Web: http://arxiv.org/abs/2206.11484
June 24, 2022, 1:12 a.m. | Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May
cs.CL updates on arXiv.org
This paper presents exploratory work on whether and to what extent biases
against queer and trans people are encoded in large language models (LLMs) such
as BERT. We also propose a method for reducing these biases in downstream
tasks: finetuning the models on data written by and/or about queer people. To
measure anti-queer bias, we introduce a new benchmark dataset, WinoQueer,
modeled after other bias-detection benchmarks but addressing homophobic and
transphobic biases. We found that BERT shows significant homophobic bias, …