Using Natural Sentences for Understanding Biases in Language Models. (arXiv:2205.06303v1 [cs.CL])
May 16, 2022, 1:11 a.m. | Sarah Alnegheimish, Alicia Guo, Yi Sun
cs.LG updates on arXiv.org
Evaluation of biases in language models is often limited to synthetically
generated datasets. This dependence stems from the need for prompt-style
datasets that trigger specific behaviors of language models. In this paper, we
address this gap by creating a prompt dataset of occupations drawn from
real-world natural sentences in Wikipedia. We aim to understand the
differences between using template-based prompts and natural sentence prompts
when studying gender-occupation biases in language models. We find bias
evaluations …
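The contrast the abstract draws can be sketched in code. The snippet below is an illustrative toy, not the paper's actual pipeline: `bias_score`, the template string, and the probability values are all assumptions. It shows the two prompt styles — a fixed template instantiated per occupation versus real sentences with the pronoun masked — and a simple log-ratio bias score that a masked language model's pronoun probabilities could feed.

```python
import math

def bias_score(p_he, p_she):
    """Log-ratio bias: positive = male-skewed, negative = female-skewed.
    The probabilities would come from a masked LM's pronoun predictions."""
    return math.log(p_he / p_she)

def template_prompts(occupations, template="The {occ} said that [MASK] was busy."):
    """Template-based prompts: one fixed synthetic pattern per occupation."""
    return [template.format(occ=o) for o in occupations]

def mask_pronouns(sentences, pronouns=("he", "she")):
    """Natural-sentence prompts: mask the first gendered pronoun in each
    real (e.g. Wikipedia-style) sentence, keeping its original context."""
    masked = []
    for s in sentences:
        for tok in s.split():
            if tok.lower().strip(".,") in pronouns:
                masked.append(s.replace(tok, "[MASK]", 1))
                break
    return masked

# Synthetic prompts for two occupations:
print(template_prompts(["nurse", "engineer"]))

# The same occupations embedded in natural, Wikipedia-style sentences:
wiki_like = [
    "After medical school she worked as a nurse in Boston.",
    "He joined the firm as an engineer in 1998.",
]
print(mask_pronouns(wiki_like))

# Illustrative (made-up) model probabilities for "he" vs "she":
print(round(bias_score(0.6, 0.3), 3))  # log(2) ≈ 0.693, male-skewed
```

In a real study the two prompt sets would be scored with the same masked language model (e.g. via a fill-mask head), and the paper's point is that the resulting bias estimates can differ between the synthetic and natural-sentence conditions.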