"I'm sorry to hear that": finding bias in language models with a holistic descriptor dataset. (arXiv:2205.09209v1 [cs.CL])
May 20, 2022, 1:11 a.m. | Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams (Meta AI)
As language models grow in popularity, their biases across all possible
markers of demographic identity should be measured and addressed in order to
avoid perpetuating existing societal harms. Many datasets for measuring bias
currently exist, but they are restricted in their coverage of demographic axes,
and are commonly used with preset bias tests that presuppose which types of
biases the models exhibit. In this work, we present a new, more inclusive
dataset, HolisticBias, which consists of nearly 600 descriptor terms …
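Bias probes of this kind are typically built by slotting demographic descriptor terms into sentence templates and then comparing model behavior across descriptors. A minimal sketch of that construction step, with illustrative descriptors and templates that are not the paper's actual lists:

```python
from itertools import product

# Illustrative descriptor terms and templates only — the actual
# HolisticBias dataset covers nearly 600 descriptors across many
# demographic axes, with its own template set.
DESCRIPTORS = ["deaf", "elderly", "left-handed"]
TEMPLATES = [
    "I am a {descriptor} person.",
    "Hi! I'm a {descriptor} person, ask me anything.",
]

def build_prompts(descriptors, templates):
    """Cross every descriptor with every template to form bias probes."""
    return [t.format(descriptor=d) for d, t in product(descriptors, templates)]

prompts = build_prompts(DESCRIPTORS, TEMPLATES)
for p in prompts:
    print(p)
```

A downstream bias test would then score each prompt with the language model (e.g., via perplexity or the sentiment of generated responses) and compare scores across descriptors that share a template, rather than presupposing which bias types appear.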