Web: http://arxiv.org/abs/2010.14448

Jan. 26, 2022, 2:10 a.m. | Xavier Ferrer-Aran, Tom van Nuenen, Natalia Criado, Jose M. Such

cs.CL updates on arXiv.org arxiv.org

Language carries implicit human biases, functioning both as a reflection and
a perpetuation of stereotypes that people carry with them. Recently, ML-based
NLP methods such as word embeddings have been shown to learn such language
biases with striking accuracy. This capability of word embeddings has been
successfully exploited as a tool to quantify and study human biases. However,
previous studies only consider a predefined set of biased concepts to attest
(e.g., whether gender is more or less associated with particular …
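For readers unfamiliar with how word embeddings can quantify such associations, below is a minimal, purely illustrative sketch. It is not the method proposed in the paper: the vocabulary, attribute sets, and tiny hand-made vectors are assumptions chosen only to show the cosine-similarity idea behind WEAT-style bias measurements; in practice the vectors would come from a pretrained model such as word2vec or GloVe.

```python
# Illustrative sketch (not the paper's method): quantifying the association of
# a target word with two attribute sets via cosine similarity in embedding space.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-d embeddings; real studies use vectors from a trained model.
emb = {
    "nurse":    np.array([0.1, 0.9, 0.2, 0.0]),
    "engineer": np.array([0.8, 0.1, 0.3, 0.1]),
    "she":      np.array([0.0, 1.0, 0.1, 0.0]),
    "her":      np.array([0.1, 0.8, 0.0, 0.1]),
    "he":       np.array([0.9, 0.0, 0.2, 0.0]),
    "him":      np.array([0.7, 0.1, 0.1, 0.2]),
}

female_attrs = ["she", "her"]
male_attrs = ["he", "him"]

def bias_score(target, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    A positive score means the target word sits closer to set A."""
    sim_a = np.mean([cosine(emb[target], emb[a]) for a in attrs_a])
    sim_b = np.mean([cosine(emb[target], emb[b]) for b in attrs_b])
    return sim_a - sim_b

for word in ["nurse", "engineer"]:
    print(word, round(bias_score(word, female_attrs, male_attrs), 3))
```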
