Oct. 25, 2022, 1:18 a.m. | Sarah Schröder, Alexander Schulz, Philip Kenneweg, Robert Feldhans, Fabian Hinder, Barbara Hammer

cs.CL updates on arXiv.org

Over the last few years, word and sentence embeddings have established themselves as a
standard text-preprocessing step for all kinds of NLP tasks and have significantly
improved performance on these tasks. Unfortunately, it has also been shown that these
embeddings inherit various kinds of biases from the training data and thereby
pass on biases present in society to NLP solutions. Many papers have attempted to
quantify bias in word or sentence embeddings in order to evaluate debiasing methods or
compare different embedding models, often with cosine-based scores. However, …
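The paper's own argument is cut off above, but as an illustration of the kind of cosine-based bias score it refers to, here is a minimal sketch of a WEAT-style effect size (the association test of Caliskan et al., 2017). The word sets and the random vectors standing in for real word or sentence embeddings are placeholders, not anything taken from the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """Difference in mean cosine similarity of a target embedding w
    to attribute set A vs. attribute set B (e.g. male vs. female words)."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: standardized difference of associations
    between two target sets X, Y and two attribute sets A, B."""
    assoc_x = [association(x, A, B) for x in X]
    assoc_y = [association(y, A, B) for y in Y]
    pooled_std = np.std(assoc_x + assoc_y, ddof=1)
    return (np.mean(assoc_x) - np.mean(assoc_y)) / pooled_std

# Toy usage: random 300-dimensional vectors stand in for real embeddings.
rng = np.random.default_rng(0)
X = [rng.normal(size=300) for _ in range(8)]   # target set 1 (e.g. career words)
Y = [rng.normal(size=300) for _ in range(8)]   # target set 2 (e.g. family words)
A = [rng.normal(size=300) for _ in range(8)]   # attribute set 1
B = [rng.normal(size=300) for _ in range(8)]   # attribute set 2
print(weat_effect_size(X, Y, A, B))
```

Scores of this form depend entirely on cosine geometry and on the choice of word sets, which is part of what makes their reliability as bias measures worth examining.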

arxiv bias word embeddings
