July 22, 2022, 1:11 a.m. | Oskar van der Wal, Jaap Jumelet, Katrin Schulz, Willem Zuidema

cs.CL updates on arXiv.org

Detecting and mitigating harmful biases in modern language models are widely
recognized as crucial, open problems. In this paper, we take a step back and
investigate how language models come to be biased in the first place. We use a
relatively small language model with an LSTM architecture, trained on an
English Wikipedia corpus. With full access to the data and to the model
parameters as they change at every step of training, we can map in detail how
the …
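To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of experiment the abstract describes: train a small LSTM language model and record a simple gender-bias probe at every few training steps. The toy corpus, the "he"/"she" log-probability probe, and all hyperparameters below are illustrative assumptions, not the paper's actual Wikipedia corpus or bias metrics.

# Minimal sketch: track a gender-bias probe while training a small LSTM LM.
# All data and metrics here are placeholders for illustration only.
import torch
import torch.nn as nn

# Toy corpus standing in for the English Wikipedia corpus used in the paper.
corpus = "the doctor said he was busy . the nurse said she was busy .".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in corpus])

class LSTMLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

def pronoun_bias(model):
    # Illustrative probe: log-probability gap between "he" and "she"
    # after an occupation prefix ("the doctor said ...").
    prefix = torch.tensor([[stoi["the"], stoi["doctor"], stoi["said"]]])
    with torch.no_grad():
        logp = torch.log_softmax(model(prefix)[0, -1], dim=-1)
    return (logp[stoi["he"]] - logp[stoi["she"]]).item()

model = LSTMLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x, y = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)
for step in range(200):
    logits = model(x)
    loss = loss_fn(logits.view(-1, len(vocab)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        # Because we control every training step, we can log how the bias
        # measure evolves alongside the loss.
        print(f"step {step:3d}  loss {loss.item():.3f}  bias(he-she) {pronoun_bias(model):+.3f}")

Logging a bias measure at this granularity is what "full access to the model parameters as they change during training" enables: one can see when in training a bias emerges, not just whether the final model exhibits it.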

