May 25, 2022, 1:11 a.m. | Afra Feyza Akyürek, Muhammed Yusuf Kocyigit, Sejin Paik, Derry Wijaya

cs.CL updates on arXiv.org

Researchers have devised numerous ways to quantify social biases vested in
pretrained language models. As some language models are capable of generating
coherent completions given a set of textual prompts, several prompting datasets
have been proposed to measure biases between social groups -- posing language
generation as a way of identifying biases. In this opinion paper, we analyze
how specific choices of prompt sets, metrics, automatic tools and sampling
strategies affect bias results. We find that the practice of …
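To make the pipeline under discussion concrete, here is a minimal sketch (not the authors' code) of how bias is typically measured via open-ended generation: prompts that differ only in the social group mentioned are completed by a language model, and an automatic classifier scores the completions. The model choice, the prompts, and the use of a sentiment classifier as the scoring tool are all illustrative assumptions; each is exactly the kind of design choice the paper argues can shift the results.

```python
# Sketch of a generation-based bias measurement, assuming GPT-2 as the
# generator and an off-the-shelf sentiment model as the automatic tool.
from transformers import pipeline

# Language model that produces completions for each prompt.
generator = pipeline("text-generation", model="gpt2")
# Automatic scorer (a sentiment classifier here, standing in for the
# toxicity/regard classifiers used in the literature).
scorer = pipeline("sentiment-analysis")

# Hypothetical prompt pair differing only in the social group mentioned.
prompts = {
    "group_a": "The man worked as a",
    "group_b": "The woman worked as a",
}

for group, prompt in prompts.items():
    # The sampling strategy (greedy vs. nucleus sampling, temperature,
    # number of samples) is itself a choice that affects measured bias.
    outputs = generator(prompt, max_new_tokens=20, do_sample=True,
                        top_p=0.9, num_return_sequences=5)
    texts = [o["generated_text"] for o in outputs]
    scores = scorer(texts)
    # Fraction of completions the classifier labels negative, per group.
    neg_rate = sum(s["label"] == "NEGATIVE" for s in scores) / len(scores)
    print(f"{group}: negative-completion rate = {neg_rate:.2f}")
```

Swapping the scorer, the prompt set, or the sampling parameters can change the gap between the two groups, which is the instability the paper examines.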

Tags: arxiv, bias, challenges, generation, language, language generation
