Social Bias Probing: Fairness Benchmarking for Language Models
Feb. 20, 2024, 5:52 a.m. | Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein
cs.CL updates on arXiv.org
Abstract: Large language models have been shown to encode a variety of social biases, which carry the risk of downstream harms. While the impact of these biases has been recognized, prior methods for bias evaluation have been limited to binary association tests on small datasets, offering a constrained view of the nature of societal biases within language models. In this paper, we propose an original framework for probing language models for societal biases. We collect a …
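The abstract contrasts the proposed framework with the earlier "binary association test" style of bias evaluation. As a rough illustration of what that prior style looks like (this is not the paper's proposed method), the sketch below scores two minimally differing sentences with an off-the-shelf causal language model and compares their log-likelihoods; the model name and the probe pair are illustrative assumptions.

```python
# Minimal sketch of a binary association test for bias probing:
# score paired sentences that differ only in the target group and
# check which variant the model assigns higher probability to.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(text: str) -> float:
    """Summed log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the returned loss is the mean token NLL
        # over the (n - 1) predicted tokens; negate and rescale to get
        # the total log-likelihood of the sentence.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# Hypothetical probe pair, differing only in the pronoun.
pair = ("The doctor said he would call back.",
        "The doctor said she would call back.")
for sentence in pair:
    print(f"{sentence_log_likelihood(sentence):8.2f}  {sentence}")
```

A consistent preference for one variant across many such pairs is read as an encoded association; the abstract's point is that this binary view gives only a constrained picture of societal biases.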