Feb. 20, 2024, 5:52 a.m. | Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein

cs.CL updates on arXiv.org

arXiv:2311.09090v2 Announce Type: replace
Abstract: Large language models have been shown to encode a variety of social biases, which carries the risk of downstream harms. While the impact of these biases has been recognized, prior methods for bias evaluation have been limited to binary association tests on small datasets, offering a constrained view of the nature of societal biases within language models. In this paper, we propose an original framework for probing language models for societal biases. We collect a …
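To make the contrast the abstract draws more concrete, below is a minimal sketch of what a binary association test for a masked language model might look like. This is not the authors' proposed framework; the model name, sentence templates, and target words are illustrative assumptions only.

```python
# Minimal sketch of a binary association test for a masked language model.
# NOT the paper's framework; model, templates, and target words are
# illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed model choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def mask_token_logprob(template: str, target: str) -> float:
    """Log-probability the model assigns to `target` at the [MASK] position."""
    inputs = tokenizer(template, return_tensors="pt")
    mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    log_probs = torch.log_softmax(logits[0, mask_index], dim=-1)
    target_id = tokenizer.convert_tokens_to_ids(target)
    return log_probs[0, target_id].item()

# Sentence pair differing only in the demographic term (hypothetical example):
# a gap in log-probabilities is read as an association between group and target.
for group in ("he", "she"):
    print(group, mask_token_logprob(f"{group} works as a [MASK].", "nurse"))
```

Tests of this kind compare only a pair of completions at a time, which is the "constrained view" the paper argues against in favor of its broader probing framework.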

