Protected group bias and stereotypes in Large Language Models
March 25, 2024, 4:42 a.m. | Hadas Kotek, David Q. Sun, Zidi Xiu, Margit Bowler, Christopher Klein
cs.LG updates on arXiv.org
Abstract: As modern Large Language Models (LLMs) shatter many state-of-the-art benchmarks in a variety of domains, this paper investigates their behavior in the domains of ethics and fairness, focusing on protected group bias. We conduct a two-part study: first, we solicit sentence continuations describing the occupations of individuals from different protected groups, including gender, sexuality, religion, and race. Second, we have the model generate stories about individuals who hold different types of occupations. We collect >10k …
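The first part of the study described above solicits sentence continuations that vary only the protected-group term, so any systematic difference in the model's completions can be attributed to the group mention. A minimal sketch of that prompt-construction step, with hypothetical template text and group terms (the paper's actual prompts and group lists are not given in this snippet):

```python
# Hypothetical sketch of the sentence-continuation probe: build prompts that
# differ only in the protected-group term, so completions can be compared
# across groups. Template wording and term lists here are illustrative only.
TEMPLATE = "The {group_term} person worked as a"

GROUP_TERMS = {
    "gender": ["male", "female", "nonbinary"],
    "religion": ["Christian", "Muslim", "Jewish"],
}


def build_prompts(template, group_terms):
    """Return (axis, term, prompt) triples covering every group term."""
    prompts = []
    for axis, terms in group_terms.items():
        for term in terms:
            prompts.append((axis, term, template.format(group_term=term)))
    return prompts


prompts = build_prompts(TEMPLATE, GROUP_TERMS)
# Each prompt differs only in the group term; the completions an LLM returns
# for these prompts would then be compared across the "axis" dimension.
```

Each prompt would be sent to the model under test and the occupation named in the continuation tallied per group; the second part of the study reverses the direction, fixing the occupation and generating stories about its holder.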