Laissez-Faire Harms: Algorithmic Biases in Generative Language Models
April 12, 2024, 4:42 a.m. | Evan Shieh, Faye-Marie Vassel, Cassidy Sugimoto, Thema Monroe-White
cs.LG updates on arXiv.org
Abstract: The rapid deployment of generative language models (LMs) has raised concerns about social biases affecting the well-being of diverse consumers. The extant literature on generative LMs has primarily examined bias via explicit identity prompting. However, prior research on bias in earlier language-based technology platforms, including search engines, has shown that discrimination can occur even when identity terms are not specified explicitly. Studies of bias in LM responses to open-ended prompts (where identity classifications are left …