Laissez-Faire Harms: Algorithmic Biases in Generative Language Models
April 12, 2024, 4:42 a.m. | Evan Shieh, Faye-Marie Vassel, Cassidy Sugimoto, Thema Monroe-White
cs.LG updates on arXiv.org arxiv.org
Abstract: The rapid deployment of generative language models (LMs) has raised concerns about social biases affecting the well-being of diverse consumers. The extant literature on generative LMs has primarily examined bias via explicit identity prompting. However, prior research on bias in earlier language-based technology platforms, including search engines, has shown that discrimination can occur even when identity terms are not specified explicitly. Studies of bias in LM responses to open-ended prompts (where identity classifications are left …
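The abstract describes auditing bias in responses to open-ended prompts, where no identity terms appear in the prompt itself. A minimal sketch of how such an audit is often operationalized is below; it assumes model responses have already been collected, and the helper name and pronoun-tally proxy are illustrative, not the paper's actual method.

```python
import re
from collections import Counter

# Illustrative proxy: tally gendered pronouns in LM responses to an
# open-ended prompt (no identity terms specified) as a coarse signal
# of demographic representation. Not the paper's methodology.
FEMININE = {"she", "her", "hers", "herself"}
MASCULINE = {"he", "him", "his", "himself"}

def pronoun_counts(responses):
    """Count feminine vs. masculine pronoun tokens across responses."""
    counts = Counter()
    for text in responses:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in FEMININE:
                counts["feminine"] += 1
            elif token in MASCULINE:
                counts["masculine"] += 1
    return dict(counts)

# Example: responses to the open-ended prompt "Write a story about a doctor."
responses = [
    "He examined the patient and checked his notes.",
    "She smiled as her shift ended.",
]
print(pronoun_counts(responses))  # -> {'masculine': 2, 'feminine': 2}
```

Skew in such counts across many sampled responses is one way discrimination can surface even when identity is never specified explicitly, which is the phenomenon the abstract highlights.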