Cultural Bias and Cultural Alignment of Large Language Models
June 27, 2024, 4:42 a.m. | Yan Tao, Olga Viberg, Ryan S. Baker, Rene F. Kizilcec
cs.CL updates on arXiv.org arxiv.org
Abstract: Culture fundamentally shapes people's reasoning, behavior, and communication. As people increasingly use generative artificial intelligence (AI) to expedite and automate personal and professional tasks, cultural values embedded in AI models may bias people's authentic expression and contribute to the dominance of certain cultures. We conduct a disaggregated evaluation of cultural bias for five widely used large language models (OpenAI's GPT-4o/4-turbo/4/3.5-turbo/3) by comparing the models' responses to nationally representative survey data. All models exhibit cultural values …
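The evaluation the abstract describes can be illustrated with a minimal sketch: compare a model's answers to value-survey items against nationally representative averages, one country at a time (a "disaggregated" view rather than one global score). Everything below is a hypothetical illustration with made-up placeholder numbers, not the paper's actual method, items, or data.

```python
# Hypothetical sketch of a per-country cultural-alignment check.
# All numbers are invented placeholders for illustration only.

# National mean responses to three value-survey items (1-10 scale).
survey_means = {
    "US": [6.2, 7.8, 5.1],
    "JP": [4.9, 6.3, 7.2],
}

# A model's responses to the same items when prompted as each country.
model_answers = {
    "US": [6.0, 7.5, 5.5],
    "JP": [6.1, 7.2, 5.8],
}

def alignment_gap(country):
    """Mean absolute difference between model and survey answers;
    a smaller gap means closer alignment with that country's values."""
    pairs = zip(model_answers[country], survey_means[country])
    diffs = [abs(m - s) for m, s in pairs]
    return sum(diffs) / len(diffs)

gaps = {c: round(alignment_gap(c), 2) for c in survey_means}
print(gaps)
```

In this toy example the model's answers sit close to the US means but far from the Japanese ones, which is the kind of country-level disparity a disaggregated evaluation is designed to surface.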