Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
March 29, 2024, 4:47 a.m. | Yejin Bang, Delong Chen, Nayeon Lee, Pascale Fung
cs.CL updates on arXiv.org arxiv.org
Abstract: We propose to measure political bias in LLMs by analyzing both the content and the style of what they generate about political issues. Existing benchmarks and measures focus on gender and racial biases, yet political bias also exists in LLMs and can lead to polarization and other harms in downstream applications. To provide transparency to users, we advocate fine-grained, explainable measures of the political bias in LLM-generated content. Our proposed measure …
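The abstract's core idea is that bias can be scored along two axes: what is said (stance on an issue) and how it is said (style). The paper's actual measure is not given in the truncated abstract, so the following is only a toy illustration of that two-axis, per-issue breakdown, using hypothetical keyword lexicons rather than the authors' method:

```python
# Toy sketch of a fine-grained political-bias probe for LLM outputs.
# This does NOT reproduce the paper's measure; it only illustrates scoring
# *what* is said (stance) separately from *how* it is said (intensity),
# using made-up keyword lexicons as placeholders.

SUPPORT_WORDS = {"support", "benefit", "essential", "protect"}   # hypothetical
OPPOSE_WORDS = {"oppose", "harm", "dangerous", "threaten"}       # hypothetical
INTENSE_WORDS = {"always", "never", "must", "catastrophic"}      # hypothetical

def stance_score(text: str) -> float:
    """Content axis: (support - oppose) / total stance words, in [-1, 1]."""
    words = text.lower().split()
    sup = sum(w in SUPPORT_WORDS for w in words)
    opp = sum(w in OPPOSE_WORDS for w in words)
    total = sup + opp
    return 0.0 if total == 0 else (sup - opp) / total

def style_intensity(text: str) -> float:
    """Style axis: fraction of intensifier words, in [0, 1]."""
    words = text.lower().split()
    return 0.0 if not words else sum(w in INTENSE_WORDS for w in words) / len(words)

def bias_report(outputs_by_issue: dict) -> dict:
    """Per-issue breakdown, keeping the measure fine-grained and explainable."""
    return {
        issue: {"stance": stance_score(text), "intensity": style_intensity(text)}
        for issue, text in outputs_by_issue.items()
    }

report = bias_report({
    "carbon tax": "We must support it; it is essential to protect the climate.",
    "issue x": "A neutral description with no stance words.",
})
print(report["carbon tax"]["stance"])  # 1.0: only support-side words appear
```

Reporting per issue, rather than a single aggregate number, is what makes such a measure explainable: a user can see on which topics, and in what tone, a model leans.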