March 29, 2024, 4:47 a.m. | Yejin Bang, Delong Chen, Nayeon Lee, Pascale Fung

cs.CL updates on arXiv.org

arXiv:2403.18932v1 Announce Type: new
Abstract: We propose to measure political bias in LLMs by analyzing both the content and style of their generated content regarding political issues. Existing benchmarks and measures focus on gender and racial biases. However, political bias exists in LLMs and can lead to polarization and other harms in downstream applications. In order to provide transparency to users, we advocate that there should be fine-grained and explainable measures of political biases generated by LLMs. Our proposed measure …
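The abstract describes measuring political bias by scoring the content of LLM outputs on political issues at a fine-grained, per-issue level. As an illustration only, the sketch below scores a generated text's stance on one issue with a toy lexicon; the issue list, lexicon entries, and the `stance_score` function are hypothetical assumptions, not the paper's actual measure.

```python
# Toy per-issue stance scorer for generated text.
# ISSUE_LEXICONS and stance_score are illustrative assumptions,
# not the measure proposed in arXiv:2403.18932.
from collections import Counter

ISSUE_LEXICONS = {
    "climate": {
        "pro": {"renewable", "emissions", "sustainability"},
        "con": {"hoax", "overregulation", "job-killing"},
    },
}

def stance_score(text: str, issue: str) -> float:
    """Return a score in [-1, 1]: +1 fully pro, -1 fully con, 0 neutral/mixed."""
    tokens = Counter(text.lower().split())
    lex = ISSUE_LEXICONS[issue]
    pro = sum(tokens[w] for w in lex["pro"])
    con = sum(tokens[w] for w in lex["con"])
    total = pro + con
    return 0.0 if total == 0 else (pro - con) / total
```

Scoring each issue separately, rather than producing a single aggregate bias number, is what makes such a measure fine-grained and explainable: a user can see which issues drive the overall result.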

