Beyond prompt brittleness: Evaluating the reliability and consistency of political worldviews in LLMs
Feb. 28, 2024, 5:49 a.m. | Tanise Ceron, Neele Falk, Ana Barić, Dmitry Nikolaev, Sebastian Padó
cs.CL updates on arXiv.org
Abstract: Due to the widespread use of large language models (LLMs) in ubiquitous systems, we need to understand whether they embed a specific worldview and what these views reflect. Recent studies report that, when prompted with political questionnaires, LLMs show left-liberal leanings. However, it remains unclear whether these leanings are reliable (robust to prompt variations) and whether they are consistent across policy domains. We propose a series of tests which assess the …
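The reliability notion the abstract raises, robustness of a stance to prompt rewording, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: `query_model` is a hypothetical stand-in for a real LLM call, and the agreement metric is a simple modal-answer fraction, not the authors' actual test battery.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical toy "model" used only so the sketch runs:
    # it answers "disagree" whenever the prompt mentions taxes.
    # A real study would query an actual LLM here.
    return "disagree" if "tax" in prompt.lower() else "agree"

def reliability(statement: str, paraphrases: list) -> float:
    """Fraction of prompt variants that yield the modal answer.

    1.0 means the expressed stance is fully robust to prompt wording;
    values near 1/len(paraphrases) indicate prompt brittleness.
    """
    answers = [query_model(p.format(statement=statement)) for p in paraphrases]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / len(answers)

# Three paraphrases of the same questionnaire item.
paraphrases = [
    "Do you agree with: {statement}?",
    "State agree/disagree: {statement}",
    "{statement} -- your view?",
]

score = reliability("Public healthcare should be expanded.", paraphrases)
print(round(score, 2))  # 1.0 for this toy model: all variants agree
```

In a real evaluation the same statement would be posed under many surface variations (ordering, phrasing, answer format), and a leaning would only be credited to the model if it survives them.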