Feb. 6, 2024, 5:44 a.m. | Joshua C. Yang, Marcin Korecki, Damian Dailisan, Carina I. Hausladen, Dirk Helbing

cs.LG updates on arXiv.org

This paper investigates the voting behaviors of Large Language Models (LLMs), in particular OpenAI's GPT-4 and LLaMA-2, and their alignment with human voting patterns. The approach pairs a human voting experiment, which establishes a baseline of human preferences, with a parallel experiment using LLM agents. The study examines both collective outcomes and individual preferences, revealing differences in decision-making and inherent biases between humans and LLMs. A trade-off was observed between preference diversity and alignment in LLMs, with a tendency towards …
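As a rough illustration of the LLM-agent experiment described above, the sketch below shows how one might elicit votes from a population of LLM agents over a fixed ballot and tally the collective outcome. The prompt wording, the `query_llm` stub, and the plurality tally are assumptions for illustration only, not the paper's actual protocol; a real run would replace the stub with calls to GPT-4 or LLaMA-2.

```python
from collections import Counter
import random

# Hypothetical stand-in for an LLM call. In the actual study this would
# query a model such as GPT-4 or LLaMA-2 with the ballot prompt; here it
# returns a random option so the sketch is self-contained and runnable.
def query_llm(prompt: str, options: list[str]) -> str:
    return random.choice(options)  # placeholder response

def cast_vote(agent_id: int, options: list[str]) -> str:
    # Assumed prompt format: ask the agent to pick exactly one option.
    prompt = (
        f"You are voter #{agent_id}. Choose exactly one of the following "
        f"options and reply with its name only: {', '.join(options)}."
    )
    return query_llm(prompt, options)

options = ["Project A", "Project B", "Project C"]
votes = [cast_vote(i, options) for i in range(100)]  # 100 LLM agents
print(Counter(votes))  # collective outcome under a simple plurality tally
```

From the resulting vote counts, one plausible way to quantify the abstract's two notions is to measure preference diversity as the entropy of the LLM vote distribution and alignment as a distance between the LLM and human vote distributions; the paper's own definitions may differ.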
