Feb. 6, 2024, 5:53 a.m. | Aleksandra Sorokovikova Natalia Fedorova Sharwin Rezagholi Ivan P. Yamshchikov

cs.CL updates on arXiv.org arxiv.org

An empirical investigation into the simulation of the Big Five personality traits by large language models (LLMs), namely Llama2, GPT4, and Mixtral, is presented. We analyze the personality traits simulated by these models and their stability. This contributes to the broader understanding of the capabilities of LLMs to simulate personality traits and the respective implications for personalized human-computer interaction.
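A minimal sketch of the kind of analysis the abstract describes: scoring a Big-Five-style Likert questionnaire from an LLM's responses and measuring the stability of a trait score across repeated runs. The item keys, sample responses, and helper names here are illustrative assumptions, not the authors' actual instrument or method.

```python
# Hypothetical sketch: score Big-Five-style Likert responses (1-5) and
# measure trait stability across repeated runs. Items, keys, and sample
# data are illustrative, not from the paper.

from statistics import mean, pstdev

# (trait, reverse_keyed) for each questionnaire item, in order.
ITEM_KEYS = [
    ("extraversion", False),
    ("extraversion", True),
    ("neuroticism", False),
    ("neuroticism", True),
]

def score_big_five(responses):
    """Average per-trait score; reverse-keyed items are flipped (6 - x)."""
    by_trait = {}
    for (trait, reverse), resp in zip(ITEM_KEYS, responses):
        value = 6 - resp if reverse else resp
        by_trait.setdefault(trait, []).append(value)
    return {trait: mean(vals) for trait, vals in by_trait.items()}

def stability(score_runs, trait):
    """Population std. dev. of one trait across runs (lower = more stable)."""
    return pstdev([run[trait] for run in score_runs])

# Mock responses from three repeated runs of the same persona prompt.
runs = [score_big_five(r) for r in ([4, 2, 3, 4], [5, 1, 3, 3], [4, 2, 2, 4])]
```

In a real experiment the response lists would come from querying the model with each item and parsing its Likert answer; repeating the prompt (or paraphrases of it) yields the run-to-run dispersion used as a stability measure.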

