Feb. 8, 2024, 5:46 a.m. | Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Kai Shu, Adel Bibi, Ziniu Hu, Philip Torr, Ber

cs.CL updates on arXiv.org

Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science. However, one fundamental question remains: can LLM agents really simulate human behaviors? In this paper, we focus on one of the most critical behaviors in human interactions, trust, and aim to investigate whether LLM agents can simulate human trust behaviors. We first find that LLM agents generally exhibit trust behaviors, referred to as agent trust, under the …
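
The excerpt breaks off before naming the experimental setup, but trust is conventionally measured with the Trust Game from behavioral economics, in which a trustor decides how much of an endowment to send to a trustee. The sketch below shows how such a probe of an LLM agent might look. This is a minimal illustration only: the `ask_agent` stand-in, the prompt wording, the $10 endowment, and the 3x multiplier are all assumptions, not the paper's protocol.

```python
# Minimal sketch of a Trust-Game-style probe for an LLM agent.
# `ask_agent` is a hypothetical stand-in for any chat-model call;
# swap in a real client to run against an actual LLM.

def ask_agent(prompt: str) -> str:
    """Placeholder LLM call; returns a canned reply for demonstration."""
    return "I would send $6, because splitting the gains builds reciprocity."

def trust_game_probe(endowment: int = 10, multiplier: int = 3) -> str:
    """Pose a one-shot Trust Game to the agent as the trustor."""
    prompt = (
        f"You are playing a one-shot Trust Game as the trustor. "
        f"You have ${endowment}. Any amount you send to the trustee is "
        f"multiplied by {multiplier}, and the trustee may then return any "
        f"share of it to you. How many dollars do you send, and why?"
    )
    return ask_agent(prompt)

if __name__ == "__main__":
    print(trust_game_probe())
```

The amount an agent chooses to send, and its stated reasoning, are the kind of observable "agent trust" signals such a study would compare against human baselines.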

Subjects: cs.CL, cs.AI, cs.HC. Tags: large language models, LLM agents, trust, human interactions, social science simulation.
