March 1, 2024, 5:44 a.m. | Eilam Shapira, Reut Apel, Moshe Tennenholtz, Roi Reichart

cs.LG updates on arXiv.org

arXiv:2305.10361v4 Announce Type: replace
Abstract: Recent advances in Large Language Models (LLMs) have spurred interest in designing LLM-based agents for tasks that involve interaction with human and artificial agents. This paper addresses a key aspect of the design of such agents: predicting human decisions in off-policy evaluation (OPE), focusing on language-based persuasion games, where the agent's goal is to influence its partner's decisions through verbal messages. Using a dedicated application, we collected a dataset of 87K decisions from humans playing …
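As a rough illustration of the prediction task described in the abstract (not the paper's actual method), one could frame human decision prediction as text classification: given the agent's verbal message, predict whether the human partner accepts the proposal. The sketch below uses a hypothetical toy dataset and a simple TF-IDF plus logistic regression pipeline; all messages, labels, and the accept/reject framing are illustrative assumptions.

```python
# Minimal sketch, assuming decisions can be framed as binary accept/reject
# labels predicted from the persuasive message text. This is NOT the paper's
# model; the data below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (message, decision) pairs; the real dataset contains ~87K
# decisions collected from humans playing persuasion games.
messages = [
    "This place has a stunning rooftop pool and great reviews.",
    "The room is small, but the location is unbeatable.",
    "Previous guests complained about noise at night.",
    "An amazing deal -- five stars across the board.",
]
decisions = [1, 1, 0, 1]  # 1 = accept the proposal, 0 = reject

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, decisions)

# Off-policy use: score messages produced by a new agent policy to simulate
# likely human responses without collecting fresh human data.
new_message = ["Breakfast is included and the service is excellent."]
print(model.predict_proba(new_message))  # [P(reject), P(accept)]
```

In the off-policy setting, such a predictor stands in for human participants when evaluating new agent policies, which is why accurately modeling human decisions is central to the design problem the paper studies.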

