May 26, 2022, 1:12 a.m. | Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, Maarten Sap

cs.CL updates on arXiv.org

Most existing dialogue systems fail to respond properly to potentially unsafe
user utterances by either ignoring or passively agreeing with them. To address
this issue, we introduce ProsocialDialog, the first large-scale multi-turn
dialogue dataset to teach conversational agents to respond to problematic
content following social norms. Covering diverse unethical, problematic,
biased, and toxic situations, ProsocialDialog contains responses that encourage
prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb,
RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists
of 58K dialogues, …
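For readers who want to inspect the dialogues, RoTs, and safety annotations described above, here is a minimal sketch using the Hugging Face datasets library. The hub ID and the presence of a train split are assumptions based on typical AI2 dataset releases, not confirmed by this post; the field names are whatever the release actually contains, so the sketch only prints them.

# Minimal sketch: load and inspect ProsocialDialog.
# Assumption: the dataset is published on the Hugging Face Hub under an ID
# like "allenai/prosocial-dialog" with a "train" split; adjust if it differs.
from datasets import load_dataset

ds = load_dataset("allenai/prosocial-dialog", split="train")

example = ds[0]
print(example.keys())  # inspect the available fields (dialogue turns, responses, RoTs, safety labels)
print(example)         # one annotated example from the ~58K dialogues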

Tags: agents, arxiv, conversational, conversational agents
