Self-playing Adversarial Language Game Enhances LLM Reasoning
April 17, 2024, 4:42 a.m. | Pengyu Cheng, Tianhao Hu, Han Xu, Zhisong Zhang, Yong Dai, Lei Han, Nan Du
cs.LG updates on arXiv.org
Abstract: We explore the self-play training procedure of large language models (LLMs) in a two-player adversarial language game called Adversarial Taboo. In this game, an attacker and a defender communicate about a target word visible only to the attacker. The attacker aims to induce the defender to utter the target word unconsciously, while the defender tries to infer the target word from the attacker's utterances. To win the game, both players should have sufficient …