Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key?
Feb. 29, 2024, 5:48 a.m. | Qineng Wang, Zihao Wang, Ying Su, Hanghang Tong, Yangqiu Song
Source: cs.CL updates on arXiv.org (arxiv.org)
Abstract: Recent progress in LLM discussion suggests that multi-agent discussion improves the reasoning abilities of LLMs. In this work, we reevaluate this claim through systematic experiments, proposing a novel group discussion framework to enrich the set of discussion mechanisms. Interestingly, our results show that a single-agent LLM with a strong prompt can achieve almost the same performance as the best existing discussion approach across a wide range of reasoning tasks and backbone LLMs. We observe …
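To make the comparison concrete, the sketch below illustrates the two setups contrasted in the abstract: a round-based multi-agent discussion loop versus a single agent with a carefully engineered prompt. It is a generic illustration, not the authors' actual framework; `query_llm` is a hypothetical placeholder for any chat-completion call.

```python
# Minimal sketch, assuming a generic chat-completion interface.
# This is an illustration of the two setups, not the paper's implementation.

from typing import List


def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (hypothetical)."""
    return f"[model answer to: {prompt[:40]}...]"


def multi_agent_discussion(question: str, n_agents: int = 3, rounds: int = 2) -> List[str]:
    """Each agent answers, then revises after seeing the other agents' answers."""
    answers = [
        query_llm(f"Agent {i}: answer the question.\n{question}")
        for i in range(n_agents)
    ]
    for _ in range(rounds):
        answers = [
            query_llm(
                f"Agent {i}: here are the other agents' answers:\n"
                + "\n".join(a for j, a in enumerate(answers) if j != i)
                + f"\nRevise your answer to the question:\n{question}"
            )
            for i in range(n_agents)
        ]
    return answers


def single_agent_strong_prompt(question: str) -> str:
    """One agent with a strong prompt (e.g. explicit step-by-step reasoning)."""
    prompt = (
        "You are an expert reasoner. Think step by step, consider alternative "
        f"solutions, then give a final answer.\n\nQuestion: {question}"
    )
    return query_llm(prompt)


if __name__ == "__main__":
    q = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
    print(multi_agent_discussion(q))
    print(single_agent_strong_prompt(q))
```

The paper's claim, in these terms, is that `single_agent_strong_prompt` can match the best variants of `multi_agent_discussion` on many reasoning benchmarks.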