April 9, 2024, 4:51 a.m. | Wenyang Hui, Yan Wang, Kewei Tu, Chengyue Jiang

cs.CL updates on arXiv.org

arXiv:2404.05449v1 Announce Type: new
Abstract: Large language models (LLMs) have demonstrated impressive capabilities in reasoning and planning when integrated with tree-search-based prompting methods. However, since these methods ignore previous search experiences, they often repeat the same mistakes during search. To address this issue, we introduce Reflection on search Trees (RoT), an LLM reflection framework designed to improve the performance of tree-search-based prompting methods. It uses a strong LLM to summarize guidelines from previous tree search experiences to …
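
The abstract only sketches the mechanism, so here is a minimal, hypothetical Python sketch of that loop, assuming one plausible reading: record each finished tree search, have a strong LLM summarize guidelines from those experiences, and prepend the guidelines to the prompts of later searches. All names here (SearchExperience, call_llm, summarize_guidelines, search_with_reflection) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the RoT idea described in the abstract; names and
# prompts are assumptions for illustration, not the authors' code.
from dataclasses import dataclass


@dataclass
class SearchExperience:
    """One finished tree search: the task, a textual trace, and its outcome."""
    task: str
    trajectory: list[str]  # textual trace of the nodes expanded during search
    succeeded: bool


def call_llm(prompt: str) -> str:
    """Stub standing in for a call to a strong LLM; swap in a real API client."""
    return "Avoid re-expanding states that previously led to dead ends."


def summarize_guidelines(experiences: list[SearchExperience]) -> str:
    """Ask the strong LLM to reflect on past searches and distill reusable guidelines."""
    traces = "\n\n".join(
        f"Task: {e.task}\n"
        f"Trace: {' -> '.join(e.trajectory)}\n"
        f"Outcome: {'success' if e.succeeded else 'failure'}"
        for e in experiences
    )
    prompt = (
        "Below are traces of previous tree searches, including their mistakes.\n"
        "Summarize concise guidelines for avoiding those mistakes in future "
        f"searches.\n\n{traces}\n\nGuidelines:"
    )
    return call_llm(prompt)


def search_with_reflection(task: str, past: list[SearchExperience]) -> SearchExperience:
    """Run one search whose prompt is prefixed with guidelines reflected from past runs."""
    guidelines = summarize_guidelines(past) if past else ""
    prompt = f"{guidelines}\n\nTask: {task}\nPropose the next search step:"
    # A real implementation would expand a search tree (e.g., MCTS or BFS over
    # LLM-proposed actions); a single LLM call stands in for that loop here.
    step = call_llm(prompt)
    return SearchExperience(task=task, trajectory=[step], succeeded=False)


if __name__ == "__main__":
    past = [SearchExperience("Game of 24 with 3 3 8 8", ["8 / (3 - 8 / 3)"], True)]
    print(search_with_reflection("Game of 24 with 2 4 10 10", past).trajectory)
```

The key design point this sketch tries to capture is that reflection happens across searches rather than within one: guidelines distilled from earlier experiences are reused as prompt context, so the searcher need not rediscover the same failure modes each time.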
