April 24, 2024, 4:47 a.m. | Yufeng Zhang, Xuepeng Wang, Lingxiang Wu, Jinqiao Wang

cs.CL updates on arXiv.org arxiv.org

arXiv:2404.14812v1 Announce Type: new
Abstract: Chain-of-thought (CoT) prompting can guide language models to engage in complex multi-step reasoning. The quality of provided demonstrations significantly impacts the success of downstream inference tasks. While existing automated methods prioritize accuracy and semantics in these demonstrations, we show that the underlying reasoning patterns play a more crucial role in such tasks. In this paper, we propose Pattern-Aware CoT, a prompting method that considers the diversity of demonstration patterns. By incorporating patterns such as step …
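To make the idea of pattern-diverse demonstrations concrete, here is a minimal Python sketch. It assumes, for illustration only, that a demonstration's "pattern" can be approximated by its number of reasoning steps; the paper's actual pattern definition and selection procedure are not given in the truncated abstract, so the names and logic below are hypothetical.

```python
# Minimal sketch: selecting pattern-diverse CoT demonstrations and building a prompt.
# Assumption: "pattern" is approximated by the number of reasoning steps per demo;
# this is an illustration, not the method from the paper.

from collections import defaultdict

# Hypothetical candidate demonstrations: (question, chain-of-thought steps, answer).
candidates = [
    ("Q1", ["step 1", "step 2"], "A1"),
    ("Q2", ["step 1", "step 2", "step 3"], "A2"),
    ("Q3", ["step 1", "step 2"], "A3"),
    ("Q4", ["step 1", "step 2", "step 3", "step 4"], "A4"),
]

def select_pattern_diverse(demos, k=3):
    """Pick up to k demonstrations, at most one per step-count pattern."""
    by_pattern = defaultdict(list)
    for demo in demos:
        by_pattern[len(demo[1])].append(demo)
    selected = []
    for pattern in sorted(by_pattern):           # iterate over distinct patterns
        if len(selected) == k:
            break
        selected.append(by_pattern[pattern][0])  # one representative per pattern
    return selected

def build_prompt(demos, query):
    """Assemble a few-shot CoT prompt from the chosen demonstrations."""
    blocks = []
    for question, steps, answer in demos:
        blocks.append(f"Q: {question}\n" + "\n".join(steps) + f"\nA: {answer}")
    blocks.append(f"Q: {query}\nA: Let's think step by step.")
    return "\n\n".join(blocks)

if __name__ == "__main__":
    demos = select_pattern_diverse(candidates, k=3)
    print(build_prompt(demos, "new test question"))
```

The point of the sketch is only the selection step: rather than choosing demonstrations by semantic similarity or accuracy alone, it spreads them across distinct reasoning patterns before assembling the prompt.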
