March 15, 2024, 4:41 a.m. | Yuncheng Huang, Qianyu He, Yipei Xu, Jiaqing Liang, Yanghua Xiao

cs.LG updates on arXiv.org

arXiv:2403.09479v1 Announce Type: new
Abstract: Current language models have demonstrated their capability for basic reasoning, but struggle with more complicated reasoning tasks that require a combination of atomic skills, such as math word problems requiring skills like arithmetic and unit conversion. Previous methods either do not improve the inherent atomic skills of models or do not attempt to generalize the atomic skills to complex reasoning tasks. In this paper, we first propose a probing framework to investigate whether the atomic …
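To make the abstract's example concrete, here is a minimal sketch (not from the paper; the function names and the word problem are hypothetical) of a composite task that requires two atomic skills, unit conversion and arithmetic, to be applied together:

```python
# Hypothetical illustration of a composite reasoning task that combines
# two atomic skills: unit conversion and arithmetic (summation).

def convert_km_to_m(km: float) -> float:
    """Atomic skill 1: unit conversion (kilometres to metres)."""
    return km * 1000


def total_distance_m(legs_km: list[float]) -> float:
    """Composite task: convert each leg, then sum the results."""
    return sum(convert_km_to_m(d) for d in legs_km)


# "A runner covers 1.2 km, 0.8 km and 2 km. How many metres in total?"
print(total_distance_m([1.2, 0.8, 2]))  # 4000.0
```

A model that handles each skill in isolation can still fail here if it cannot chain them, which is the kind of skill composition the paper's probing framework is designed to investigate.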

