Improving Language Model Reasoning with Self-motivated Learning
April 11, 2024, 4:47 a.m. | Yunlong Feng, Yang Xu, Libo Qin, Yasheng Wang, Wanxiang Che
cs.CL updates on arXiv.org
Abstract: Large-scale, high-quality training data is important for improving model performance. After being trained on data that contains rationales (reasoning steps), models gain reasoning capability. However, datasets with high-quality rationales are relatively scarce due to the high annotation cost. To address this issue, we propose the \textit{Self-motivated Learning} framework. The framework motivates the model itself to automatically generate rationales on existing datasets. Based on the inherent ranking by correctness across multiple rationales, the model learns …
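The abstract describes a loop in which the model samples several rationales per question and ranks them by whether they reach the correct answer. The sketch below illustrates that generate-then-rank step under stated assumptions; it is not the paper's code, and the names (generate_rationales, extract_answer, the toy dataset) are illustrative placeholders.

```python
# Minimal sketch of sampling rationales and splitting them by answer correctness,
# then forming (preferred, rejected) pairs from the implied ranking.
# All function names and data here are hypothetical stand-ins, not the paper's API.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Example:
    question: str
    gold_answer: str


def rank_rationales(
    example: Example,
    generate_rationales: Callable[[str, int], List[str]],  # hypothetical sampler
    extract_answer: Callable[[str], str],                   # hypothetical parser
    num_samples: int = 4,
) -> Tuple[List[str], List[str]]:
    """Sample several rationales for one question and split them by whether the
    final answer matches the gold answer. Correct rationales are implicitly
    ranked above incorrect ones."""
    rationales = generate_rationales(example.question, num_samples)
    correct = [r for r in rationales if extract_answer(r) == example.gold_answer]
    incorrect = [r for r in rationales if extract_answer(r) != example.gold_answer]
    return correct, incorrect


def build_preference_pairs(correct: List[str], incorrect: List[str]) -> List[Tuple[str, str]]:
    """Pair every correct rationale with every incorrect one as (preferred, rejected),
    a training signal that could feed a ranking or preference-based objective."""
    return [(c, w) for c in correct for w in incorrect]


if __name__ == "__main__":
    # Toy stand-ins for a real model's sampling and answer extraction.
    def fake_sampler(question: str, n: int) -> List[str]:
        return [f"... therefore the answer is {a}" for a in ("4", "4", "5", "4")][:n]

    def fake_parser(rationale: str) -> str:
        return rationale.rsplit(" ", 1)[-1]

    ex = Example(question="What is 2 + 2?", gold_answer="4")
    good, bad = rank_rationales(ex, fake_sampler, fake_parser)
    print(build_preference_pairs(good, bad))
```

In practice the sampler would be the language model itself decoding with temperature, and the resulting pairs (or the correct/incorrect split) would be used as the supervision signal for further training; how exactly that signal is applied is detailed in the paper, not here.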