April 16, 2024, 4:51 a.m. | Xiao Chen, Sihang Zhou, Ke Liang, Xinwang Liu

cs.CL updates on arXiv.org

arXiv:2404.09170v1 Announce Type: new
Abstract: Chain of thought finetuning aims to endow small student models with reasoning capacity to improve their performance towards a specific task by allowing them to imitate the reasoning procedure of large language models (LLMs) beyond simply predicting the answer to the question. However, the existing methods 1) generate rationale before the answer, making their answer correctness sensitive to the hallucination in the rationale;2) force the student model to repeat the exact LLMs rationale expression word-after-word, …
