March 29, 2024, 4:47 a.m. | Yufan Jiang, Qiaozhi He, Xiaomin Zhuang, Zhihua Wu

cs.CL updates on arXiv.org

arXiv:2403.19121v1 Announce Type: new
Abstract: We present Code Comparison Tuning (CCT), a simple and effective tuning method for code large language models (Code LLMs) to better handle subtle code errors. Specifically, we integrate the concept of comparison into instruction tuning, at both the token and sequence levels, enabling the model to discern even the slightest deviations in code. To compare the original code with an erroneous version containing manually added code errors, we use a token-level preference loss for detailed token-level …
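The excerpt cuts off before the loss is defined, but the general shape of a token-level preference loss can be illustrated. The sketch below is a minimal PyTorch illustration, not the authors' implementation: it assumes the erroneous variant is produced by in-place token substitutions so the two sequences stay aligned, and the function name, margin formulation, and masking are all invented for illustration.

```python
import torch
import torch.nn.functional as F

def token_preference_loss(logits: torch.Tensor,
                          good_ids: torch.Tensor,
                          bad_ids: torch.Tensor,
                          margin: float = 1.0) -> torch.Tensor:
    """Hypothetical token-level preference loss (not the paper's exact form).

    At each position where the original and the corrupted code disagree,
    push the log-probability of the correct token above that of the
    erroneous one by at least `margin`. Assumes teacher forcing on the
    original sequence, so `logits` comes from a single forward pass.

    logits:   (batch, seq_len, vocab) next-token logits on the original code
    good_ids: (batch, seq_len) token ids of the original (correct) code
    bad_ids:  (batch, seq_len) token ids of the erroneous variant
    """
    logp = F.log_softmax(logits, dim=-1)
    logp_good = logp.gather(-1, good_ids.unsqueeze(-1)).squeeze(-1)
    logp_bad = logp.gather(-1, bad_ids.unsqueeze(-1)).squeeze(-1)
    # Only positions where the two versions differ carry a preference signal.
    diff = (good_ids != bad_ids).float()
    per_token = F.relu(margin - (logp_good - logp_bad))
    return (per_token * diff).sum() / diff.sum().clamp(min=1.0)
```

In practice a term like this would presumably be added to the ordinary instruction-tuning cross-entropy, with a complementary sequence-level comparison term, matching the abstract's description of comparison at both levels.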
