March 28, 2024, 4:48 a.m. | Yangruibo Ding, Marcus J. Min, Gail Kaiser, Baishakhi Ray

cs.CL updates on arXiv.org

arXiv:2403.18746v1 Announce Type: cross
Abstract: Pre-trained code language models have achieved promising performance in code generation and have improved the programming efficiency of human developers. However, their capability for self-refinement is typically overlooked by existing evaluations of code LMs, which focus only on the accuracy of one-time predictions. When a code LM fails to implement the correct program, developers find it hard to debug and fix the faulty prediction, since it is not written by the developers …
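To make the contrast between one-time prediction accuracy and self-refinement concrete, here is a minimal sketch of an evaluation loop in which a code LM sees its own failing program and the test error, then tries again. This is only an illustration of the general idea, not the paper's benchmark or protocol; `query_model` is a hypothetical stand-in for whatever code LM API is being evaluated, and the test-execution and prompt formats are assumptions.

```python
# Illustrative self-refinement evaluation loop (not the paper's protocol).
# Assumptions: `query_model` is a hypothetical code LM call; tests are plain
# Python assertions appended to the candidate program and run as a script.
import subprocess
import tempfile


def query_model(prompt: str) -> str:
    """Hypothetical code LM call; replace with a real model/API."""
    raise NotImplementedError


def run_tests(candidate: str, test_code: str) -> tuple[bool, str]:
    """Execute the candidate plus its unit tests; return (passed, error output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate + "\n\n" + test_code)
        path = f.name
    proc = subprocess.run(["python", path], capture_output=True, text=True, timeout=30)
    return proc.returncode == 0, proc.stderr


def evaluate_with_refinement(problem: str, test_code: str, max_rounds: int = 3) -> bool:
    """Return True if the model solves the task within `max_rounds` attempts,
    feeding each failing program and its error back into the next prompt."""
    prompt = problem
    for _ in range(max_rounds):
        candidate = query_model(prompt)
        passed, error = run_tests(candidate, test_code)
        if passed:
            return True  # solved, possibly only after refinement
        # Feed the faulty prediction and its failure back for another attempt.
        prompt = (
            problem
            + "\n\nPrevious attempt:\n" + candidate
            + "\n\nIt failed with:\n" + error
            + "\n\nPlease fix the program."
        )
    return False  # unsolved within the refinement budget
```

Scoring only the first iteration corresponds to the one-time prediction accuracy the abstract describes, while allowing multiple rounds measures whether the model can recover from its own faulty predictions.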

