CYCLE: Learning to Self-Refine Code Generation
March 28, 2024, 4:48 a.m. | Yangruibo Ding, Marcus J. Min, Gail Kaiser, Baishakhi Ray
cs.CL updates on arXiv.org
Abstract: Pre-trained code language models have achieved promising performance in code generation and improved the programming efficiency of human developers. However, their self-refinement capability is typically overlooked by the existing evaluations of code LMs, which focus only on the accuracy of the one-time prediction. For the cases when code LMs fail to implement the correct program, developers actually find it hard to debug and fix the faulty prediction since it is not written by the developers …
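The self-refinement setting the abstract describes can be pictured as a simple generate-test-repair loop: instead of judging a single one-time prediction, the model is re-prompted with its own faulty output and the failure signal. The sketch below is only an illustration under assumed interfaces; `generate_code` and `run_tests` are hypothetical placeholders for an LM call and a test harness, not the paper's CYCLE implementation.

```python
# Minimal sketch of an execution-feedback self-refinement loop for a code LM.
# NOTE: generate_code() and run_tests() are hypothetical placeholders, not the
# CYCLE paper's actual method; this only illustrates the general idea of
# refining a faulty prediction using the observed failure.

from typing import Callable, Optional


def self_refine(
    prompt: str,
    generate_code: Callable[[str], str],        # hypothetical: LM call returning source code
    run_tests: Callable[[str], Optional[str]],  # hypothetical: None on pass, error text on failure
    max_rounds: int = 3,
) -> str:
    """Iteratively regenerate code using execution feedback until tests pass."""
    code = generate_code(prompt)
    for _ in range(max_rounds):
        error = run_tests(code)
        if error is None:
            # All tests passed; no further refinement needed.
            return code
        # Fold the faulty prediction and its failure message back into the prompt.
        feedback_prompt = (
            f"{prompt}\n\n# Previous attempt:\n{code}\n"
            f"# It failed with:\n# {error}\n# Please fix the code."
        )
        code = generate_code(feedback_prompt)
    return code
```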