April 16, 2024, 4:43 a.m. | Junjielong Xu, Ying Fu, Shin Hwei Tan, Pinjia He

cs.LG updates on arXiv.org

arXiv:2404.08877v1 Announce Type: cross
Abstract: Large language models (LLMs) have achieved decent results on automated program repair (APR). However, the next token prediction training objective of decoder-only LLMs (e.g., GPT-4) is misaligned with the masked span prediction objective of current infilling-style methods, which impedes LLMs from fully leveraging pre-trained knowledge for program repair. In addition, while some LLMs are capable of locating and repairing bugs end-to-end when using the related artifacts (e.g., test cases) as input, existing methods regard them …
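To make the objective mismatch concrete, here is a minimal sketch contrasting the two input formats the abstract describes. The templates, the `<MASK>` sentinel, and the example bug are hypothetical illustrations, not taken from the paper: an infilling-style method replaces the faulty span with a sentinel and asks the model to predict only that span, while a decoder-only LLM such as GPT-4 is trained to continue text left to right, so the fix arrives as a plain continuation.

```python
# Hypothetical prompt templates (illustration only, not the paper's method)
# contrasting infilling-style vs. next-token-prediction inputs for APR.

BUGGY_FUNCTION = """\
def median(xs):
    xs = sorted(xs)
    return xs[len(xs) / 2]   # bug: float index in Python 3
"""

BUGGY_LINE = "    return xs[len(xs) / 2]   # bug: float index in Python 3"


def infilling_prompt(code: str, buggy_line: str) -> str:
    """Masked-span format used by infilling-style repair methods:
    the suspicious span is replaced by a sentinel token, and the model
    is asked to predict only the masked span."""
    return code.replace(buggy_line, "<MASK>")


def causal_prompt(code: str) -> str:
    """Next-token-prediction format natural to decoder-only LLMs:
    the repair is generated as a left-to-right continuation of an
    instruction-plus-code prefix."""
    return (
        "# The following function is buggy. Write a fixed version.\n"
        f"{code}\n"
        "# Fixed version:\n"
    )


if __name__ == "__main__":
    print("--- infilling-style (masked span) input ---")
    print(infilling_prompt(BUGGY_FUNCTION, BUGGY_LINE))
    print("--- decoder-only (next-token) input ---")
    print(causal_prompt(BUGGY_FUNCTION))
```

The first format forces a decoder-only model to handle a sentinel token it never saw in pre-training, whereas the second lets it apply its next-token objective directly, which is the misalignment the paper targets.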

Categories: cs.CL, cs.LG, cs.SE
