Feb. 5, 2024, 3:48 p.m. | Shihan Dou Yan Liu Haoxiang Jia Limao Xiong Enyu Zhou Junjie Shan Caishuang Huang Wei Shen

cs.CL updates on arXiv.org

The advancement of large language models (LLMs) has significantly propelled the field of code generation. Prior work integrated reinforcement learning (RL) with compiler feedback to explore the output space of LLMs and improve code generation quality. However, the lengthy code that LLMs generate in response to complex human requirements makes RL exploration challenging. Moreover, since unit tests may not cover such complicated code, optimizing LLMs on these unexecuted code snippets is ineffective. To tackle these challenges, we …
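The compiler/unit-test feedback loop the abstract refers to can be sketched as a scalar reward: execute a generated snippet against its unit tests and score it by the fraction that pass. This is a minimal illustration of the general idea, not the paper's actual reward design; the function name and test format are assumptions.

```python
import os
import subprocess
import sys
import tempfile

def compiler_feedback_reward(code: str, tests: list[str]) -> float:
    """Score generated code by executing it against unit tests.

    Returns the fraction of tests that run without error -- a simple
    scalar reward that an RL loop over an LLM's outputs could optimize.
    (Hypothetical sketch; real systems sandbox execution properly.)
    """
    passed = 0
    for test in tests:
        # Write candidate code plus one assertion-style test to a temp file.
        with tempfile.NamedTemporaryFile(
            "w", suffix=".py", delete=False
        ) as f:
            f.write(code + "\n" + test + "\n")
            path = f.name
        try:
            result = subprocess.run(
                [sys.executable, path],
                capture_output=True,
                timeout=5,  # guard against non-terminating generations
            )
            if result.returncode == 0:
                passed += 1
        except subprocess.TimeoutExpired:
            pass  # timeout counts as a failed test
        finally:
            os.remove(path)
    return passed / len(tests) if tests else 0.0

# Example: a correct and a buggy candidate for "add two numbers".
tests = ["assert add(2, 3) == 5", "assert add(-1, 1) == 0"]
good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
```

A reward like this is sparse when tests exercise only part of a long program, which is exactly the coverage problem the abstract raises.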

