Jan. 31, 2024, 4:40 p.m. | Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal

cs.CL updates on arXiv.org

While large language models (LLMs) are increasingly used for program
synthesis, they lack the global view needed to develop useful abstractions:
they generally predict programs one at a time, often repeating the same
functionality. Generating redundant code from scratch is both inefficient and
error-prone. To address this, we propose Refactoring for Generalizable
Abstraction Learning (ReGAL), a gradient-free method for learning a library of
reusable functions via code refactorization, i.e., restructuring code without
changing its execution output. ReGAL learns from …
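To make the idea concrete, below is a minimal, hypothetical sketch of the kind of refactoring the abstract describes, not ReGAL's actual pipeline: two independently generated programs repeat the same functionality, and a shared helper is factored out while the programs' outputs stay unchanged. All function names here are illustrative assumptions.

```python
# Before refactoring: each generated program re-implements the same step.
def program_a_original(words):
    cleaned = [w.strip().lower() for w in words]   # repeated functionality
    return sorted(cleaned)

def program_b_original(words):
    cleaned = [w.strip().lower() for w in words]   # same logic, duplicated
    return set(cleaned)

# After refactoring: the shared step lives in a reusable library function
# (a hypothetical stand-in for a learned abstraction).
def normalize(words):
    """Shared helper reused across programs instead of regenerated."""
    return [w.strip().lower() for w in words]

def program_a_refactored(words):
    return sorted(normalize(words))

def program_b_refactored(words):
    return set(normalize(words))

# Refactoring must preserve execution output, so equivalence can be checked.
sample = ["  Apple", "banana ", "APPLE"]
assert program_a_refactored(sample) == program_a_original(sample)
assert program_b_refactored(sample) == program_b_original(sample)
```

The design point the sketch illustrates: because the restructuring is output-preserving, it can be validated by execution alone, which is why a gradient-free method suffices.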
