Jan. 31, 2024, 4:40 p.m. | Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal

cs.CL updates on arXiv.org

While large language models (LLMs) are increasingly being used for program
synthesis, they lack the global view needed to develop useful abstractions;
they generally predict programs one at a time, often repeating the same
functionality. Generating redundant code from scratch is both inefficient and
error-prone. To address this, we propose Refactoring for Generalizable
Abstraction Learning (ReGAL), a gradient-free method for learning a library of
reusable functions via code refactorization, i.e., restructuring code without
changing its execution output. ReGAL learns from …
