Feb. 8, 2024, 5:46 a.m. | Zhuang Li, Levon Haroutunian, Raj Tumuluri, Philip Cohen, Gholamreza Haffari

cs.CL updates on arXiv.org

Post-editing has proven effective in improving the quality of text generated by large language models (LLMs) such as GPT-3.5 or GPT-4, particularly when direct updating of their parameters to enhance text quality is infeasible or expensive. However, relying solely on smaller language models for post-editing can limit the LLMs' ability to generalize across domains. Moreover, the editing strategies in these methods are not optimally designed for text-generation tasks. To address these limitations, we propose a neural programmer-interpreter approach that preserves …
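To make the post-editing paradigm concrete, here is a minimal, generic sketch: a large model's draft is corrected by applying a sequence of token-level edit operations. This is not the paper's neural programmer-interpreter; the `KEEP`/`DELETE`/`REPLACE` operations and the hand-written edit program below are hypothetical stand-ins for what a learned post-editor might emit.

```python
def apply_edit_program(tokens, program):
    """Apply per-token edit ops (KEEP, DELETE, REPLACE:<word>) to a draft.

    A learned post-editor can be viewed as emitting such an edit program
    conditioned on the draft; here the program is written by hand.
    """
    out = []
    for tok, op in zip(tokens, program):
        if op == "KEEP":
            out.append(tok)
        elif op == "DELETE":
            continue  # drop the token (e.g. a duplicated word)
        elif op.startswith("REPLACE:"):
            out.append(op.split(":", 1)[1])  # substitute a corrected token
    return out

# Hypothetical draft from a large model, with a duplication and a typo.
draft = "The the model acheives strong results .".split()
program = ["KEEP", "DELETE", "KEEP", "REPLACE:achieves", "KEEP", "KEEP", "KEEP"]
print(" ".join(apply_edit_program(draft, program)))
# → The model achieves strong results .
```

Framing post-editing as an explicit edit program (rather than free-form rewriting) keeps most of the original text intact, which is one reason edit-based correctors tend to preserve the base model's content.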

