March 20, 2024, 4:48 a.m. | Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji

cs.CL updates on arXiv.org

arXiv:2305.10314v2 Announce Type: replace
Abstract: Fine-tuning pre-trained language models (LMs) is essential for enhancing their capabilities. Existing techniques commonly fine-tune on input-output pairs (e.g., instruction tuning) or with numerical rewards that gauge the output quality (e.g., RLHF). We explore LMs' potential to learn from textual interactions (LETI) that not only check their correctness with binary labels but also pinpoint and explain errors in their outputs through textual feedback. Our focus is the code generation task, where the model produces code …

