Feb. 26, 2024, 5:43 a.m. | Michael J. Ryan, William Held, Diyi Yang

cs.LG updates on arXiv.org

arXiv:2402.15018v1 Announce Type: cross
Abstract: Before being deployed for user-facing applications, developers align Large Language Models (LLMs) to user preferences through a variety of procedures, such as Reinforcement Learning From Human Feedback (RLHF) and Direct Preference Optimization (DPO). Current evaluations of these procedures focus on benchmarks of instruction following, reasoning, and truthfulness. However, human preferences are not universal, and aligning to specific preference sets may have unintended effects. We explore how alignment impacts performance along three axes of global representation: …
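As context for the preference-tuning procedures the abstract names, the standard DPO objective (Rafailov et al., 2023) tunes the policy directly on preference pairs without a separate reward model. This is the textbook form of the loss, shown only for orientation; it is not notation taken from this paper:

$$\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]$$

Here $\pi_\theta$ is the model being aligned, $\pi_{\text{ref}}$ a frozen reference model, $(x, y_w, y_l)$ a prompt with preferred and dispreferred responses drawn from a preference dataset $\mathcal{D}$, $\beta$ a temperature hyperparameter, and $\sigma$ the logistic function. Which preferences populate $\mathcal{D}$ is exactly the question the paper raises.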

