March 6, 2024, 5:41 a.m. | Zixuan Liu, Xiaolin Sun, Zizhan Zheng

cs.LG updates on arXiv.org

arXiv:2403.02475v1 Announce Type: new
Abstract: The rapidly increasing capabilities of large language models (LLMs) raise an urgent need to align AI systems with diverse human preferences to simultaneously enhance their usefulness and safety, despite the often conflicting nature of these goals. To address this important problem, a promising approach is to enforce a safety constraint at the fine-tuning stage through a constrained Reinforcement Learning from Human Feedback (RLHF) framework. This approach, however, is computationally expensive and often unstable. In this …
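As context for the constrained RLHF framing the abstract refers to, a generic form of this objective is sketched below. This is the standard safe-alignment formulation, not a formulation taken from this paper; the helpfulness reward $r$, safety cost $c$, threshold $d$, KL coefficient $\beta$, and reference policy $\pi_{\mathrm{ref}}$ are illustrative symbols:

$$
\max_{\pi}\;\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi(\cdot \mid x)}\big[\, r(x, y) \,\big] \;-\; \beta\, \mathrm{KL}\big(\pi \,\|\, \pi_{\mathrm{ref}}\big)
\qquad \text{s.t.} \qquad
\mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi(\cdot \mid x)}\big[\, c(x, y) \,\big] \;\le\; d .
$$

Such constrained problems are typically handled with a Lagrangian relaxation, $\min_{\lambda \ge 0} \max_{\pi} \mathbb{E}\big[r(x,y) - \lambda\, c(x,y)\big] - \beta\, \mathrm{KL}(\pi \,\|\, \pi_{\mathrm{ref}})$, whose primal-dual RL training loop is the part the abstract describes as computationally expensive and often unstable.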

