Preference as Reward, Maximum Preference Optimization with Importance Sampling
March 26, 2024, 4:44 a.m. | Zaifan Jiang, Xing Huang, Chao Wei
cs.LG updates on arXiv.org
Abstract: Preference learning is a key technique for aligning language models with human values. Reinforcement Learning from Human Feedback (RLHF) is a model-based algorithm for preference learning: it first fits a reward model to preference scores and then optimizes the generating policy with the on-policy PPO algorithm to maximize that reward. The RLHF pipeline is complex, time-consuming, and unstable. The Direct Preference Optimization (DPO) algorithm uses an off-policy algorithm to directly optimize the generating …
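For context, a minimal sketch of the two standard objectives the abstract contrasts; these are the common formulations from the RLHF and DPO literature, not the paper's own MPO derivation. RLHF maximizes a learned reward $r(x, y)$ under a KL penalty toward a reference policy $\pi_{\mathrm{ref}}$:

\[
\max_{\pi_\theta} \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)} \big[ r(x, y) \big] \;-\; \beta \, \mathrm{KL}\!\big( \pi_\theta(\cdot \mid x) \,\big\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big),
\]

while DPO eliminates the explicit reward model and optimizes directly on preference pairs, where $y_w$ is preferred over $y_l$:

\[
\mathcal{L}_{\mathrm{DPO}} = - \mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right].
\]

The trade-off the abstract points at: RLHF's two-stage, on-policy pipeline is what makes it complex and unstable, while DPO's single off-policy loss avoids both the reward-model fit and PPO rollouts.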