Regularized Best-of-N Sampling to Mitigate Reward Hacking for Language Model Alignment
April 2, 2024, 7:52 p.m. | Yuu Jinnai, Tetsuro Morimura, Kaito Ariu, Kenshi Abe
cs.CL updates on arXiv.org
Abstract: Best-of-N (BoN) sampling with a reward model has been shown to be an effective strategy for aligning Large Language Models (LLMs) to human preferences at decoding time. However, BoN sampling is susceptible to a problem known as reward hacking: because the reward model is an imperfect proxy for the true objective, over-optimizing its value can compromise performance on the true objective. A common solution to prevent reward hacking in preference learning techniques is …
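The mechanism the abstract describes is simple to sketch: draw N candidate responses, score each with the reward model, and return the highest scorer; a regularized variant additionally penalizes candidates that drift far from a reference model, so a response that exploits reward-model errors is less likely to win. The sketch below is illustrative only, not the paper's method: the proximity penalty log pi(y) - log pi_ref(y), and the names `reward_fn`, `policy_logprob_fn`, `ref_logprob_fn`, and `beta`, are assumptions for the example, and the paper's exact regularizer may differ.

```python
from typing import Callable, List

def best_of_n(candidates: List[str],
              reward_fn: Callable[[str], float]) -> str:
    # Plain BoN: return the candidate the reward model scores highest.
    return max(candidates, key=reward_fn)

def regularized_best_of_n(candidates: List[str],
                          reward_fn: Callable[[str], float],
                          policy_logprob_fn: Callable[[str], float],
                          ref_logprob_fn: Callable[[str], float],
                          beta: float = 0.1) -> str:
    # Regularized BoN sketch: subtract a proximity penalty so that a
    # candidate with high proxy reward but low plausibility under the
    # reference model is less likely to be selected.
    # The penalty log pi(y) - log pi_ref(y) is an assumed stand-in for
    # a KL-style regularizer; the paper's formulation may differ.
    def objective(y: str) -> float:
        proximity = policy_logprob_fn(y) - ref_logprob_fn(y)
        return reward_fn(y) - beta * proximity
    return max(candidates, key=objective)

# Toy usage with made-up scoring functions (for illustration only).
if __name__ == "__main__":
    cands = ["a concise correct answer", "spam spam spam spam spam spam"]
    reward = lambda y: float(len(y))   # a hackable proxy: longer = better
    pol_lp = lambda y: -0.1 * len(y)   # fake policy log-prob
    ref_lp = lambda y: -0.1 * len(y) - (10.0 if "spam" in y else 0.0)
    print(best_of_n(cands, reward))                    # reward-hacked pick
    print(regularized_best_of_n(cands, reward,
                                pol_lp, ref_lp, beta=1.0))  # regularized pick
```

In the toy run, plain BoN selects the longer spam string because the proxy reward favors length, while the regularized objective discounts it for being implausible under the reference model, which is the failure mode and mitigation the abstract describes.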