Mixed Preference Optimization: Reinforcement Learning with Data Selection and Better Reference Model
March 29, 2024, 4:48 a.m. | Qi Gou, Cam-Tu Nguyen
cs.CL updates on arXiv.org
Abstract: Large Language Models (LLMs) have become increasingly popular due to their ability to process and generate natural language. However, as they are trained on massive datasets of text, LLMs can inherit harmful biases and produce outputs that are not aligned with human values. This paper studies two main approaches to LLM alignment: Reinforcement Learning with Human Feedback (RLHF) and contrastive learning-based methods like Direct Preference Optimization (DPO). By analyzing the stability and robustness of RLHF …
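For context, since the abstract is cut off: the contrastive learning-based method it names, DPO, optimizes the policy directly against a frozen reference model on preference pairs. The standard objective below is from Rafailov et al. (2023); it is background, not the mixed objective this paper proposes.

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
\]

Here $y_w$ and $y_l$ are the preferred and dispreferred responses to prompt $x$, $\sigma$ is the logistic function, and $\beta$ controls how far $\pi_\theta$ may drift from the reference model $\pi_{\mathrm{ref}}$. The "better reference model" in the paper's title plausibly concerns the choice of $\pi_{\mathrm{ref}}$ in an objective of this form, though the truncated abstract does not confirm the details.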