April 9, 2024, 4:43 a.m. | Tim Baumg\"artner, Yang Gao, Dana Alon, Donald Metzler

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.05530v1 Announce Type: cross
Abstract: Reinforcement Learning from Human Feedback (RLHF) is a popular method for aligning Language Models (LM) with human values and preferences. RLHF requires a large number of preference pairs as training data, which are often used in both the Supervised Fine-Tuning and Reward Model training, and therefore publicly available datasets are commonly used. In this work, we study to what extent a malicious actor can manipulate the LMs generations by poisoning the preferences, i.e., injecting poisonous …

abstract arxiv cs.ai cs.cl cs.cr cs.lg data feedback fine-tuning human human feedback language language models popular reinforcement reinforcement learning reward model rlhf supervised fine-tuning training training data type values

Data Engineer

@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania

Artificial Intelligence – Bioinformatic Expert

@ University of Texas Medical Branch | Galveston, TX

Lead Developer (AI)

@ Cere Network | San Francisco, US

Research Engineer

@ Allora Labs | Remote

Ecosystem Manager

@ Allora Labs | Remote

Founding AI Engineer, Agents

@ Occam AI | New York