Feb. 6, 2024, 5:48 a.m. | Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Jianmeng Liu, Jipeng Zhang, Rui Pan, Haoxiang Wang

cs.LG updates on arXiv.org

LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting, which is also known as the alignment tax. To empirically verify this hypothesis, we conducted experiments with existing RLHF algorithms using OpenLLaMA-3B, which revealed a pronounced alignment tax in NLP tasks. On the other hand, while various techniques exist to mitigate forgetting, they are often at odds with RLHF performance, leading to a trade-off between reward maximization …
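As a rough illustration of the reward-maximization-versus-forgetting trade-off described above, the sketch below interpolates between the weights of a pre-RLHF OpenLLaMA-3B checkpoint and an RLHF-tuned counterpart (simple model averaging). This is a generic mitigation idea sketched under assumptions, not necessarily the paper's proposed method, and the RLHF checkpoint path is a hypothetical placeholder.

```python
# Minimal sketch: parameter-wise interpolation between a pre-RLHF model and an
# RLHF-tuned model, to expose the reward-vs-forgetting trade-off. Illustration
# only; the RLHF checkpoint path below is a placeholder, not a real artifact.
from transformers import AutoModelForCausalLM

def interpolate_state_dicts(sft_model, rlhf_model, alpha):
    """Mix weights as (1 - alpha) * pre-RLHF + alpha * RLHF, parameter-wise."""
    sft_state = sft_model.state_dict()
    rlhf_state = rlhf_model.state_dict()
    return {
        name: (1.0 - alpha) * param + alpha * rlhf_state[name]
        for name, param in sft_state.items()
    }

# Pre-RLHF base model (OpenLLaMA-3B) and a hypothetical RLHF-tuned checkpoint.
sft = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
rlhf = AutoModelForCausalLM.from_pretrained("path/to/openllama-3b-rlhf")  # placeholder

# alpha = 0 recovers the pre-RLHF model (no reward gain, no forgetting);
# alpha = 1 recovers the RLHF model (full reward gain, full alignment tax).
merged = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
merged.load_state_dict(interpolate_state_dicts(sft, rlhf, alpha=0.5))

# Evaluating `merged` on both the reward objective and standard NLP benchmarks
# across several alpha values traces out the trade-off discussed in the abstract.
```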
