April 2, 2024, 7:45 p.m. | Itay Itzhak, Gabriel Stanovsky, Nir Rosenfeld, Yonatan Belinkov

cs.LG updates on arXiv.org

arXiv:2308.00225v2 Announce Type: replace-cross
Abstract: Recent studies show that instruction tuning (IT) and reinforcement learning from human feedback (RLHF) improve the abilities of large language models (LMs) dramatically. While these tuning methods can help align models with human objectives and generate high-quality text, not much is known about their potential adverse effects. In this work, we investigate the effect of IT and RLHF on decision making and reasoning in LMs, focusing on three cognitive biases - the decoy effect, the …
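The abstract describes a behavioral probe rather than code, but the decoy-effect setup it names is simple to illustrate. Below is a minimal, hypothetical sketch (not the authors' evaluation code): the same binary choice is posed with and without an asymmetrically dominated "decoy" option, and the shift in the model's answers is measured. The `query_model` stub, the prompts, and the `decoy_shift` helper are all illustrative stand-ins, not any real API.

```python
# Hypothetical sketch of a decoy-effect probe for a language model.
# A rational chooser's preference between A and B should not change
# when an option dominated by A is added; a systematic shift toward A
# would indicate the decoy effect.

def query_model(prompt: str) -> str:
    """Hypothetical LM call; swap in a real API or local model."""
    raise NotImplementedError("plug in an actual language model here")

CONTROL_PROMPT = (
    "Choose one laptop:\n"
    "A) $900, 14-hour battery\n"
    "B) $600, 9-hour battery\n"
    "Answer with the letter only."
)

# Option C is dominated by A (same price, worse battery).
DECOY_PROMPT = (
    "Choose one laptop:\n"
    "A) $900, 14-hour battery\n"
    "B) $600, 9-hour battery\n"
    "C) $900, 11-hour battery\n"
    "Answer with the letter only."
)

def decoy_shift(n_samples: int = 50) -> float:
    """Fraction of trials where adding the decoy flips the choice to A."""
    flips = 0
    for _ in range(n_samples):
        control = query_model(CONTROL_PROMPT).strip().upper()[:1]
        treated = query_model(DECOY_PROMPT).strip().upper()[:1]
        if control == "B" and treated == "A":
            flips += 1
    return flips / n_samples
```

Sampling the model repeatedly (rather than querying once) matters here because decoding is typically stochastic; the bias shows up as a shift in the distribution of answers, not in any single response.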

