April 17, 2024, 4:41 a.m. | Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mossé, Eric Pacuit, Stuart Russell, Haile

cs.LG updates on arXiv.org

arXiv:2404.10271v1 Announce Type: new
Abstract: Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, so that, for example, they refuse to comply with requests for help with committing crimes or with producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with …

