Weak-to-Strong Extrapolation Expedites Alignment
April 26, 2024, 4:42 a.m. | Chujie Zheng, Ziqi Wang, Heng Ji, Minlie Huang, Nanyun Peng
cs.LG updates on arXiv.org arxiv.org
Abstract: Although the capabilities of large language models (LLMs) ideally scale up with increasing data and compute, they are inevitably constrained by limited resources in practice. Suppose we have a moderately trained LLM in hand (e.g., one trained to align with human preference); can we further exploit its potential and cheaply obtain a stronger model? In this paper, we propose a simple method called ExPO to boost LLMs' alignment with human preference. ExPO assumes that a medium-aligned …
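The abstract is cut off before the method details, but weak-to-strong extrapolation of this kind can be read as linear extrapolation in weight space: take the direction from a weaker (e.g., SFT) checkpoint to a medium-aligned checkpoint and step further along it. A minimal sketch under that assumption follows; the function name, the `alpha` parameter, and the dict-of-floats weight representation are illustrative stand-ins, not the paper's actual implementation.

```python
def extrapolate_weights(weak, medium, alpha=0.5):
    """Step beyond the medium-aligned weights along the weak -> medium direction.

    weak, medium: dicts mapping parameter names to floats (stand-ins for
    full weight tensors). alpha > 0 controls how far past the medium
    checkpoint the extrapolation goes; alpha = 0 returns the medium weights.
    """
    return {name: medium[name] + alpha * (medium[name] - weak[name])
            for name in medium}

# Toy example: a single "parameter" moving from 0.0 (weak) to 1.0 (medium).
weak = {"w": 0.0}
medium = {"w": 1.0}
strong = extrapolate_weights(weak, medium, alpha=0.5)
# strong["w"] is 1.5: half a step beyond the medium model
```

In practice the same update would be applied per-tensor over a full model state dict; the scalar version above only illustrates the arithmetic.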