April 9, 2024, 4:50 a.m. | Duanyu Feng, Bowen Qin, Chen Huang, Zheng Zhang, Wenqiang Lei

cs.CL updates on arXiv.org

arXiv:2404.04626v1 Announce Type: new
Abstract: Direct Preference Optimization (DPO), which derives reward signals directly from pairwise preference data, has shown its effectiveness in aligning Large Language Models (LLMs) with human preferences. Despite its widespread use across various tasks, DPO has been criticized for its sensitivity to the effectiveness of supervised fine-tuning (SFT) and for hindering the model's capacity to learn human-preferred responses, leading to less satisfactory performance. To overcome these limitations, a theoretical understanding of DPO is indispensable but still lacking. To this …
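For context, the DPO objective the abstract refers to trains the policy directly on preference pairs, using the beta-scaled log-ratio between the policy and a frozen reference model as an implicit reward. Below is a minimal PyTorch sketch of that standard loss; the function name, the beta=0.1 default, and the toy tensors are illustrative and not taken from this paper.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: beta-scaled log-ratios of policy vs. reference
    # log-probabilities for the chosen (preferred) and rejected responses.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # DPO minimizes the negative log-sigmoid of the reward margin,
    # pushing the policy to prefer the chosen response over the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy per-sequence log-probabilities for a batch of two preference pairs:
pi_w = torch.tensor([-12.3, -9.8]);   pi_l = torch.tensor([-14.1, -10.5])
ref_w = torch.tensor([-12.0, -10.0]); ref_l = torch.tensor([-13.5, -10.2])
print(dpo_loss(pi_w, pi_l, ref_w, ref_l))

Note that the sketch assumes sequence-level log-probabilities have already been summed over tokens for each response; the paper's criticism concerns how this objective interacts with the SFT starting point, not the formula itself.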
