[D] What's the proper way of doing direct preference optimization (DPO), and why?
Jan. 29, 2024, 5:30 a.m. | /u/aaaprocrastinating
Machine Learning www.reddit.com
https://preview.redd.it/6c9z61o4bbfc1.png?width=2164&format=png&auto=webp&s=c6b5ed46937da04e5912023e2f46ae7821a9a446
My question is: why does it matter so much that the preference data distribution aligns with the reference model's output distribution? My understanding is that during training, the parameters of the SFT model are updated so that chosen responses (y_w) have a higher probability of being generated and rejected responses (y_l) have a lower probability of being generated, …
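For context, here is a minimal sketch of the mechanism the question describes, following the DPO objective from Rafailov et al. (2023): the loss pushes up the policy's log-probability of y_w relative to the reference model and pushes down that of y_l. The function and variable names below are illustrative assumptions, and the per-sequence log-probabilities are assumed to be precomputed elsewhere.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Sketch of the DPO loss.

    Each argument is a 1-D tensor of summed per-token log-probabilities
    log pi(y|x) over a batch of (prompt, response) pairs; names are
    hypothetical, not from the original post.
    """
    # Implicit reward of each response: beta * log(pi_theta / pi_ref).
    # This ratio against the reference model is why the preference data
    # is usually expected to lie near the reference output distribution.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

    # Maximize the margin between chosen (y_w) and rejected (y_l),
    # i.e. -log sigmoid(r_w - r_l), averaged over the batch.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```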