RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment. (arXiv:2304.06767v1 [cs.LG])
cs.LG updates on arXiv.org
Generative foundation models are susceptible to implicit biases that can
arise from extensive unsupervised training data. Such biases can produce
suboptimal samples, skewed outcomes, and unfairness, with potentially
significant repercussions. Consequently, aligning these models with human
ethics and preferences is an essential step toward ensuring their responsible
and effective deployment in real-world applications. Prior research has
primarily employed Reinforcement Learning from Human Feedback (RLHF) as a means
of addressing this problem, wherein generative models are fine-tuned using RL
algorithms guided …
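The abstract cuts off before describing RAFT itself, but the title spells out the core idea: reward-ranked fine-tuning. A minimal sketch of that loop, assuming toy stand-ins for the generator and reward model (the function names `raft_select`, `generate`, and `reward` are illustrative, not from the paper's code):

```python
import random


def raft_select(prompts, generate, reward, k=4):
    """Reward-ranked selection: for each prompt, draw k candidate
    responses, score each with the reward model, and keep only the
    highest-reward one. The retained (prompt, response) pairs then
    serve as a supervised fine-tuning set, in place of an RL update."""
    dataset = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(k)]
        best = max(candidates, key=lambda r: reward(prompt, r))
        dataset.append((prompt, best))
    return dataset


# Toy demonstration: a random "generator" and a reward that prefers
# larger numeric suffixes. Real use would plug in an LLM and a learned
# reward model here.
random.seed(0)
gen = lambda p: p + "-" + str(random.randint(0, 9))
rew = lambda p, r: int(r.split("-")[1])
sft_data = raft_select(["promptA", "promptB"], gen, rew, k=4)
```

The appeal of this scheme over PPO-style RLHF is that the alignment signal enters only through data selection, so the training step itself is plain supervised fine-tuning.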