GFlowNets with Human Feedback. (arXiv:2305.07036v1 [cs.LG])
cs.LG updates on arXiv.org
We propose the GFlowNets with Human Feedback (GFlowHF) framework to improve
the exploration ability when training AI models. For tasks where the reward is
unknown, we fit the reward function through human evaluations on different
trajectories. The goal of GFlowHF is to learn a policy whose sampling
probability is strictly proportional to human ratings, rather than
concentrating only on the highest-rated outcomes as RLHF does. Experiments
show that GFlowHF achieves better exploration ability than RLHF.
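The contrast between the two objectives can be illustrated with a minimal sketch. Assuming a small set of hypothetical trajectories with human ratings (the names and values below are invented for illustration, not from the paper), a GFlowHF-style target assigns each trajectory probability proportional to its rating, while an RLHF-style reward maximizer puts all its mass on the top-rated one:

```python
# Hypothetical human ratings for four candidate trajectories
# (illustrative values only).
ratings = {"t1": 1.0, "t2": 3.0, "t3": 5.0, "t4": 1.0}

# GFlowHF-style target: sample each trajectory with probability
# proportional to its rating, R(x) / Z.
Z = sum(ratings.values())
gflowhf_policy = {x: r / Z for x, r in ratings.items()}

# RLHF-style target (reward maximization): all probability mass
# concentrates on the single highest-rated trajectory.
best = max(ratings, key=ratings.get)
rlhf_policy = {x: 1.0 if x == best else 0.0 for x in ratings}

# Every trajectory keeps nonzero mass under GFlowHF, so lower-rated
# but still-valid trajectories continue to be explored; the RLHF-style
# target visits only one mode.
print(gflowhf_policy)
print(rlhf_policy)
```

This is why the abstract frames GFlowHF as improving exploration: proportional sampling preserves diversity across trajectories instead of collapsing onto a single favorite.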