May 15, 2023, 12:43 a.m. | Yinchuan Li, Shuang Luo, Yunfeng Shao, Jianye Hao

cs.LG updates on arXiv.org

We propose the GFlowNets with Human Feedback (GFlowHF) framework to improve exploration when training AI models. For tasks where the reward is unknown, we fit a reward function from human evaluations of different trajectories. The goal of GFlowHF is to learn a policy that samples outcomes with probability strictly proportional to the human ratings, rather than concentrating only on the highest-rated outcomes as RLHF does. Experiments show that GFlowHF achieves better exploration than RLHF.
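
The abstract describes two ingredients: a reward model distilled from human ratings of trajectories, and a sampler trained so that its terminal-state distribution is proportional to that reward. Below is a minimal sketch of that idea, not the authors' code: it uses a toy binary-string environment, a stand-in reward function in place of a reward model fit to real human ratings, and the standard GFlowNet trajectory-balance objective as the training loss. All names and architecture choices here are illustrative assumptions.

```python
# Sketch of GFlowHF-style training (assumptions: toy environment, stand-in reward,
# trajectory-balance objective). Build a binary string of length N one bit at a time;
# the state space is a tree, so the backward policy is deterministic and drops out
# of the trajectory-balance loss.
import torch
import torch.nn as nn

N = 8  # sequence length; a trajectory terminates after N actions


class ForwardPolicy(nn.Module):
    """Predicts logits over the next bit given an encoding of the partial sequence."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * N, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, state):  # state: (batch, 2*N)
        return self.net(state)


def encode(prefix):
    """Encode a partial bit string as a fixed-size one-hot vector (zeros for unfilled positions)."""
    v = torch.zeros(2 * N)
    for i, b in enumerate(prefix):
        v[2 * i + b] = 1.0
    return v


def reward_model(bits):
    """Stand-in for the reward fit to human ratings in GFlowHF.

    Here it is a fixed toy function (an assumption for this sketch): the 'rating'
    is the number of ones plus one, so the trained sampler should visit strings
    with more ones proportionally more often, without collapsing onto the maximum.
    """
    return float(sum(bits)) + 1.0


policy = ForwardPolicy()
log_Z = nn.Parameter(torch.zeros(1))  # learned log partition function
opt = torch.optim.Adam(list(policy.parameters()) + [log_Z], lr=1e-3)

for step in range(2000):
    # Sample one trajectory from the current forward policy.
    prefix, log_pf = [], torch.zeros(1)
    for _ in range(N):
        logits = policy(encode(prefix).unsqueeze(0))
        dist = torch.distributions.Categorical(logits=logits)
        a = dist.sample()
        log_pf = log_pf + dist.log_prob(a)
        prefix.append(int(a.item()))

    # Trajectory balance: (log Z + log P_F(tau) - log R(x))^2, with P_B = 1 on a tree.
    log_r = torch.log(torch.tensor(reward_model(prefix)))
    loss = (log_Z + log_pf - log_r).pow(2).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

At convergence of this kind of objective, the probability of generating a string is proportional to its (learned) rating, which is the "strictly proportional to human ratings" property the abstract contrasts with RLHF's focus on the top-rated outcomes.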

