April 17, 2023, 8:05 p.m. | Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, Tong Zhang

stat.ML updates on arXiv.org

Generative foundation models are susceptible to implicit biases that can
arise from extensive unsupervised training data. Such biases can produce
suboptimal samples, skewed outcomes, and unfairness, with potentially
significant repercussions. Consequently, aligning these models with human
ethics and preferences is an essential step toward ensuring their responsible
and effective deployment in real-world applications. Prior research has
primarily employed Reinforcement Learning from Human Feedback (RLHF) as a means
of addressing this problem, wherein generative models are fine-tuned using RL
algorithms guided …
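
To make the RLHF setup mentioned above concrete, here is a minimal, self-contained toy sketch: a placeholder reward model scores single-token "generations", and a REINFORCE-style policy-gradient update shifts the generator toward higher-reward outputs. The vocabulary, reward function, and learning rate are hypothetical illustrations chosen for this sketch, not details from the paper.

```python
# Toy illustration of the RLHF loop sketched in the abstract: a reward model
# scores samples from a generative model, and a simple policy-gradient
# (REINFORCE) update nudges the generator toward higher-reward outputs.
# All names and the reward function are hypothetical placeholders.
import math
import random

VOCAB = ["good", "bad", "neutral"]

# "Generative model": unnormalized logits over a one-token vocabulary.
logits = {tok: 0.0 for tok in VOCAB}

def sample_token():
    """Sample a token from the softmax over the current logits."""
    z = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / z for tok, v in logits.items()}
    r, acc = random.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok, probs
    return tok, probs

def reward_model(token):
    """Stand-in for a learned, human-feedback-informed reward model."""
    return {"good": 1.0, "bad": -1.0, "neutral": 0.0}[token]

LEARNING_RATE = 0.5
for step in range(200):
    token, probs = sample_token()
    reward = reward_model(token)
    # REINFORCE gradient for a softmax policy: (indicator - prob) * reward.
    for tok in VOCAB:
        grad = ((1.0 if tok == token else 0.0) - probs[tok]) * reward
        logits[tok] += LEARNING_RATE * grad

print({tok: round(v, 2) for tok, v in logits.items()})  # "good" should dominate
```

After a few hundred updates the logit for the high-reward token grows, which is the essence of reward-model-guided fine-tuning; real RLHF pipelines replace this toy policy with a large generative model and the plain REINFORCE step with a more stable RL algorithm.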
