RLHF: Reinforcement Learning from Human Feedback
May 2, 2023
Chip Huyen huyenchip.com
In the literature discussing why ChatGPT has captured so much of our imagination, I often come across two narratives:
- Scaling up: OpenAI threw more data and compute at it.
- UX: moving from a prompt interface to a more natural chat interface.
A narrative that is often glossed over in the demo frenzy is the incredible technical creativity that went into making models like ChatGPT work. One such cool idea is RLHF (Reinforcement Learning from Human Feedback): incorporating reinforcement learning …
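The excerpt above only names RLHF before being cut off, but its core ingredient is well known: a reward model trained on human preference pairs, where the model learns to score the response humans chose above the one they rejected. As a minimal sketch (the function name and values below are illustrative, not from the article), the standard pairwise Bradley-Terry objective looks like this:

```python
import math

def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Reward-model loss on one human preference pair:
    -log(sigmoid(r_chosen - r_rejected)).
    The loss is small when the reward model already ranks the
    human-preferred response higher, and large when it disagrees."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the scoring margin in favor of the chosen
# response grows, and grows when the model prefers the rejected one.
agree = pairwise_preference_loss(2.0, 0.0)     # model agrees with the human
disagree = pairwise_preference_loss(0.0, 2.0)  # model disagrees
print(agree < disagree)  # → True
```

A reward model trained this way is then used as the reward signal for a reinforcement-learning step (PPO in the original InstructGPT setup) that fine-tunes the language model.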