RLHF: Reinforcement Learning from Human Feedback
May 2, 2023 | Chip Huyen, huyenchip.com
In the literature discussing why ChatGPT has captured so much of our imagination, I often come across two narratives:
- Scaling up: OpenAI threw more data and compute at it.
- UX: moving from a prompt interface to a more natural chat interface.
A narrative that is often glossed over in the demo frenzy is the incredible technical creativity that went into making models like ChatGPT work. One such cool idea is RLHF (Reinforcement Learning from Human Feedback): incorporating reinforcement learning …
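At the heart of RLHF is a reward model trained on human preference data: annotators compare pairs of model responses, and the reward model learns to score the preferred one higher via a pairwise (Bradley–Terry) loss. A minimal sketch of that loss, assuming scalar reward scores have already been computed for a chosen and a rejected response (function name and values are illustrative, not from the excerpt):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise Bradley-Terry loss: -log(sigmoid(r_chosen - r_rejected)).

    The loss is small when the reward model scores the human-preferred
    response well above the rejected one, and large when the ordering
    is reversed.
    """
    diff = reward_chosen - reward_rejected
    # Numerically stable form of -log(sigmoid(diff)) = log(1 + exp(-diff))
    return math.log1p(math.exp(-diff))

# A confident, correct ranking yields a small loss; an inverted
# ranking yields a large one.
print(preference_loss(2.0, 0.0))
print(preference_loss(0.0, 2.0))
```

In the full pipeline, this loss trains the reward model, whose scores then serve as the reward signal when fine-tuning the language model with a reinforcement learning algorithm such as PPO.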