May 2, 2023

Chip Huyen (huyenchip.com)

In the literature discussing why ChatGPT is able to capture so much of our imagination, I often come across two narratives:

  1. Scaling up: OpenAI threw more data and compute at it.

  2. UX: moving from a prompt interface to a more natural chat interface.

A narrative that is often glossed over in the demo frenzy is the incredible technical creativity that went into making models like ChatGPT work. One such cool idea is RLHF (Reinforcement Learning from Human Feedback): incorporating reinforcement learning …

