Reinforced Self-Training (ReST) for Language Modeling (Paper Explained)
Sept. 3, 2023, 12:06 p.m. | Yannic Kilcher (www.youtube.com)
ReST uses a bootstrap-like method to produce its own extended dataset and trains on ever-higher-quality subsets of it to improve its own reward. Because the same generated data can be reused multiple times, the method has an efficiency advantage over online RL techniques like PPO.
Paper: https://arxiv.org/abs/2308.08998
Abstract:
Reinforcement learning from human feedback (RLHF) can improve the quality of large language model's (LLM) outputs by aligning them with human preferences. We propose a …
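The grow-then-filter loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `policy`, `reward_fn`, and `finetune` are hypothetical stand-ins, and the threshold schedule is an assumption.

```python
def rest(policy, prompts, reward_fn, finetune, grow_steps=2, improve_steps=3):
    """Toy sketch of a ReST-style Grow/Improve loop (illustrative only)."""
    for _ in range(grow_steps):
        # Grow: sample completions from the current policy to build
        # an extended dataset, and score each sample once.
        scored = [(p, policy(p)) for p in prompts]
        scored = [(p, y, reward_fn(p, y)) for p, y in scored]
        threshold = 0.0
        for _ in range(improve_steps):
            # Improve: keep only samples above a rising reward threshold
            # and fine-tune on that filtered subset. The same generated
            # data is reused across Improve steps -- the efficiency
            # advantage over online RL methods like PPO.
            subset = [(p, y) for p, y, r in scored if r >= threshold]
            if subset:
                policy = finetune(policy, subset)
            threshold += 0.3  # raise the quality bar each Improve step
    return policy
```

Each Grow step regenerates data with the latest policy, while the inner Improve steps repeatedly filter and fine-tune on that fixed pool.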