all AI news
[N] Feds appoint “AI doomer” to run US AI safety institute
April 17, 2024, 10:49 p.m. | /u/bregav
Machine Learning www.reddit.com
Article intro:
*Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but who is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST may risk encouraging non-scientific thinking that many critics view as sheer speculation.*
More from www.reddit.com / Machine Learning
[D] ECCV 2024 Review Discussion
1 day, 1 hour ago | www.reddit.com
[D] Is it a good idea for a 3rd year PhD student to start a …
1 day, 3 hours ago | www.reddit.com
[D] Use VQ-VAEs for SSL?
1 day, 4 hours ago | www.reddit.com
Jobs in AI, ML, Big Data
Artificial Intelligence – Bioinformatic Expert
@ University of Texas Medical Branch | Galveston, TX
Lead Developer (AI)
@ Cere Network | San Francisco, US
Research Engineer
@ Allora Labs | Remote
Ecosystem Manager
@ Allora Labs | Remote
Founding AI Engineer, Agents
@ Occam AI | New York
AI Engineer Intern, Agents
@ Occam AI | US