April 8, 2024, 9:03 p.m. | Sam Charrington

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) twimlai.com

Today we're joined by Peter Hase, a fifth-year PhD student in the University of North Carolina's NLP lab. We discuss "scalable oversight" and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how interpretability researchers probe weight matrices, and explore the two schools of thought on how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing …

