Feb. 27, 2023, 6:26 p.m. | Sam Charrington

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) twimlai.com

Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from LLMs” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamics of dealing with privacy issues in black-box vs. accessible models, what privacy attacks on vision models like diffusion models look like, and the …

