March 28, 2023 | Google AI Blog (ai.googleblog.com)

Posted by Harsh Mehta, Software Engineer, and Walid Krichene, Research Scientist, Google Research


Large deep learning models are becoming the workhorse of a variety of critical machine learning (ML) tasks. However, it has been shown that, without protection, bad actors can attack a variety of models across modalities to recover information about individual training examples. It is therefore essential to protect against this kind of information leakage.
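One simple illustration of such leakage is a loss-threshold membership-inference attack (in the spirit of Yeom et al., 2018). This is a generic sketch for intuition, not a method from this post; the threshold and loss values are illustrative:

import numpy as np

def loss_threshold_attack(losses, threshold):
    # Predict that an example was a training member iff the model's loss
    # on it falls below the threshold. Overfit models assign systematically
    # lower loss to training examples, which is the signal this attack
    # exploits. An attacker would tune the threshold on data they control.
    return np.asarray(losses) < threshold

# Per-example losses under a hypothetical target model (illustrative values).
guesses = loss_threshold_attack([0.02, 1.7, 0.05], threshold=0.1)
print(guesses)  # [ True False  True ]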



Differential privacy (DP) provides formal protection against an …
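For context, the standard definition: a randomized training algorithm M satisfies (ε, δ)-differential privacy if, for any two datasets D and D′ that differ in a single example and any set of outputs S,

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ.

Intuitively, the model's output distribution barely changes when any one example is added or removed, which bounds what an attacker can learn about that example from the trained model. The standard recipe for training deep models with this guarantee is DP-SGD (Abadi et al., 2016): clip each per-example gradient and add Gaussian noise before the update. A minimal sketch, assuming per-example gradients are available as NumPy arrays; hyperparameter values are illustrative and this is not the exact implementation behind this post:

import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    # Clip each example's gradient to bound its influence (the sensitivity).
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # Add Gaussian noise calibrated to the clipping norm, then average.
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    # Standard gradient-descent update on the noisy, clipped mean gradient.
    return params - lr * noisy_mean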
