Variational Model Inversion Attacks. (arXiv:2201.10787v1 [cs.LG])
Web: http://arxiv.org/abs/2201.10787
Jan. 27, 2022, 2:10 a.m. | Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani
cs.LG updates on arXiv.org
Given the ubiquity of deep neural networks, it is important that these models
do not reveal information about sensitive data that they have been trained on.
In model inversion attacks, a malicious user attempts to recover the private
dataset used to train a supervised neural network. A successful model inversion
attack should generate realistic and diverse samples that accurately describe
each of the classes in the private dataset. In this work, we provide a
probabilistic interpretation of model inversion attacks, …
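
To make the attack setting concrete, here is a minimal sketch of the classic gradient-based model inversion idea, not the paper's variational method: given only a trained classifier, we optimize an input by gradient ascent so the model assigns it high probability under a target class. The `model` argument is assumed to be any trained PyTorch classifier; the input shape and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 1, 28, 28),
                 steps=500, lr=0.1):
    """Recover a representative input for `target_class` by gradient ascent
    on the classifier's output, starting from random noise."""
    model.eval()
    x = torch.randn(input_shape, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the log-probability of the target class
        # (equivalently, minimize its cross-entropy loss).
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)  # keep the sample in a valid pixel range
    return x.detach()
```

A point-estimate attack like this yields a single prototype per class; the paper's probabilistic framing instead targets realistic and diverse samples for each class, motivating the variational objective.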