May 8, 2023, 12:46 a.m. | Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Michal Irani

cs.CV updates on arXiv.org

Reconstructing samples from the training set of trained neural networks is a
major privacy concern. Haim et al. (2022) recently showed that it is possible
to reconstruct training samples from neural network binary classifiers, based
on theoretical results about the implicit bias of gradient methods. In this
work, we present several improvements and new insights beyond that prior work.
As our main improvement, we show that training-data reconstruction is possible
in the multi-class setting and that the reconstruction quality is …
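The reconstruction scheme the abstract refers to (Haim et al., 2022) exploits the implicit bias of gradient methods: for suitable networks, the trained parameters end up (approximately) satisfying the stationarity condition of a max-margin problem, theta = sum_i lambda_i * y_i * grad_theta Phi(theta; x_i), so candidate inputs and dual weights can be optimized to satisfy this equation and thereby recover margin samples. Below is a minimal PyTorch sketch of that binary-classification idea, assuming placeholder network size, sample count, labels, and hyperparameters chosen only for illustration; it is not the paper's implementation.

    import torch
    import torch.nn as nn

    # Stand-in for a trained binary classifier Phi(theta; x); in practice the
    # weights would come from an actually trained network (placeholder here).
    model = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.Linear(100, 1))
    params = list(model.parameters())
    theta = torch.cat([p.detach().flatten() for p in params])

    # Candidate samples x_i, assumed labels y_i in {-1, +1}, and dual weights
    # lambda_i >= 0, optimized so that the stationarity condition
    #     theta ~= sum_i lambda_i * y_i * grad_theta Phi(theta; x_i)
    # of the max-margin problem holds as closely as possible.
    m = 20
    x = torch.randn(m, 10, requires_grad=True)
    lam = torch.rand(m, requires_grad=True)
    y = torch.tensor([1.0] * (m // 2) + [-1.0] * (m // 2))

    opt = torch.optim.Adam([x, lam], lr=1e-2)
    for step in range(500):
        opt.zero_grad()
        grads = []
        for i in range(m):
            out = model(x[i]).squeeze()
            # Per-sample gradient of the output w.r.t. the parameters;
            # create_graph=True lets the outer loss backpropagate into x and lam.
            g = torch.autograd.grad(out, params, create_graph=True)
            grads.append(torch.cat([gi.flatten() for gi in g]))
        G = torch.stack(grads)                      # shape: (m, num_params)
        recon = ((lam.clamp(min=0) * y).unsqueeze(1) * G).sum(dim=0)
        loss = ((theta - recon) ** 2).sum()
        loss.backward()
        opt.step()
    # After optimization, candidates x_i with large lambda_i tend to resemble
    # training samples that lie on the margin.

In the multi-class setting studied in this paper, the analogous condition would involve differences between the correct-class output and each other class's output rather than a single signed output, but the overall structure of the reconstruction objective stays the same.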

arxiv, implicit bias, binary classifiers, gradient methods, neural networks, privacy, training data
