Privacy Re-identification Attacks on Tabular GANs
April 2, 2024, 7:43 p.m. | Abdallah Alshantti, Adil Rasheed, Frank Westad
cs.LG updates on arXiv.org arxiv.org
Abstract: Generative models are subject to overfitting and thus may potentially leak sensitive information from the training data. In this work, we investigate the privacy risks that can potentially arise from the use of generative adversarial networks (GANs) for creating tabular synthetic datasets. For this purpose, we analyse the effects of re-identification attacks on synthetic data, i.e., attacks which aim at selecting samples that are predicted to correspond to memorised training samples based on their proximity …
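The proximity idea in the abstract can be illustrated with a minimal sketch: flag synthetic rows that lie unusually close to some candidate record, treating them as possibly memorised training data. This is an assumption-laden toy, not the paper's actual attack; the function name, the Euclidean metric, and the fixed threshold are all hypothetical choices for illustration.

```python
import numpy as np

def reidentification_attack(synthetic, candidates, threshold):
    """Hypothetical proximity-based re-identification sketch.

    Flags each synthetic row whose nearest candidate record (by
    Euclidean distance) is closer than `threshold`, on the assumption
    that such rows may reproduce memorised training samples.
    """
    # Pairwise Euclidean distances, shape (n_synthetic, n_candidates).
    diffs = synthetic[:, None, :] - candidates[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    nearest = dists.min(axis=1)      # distance to the closest candidate
    matches = dists.argmin(axis=1)   # index of that closest candidate
    flagged = nearest < threshold
    return flagged, matches

# Toy data: the first synthetic row nearly duplicates candidate 0.
candidates = np.array([[0.0, 0.0], [5.0, 5.0]])
synthetic = np.array([[0.01, 0.0], [9.0, 9.0]])
flagged, matches = reidentification_attack(synthetic, candidates, threshold=0.1)
# flagged → [True, False]; matches[0] → 0
```

In practice an attacker would normalise heterogeneous tabular features and calibrate the threshold against a reference distance distribution, but the nearest-neighbour comparison above is the core of a proximity-based attack.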