When Machine Learning Models Leak: An Exploration of Synthetic Training Data
March 8, 2024, 5:42 a.m. | Manel Slokom, Peter-Paul de Wolf, Martha Larson
cs.LG updates on arXiv.org arxiv.org
Abstract: We investigate an attack on a machine learning model that predicts whether a person or household will relocate in the next two years, i.e., a propensity-to-move classifier. The attack assumes that the attacker can query the model to obtain predictions and that the marginal distribution of the data on which the model was trained is publicly available. The attack also assumes that the attacker has obtained the values of non-sensitive attributes for a certain number …
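The setting the abstract describes — a black-box classifier the attacker can query, plus public marginals of the training data — matches the general shape of an attribute-inference (model-inversion-style) attack. A minimal sketch of that generic idea is below; all names are illustrative, and the paper's actual method may differ:

```python
def infer_sensitive_attribute(model_query, known_attrs, candidates, marginal_prior):
    """For each candidate value of the sensitive attribute, query the
    black-box model and weight its prediction confidence by the publicly
    known marginal probability of that value; return the top-scoring value.
    This is a generic attribute-inference sketch, not the paper's method."""
    best_value, best_score = None, float("-inf")
    for value in candidates:
        record = dict(known_attrs, sensitive=value)   # fill in the guessed value
        confidence = model_query(record)              # black-box query
        score = confidence * marginal_prior[value]    # weight by public marginal
        if score > best_score:
            best_value, best_score = value, score
    return best_value

# Toy demonstration with a stand-in "model" that is more confident
# when the sensitive attribute equals 1.
toy_model = lambda rec: 0.9 if rec["sensitive"] == 1 else 0.4
prior = {0: 0.5, 1: 0.5}
guess = infer_sensitive_attribute(toy_model, {"age": 30}, [0, 1], prior)
print(guess)  # prints 1
```

The scoring rule here is the simplest possible combination of model confidence and prior; a real attack would calibrate these quantities against the published marginal distribution.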