March 5, 2024, 2:44 p.m. | Jan Aalmoes, Vasisht Duddu, Antoine Boutet

cs.LG updates on arXiv.org

arXiv:2211.10209v2 Announce Type: replace
Abstract: Machine learning (ML) models have been deployed for high-stakes applications, e.g., healthcare and criminal justice. Prior work has shown that ML models are vulnerable to attribute inference attacks where an adversary, with some background knowledge, trains an ML attack model to infer sensitive attributes by exploiting distinguishable model predictions. However, some prior attribute inference attacks have strong assumptions about adversary's background knowledge (e.g., marginal distribution of sensitive attribute) and pose no more privacy risk than …

