Nov. 21, 2022, 2:11 a.m. | Jan Aalmoes, Vasisht Duddu, Antoine Boutet

cs.LG updates on arXiv.org

Machine learning (ML) models have been deployed in high-stakes applications,
e.g., healthcare and criminal justice. Prior work has shown that ML models are
vulnerable to attribute inference attacks, in which an adversary with some
background knowledge trains an ML attack model to infer sensitive attributes
by exploiting distinguishable model predictions. However, some prior attribute
inference attacks make strong assumptions about the adversary's background
knowledge (e.g., the marginal distribution of the sensitive attribute) and pose
no more privacy risk than statistical inference. Moreover, none of …
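To make the attack setting concrete, below is a minimal sketch of an attribute inference attack of the kind the abstract describes: an adversary with auxiliary records (for which the sensitive attribute is known) queries the target model and trains an attack model that maps the target's prediction vectors to the sensitive attribute. The synthetic data, model choices, and variable names are illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch of an attribute inference attack on a target classifier.
# All data and model choices here are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: non-sensitive features X, a binary sensitive attribute s,
# and a task label y correlated with both (so predictions leak s).
n = 5000
X = rng.normal(size=(n, 8))
s = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
y = (X[:, 1] + 0.8 * s + rng.normal(scale=0.5, size=n) > 0.4).astype(int)

X_train, X_aux, y_train, y_aux, s_train, s_aux = train_test_split(
    X, y, s, test_size=0.4, random_state=0
)

# Target model: trained on the task label only; it never sees s directly.
target = RandomForestClassifier(n_estimators=100, random_state=0)
target.fit(X_train, y_train)

# Adversary: holds auxiliary records with known sensitive attributes, queries
# the target model, and trains an attack model mapping predictions -> s.
aux_preds = target.predict_proba(X_aux)
atk_train, atk_test, s_atk_train, s_atk_test = train_test_split(
    aux_preds, s_aux, test_size=0.5, random_state=0
)
attack = LogisticRegression().fit(atk_train, s_atk_train)

print("attack accuracy on held-out records:", attack.score(atk_test, s_atk_test))
```

Comparing the attack model's accuracy against simply guessing the majority value of the sensitive attribute is one way to check whether the attack reveals more than statistical inference, which is the baseline concern the abstract raises.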

algorithmic fairness arxiv attacks fairness inference
