March 21, 2024, 4:43 a.m. | Xiao Li, Qiongxiu Li, Zhanhao Hu, Xiaolin Hu

cs.LG updates on arXiv.org

arXiv:2208.08270v3 Announce Type: replace
Abstract: Machine learning poses severe privacy concerns, as learned models have been shown to reveal sensitive information about their training data. Many works have investigated the effect of widely adopted data augmentation and adversarial training techniques, termed data enhancement in this paper, on the privacy leakage of machine learning models. Such privacy effects are often measured by membership inference attacks (MIAs), which aim to identify whether a particular example belongs to the training …
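To make the attack the abstract refers to concrete: a common baseline MIA is a loss-threshold attack, which exploits the fact that a model's loss is typically lower on examples it was trained on. The sketch below is illustrative only and is not from the paper; the synthetic loss distributions and the median-based threshold are assumptions chosen to keep the example self-contained.

```python
import numpy as np

# Minimal sketch of a loss-threshold membership inference attack (MIA),
# assuming the adversary can query the target model's per-example loss.
# The gamma-distributed losses below are synthetic stand-ins: members
# (training examples) are modeled with lower loss than non-members.
rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=2.0, scale=0.1, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.3, size=1000)

def mia_predict(losses, threshold):
    """Predict 'member' (True) when the loss falls below the threshold."""
    return losses < threshold

# Calibrate the threshold; here we use the median of the pooled losses,
# a simple stand-in for calibration on shadow-model data.
threshold = np.median(np.concatenate([member_losses, nonmember_losses]))

tpr = mia_predict(member_losses, threshold).mean()     # true-positive rate
fpr = mia_predict(nonmember_losses, threshold).mean()  # false-positive rate
accuracy = 0.5 * (tpr + (1 - fpr))  # balanced attack accuracy
print(f"attack accuracy: {accuracy:.2f}")
```

An accuracy meaningfully above 0.5 (random guessing) indicates membership leakage; data enhancement techniques such as augmentation and adversarial training change the member/non-member loss gap and thus this attack's success rate.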

