Feb. 9, 2024, 5:42 a.m. | Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei

cs.LG updates on arXiv.org

Machine learning models are susceptible to membership inference attacks (MIAs), which aim to infer whether a given sample was in the training set. Existing defenses use gradient ascent to enlarge the loss variance of the training data, thereby alleviating privacy risk. However, optimizing in the reverse direction can cause the model parameters to oscillate near local minima, leading to instability and suboptimal performance. In this work, we propose a novel method -- Convex-Concave Loss, which enables a high variance of training loss …
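The abstract is truncated, so the paper's exact loss is not shown here. As a rough illustration of the convex-concave idea, the sketch below combines standard cross-entropy with a hypothetical concave correction term (here, minus a multiple of the squared per-sample cross-entropy); the concrete form of the concave term is an assumption for illustration only, not the paper's formula.

```python
import numpy as np

def convex_concave_loss(logits, labels, alpha=0.5):
    """Illustrative convex-concave style loss.

    Standard cross-entropy minus alpha times an illustrative concave
    term (squared cross-entropy), intended to widen the spread of
    per-sample training losses. The concave term is a placeholder,
    not the formulation from the paper.
    """
    # numerically stable softmax
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # per-sample cross-entropy of the true class
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    # subtract a concave (in ce) penalty; alpha=0 recovers plain CE
    return ce - alpha * ce ** 2

# Example: two confident, correct predictions
logits = np.array([[5.0, 0.0], [0.0, 5.0]])
labels = np.array([0, 1])
loss = convex_concave_loss(logits, labels)
```

With `alpha=0` this reduces to ordinary cross-entropy, which makes it easy to ablate the concave term when experimenting.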

