Feb. 27, 2024, 5:44 a.m. | Sajjad Zarifzadeh, Philippe Liu, Reza Shokri

cs.LG updates on arXiv.org arxiv.org

arXiv:2312.03262v2 Announce Type: replace-cross
Abstract: Membership inference attacks (MIA) aim to detect if a particular data point was used in training a machine learning model. Recent strong attacks have high computational costs and inconsistent performance under varying conditions, rendering them unreliable for practical privacy risk assessment. We design a novel, efficient, and robust membership inference attack (RMIA) which accurately differentiates between population data and training data of a model, with minimal computational overhead. We achieve this by a more accurate …
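To make the setting concrete, here is a minimal, illustrative sketch of a membership inference score in the relative spirit the abstract describes: compare the target model's confidence on a query point against confidences drawn from population (non-training) data. This is an assumption-laden toy, not the RMIA algorithm itself; the function names and the simple dominance score are invented for illustration.

```python
import numpy as np

def membership_score(target_conf, population_confs):
    """Fraction of population reference confidences that the target
    model's confidence on the query point exceeds. A higher score
    suggests the point is atypically 'easy' for the model, which is
    weak evidence it was seen during training. (Toy stand-in for a
    proper likelihood-ratio test.)"""
    population_confs = np.asarray(population_confs, dtype=float)
    return float(np.mean(target_conf > population_confs))

def infer_membership(target_conf, population_confs, threshold=0.9):
    """Flag the point as a training member when its confidence
    dominates at least `threshold` of the population sample."""
    return membership_score(target_conf, population_confs) >= threshold

# Example: a point the model is unusually confident on is flagged;
# a point with typical confidence is not.
member = infer_membership(0.99, [0.1, 0.3, 0.5, 0.7])
non_member = infer_membership(0.2, [0.1, 0.3, 0.5, 0.7])
```

The appeal of this pairwise, relative formulation is that it needs only population samples and cheap model queries, in contrast to attacks that train many shadow models per query.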

