June 29, 2022, 1:11 a.m. | Zihan Wang, Na Huang, Fei Sun, Pengjie Ren, Zhumin Chen, Hengliang Luo, Maarten de Rijke, Zhaochun Ren

cs.LG updates on arXiv.org

Learned recommender systems may inadvertently leak information about their
training data, leading to privacy violations. We investigate privacy threats
faced by recommender systems through the lens of membership inference. In such
attacks, an adversary aims to infer whether a user's data was used to train the
target recommender. To achieve this, previous work has used a shadow
recommender to derive training data for the attack model, and then predicted
membership by calculating difference vectors between users' historical
interactions and …
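The truncated sentence above describes the shadow-model pipeline from prior work, in which membership features are built from difference vectors between a user's historical interactions and the items recommended to them. A minimal sketch of one plausible feature construction, assuming item embeddings are available; the function name and shapes here are hypothetical, not the paper's actual implementation:

```python
import numpy as np

def difference_vector(interaction_embs: np.ndarray,
                      recommendation_embs: np.ndarray) -> np.ndarray:
    """Hypothetical membership-inference feature: the difference between
    the centroid of a user's historical interaction embeddings and the
    centroid of the embeddings of items recommended to that user."""
    return interaction_embs.mean(axis=0) - recommendation_embs.mean(axis=0)

# Toy example with random item embeddings of dimension 8.
rng = np.random.default_rng(0)
hist = rng.normal(size=(5, 8))   # 5 historically interacted items
recs = rng.normal(size=(10, 8))  # 10 recommended items
feat = difference_vector(hist, recs)  # one feature vector per user
```

In the shadow-model setting, such vectors computed on the shadow recommender (with known membership labels) would train a binary attack classifier, which is then applied to vectors from the target recommender.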

arxiv, attacks, inference, learning, recommender systems
