March 14, 2024, 4:43 a.m. | Ana-Maria Cretu, Daniel Jones, Yves-Alexandre de Montjoye, Shruti Tople

cs.LG updates on arXiv.org

arXiv:2306.05093v2 Announce Type: replace-cross
Abstract: Machine learning models have been shown to leak sensitive information about their training datasets. Models are increasingly deployed on devices, raising concerns that white-box access to the model parameters increases the attack surface compared to black-box access, which only provides query access. Directly extending the shadow-modelling technique from the black-box to the white-box setting has been shown, in general, not to perform better than black-box-only attacks. A potential reason is misalignment, a known …
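The shadow-modelling technique the abstract refers to can be sketched in a black-box membership-inference setting: train shadow models on auxiliary data to learn how confidence scores differ between training members and non-members, then apply that decision rule to a target model. The toy "model", the helper names, and the simple threshold rule below are all illustrative assumptions, not the paper's method or code.

```python
import random

random.seed(0)

def train_toy_model(data):
    # Toy stand-in for a black-box model: it memorises the mean of its
    # training points and returns higher confidence near that mean.
    mean = sum(data) / len(data)
    return lambda x: 1.0 / (1.0 + abs(x - mean))

# Auxiliary population the attacker can sample from.
population = [random.gauss(0.0, 1.0) for _ in range(400)]

# Train shadow models, recording confidences for their own training
# points (members) and for held-out points (non-members).
member_scores, nonmember_scores = [], []
for _ in range(20):
    sample = random.sample(population, 100)
    train, holdout = sample[:50], sample[50:]
    shadow = train_toy_model(train)
    member_scores += [shadow(x) for x in train]
    nonmember_scores += [shadow(x) for x in holdout]

# Attack "classifier": a single confidence threshold (midpoint of the
# two score means) -- a simple stand-in for a learned attack model.
threshold = (sum(member_scores) / len(member_scores)
             + sum(nonmember_scores) / len(nonmember_scores)) / 2

def infer_membership(model, x):
    # Query access only: the attack sees nothing but the model's output.
    return model(x) >= threshold

# Apply the attack to a target model trained on private data.
private = random.sample(population, 50)
target = train_toy_model(private)
hits = sum(infer_membership(target, x) for x in private)
print(f"flagged {hits}/50 training points as members")
```

On such a low-dimensional toy the separation between member and non-member confidences is weak; the point is only the pipeline structure (shadow training, score collection, attack rule), which the paper contrasts with white-box variants that also see model parameters.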

