Web: http://arxiv.org/abs/2209.12046

Sept. 28, 2022, 1:13 a.m. | Xin Yang, Omid Ardakanian

cs.LG updates on arXiv.org

This paper proposes a sensor data anonymization model that is trained on
decentralized data and strikes a desirable trade-off between data utility and
privacy, even in heterogeneous settings where the collected sensor data have
different underlying distributions. Our anonymization model, dubbed Blinder, is
based on a variational autoencoder and discriminator networks trained in an
adversarial fashion. We use the model-agnostic meta-learning framework to adapt
the anonymization model trained via federated learning to each user's data
distribution. We evaluate Blinder under …

Tags: arxiv, federated learning, personalized, privacy protection, sensing systems
