Jan. 31, 2024, 4:46 p.m. | Krishna Acharya, Franziska Boenisch, Rakshit Naidu, Juba Ziani

cs.LG updates on arXiv.org arxiv.org

The increased application of machine learning (ML) in sensitive domains
requires protecting the training data through privacy frameworks, such as
differential privacy (DP). DP requires specifying a uniform privacy level
$\varepsilon$ that expresses the maximum privacy loss that each data point in
the entire dataset is willing to tolerate. Yet, in practice, different data
points often have different privacy requirements. Having to set one uniform
privacy level is usually too restrictive, often forcing a learner to guarantee
the stringent …
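The restriction described above can be illustrated with a minimal sketch using the standard Laplace mechanism (this is a generic DP illustration, not the paper's method; the per-point epsilons, the released statistic, and the sensitivity value are hypothetical):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    # Release `value` with epsilon-DP by adding Laplace noise
    # scaled to sensitivity / epsilon (standard Laplace mechanism).
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Hypothetical per-point privacy requirements: each data point i
# tolerates a different maximum privacy loss epsilon_i.
per_point_epsilons = [0.1, 0.5, 1.0, 2.0]

# A single uniform-epsilon mechanism must satisfy every data point
# at once, so it is forced down to the most stringent (smallest)
# requirement -- adding far more noise than most points need.
uniform_epsilon = min(per_point_epsilons)

rng = np.random.default_rng(0)
true_mean = 0.7  # hypothetical statistic to release (mean of data in [0, 1])
sensitivity = 1.0 / len(per_point_epsilons)  # sensitivity of the mean

noisy_release = laplace_mechanism(true_mean, sensitivity, uniform_epsilon, rng)
```

Here the mechanism runs at $\varepsilon = 0.1$ even though three of the four points would accept much weaker protection, which is the gap personalized privacy levels aim to close.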

