March 6, 2024, 5:43 a.m. | Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia

cs.LG updates on arXiv.org

arXiv:2308.07061v2 Announce Type: replace
Abstract: Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation. To address these issues, machine unlearning has emerged as a critical technique to selectively remove specific training data points' influence on trained models. This paper provides a comprehensive taxonomy and analysis of the solutions in machine unlearning. We categorize existing solutions into exact unlearning approaches that remove data influence thoroughly and approximate unlearning approaches …
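To make the exact/approximate distinction concrete, below is a minimal sketch of one well-known exact-unlearning strategy: sharded retraining in the style of SISA (Bourtoule et al.). This is not the paper's code; the class name `ShardedUnlearner`, the shard count, and the scikit-learn base model are illustrative assumptions.

```python
# Minimal sketch of exact unlearning via sharding (SISA-style).
# Each shard trains an independent model, so deleting a training
# point only requires retraining the one shard that contained it.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedUnlearner:
    def __init__(self, n_shards=4, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.X, self.y = X, y
        # Randomly partition the training indices into disjoint shards.
        perm = self.rng.permutation(len(X))
        self.shard_idx = list(np.array_split(perm, self.n_shards))
        self.models = [self._train(idx) for idx in self.shard_idx]
        return self

    def _train(self, idx):
        # Illustrative base learner; any model could be swapped in.
        return LogisticRegression(max_iter=1000).fit(self.X[idx], self.y[idx])

    def unlearn(self, point):
        # Exact removal: drop `point` (an index into the original training
        # set) and retrain only its shard; other models are reused as-is.
        for s, idx in enumerate(self.shard_idx):
            if point in idx:
                self.shard_idx[s] = idx[idx != point]
                self.models[s] = self._train(self.shard_idx[s])
                return

    def predict(self, X):
        # Aggregate the constituent models by majority vote.
        votes = np.stack([m.predict(X) for m in self.models])
        return np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), 0, votes)

# Example usage on synthetic data:
X = np.random.randn(200, 5)
y = (X[:, 0] > 0).astype(int)
model = ShardedUnlearner().fit(X, y)
model.unlearn(17)  # retrains one shard; the other three are untouched
```

Approximate unlearning methods, by contrast, typically avoid even this partial retraining by perturbing the trained parameters directly (e.g., influence-function-style updates), trading the formal removal guarantee of exact approaches for efficiency.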

