March 6, 2024, 5:43 a.m. | Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia

cs.LG updates on arXiv.org

arXiv:2308.07061v2 Announce Type: replace
Abstract: Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation. To address these issues, machine unlearning has emerged as a critical technique for selectively removing the influence of specific training data points from trained models. This paper provides a comprehensive taxonomy and analysis of machine unlearning solutions. We categorize existing solutions into exact unlearning approaches that remove data influence thoroughly and approximate unlearning approaches …
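To make the first branch of that taxonomy concrete, below is a minimal Python sketch of exact unlearning in its simplest form: retraining from scratch on only the retained data. The model choice, toy dataset, and forget-set indices are illustrative assumptions for this sketch and are not drawn from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy dataset standing in for the original training set (assumption).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Indices of training points whose influence must be removed,
# e.g. data subject to a deletion request -- illustrative only.
forget_idx = np.arange(50)
retain_mask = np.ones(len(X), dtype=bool)
retain_mask[forget_idx] = False

# Exact unlearning in its simplest form: retrain from scratch on the
# retained data. The forgotten points then provably have zero influence
# on the new model, at the cost of a full retraining pass. Approximate
# unlearning methods instead update the already-trained model in place
# to trade this guarantee for efficiency.
original_model = LogisticRegression(max_iter=1000).fit(X, y)
unlearned_model = LogisticRegression(max_iter=1000).fit(
    X[retain_mask], y[retain_mask]
)
```

Full retraining is the baseline that exact methods such as sharded retraining aim to speed up, while approximate methods avoid the retraining pass entirely.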
