Partially Blinded Unlearning: Class Unlearning for Deep Networks, a Bayesian Perspective
March 26, 2024, 4:42 a.m. | Subhodip Panda, Shashwat Sourav, Prathosh A. P.
cs.LG updates on arXiv.org arxiv.org
Abstract: In order to adhere to regulatory standards governing individual data privacy and safety, machine learning models must systematically eliminate information derived from specific subsets of a user's training data that can no longer be utilized. The emerging discipline of Machine Unlearning has arisen as a pivotal area of research, facilitating the process of selectively discarding information designated to specific sets or classes of data from a pre-trained model, thereby eliminating the necessity for extensive retraining …