Partially Blinded Unlearning: Class Unlearning for Deep Networks a Bayesian Perspective
March 26, 2024, 4:42 a.m. | Subhodip Panda, Shashwat Sourav, Prathosh A. P
cs.LG updates on arXiv.org arxiv.org
Abstract: To comply with regulatory standards governing individual data privacy and safety, machine learning models must systematically eliminate information derived from specific subsets of a user's training data that can no longer be utilized. The emerging discipline of Machine Unlearning has become a pivotal area of research, enabling a pre-trained model to selectively discard information associated with specific sets or classes of data, thereby eliminating the necessity for extensive retraining …