March 26, 2024, 4:42 a.m. | Subhodip Panda, Shashwat Sourav, Prathosh A. P.

cs.LG updates on arXiv.org

arXiv:2403.16246v1 Announce Type: new
Abstract: To adhere to regulatory standards governing individual data privacy and safety, machine learning models must systematically eliminate information derived from specific subsets of a user's training data that can no longer be utilized. The emerging discipline of Machine Unlearning has arisen as a pivotal area of research, enabling the selective removal of information tied to specific sets or classes of data from a pre-trained model, thereby eliminating the necessity for extensive retraining …
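To make the idea concrete, here is a minimal, self-contained sketch of class-level unlearning on a toy linear softmax classifier. This is an illustration of the general technique the abstract describes, not the method proposed in the paper: after pre-training on three classes, it performs gradient ascent on the cross-entropy loss over the designated "forget" class while continuing descent on the retained data, so the forgotten class is erased without retraining from scratch. All data, hyperparameters, and helper names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three well-separated Gaussian blobs, one per class.
# Class 2 will be the "forget" class.
centers = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
X = np.concatenate([rng.normal(c, 0.3, size=(50, 2)) for c in centers])
X = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
y = np.repeat([0, 1, 2], 50)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerically stable
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def accuracy(W, X, y):
    return float((softmax(X @ W).argmax(axis=1) == y).mean())

# 1) Pre-train a linear softmax classifier on all three classes.
W = np.zeros((3, 3))
for _ in range(300):
    g = softmax(X @ W)
    g[np.arange(len(y)), y] -= 1.0  # cross-entropy gradient: p - onehot
    W -= 0.1 * X.T @ g / len(y)

forget = y == 2  # the designated forget set

# 2) Unlearning sketch: gradient *ascent* on the forget set while
#    continuing descent on the retained data (no full retraining).
for _ in range(300):
    g = softmax(X @ W)
    g[np.arange(len(y)), y] -= 1.0
    g[forget] *= -1.0  # flip the sign: maximize loss on the forget class
    W -= 0.1 * X.T @ g / len(y)

acc_retain = accuracy(W, X[~forget], y[~forget])  # should stay high
acc_forget = accuracy(W, X[forget], y[forget])    # should collapse
```

The sign flip on the forget examples is the whole trick: the same gradient computation serves both objectives, and the retained-data term regularizes the update so accuracy on the remaining classes is preserved while confidence on the forgotten class is destroyed.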

