Feb. 5, 2024, 3:42 p.m. | Jack Foster, Kyle Fogarty, Stefan Schoepf, Cengiz Öztireli, Alexandra Brintrup

cs.LG updates on arXiv.org

To comply with AI and data regulations, the ability to forget private or copyrighted information learned by trained machine learning models is increasingly important. The key challenge in unlearning is forgetting the necessary data in a timely manner while preserving model performance. In this work, we address the zero-shot unlearning scenario, in which an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten. Under such a definition, existing state-of-the-art methods are insufficient. …
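The zero-shot setting constrains the unlearner's interface: it receives only the trained model and the forget set, with no access to the retain data or the original training pipeline. Below is a minimal sketch of what such an interface could look like, assuming a PyTorch model; the local output-smoothing update it applies is an illustrative stand-in, not necessarily the method proposed in the paper, and all function and parameter names are assumptions.

```python
# Illustrative sketch of the zero-shot unlearning interface only.
# The smoothing-style update is a hypothetical stand-in, not the paper's algorithm.
import torch
import torch.nn.functional as F


def zero_shot_unlearn(model, forget_loader, lr=1e-4, noise_std=0.05, steps=1):
    """Update `model` using only the forget data: no retain set, no retraining.

    For each forget sample, penalize the model's output sensitivity to small
    input perturbations (a local smoothness penalty), dampening what the model
    has memorized about those points.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for x, _ in forget_loader:          # labels are not needed
            noisy = x + noise_std * torch.randn_like(x)
            # Penalize divergence between outputs on clean vs. perturbed inputs.
            loss = F.mse_loss(model(noisy), model(x).detach())
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Usage would be along the lines of `zero_shot_unlearn(trained_model, DataLoader(forget_set))`; the point of the sketch is that the entire procedure consumes nothing beyond the trained model and the data to be forgotten.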

