April 29, 2024, 4:42 a.m. | Jiaeli Shi, Najah Ghalyan, Kostis Gourgoulias, John Buford, Sean Moran

cs.LG updates on arXiv.org

arXiv:2311.10448v2 Announce Type: replace
Abstract: Machine learning models trained on sensitive or private data can inadvertently memorize and leak that information. Machine unlearning seeks to retroactively remove such details from model weights to protect privacy. We contribute a lightweight unlearning algorithm that leverages the Fisher Information Matrix (FIM) for selective forgetting. Prior work in this area requires full retraining or large matrix inversions, which are computationally expensive. Our key insight is that the diagonal elements of the FIM, which measure …
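The truncated abstract does not spell out the full method, but the core idea, estimating the diagonal of the FIM to locate parameters that are informative about the data to be forgotten, can be sketched as below. This is a minimal illustration under assumed details, not the paper's exact algorithm: PyTorch is assumed, and the names fisher_diagonal, selective_forget, ratio_threshold, and noise_scale are illustrative choices, not from the paper.

```python
# Minimal sketch of diagonal-Fisher selective forgetting (assumed details,
# not the paper's exact algorithm).
import torch
import torch.nn.functional as F

def fisher_diagonal(model, loader, device="cpu"):
    """Estimate diag(FIM) as the average squared gradient of the
    negative log-likelihood, one entry per trainable parameter."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()
              if p.requires_grad}
    model.eval()
    batches = 0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        model.zero_grad()
        # Cross-entropy is the NLL of the softmax model, so its squared
        # per-parameter gradients give an empirical Fisher diagonal.
        F.cross_entropy(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        batches += 1
    return {n: f / max(batches, 1) for n, f in fisher.items()}

def selective_forget(model, fisher_forget, fisher_retain,
                     ratio_threshold=5.0, noise_scale=0.1, eps=1e-8):
    """Hypothetical forgetting step: perturb only the weights that carry
    much more Fisher information about the forget set than about the
    retain set (threshold and noise heuristics are assumptions)."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            ratio = fisher_forget[n] / (fisher_retain[n] + eps)
            mask = ratio > ratio_threshold
            p[mask] += noise_scale * torch.randn_like(p)[mask]
```

A sketch like this suggests why the approach is lightweight: estimating only the FIM diagonal needs one backward pass per batch and no matrix inversion, so the cost scales linearly with the number of parameters, in contrast to the full retraining or large matrix inversions required by prior work.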

Tags: cs.CR, cs.CV, cs.LG, machine unlearning, machine learning, Fisher information, privacy
