April 17, 2023, 8:03 p.m. | Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu

cs.LG updates on arXiv.org

Recent data regulations necessitate machine unlearning (MU): the removal of
the effect of specific examples from a trained model. While exact unlearning is
possible by retraining the model from scratch on the remaining data, its
computational cost has led to the development of approximate but efficient
unlearning schemes. Beyond data-centric MU solutions, we advance MU through a
novel model-based viewpoint: sparsification via weight pruning. Our results in
both theory and practice indicate that model sparsity can boost the
multi-criteria unlearning …
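The recipe the abstract alludes to (sparsify first, then run an efficient approximate unlearner) can be sketched in a few lines. Below is a minimal PyTorch illustration, assuming global magnitude (L1) pruning at 90% sparsity and fine-tuning on the retained data as the approximate unlearning step; the toy model, synthetic retain/forget split, and hyperparameters are placeholders, not the paper's actual method or experimental setup.

    # Minimal "prune, then unlearn" sketch; all specifics are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    torch.manual_seed(0)

    # Toy classifier and a synthetic dataset split into data to retain
    # and data whose influence should be removed.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    X = torch.randn(256, 20)
    y = torch.randint(0, 2, (256,))
    retain_X, retain_y = X[:200], y[:200]   # examples the model keeps
    forget_X, forget_y = X[200:], y[200:]   # examples to "unlearn"

    # Step 1: sparsify via global magnitude pruning (here, 90% of weights).
    params = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
    prune.global_unstructured(params,
                              pruning_method=prune.L1Unstructured,
                              amount=0.9)

    # Step 2: approximate unlearning on the sparse model, e.g. a few
    # epochs of fine-tuning on the retained data only (a common
    # efficient MU baseline; assumed here, not mandated by the paper).
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(5):
        opt.zero_grad()
        loss = loss_fn(model(retain_X), retain_y)
        loss.backward()
        opt.step()

    # Make the sparsity permanent by removing the pruning masks.
    for m, name in params:
        prune.remove(m, name)

    # Rough check: accuracy on retained vs. forgotten examples.
    with torch.no_grad():
        r_acc = (model(retain_X).argmax(1) == retain_y).float().mean()
        f_acc = (model(forget_X).argmax(1) == forget_y).float().mean()
    print(f"retain acc: {r_acc:.2f}  forget acc: {f_acc:.2f}")

In this kind of pipeline, a successful unlearner keeps accuracy high on the retained set while performance on the forgotten set drops toward that of a model retrained from scratch without those examples.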
