April 3, 2024, 4:41 a.m. | Xinbao Qiao, Meng Zhang, Ming Tang, Ermin Wei

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.01712v1 Announce Type: new
Abstract: Machine unlearning strives to uphold the data owners' right to be forgotten by enabling models to selectively forget specific data. Recent methods suggest that one approach to data forgetting is to precompute and store statistics carrying second-order information, improving computational and memory efficiency. However, these methods rely on restrictive assumptions, and their computation and storage costs suffer from the curse of model parameter dimensionality, making them challenging to apply to most deep neural networks. In this work, we …
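The second-order idea the abstract refers to, precomputing Hessian statistics so a data point can later be forgotten with a single Newton-style update, can be illustrated with a minimal sketch. The snippet below is an assumption for illustration only (not this paper's method): it uses ridge regression, where the objective is quadratic, so the Newton-step removal is exact and can be checked against retraining on the retained data.

```python
# Minimal sketch of second-order (Newton-step) unlearning, assuming a
# ridge-regression objective: 0.5*||X @ theta - y||^2 + 0.5*lam*||theta||^2.
# Precompute the Hessian once, then forget a point via
#   theta_new ≈ theta + H_retained^{-1} * grad_of_forgotten_point(theta).
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1e-2
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def fit_ridge(X, y, lam):
    """Exact minimizer; H is the precomputed second-order statistic."""
    H = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(H, X.T @ y), H

theta, H_full = fit_ridge(X, y, lam)

# Forget the last training point with one Newton step on the retained Hessian.
x_f, y_f = X[-1], y[-1]
grad_f = x_f * (x_f @ theta - y_f)          # gradient of the forgotten point's loss at theta
H_retain = H_full - np.outer(x_f, x_f)      # Hessian with the forgotten point removed
theta_unlearned = theta + np.linalg.solve(H_retain, grad_f)

# Sanity check: matches retraining from scratch on the retained data.
theta_retrained, _ = fit_ridge(X[:-1], y[:-1], lam)
assert np.allclose(theta_unlearned, theta_retrained, atol=1e-6)
print("max |unlearned - retrained| =", np.abs(theta_unlearned - theta_retrained).max())
```

Note that storing and inverting the full Hessian scales quadratically and cubically in the parameter dimension, which is exactly the curse of dimensionality the abstract points out for deep networks.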

