April 24, 2024, 4:43 a.m. | Tânia Carvalho, Nuno Moniz, Luís Antunes, Nitesh Chawla

cs.LG updates on arXiv.org

arXiv:2212.00484v3 Announce Type: replace
Abstract: Protecting user data privacy can be achieved through many methods, from statistical transformations to generative models. However, all of them have critical drawbacks. For example, creating a transformed data set with traditional techniques is highly time-consuming. Recent deep learning-based solutions require significant computational resources in addition to long training phases, and differential privacy-based solutions may undermine data utility. In this paper, we propose $\epsilon$-PrivateSMOTE, a technique designed for safeguarding against re-identification and linkage attacks, …
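
The truncated abstract only names the technique, so the following is a hypothetical sketch, not the paper's algorithm: it illustrates the two ingredients the name $\epsilon$-PrivateSMOTE suggests, SMOTE-style interpolation between a record and one of its nearest neighbours, plus Laplace noise scaled by a privacy budget epsilon. The function name, the use of per-column range as a sensitivity proxy, and the neighbour count k are all assumptions made for illustration.

```python
# Hypothetical illustration of noise-induced, SMOTE-style interpolation.
# This is NOT the published epsilon-PrivateSMOTE procedure; it only sketches
# the general idea its name suggests, under assumed parameters.

import numpy as np


def smote_laplace_sketch(X, epsilon=1.0, k=3, rng=None):
    """Return one synthetic row per input row (numeric features only)."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    n, d = X.shape

    # Per-column range used as a crude sensitivity proxy (assumption).
    sensitivity = X.max(axis=0) - X.min(axis=0)

    # Pairwise distances to find each row's k nearest neighbours.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    neighbours = np.argsort(dists, axis=1)[:, :k]

    synthetic = np.empty_like(X)
    for i in range(n):
        j = rng.choice(neighbours[i])            # random nearest neighbour
        u = rng.uniform(0.0, 1.0)                # interpolation weight
        interpolated = X[i] + u * (X[j] - X[i])  # SMOTE-style synthetic point
        noise = rng.laplace(0.0, sensitivity / epsilon, size=d)
        synthetic[i] = interpolated + noise      # noise-induced variant
    return synthetic


if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(20, 4))
    print(smote_laplace_sketch(X, epsilon=0.5).shape)  # (20, 4)
```

A smaller epsilon inflates the Laplace scale and pushes the synthetic rows further from the originals, trading utility for stronger obfuscation; how the paper actually calibrates this trade-off is described in the full text.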
