April 26, 2024, 4:42 a.m. | David Winderl, Nicola Franco, Jeanette Miriam Lorenz

cs.LG updates on arXiv.org

arXiv:2404.16417v1 Announce Type: cross
Abstract: With the rapid advancement of Quantum Machine Learning (QML), the critical need to enhance security measures against adversarial attacks and protect QML models becomes increasingly evident. In this work, we outline the connection between quantum noise channels and differential privacy (DP), by constructing a family of noise channels which are inherently $\epsilon$-DP: $(\alpha, \gamma)$-channels. Through this approach, we successfully replicate the $\epsilon$-DP bounds observed for depolarizing and random rotation channels, thereby affirming the broad generality …
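To make the abstract's notion of quantum $\epsilon$-DP concrete, here is a minimal numerical sketch (not the paper's $(\alpha, \gamma)$-channel construction) using the standard depolarizing channel $\mathcal{D}_p(\rho) = (1-p)\rho + p\,I/d$. For two "neighboring" states, it checks how far apart the outcome probabilities of a fixed-basis measurement can be after the noise is applied; the function names and the restriction to a diagonal measurement are illustrative assumptions, since a full $\epsilon$-DP statement maximizes over all measurements.

```python
import numpy as np

def depolarize(rho, p):
    """Depolarizing channel: with probability p, replace the input
    density matrix by the maximally mixed state I/d."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

def worst_case_log_ratio(rho, sigma, p):
    """Empirical privacy loss for a fixed projective measurement in the
    computational basis: max over outcomes i of
    |ln Pr[i | D_p(rho)] - ln Pr[i | D_p(sigma)]|.
    This is only a lower-bound illustration of epsilon-DP, which would
    require maximizing over all measurements."""
    pr = np.real(np.diag(depolarize(rho, p)))
    ps = np.real(np.diag(depolarize(sigma, p)))
    return float(np.max(np.abs(np.log(pr) - np.log(ps))))

# Two maximally distinguishable "neighboring" states in d = 2:
# |0><0| and |1><1|.
rho = np.diag([1.0, 0.0])
sigma = np.diag([0.0, 1.0])

for p in (0.2, 0.5, 0.9):
    print(f"p = {p:.1f}: empirical log-ratio = "
          f"{worst_case_log_ratio(rho, sigma, p):.3f}")
```

As expected, stronger noise (larger p) yields a smaller worst-case log-ratio, i.e. a tighter effective privacy level: without any noise the two states are perfectly distinguishable and the ratio diverges, while at p = 1 both map to I/d and the ratio is zero.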

