March 5, 2024, 2:42 p.m. | Hiroaki Maeshima, Akira Otsuka

cs.LG updates on arXiv.org arxiv.org

arXiv:2403.01896v1 Announce Type: new
Abstract: An adversarial example (AE) is an attack on machine learning models, crafted by adding an imperceptible perturbation to the data that induces misclassification. In this paper, we investigate the upper bound of the probability of successful AEs based on Gaussian process (GP) classification. We prove a new upper bound that depends on the AE's perturbation norm, the kernel function used in the GP, and the distance of the closest pair with different labels in the training …
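The bound described in the abstract involves two quantities that are easy to compute from a training set: the distance of the closest pair of points with different labels, and the kernel function used in the GP. Below is a minimal, hypothetical sketch of computing both for a toy dataset, assuming an RBF kernel; the function names and data are illustrative and not from the paper.

```python
import numpy as np

def closest_opposite_pair_distance(X, y):
    """Smallest Euclidean distance between any two training points
    that carry different labels (illustrative helper, not the paper's code)."""
    best = np.inf
    for i in range(len(X)):
        for j in range(len(X)):
            if y[i] != y[j]:
                best = min(best, np.linalg.norm(X[i] - X[j]))
    return best

def rbf_kernel(x, z, length_scale=1.0):
    """RBF (Gaussian) kernel, a common choice in GP classification."""
    return np.exp(-np.linalg.norm(x - z) ** 2 / (2 * length_scale ** 2))

# Toy training set: two points of class 0, one of class 1.
X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
y = np.array([0, 0, 1])

d_min = closest_opposite_pair_distance(X, y)  # closest cross-label pair: [1,0] vs [3,0]
k_val = rbf_kernel(X[1], X[2])
```

Intuitively, a larger `d_min` (well-separated classes) and a smoother kernel leave less room for a small perturbation to flip the GP's predicted label, which is the kind of dependence the bound captures.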

