Oct. 13, 2022, 1:13 a.m. | Anna Kuzina, Max Welling, Jakub M. Tomczak

cs.LG updates on arXiv.org (arxiv.org)

Variational autoencoders (VAEs) are latent variable models that can generate
complex objects and provide meaningful latent representations. Moreover, they
can be further used in downstream tasks such as classification. As previous
work has shown, VAEs can easily be fooled into producing unexpected latent
representations and reconstructions for a slightly modified input.
Here, we examine several previously proposed objective functions for
constructing adversarial attacks and present a solution that alleviates the
effect of these attacks. Our method utilizes the Markov …
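To make the attack setting concrete, here is a minimal sketch of one common objective for attacking a VAE: find a small input perturbation that maximally displaces the encoder's latent mean. The `ToyEncoder`, function names, and hyperparameters below are illustrative assumptions, not the paper's architecture or its specific attack objectives, and the defense itself is not shown.

```python
import torch
import torch.nn as nn

# Toy Gaussian encoder standing in for a trained VAE encoder (hypothetical;
# the paper's architecture is not specified in this snippet).
class ToyEncoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.log_var = nn.Linear(128, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.log_var(h)

def latent_space_attack(encoder, x, eps=0.1, steps=50, lr=1e-2):
    """Find a bounded perturbation delta that pushes the latent mean of
    x + delta away from the latent mean of x (one example of the kind of
    attack objective the paper compares)."""
    with torch.no_grad():
        mu_clean, _ = encoder(x)
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        mu_adv, _ = encoder(x + delta)
        # Maximize latent displacement by minimizing its negative.
        loss = -torch.norm(mu_adv - mu_clean, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation visually small
    return (x + delta).detach()

encoder = ToyEncoder()
x = torch.rand(8, 784)              # batch of flattened images
x_adv = latent_space_attack(encoder, x)
```

The epsilon-ball clamp is what keeps the modified input visually close to the original while the latent code, and hence the reconstruction, can change substantially.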

Tags: adversarial attacks, arXiv, attacks, MCMC, variational autoencoders
