Adversarially trained neural representations may already be as robust as corresponding biological neural representations. (arXiv:2206.11228v1 [q-bio.NC])
Web: http://arxiv.org/abs/2206.11228
June 23, 2022, 1:11 a.m. | Chong Guo, Michael J. Lee, Guillaume Leclerc, Joel Dapello, Yug Rao, Aleksander Madry, James J. DiCarlo
Source: cs.LG updates on arXiv.org
Visual systems of primates are the gold standard of robust perception. There is thus a general belief that mimicking the neural representations that underlie those systems will yield artificial visual systems that are adversarially robust. In this work, we develop a method for performing adversarial visual attacks directly on primate brain activity. We then leverage this method to demonstrate that the above-mentioned belief might not be well founded. Specifically, we report that the biological neurons that make up visual systems …
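The abstract describes the attack only at a high level. As an illustration of the general idea of an adversarial attack against a representation (rather than a class label), here is a minimal, hypothetical PyTorch sketch: a PGD-style perturbation, bounded by a small pixel budget, that pushes a fixed representation as far as possible from its clean value. The ResNet-50 feature extractor, epsilon, and step sizes are stand-in assumptions for exposition, not the authors' actual procedure, which targets recorded primate neural activity.

import torch
import torchvision.models as models

def representation_attack(model, image, eps=2/255, steps=10, step_size=0.5/255):
    """Perturb `image` within an L-infinity ball of radius `eps` so that the
    model's representation moves as far (in L2) as possible from the clean one."""
    model.eval()
    with torch.no_grad():
        clean_rep = model(image)                      # reference representation
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv_rep = model(image + delta)
        loss = torch.norm(adv_rep - clean_rep, p=2)   # push the representation away
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()    # signed-gradient ascent step
            delta.clamp_(-eps, eps)                   # stay inside the pixel budget
            delta.grad = None
    return (image + delta).detach()

# Usage sketch: ResNet-50 penultimate features as a stand-in "neural representation".
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.requires_grad_(False)               # attack the input, not the weights
x = torch.rand(1, 3, 224, 224)                        # placeholder image batch
x_adv = representation_attack(feature_extractor, x)

Attacking the representation directly, rather than a classifier's output, is what makes the same recipe applicable in principle to any readout of a visual system, artificial or biological.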