March 1, 2024, 5:42 a.m. | Qiao Han, Yong Huang, Xinling Guo, Yiteng Zhai, Yu Qin, Yao Yang

cs.LG updates on arXiv.org

arXiv:2402.18787v1 Announce Type: new
Abstract: Recent studies have revealed the vulnerability of Deep Neural Networks (DNNs) to adversarial examples, which can easily fool DNNs into making incorrect predictions. To mitigate this deficiency, we propose in this work a novel adversarial defense method called "Immunity" (Innovative MoE with MUtual information & positioN stabilITY), based on a modified Mixture-of-Experts (MoE) architecture. The key enhancements to the standard MoE are two-fold: 1) integrating Random Switch Gates (RSGs) to obtain diverse network structures …
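The RSG design itself is elided in the truncated abstract, but the stated idea, randomizing which experts are active so that the effective network structure varies between forward passes, can be sketched. Below is a minimal, hypothetical PyTorch illustration, not the paper's actual implementation: the class names `RandomSwitchGate` and `MoELayer` and the `switch_prob` parameter are assumptions for exposition. The gate stochastically overrides a learned router with a random one-hot expert choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RandomSwitchGate(nn.Module):
    """Hypothetical gate: with probability switch_prob, replaces the learned
    routing weights with a random one-hot expert selection, so the active
    sub-network changes from pass to pass."""

    def __init__(self, dim: int, num_experts: int, switch_prob: float = 0.5):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.num_experts = num_experts
        self.switch_prob = switch_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Learned soft routing weights over experts: (batch, num_experts).
        weights = torch.softmax(self.router(x), dim=-1)
        # Random one-hot expert selection per sample.
        rand_expert = torch.randint(self.num_experts, (x.size(0),), device=x.device)
        rand_weights = F.one_hot(rand_expert, self.num_experts).float()
        # Bernoulli switch decides, per sample, which weights to use.
        switch = (torch.rand(x.size(0), 1, device=x.device) < self.switch_prob).float()
        return switch * rand_weights + (1.0 - switch) * weights


class MoELayer(nn.Module):
    """Minimal dense MoE layer driven by the random switch gate above."""

    def __init__(self, dim: int, hidden: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )
        self.gate = RandomSwitchGate(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.gate(x)                                       # (batch, E)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, E, dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)          # (batch, dim)
```

Under this reading, averaging predictions over several stochastic forward passes would realize the "diverse network structures" the abstract describes, in the spirit of randomized-smoothing defenses; the paper's mutual-information and position-stability components are not shown here, since the truncated abstract gives no detail on them.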
