Feb. 6, 2024, 5:47 a.m. | Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlad Menkovski, Lu Yin, Yulong Pei

cs.LG updates on arXiv.org arxiv.org

Although adversarial training has become the de facto method for improving the robustness of deep neural networks, vanilla adversarial training is well known to suffer from severe robust overfitting, resulting in unsatisfactory robust generalization. Over the last few years, a number of approaches have been proposed to address this drawback, such as extra regularization, adversarial weight perturbation, and training with more data. However, the improvement in robust generalization remains far from satisfactory. In this paper, we approach …
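For context, a minimal sketch of the vanilla (PGD-based) adversarial training baseline the abstract refers to is given below. The model, data loader, and hyperparameters (eps, alpha, steps) are illustrative assumptions, not the paper's setup or proposed method.

```python
# Minimal sketch of vanilla PGD adversarial training (inner maximization via
# projected gradient descent, outer minimization on the adversarial examples).
# All hyperparameters here are assumed, typical CIFAR-style defaults.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-infinity bounded adversarial examples with PGD."""
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def adversarial_train_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of vanilla adversarial training: fit the model on PGD examples."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Robust overfitting refers to the gap that opens up during such training: robust accuracy on the training set keeps improving while robust accuracy on held-out data degrades after a certain point.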

