May 6, 2024, 4:42 a.m. | Jiancong Xiao, Jiawei Zhang, Zhi-Quan Luo, Asuman Ozdaglar

cs.LG updates on arXiv.org

arXiv:2405.01817v1 Announce Type: new
Abstract: In adversarial machine learning, neural networks suffer from a significant issue known as robust overfitting, where the robust test accuracy decreases over epochs (Rice et al., 2020). Recent research (Xing et al., 2021; Xiao et al., 2022) has focused on studying the uniform stability of adversarial training. Their investigations revealed that SGD-based adversarial training fails to exhibit uniform stability, and the derived stability bounds align with the observed phenomenon of robust overfitting in experiments. …
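For context, the setting the abstract refers to is min-max adversarial training: an inner maximization finds a worst-case perturbation (typically via PGD), and an outer SGD step minimizes the loss on that perturbed input. Below is a minimal illustrative sketch of this loop, assuming PyTorch; the model, data loader, and hyperparameter values are placeholders, not taken from the paper.

```python
# A minimal sketch of SGD-based adversarial training with a PGD inner loop,
# the setting in which robust overfitting is observed. All names and
# hyperparameters here are illustrative assumptions, not from the paper.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: ascend the loss within an L-infinity ball
    of radius eps around the clean input x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed gradient ascent step, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """Outer minimization: one SGD epoch on the adversarial loss."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Robust overfitting refers to the robust (adversarial) test accuracy of this procedure degrading over training epochs even as robust training accuracy keeps improving, and the stability analyses cited above study this behavior through uniform stability bounds for the outer SGD updates.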

