Effective and Robust Adversarial Training against Data and Label Corruptions
May 8, 2024, 4:42 a.m. | Peng-Fei Zhang, Zi Huang, Xin-Shun Xu, Guangdong Bai
cs.LG updates on arXiv.org arxiv.org
Abstract: Corruptions due to data perturbations and label noise are prevalent in datasets collected from unreliable sources, posing significant threats to model training. Despite existing efforts toward robust models, current learning methods commonly overlook the possible co-existence of both corruptions, limiting their effectiveness and practicality.
In this paper, we develop an Effective and Robust Adversarial Training (ERAT) framework to simultaneously handle the two types of corruption (i.e., data and label) without prior …
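The abstract's full method is truncated, but the general idea of adversarial training (learning on worst-case perturbed inputs) can be sketched minimally. The example below is a generic FGSM-style adversarial training loop for logistic regression in NumPy; it is an illustration of the broad technique only, not the paper's ERAT framework, and its label-noise handling is not reproduced here.

```python
import numpy as np

# Generic FGSM-style adversarial training for logistic regression.
# This is a hedged sketch of adversarial training in general,
# NOT the ERAT algorithm from the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb each input in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(x @ w + b)
    grad_x = np.outer(p - y, w)          # d(logistic loss)/dx, per sample
    return x + eps * np.sign(grad_x)

def adv_train(x, y, eps=0.1, lr=0.5, epochs=200):
    """Train on adversarially perturbed inputs instead of clean ones."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        x_adv = fgsm(x, y, w, b, eps)    # approximate worst-case inputs
        p = sigmoid(x_adv @ w + b)
        w -= lr * (x_adv.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy two-class data: class 0 near (-1, -1), class 1 near (1, 1)
y = rng.integers(0, 2, 200).astype(float)
x = rng.normal(scale=0.5, size=(200, 2)) + (2 * y[:, None] - 1)

w, b = adv_train(x, y)
acc = np.mean((sigmoid(x @ w + b) > 0.5) == (y == 1))
```

The key difference from standard training is the inner `fgsm` step: gradients are taken with respect to the *inputs*, and the model is updated on those perturbed inputs, which tends to flatten the loss around each training point.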