March 14, 2024, 4:42 a.m. | Sangamesh Kodge, Deepak Ravikumar, Gobinda Saha, Kaushik Roy

cs.LG updates on arXiv.org

arXiv:2403.08618v1 Announce Type: new
Abstract: Label corruption, where training samples have incorrect labels, can significantly degrade the performance of machine learning models. This corruption often arises from non-expert labeling or adversarial attacks. Acquiring large, perfectly labeled datasets is costly, and retraining large models from scratch when a clean dataset becomes available is computationally expensive. To address this challenge, we propose Post-Training Correction, a new paradigm that adjusts model parameters after initial training to mitigate label noise, eliminating the need for …
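To make the paradigm concrete, here is a minimal sketch of the general idea the abstract describes: adjusting an already-trained model with a small verified (clean) set instead of retraining from scratch. This is an illustrative assumption, not the paper's algorithm (the abstract is truncated before the method is named); the setup, the frozen-backbone assumption, and the closed-form ridge update to the final linear layer are all hypothetical.

```python
# Hypothetical illustration of post-training correction: a single closed-form
# update to a trained linear head using a small verified dataset, with no
# gradient-based retraining. NOT the paper's method; an assumed stand-in.
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: d-dim features from a frozen backbone, and a linear
# classification head W (d x c) that was trained on noisy labels.
d, c, n_clean = 64, 10, 200
W_noisy = rng.normal(size=(d, c))            # head corrupted by label noise
feats = rng.normal(size=(n_clean, d))        # backbone features of verified samples
labels = rng.integers(0, c, size=n_clean)    # verified (clean) labels
Y = np.eye(c)[labels]                        # one-hot targets

# Single corrective update: solve a ridge-regularized least-squares problem
# on the verified set, then blend with the existing head so the model is
# corrected after training rather than retrained from scratch.
lam = 1e-2
W_clean = np.linalg.solve(feats.T @ feats + lam * np.eye(d), feats.T @ Y)
alpha = 0.5                                  # trust placed in the verified set
W_corrected = (1 - alpha) * W_noisy + alpha * W_clean

# The corrected head replaces the noisy one without any gradient steps.
logits = feats @ W_corrected
acc = (logits.argmax(1) == labels).mean()
print(f"verified-set accuracy after one-shot correction: {acc:.2f}")
```

The design point this sketch captures is the one the abstract argues for: when a small clean dataset becomes available, a one-shot parameter adjustment is far cheaper than retraining a large model from scratch.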
