Verifix: Post-Training Correction to Improve Label Noise Robustness with Verified Samples
March 14, 2024, 4:42 a.m. | Sangamesh Kodge, Deepak Ravikumar, Gobinda Saha, Kaushik Roy
cs.LG updates on arXiv.org
Abstract: Label corruption, where training samples have incorrect labels, can significantly degrade the performance of machine learning models. This corruption often arises from non-expert labeling or adversarial attacks. Acquiring large, perfectly labeled datasets is costly, and retraining large models from scratch when a clean dataset becomes available is computationally expensive. To address this challenge, we propose Post-Training Correction, a new paradigm that adjusts model parameters after initial training to mitigate label noise, eliminating the need for …
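The truncated abstract does not spell out Verifix's actual algorithm, but the core idea it states, adjusting an already-trained model's parameters with a small set of verified (clean) samples instead of retraining from scratch, can be illustrated with a minimal sketch. The toy setup below (logistic regression, Gaussian blobs, a 30% symmetric label flip, and plain gradient-descent fine-tuning on the verified subset) is entirely an illustrative assumption, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs (illustrative, not from the paper).
n = 400
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Simulate label corruption: flip 30% of the training labels.
flip = rng.random(n) < 0.3
y_noisy = np.where(flip, 1 - y, y)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, lr=0.1, steps=500):
    """Logistic regression via gradient descent; `w` lets us warm-start
    from previously trained parameters instead of retraining from scratch."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return float(((sigmoid(Xb @ w) > 0.5) == y).mean())

# 1) Initial training on the corrupted labels.
w_noisy = train(X, y_noisy)

# 2) Post-training correction: start from the noisy-trained weights and
#    take a few update steps on a small verified (clean) subset only.
idx = rng.choice(n, 40, replace=False)  # 10% of samples assumed verified
w_corr = train(X[idx], y[idx], w=w_noisy.copy(), steps=200)

print(accuracy(w_noisy, X, y), accuracy(w_corr, X, y))
```

The point of the sketch is the paradigm, not the numbers: correction reuses the trained weights and touches only a small verified set, which is the cost argument the abstract makes against full retraining when a clean dataset becomes available.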