April 23, 2024, 4:42 a.m. | Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang

cs.LG updates on arXiv.org

arXiv:2404.14006v1 Announce Type: new
Abstract: The proliferation of large-scale AI models trained on extensive datasets has revolutionized machine learning. With these models taking on increasingly central roles in various applications, the need to understand their behavior and enhance interpretability has become paramount. To investigate the impact of changes in training data on a pre-trained model, a common approach is leave-one-out retraining. This entails systematically altering the training dataset by removing specific samples to observe resulting changes within the model. However, …

