Feb. 26, 2024, 5:44 a.m. | Yongchan Kwon, Eric Wu, Kevin Wu, James Zou

cs.LG updates on arXiv.org

arXiv:2310.00902v2 Announce Type: replace
Abstract: Quantifying the impact of training data points is crucial for understanding the outputs of machine learning models and for improving the transparency of the AI pipeline. The influence function is a principled and popular data attribution method, but its computational cost often makes it challenging to use. This issue becomes more pronounced in the setting of large language models and text-to-image models. In this work, we propose DataInf, an efficient influence approximation method that is …
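For context, the classic influence function scores a training point by how much upweighting it would change the loss on a test point, roughly -grad L(z_test)^T H^{-1} grad L(z_train); the inverse-Hessian term is what makes this expensive at LLM and diffusion-model scale, which is the cost DataInf targets. The sketch below is a toy illustration of that classic estimate on a small logistic-regression model; the model, data, and damping value are assumptions chosen for illustration, not the DataInf algorithm itself.

```python
import numpy as np

# Toy illustration of the classic influence-function estimate
#   I(z_train, z_test) ~ -grad L(z_test)^T (H + lam*I)^{-1} grad L(z_train)
# on a small logistic-regression model. Everything here (model, data,
# damping) is an illustrative assumption, not the DataInf method.

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit by plain gradient descent (sufficient for a toy example).
w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / n

def grad(x, label, w):
    """Per-example gradient of the logistic loss."""
    return (sigmoid(x @ w) - label) * x

# Hessian of the average training loss (d x d), damped for stability.
p = sigmoid(X @ w)
H = (X * (p * (1 - p))[:, None]).T @ X / n
damping = 1e-3
H_inv = np.linalg.inv(H + damping * np.eye(d))

# Influence of each training point on a held-out test point:
# negative values mean upweighting that point would lower the test loss.
x_test, y_test = rng.normal(size=d), 1.0
g_test = grad(x_test, y_test, w)
influences = np.array([-g_test @ H_inv @ grad(X[i], y[i], w) for i in range(n)])

print("most helpful training index:", influences.argmin())
print("most harmful training index:", influences.argmax())
```

Even in this tiny setting the d x d Hessian inverse is the dominant cost; with billions of parameters it is infeasible to form explicitly, which is why approximation methods such as DataInf are needed, particularly for LoRA-tuned LLMs and diffusion models.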
