Generalization Error Bounds for Learning under Censored Feedback
April 16, 2024, 4:41 a.m. | Yifan Yang, Ali Payani, Parinaz Naghizadeh
cs.LG updates on arXiv.org
Abstract: Generalization error bounds from learning theory provide statistical guarantees on how well an algorithm will perform on previously unseen data. In this paper, we characterize the impacts of data non-IIDness due to censored feedback (a.k.a. selective labeling bias) on such bounds. We first derive an extension of the well-known Dvoretzky-Kiefer-Wolfowitz (DKW) inequality, which characterizes the gap between empirical and theoretical CDFs given IID data, to problems with non-IID data due to censored feedback. We then …
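The classical DKW inequality referenced in the abstract states that for $n$ IID samples, $\Pr\big(\sup_x |F_n(x) - F(x)| > \varepsilon\big) \le 2 e^{-2n\varepsilon^2}$, where $F_n$ is the empirical CDF and $F$ the true CDF. A minimal sketch of what this bound controls, using uniform samples (where $F(x) = x$) as an illustrative assumption, not the paper's censored-feedback extension:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
eps = 0.05

# Draw IID Uniform(0, 1) samples; their true CDF is F(x) = x.
samples = np.sort(rng.uniform(size=n))

# Empirical CDF at the order statistics: F_n(x_(i)) = i / n.
ecdf = np.arange(1, n + 1) / n

# Sup-norm gap between empirical and true CDFs (checked just
# before and at each jump of the step function F_n).
gap = max(np.max(np.abs(ecdf - samples)),
          np.max(np.abs(ecdf - 1 / n - samples)))

# DKW tail bound on the probability that the gap exceeds eps.
dkw_prob_bound = 2 * np.exp(-2 * n * eps ** 2)

print(f"observed sup-gap: {gap:.4f}")
print(f"DKW bound on P(gap > {eps}): {dkw_prob_bound:.4g}")
```

The paper's contribution, per the abstract, is extending this IID guarantee to the non-IID setting induced by censored feedback, where later samples are only observed conditional on earlier decisions.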