Testable Learning with Distribution Shift
May 22, 2024, 4:43 a.m. | Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan
cs.LG updates on arXiv.org
Abstract: We revisit the fundamental problem of learning with distribution shift, in which a learner is given labeled samples from a training distribution $D$, unlabeled samples from a test distribution $D'$, and is asked to output a classifier with low test error. The standard approach in this setting is to bound the loss of a classifier in terms of some notion of distance between $D$ and $D'$. These distances, however, seem difficult to compute and do not lead …
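To make the setup concrete, here is a minimal sketch in Python of the learning-with-distribution-shift problem as the abstract describes it: labeled samples from a training distribution $D$, unlabeled samples from a shifted test distribution $D'$, and a classifier whose test error we want to be low. The Gaussian distributions, the mean shift, and the logistic-regression learner are all illustrative assumptions, not the paper's construction.

```python
# Hypothetical sketch of the distribution-shift setup (not the paper's method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_D(n):
    """Labeled samples from the training distribution D (assumed Gaussian)."""
    X = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # ground truth: a halfspace label
    return X, y

def sample_D_prime(n):
    """Unlabeled samples from a test distribution D' with a mean shift."""
    return rng.normal(loc=0.5, scale=1.0, size=(n, 2))

X_train, y_train = sample_D(1000)
X_test = sample_D_prime(1000)          # the learner sees no labels for these

clf = LogisticRegression().fit(X_train, y_train)

# The learner outputs clf; the quantity of interest is its error under D'.
# We can evaluate it here only because the synthetic ground truth is known.
y_test_true = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
test_error = np.mean(clf.predict(X_test) != y_test_true)
print(f"test error under distribution shift: {test_error:.3f}")
```

In practice the learner cannot compute this test error directly, since labels from $D'$ are unavailable; this is why the standard approach instead bounds it via a distance between $D$ and $D'$, which the abstract notes is itself hard to compute.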