Testable Learning with Distribution Shift
May 22, 2024, 4:43 a.m. | Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan
cs.LG updates on arXiv.org
Abstract: We revisit the fundamental problem of learning with distribution shift, in which a learner is given labeled samples from a training distribution $D$ and unlabeled samples from a test distribution $D'$, and is asked to output a classifier with low test error. The standard approach in this setting is to bound the loss of a classifier in terms of some notion of distance between $D$ and $D'$. These distances, however, seem difficult to compute and do not lead …
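The setup the abstract describes can be made concrete with a small sketch. Everything below is illustrative and not from the paper: the halfspace ground truth, Gaussian data, and mean shift are assumptions chosen for simplicity, and the distance estimate at the end uses the classical proxy A-distance of Ben-David et al. (a domain-classifier-based divergence) as a stand-in for the "notion of distance" the standard approach would bound the loss with.

```python
# Illustrative sketch only -- none of these modeling choices come from the
# paper. We assume a halfspace ground truth and Gaussian data, and shift
# the mean of the first coordinate to play the role of D'.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
w_true = np.array([1.0, -1.0])  # hypothetical ground-truth halfspace

def label(X):
    return (X @ w_true > 0).astype(int)

# Labeled samples from the training distribution D.
X_train = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(2000, 2))
y_train = label(X_train)

# Unlabeled samples from the test distribution D' (mean-shifted).
X_test = rng.normal(loc=[1.5, 0.0], scale=1.0, size=(2000, 2))

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# We peek at test labels only to score the classifier; the learner itself
# sees X_test unlabeled, exactly as in the abstract's setup.
test_error = np.mean(clf.predict(X_test) != label(X_test))
print(f"test error under D': {test_error:.3f}")

# The "standard approach" bounds test loss by a distance between D and D'.
# One classical example is the proxy A-distance (Ben-David et al.): train a
# domain classifier to tell D from D'; the worse it does, the closer the
# distributions. (In practice one would score it on a held-out split.)
X_dom = np.vstack([X_train, X_test])
y_dom = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
dom_err = np.mean(
    LogisticRegression(max_iter=1000).fit(X_dom, y_dom).predict(X_dom) != y_dom
)
print(f"proxy A-distance between D and D': {2.0 * (1.0 - 2.0 * dom_err):.3f}")
```

The proxy A-distance here is only one classical choice of distance; the abstract's point is that such distances seem difficult to compute in general.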